Grogan, Brandon R
2010-05-01
This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
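The Gaussian parameterization step described above can be sketched in a few lines; the detector positions, amplitudes, and noise level below are hypothetical placeholders rather than NMIS values — a minimal illustration of fitting a Monte Carlo PScF estimate with a Gaussian and subtracting the fitted scatter from a measured profile.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, mu, sigma):
    """Gaussian model used to parameterize a point scatter function (PScF)."""
    return amplitude * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical Monte Carlo estimate of a PScF across detector positions (cm),
# with a little statistical noise added.
positions = np.linspace(-10.0, 10.0, 81)
pscf_mc = gaussian(positions, 0.04, 0.0, 3.0) \
    + 0.001 * np.random.default_rng(0).normal(size=positions.size)

# Parameterize the PScF by fitting the Gaussian.
params, _ = curve_fit(gaussian, positions, pscf_mc, p0=(0.05, 0.0, 2.0))

# A measured profile is the direct (attenuated) component plus the true scatter;
# subtracting the *fitted* scatter recovers the direct component.
measured = 0.25 + gaussian(positions, 0.04, 0.0, 3.0)
corrected = measured - gaussian(positions, *params)
```

Because the fit is stored as three numbers, the same correction can be reapplied to new measurements without rerunning the Monte Carlo simulation, which is the point of the parameterization.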
Comment on S-matrix parameterizations in NN-scattering
Mulders, P. J.
1981-08-01
The parameterization of the S-matrix used for the elastic part of the NN-scattering matrix in, for example, the Virginia Polytechnic Institute interactive nucleon-nucleon program SAID, is not general enough to parameterize an arbitrary 2 × 2 submatrix of a unitary matrix.
Parameterization of single-scattering properties of snow
NASA Astrophysics Data System (ADS)
Räisänen, P.; Kokhanovsky, A.; Guyot, G.; Jourdan, O.; Nousiainen, T.
2015-02-01
Snow consists of non-spherical grains of various shapes and sizes. Still, in many radiative transfer applications, single-scattering properties of snow have been based on the assumption of spherical grains. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat phase function typical of deformed non-spherical particles, this is still a rather ad-hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ = 0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function P11 as functions of the size parameter and the real and imaginary parts of the refractive index. The parameterizations are analytic and simple to use in radiative transfer models. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons to spheres and distorted Koch fractals.
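The primary input of the parameterization described above is the size parameter; since the abstract does not reproduce the fitted equations for β, g, and P11, this minimal sketch stops at computing that input over the stated λ and r_vp ranges.

```python
import numpy as np

# Size parameter x = 2*pi*r/lambda, the main argument of the fits for the
# single-scattering co-albedo and asymmetry parameter. The ranges below are
# those stated in the abstract; the fit coefficients themselves are not given
# there and are not reproduced here.
wavelengths_um = np.array([0.199, 0.5, 1.0, 2.7])      # lambda (micrometres)
radii_um = np.array([10.0, 100.0, 1000.0, 2000.0])     # r_vp (micrometres)

# One size parameter per (radius, wavelength) pair.
x = 2.0 * np.pi * radii_um[:, None] / wavelengths_um[None, :]
```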
Parameterization of single-scattering properties of snow
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Kokhanovsky, Alexander; Guyot, Gwennole; Jourdan, Olivier; Nousiainen, Timo
2015-04-01
Snow consists of non-spherical ice grains of various shapes and sizes, which are surrounded by air and sometimes covered by films of liquid water. Still, in many studies, homogeneous spherical snow grains have been assumed in radiative transfer calculations, due to the convenience of using Mie theory. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat scattering phase function typical of deformed non-spherical particles, this is still a rather ad-hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ=0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function as functions of the size parameter and the real and imaginary parts of the refractive index. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons with spheres and distorted Koch fractals. Further evaluation and validation of the proposed approach against (e.g.) bidirectional reflectance and polarization measurements for snow is planned. At any rate, it seems safe to assume that the OHC selected here
Parameterization of radiative processes in vertically nonhomogeneous multiple scattering atmospheres
NASA Astrophysics Data System (ADS)
Fu, Qiang
1991-05-01
A radiation model has been developed to calculate the radiative fluxes and heating rates in plane parallel, vertically nonhomogeneous, multiple scattering atmospheres with an accuracy of better than 5 percent. This scheme is appropriate for use in climate and numerical prediction models to study the effect of cloud and radiation interactions. Parameterization of nongray gaseous absorption in vertically nonhomogeneous atmospheres has been developed based upon the correlated K-distribution method. The entire radiation spectrum is divided into 18 intervals: 6 in the solar and 12 in the infrared. By using a minimum number of quadrature points within each wavelength interval to represent the gaseous absorption and to treat overlap, we need to perform 121 spectral calculations for each vertical profile to obtain total radiative fluxes and heating rates. The treatment of gaseous absorption introduces errors less than 0.05 K/day in the heating rates below 30 km and relative errors less than 0.5 percent in the fluxes. The single-scattering properties of water/ice clouds have been parameterized in terms of the effective size and liquid/ice water contents, based on Mie-scattering/ray-tracing computations with the best available size distributions. The parameterization gives an accuracy within about 1 percent in the solar and 5 percent in the infrared. By using the delta-four-stream approximation, a single algorithm has been developed for radiative transfer calculations. For vertically nonhomogeneous atmospheres, this code is numerically stable and computationally efficient. The accuracy of the algorithm is generally better than 5 percent, but it can produce more accurate results in the limit of no scattering. Compared with line-by-line results from clear-sky longwave calculations when all constituents were included, the errors in heating rates calculated by the new radiation model are less than 0.1 K/day in the troposphere and lower stratosphere. The errors in radiative
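The heating rates discussed above follow from the net-flux divergence via the standard relation dT/dt = -(1/ρc_p) dF_net/dz; the flux profile below is a hypothetical placeholder, not output of the model.

```python
import numpy as np

# Standard definition linking radiative flux divergence to a heating rate;
# the linear net-flux profile here is purely illustrative.
z = np.linspace(0.0, 10_000.0, 101)      # height grid (m)
f_net = 240.0 - 0.004 * z                # hypothetical net flux profile (W m^-2)
rho, cp = 1.0, 1004.0                    # air density (kg m^-3), heat capacity (J kg^-1 K^-1)

heating_k_per_s = -np.gradient(f_net, z) / (rho * cp)
heating_k_per_day = heating_k_per_s * 86400.0
```

Errors quoted in the abstract (e.g. less than 0.1 K/day) are differences in this quantity between the parameterized model and line-by-line calculations.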
Cross section parameterizations for cosmic ray nuclei. 1: Single nucleon removal
NASA Technical Reports Server (NTRS)
Norbury, John W.; Townsend, Lawrence W.
1992-01-01
Parameterizations of single nucleon removal from electromagnetic and strong interactions of cosmic rays with nuclei are presented. These parameterizations are based upon the most accurate theoretical calculations available to date. They should be very suitable for use in cosmic ray propagation through interstellar space, the Earth's atmosphere, lunar samples, meteorites, spacecraft walls and lunar and martian habitats.
NASA Astrophysics Data System (ADS)
Yang, Ping; Liou, Kuo-Nan; Bi, Lei; Liu, Chao; Yi, Bingqi; Baum, Bryan A.
2015-01-01
Presented is a review of the radiative properties of ice clouds from three perspectives: light scattering simulations, remote sensing applications, and broadband radiation parameterizations appropriate for numerical models. On the subject of light scattering simulations, several classical computational approaches are reviewed, including the conventional geometric-optics method and its improved forms, the finite-difference time domain technique, the pseudo-spectral time domain technique, the discrete dipole approximation method, and the T-matrix method, with specific applications to the computation of the single-scattering properties of individual ice crystals. The strengths and weaknesses associated with each approach are discussed. With reference to remote sensing, operational retrieval algorithms are reviewed for retrieving cloud optical depth and effective particle size based on solar or thermal infrared (IR) bands. To illustrate the performance of the current solar- and IR-based retrievals, two case studies are presented based on spaceborne observations. The need for a more realistic ice cloud optical model to obtain spectrally consistent retrievals is demonstrated. Furthermore, to complement ice cloud property studies based on passive radiometric measurements, the advantage of incorporating lidar and/or polarimetric measurements is discussed. The performance of ice cloud models based on the use of different ice habits to represent ice particles is illustrated by comparing model results with satellite observations. A summary is provided of a number of parameterization schemes for ice cloud radiative properties that were developed for application to broadband radiative transfer submodels within general circulation models (GCMs). The availability of the single-scattering properties of complex ice habits has led to more accurate radiation parameterizations. In conclusion, the importance of using nonspherical ice particle models in GCM simulations for climate
NASA Astrophysics Data System (ADS)
Alvarado, Matthew J.; Lonsdale, Chantelle R.; Macintyre, Helen L.; Bian, Huisheng; Chin, Mian; Ridley, David A.; Heald, Colette L.; Thornhill, Kenneth L.; Anderson, Bruce E.; Cubison, Michael J.; Jimenez, Jose L.; Kondo, Yutaka; Sahu, Lokesh K.; Dibb, Jack E.; Wang, Chien
2016-07-01
Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosol and Clouds (OPAC v3.1) package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP v2.1) to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10-23 %, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass fraction
Laser scattering measurement for laser removal of graffiti
NASA Astrophysics Data System (ADS)
Tearasongsawat, Watcharawee; Kittiboonanan, Phumipat; Luengviriya, Chaiya; Ratanavis, Amarin
2015-07-01
In this contribution, the development of a laser scattering measurement technique for laser removal of graffiti is reported. The study concentrates on the removal of graffiti from metal surfaces: four colored graffiti paints were applied to stainless steel samples, and cleaning efficiency was evaluated with the laser scattering system. Graffiti removal at oblique incidence was also examined to assess the removal process under practical conditions. A Q-switched Nd:YAG laser operating at 1.06 microns with a repetition rate of 1 Hz was used to remove graffiti from the stainless steel samples. Laser fluences from 0.1 J/cm2 to 7 J/cm2 were investigated, and the laser parameters required for effective removal were determined using the laser scattering system. The results support further development of online surface inspection for graffiti removal.
A modified Fresnel scattering model for the parameterization of Fresnel returns, part 2.3A
NASA Technical Reports Server (NTRS)
Gage, K. S.; Ecklund, W. L.; Balsley, B. B.
1984-01-01
A modified Fresnel scatter model is presented and the revised model is compared with observations from the Poker Flat, Alaska, radar, the SOUSY radar and the Jicamarca radar. The modifications to the original model have been made to better account for the pulse width dependence and height dependence of backscattered power observed at vertical incidence at lower VHF. Vertical profiles of backscattered power calculated using the revised model and routine radiosonde data show good agreement with observed backscattered power profiles. Relative comparisons of backscattered power using climatological data for the model agree fairly well with observed backscattered power profiles from Poker Flat, Jicamarca, and SOUSY.
Scattering Removal for Finger-Vein Image Restoration
Yang, Jinfeng; Zhang, Ben; Shi, Yihua
2012-01-01
Finger-vein recognition has received increased attention recently. However, the finger-vein images are always captured in poor quality. This certainly makes finger-vein feature representation unreliable, and further impairs the accuracy of finger-vein recognition. In this paper, we first give an analysis of the intrinsic factors causing finger-vein image degradation, and then propose a simple but effective image restoration method based on scattering removal. To give a proper description of finger-vein image degradation, a biological optical model (BOM) specific to finger-vein imaging is proposed according to the principle of light propagation in biological tissues. Based on BOM, the light scattering component is sensibly estimated and properly removed for finger-vein image restoration. Finally, experimental results demonstrate that the proposed method is powerful in enhancing the finger-vein image contrast and in improving the finger-vein image matching accuracy. PMID:22737028
NASA Astrophysics Data System (ADS)
Pokhrel, Rudra P.; Wagner, Nick L.; Langridge, Justin M.; Lack, Daniel A.; Jayarathne, Thilina; Stone, Elizabeth A.; Stockwell, Chelsea E.; Yokelson, Robert J.; Murphy, Shane M.
2016-08-01
Single-scattering albedo (SSA) and absorption Ångström exponent (AAE) are two critical parameters in determining the impact of absorbing aerosol on the Earth's radiative balance. Aerosols emitted by biomass burning represent a significant fraction of absorbing aerosol globally, but it remains difficult to accurately predict SSA and AAE for biomass burning aerosol. Black carbon (BC), brown carbon (BrC), and non-absorbing coatings all make substantial contributions to the absorption coefficient of biomass burning aerosol. SSA and AAE cannot be directly predicted based on fuel type because they depend strongly on burn conditions. It has been suggested that SSA can be effectively parameterized via the modified combustion efficiency (MCE) of a biomass burning event and that this would be useful because emission factors for CO and CO2, from which MCE can be calculated, are available for a large number of fuels. Here we demonstrate, with data from the FLAME-4 experiment, that for a wide variety of globally relevant biomass fuels, over a range of combustion conditions, parameterizations of SSA and AAE based on the elemental carbon (EC) to organic carbon (OC) mass ratio are quantitatively superior to parameterizations based on MCE. We show that the EC / OC ratio and the ratio of EC / (EC + OC) both have significantly better correlations with SSA than MCE. Furthermore, the relationship of EC / (EC + OC) with SSA is linear. These improved parameterizations are significant because, similar to MCE, emission factors for EC (or black carbon) and OC are available for a wide range of biomass fuels. Fitting SSA with MCE yields correlation coefficients (Pearson's r) of ~0.65 at the visible wavelengths of 405, 532, and 660 nm, while fitting SSA with EC / OC or EC / (EC + OC) yields a Pearson's r of 0.94-0.97 at these same wavelengths. The strong correlation coefficient at 405 nm (r = 0.97) suggests that parameterizations based on EC / OC or EC / (EC + OC) have good predictive
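The style of analysis described above — a linear fit of SSA against EC/(EC+OC) scored with Pearson's r — can be sketched as follows; the per-burn values below are synthetic stand-ins, not FLAME-4 data, and the negative slope simply reflects that a larger EC fraction lowers the single-scattering albedo.

```python
import numpy as np
from scipy import stats

# Illustrative (synthetic) per-burn data: EC mass fraction vs. SSA at 405 nm.
ec_frac = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.70])   # EC / (EC + OC)
ssa_405 = 1.0 - 1.1 * ec_frac \
    + 0.001 * np.array([1.0, -1.0, 2.0, 0.0, -2.0, 1.0])    # small scatter

# Least-squares linear fit plus Pearson's r, as in the parameterization
# comparison the abstract reports.
slope, intercept, r, p, stderr = stats.linregress(ec_frac, ssa_405)
```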
NASA Astrophysics Data System (ADS)
Xu, Hai-Bo; Zheng, Na
2015-07-01
A version of Geant4 has been developed to treat high-energy proton radiography. This article presents the results of calculations simulating the effects of nuclear elastic scattering for various test step wedges. Comparisons with experimental data are also presented. The traditional expressions of the transmission should be correct if the angle distribution of the scattering is Gaussian multiple Coulomb scattering. The mean free path (which depends on the collimator angle) and the radiation length are treated as empirical parameters, according to transmission as a function of thickness obtained by simulations. The results can be used in density reconstruction, which depends on the transmission expressions. Supported by NSAF (11176001) and Science and Technology Developing Foundation of China Academy of Engineering Physics (2012A0202006)
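A commonly quoted form of the "traditional transmission expression" referred to above couples nuclear attenuation with the fraction of the Gaussian multiple-Coulomb-scattering cone accepted by the collimator; the Highland-style θ0 below and all numerical values are illustrative assumptions, not the article's fitted parameters (which it instead treats as empirical, tuned to the Geant4 simulations).

```python
import math

def mcs_theta0(thickness_cm, rad_length_cm, momentum_mev=12_000.0, beta=1.0):
    """Characteristic Gaussian multiple-Coulomb-scattering angle (Highland
    form without the logarithmic correction). The radiation length is the
    quantity the article treats as an empirical parameter."""
    return (14.1 / (momentum_mev * beta)) * math.sqrt(thickness_cm / rad_length_cm)

def transmission(thickness_cm, mfp_cm, theta_c, theta0):
    """Traditional transmission: nuclear attenuation exp(-L/lambda) times the
    fraction of a Gaussian scattering cone inside collimator angle theta_c."""
    nuclear = math.exp(-thickness_cm / mfp_cm)
    accepted = 1.0 - math.exp(-theta_c**2 / (2.0 * theta0**2))
    return nuclear * accepted
```

Density reconstruction then amounts to inverting this expression for thickness given a measured transmission, which is why the empirical λ(θ_c) and radiation length matter.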
Seed removal by scatter-hoarding rodents: the effects of tannin and nutrient concentration.
Wang, Bo; Yang, Xiaolan
2015-04-01
The mutualistic interaction between scatter-hoarding rodents and seed plants has a long co-evolutionary history. Plants are believed to have evolved traits that influence the foraging behavior of rodents, thus increasing the probability of seed removal and caching, which benefits the establishment of seedlings. Tannin and nutrient content in seeds are considered among the most essential factors in this plant-animal interaction. However, most previous studies used different species of plant seeds, making it difficult to tease apart the relative effect of each individual nutrient on rodent foraging behavior due to confounding combinations of nutrient contents across seed species. Hence, to further explore how tannin and different nutritional traits of seeds affect scatter-hoarding rodent foraging preferences, we manipulated tannin, fat, protein and starch content levels, as well as seed size, using an artificial seed system. Our results showed that both tannin and the various nutrients significantly affected rodent foraging preferences, which were also strongly affected by seed size. In general, rodents preferred to remove seeds with less tannin. Fat addition could counteract the negative effect of tannin on seed removal by rodents, while the effect of protein addition was weaker. Starch by itself had no effect, but it interacted with tannin in a complex way. Our findings shed light on the effects of tannin and nutrient content on seed removal by scatter-hoarding rodents. We therefore believe that these and perhaps other seed traits should interactively influence this important plant-rodent interaction. However, how selection operates on seed traits to counterbalance these competing interests/factors merits further study. PMID:25625425
NASA Astrophysics Data System (ADS)
Ryu, Y.; Kobayashi, H.; Welles, J.; Norman, J.
2011-12-01
Correct estimation of the gap fraction is essential for quantifying canopy architectural variables such as leaf area index and clumping index, which largely control land-atmosphere interactions. However, gap fraction measurements from optical sensors are contaminated by radiation scattered from the canopy and the ground surface. In this study, we propose a simple invertible bidirectional transmission model to remove scattering effects from gap fraction measurements. The model shows that 1) the scattering factor is highest where the leaf area index is 1-2 in a non-clumped canopy, 2) the relative scattering factor (scattering factor/measured gap fraction) increases with leaf area index, 3) a bright land surface (e.g., snow or bright soil) can contribute a significant scattering factor, and 4) the scattering factor is not marginal even under highly diffuse sky conditions. Applying the model to LAI-2200 data collected in an open savanna ecosystem, we find that the scattering factor causes significant underestimation of leaf area index (25%) and significant overestimation of clumping index (6%). The results highlight that some LAI-2000-based LAI estimates from around the world may be underestimated, particularly in highly clumped broad-leaf canopies. Fortunately, the importance of scattering can be assessed with software from LI-COR, Inc., which will incorporate the scattering model from this study in a post-processing mode applied after data have been collected by an LAI-2000 or LAI-2200.
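The underestimation mechanism described above can be illustrated with the standard Beer-Lambert inversion of gap fraction to LAI (a textbook form, not the paper's bidirectional model); the projection coefficient and gap-fraction values below are assumed for illustration.

```python
import math

def lai_from_gap_fraction(p_gap, g_proj=0.5, theta=0.0):
    """Beer-Lambert inversion LAI = -cos(theta) * ln(P) / G for a non-clumped
    canopy with projection coefficient G (standard form; illustrative only)."""
    return -math.cos(theta) * math.log(p_gap) / g_proj

# Scattered radiation adds to the apparent gap fraction, so the retrieved LAI
# is biased low relative to the scatter-free case.
true_gap, scatter_factor = 0.10, 0.03
lai_true = lai_from_gap_fraction(true_gap)
lai_biased = lai_from_gap_fraction(true_gap + scatter_factor)
```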
Radiation properties and emissivity parameterization of high level thin clouds
NASA Technical Reports Server (NTRS)
Wu, M.-L. C.
1984-01-01
To parameterize emissivity of clouds at 11 microns, a study has been made in an effort to understand the radiation field of thin clouds. The contributions to the intensity and flux from different sources and through different physical processes are calculated by using the method of successive orders of scattering. The effective emissivity of thin clouds is decomposed into the effective absorption emissivity, effective scattering emissivity, and effective reflection emissivity. The effective absorption emissivity depends on the absorption and emission of the cloud; it is parameterized in terms of optical thickness. The effective scattering emissivity depends on the scattering properties of the cloud; it is parameterized in terms of optical thickness and single scattering albedo. The effective reflection emissivity follows the similarity relation as in the near infrared cases. This is parameterized in terms of the similarity parameter and optical thickness, as well as the temperature difference between the cloud and ground.
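Two of the standard forms alluded to above can be written down directly: the 1 - exp(-τ/μ) shape for an absorption emissivity in terms of optical thickness, and the similarity parameter used in the near-infrared relation. Both are textbook expressions; any fitted coefficients in the report itself are not reproduced here.

```python
import math

def absorption_emissivity(tau, mu=1.0):
    """Effective absorption emissivity parameterized by optical thickness tau
    at view cosine mu, using the standard 1 - exp(-tau/mu) shape (an assumed
    functional form, not the report's fitted expression)."""
    return 1.0 - math.exp(-tau / mu)

def similarity_parameter(omega, g):
    """Similarity parameter s = sqrt((1 - omega) / (1 - g*omega)), with omega
    the single-scattering albedo and g the asymmetry factor; the quantity in
    which the effective reflection emissivity is parameterized."""
    return math.sqrt((1.0 - omega) / (1.0 - g * omega))
```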
NASA Astrophysics Data System (ADS)
Rana, R.; Jain, A.; Shankar, A.; Bednarek, D. R.; Rudin, S.
2016-03-01
In radiography, one of the best methods to eliminate image-degrading scatter radiation is the use of anti-scatter grids. However, with high-resolution dynamic imaging detectors, stationary anti-scatter grids can leave grid-line shadows and moiré patterns on the image, depending upon the line density of the grid and the sampling frequency of the x-ray detector. Such artifacts degrade the image quality and may mask small but important details such as small vessels and interventional device features. These artifacts become increasingly severe as the detector spatial resolution improves. We have previously demonstrated that, to remove these artifacts by dividing out a reference grid image, one must first subtract the residual scatter that penetrates the grid; however, for objects with anatomic structure, scatter varies throughout the FOV and a spatially varying amount of scatter must be subtracted. In this study, a standard stationary Smit-Rontgen x-ray grid (line density: 70 lines/cm; grid ratio: 13:1) was used with a high-resolution CMOS detector, the Dexela 1207 (pixel size: 75 micron), to image anthropomorphic head phantoms. For a 15 × 15 cm FOV, scatter profiles of the anthropomorphic head phantoms were estimated and then iteratively modified to minimize the structured noise due to the varying grid-line artifacts across the FOV. Images of the anthropomorphic head phantoms taken with the grid, before and after the corrections, were compared, demonstrating almost total elimination of the artifact over the full FOV. Hence, with proper computational tools, anti-scatter grid artifacts can be corrected, even during dynamic sequences.
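The correction pipeline described above — subtract the scatter penetrating the grid, then divide out a reference grid image — can be sketched on a 1-D intensity profile; the grid pattern, signal levels, and scatter values below are synthetic stand-ins, not measured data.

```python
import numpy as np

# Synthetic 1-D profiles. The cosine term mimics grid-line shadow modulation.
grid_pattern = 0.6 + 0.3 * np.cos(np.linspace(0.0, 40.0 * np.pi, 512))
primary = np.full(512, 800.0)        # hypothetical primary (direct) signal
scatter = np.full(512, 120.0)        # residual scatter penetrating the grid

# Object image: grid-modulated primary plus scatter. Reference image: a
# grid-only flat-field exposure with its own (here flat) scatter level.
object_image = primary * grid_pattern + scatter
ref_scatter = 150.0
reference_grid = 1000.0 * grid_pattern + ref_scatter

# Subtract each image's scatter estimate first, then divide out the grid.
corrected = (object_image - scatter) / (reference_grid - ref_scatter)
```

With the scatter removed before the division, the grid-line modulation cancels exactly; dividing without the subtraction would leave residual grid-frequency structure, which is the artifact the study iteratively minimizes.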
Andresen, Kurt; Jimenez-Useche, Isabel; Howell, Steven C; Yuan, Chongli; Qiu, Xiangyun
2013-01-01
Using a combination of small-angle X-ray scattering (SAXS) and fluorescence resonance energy transfer (FRET) measurements we have determined the role of the H3 and H4 histone tails, independently, in stabilizing the nucleosome DNA terminal ends from unwrapping from the nucleosome core. We have performed solution scattering experiments on recombinant wild-type, H3 and H4 tail-removed mutants and fit all scattering data with predictions from PDB models and compared these experiments to complementary DNA-end FRET experiments. Based on these combined SAXS and FRET studies, we find that while all nucleosomes exhibited DNA unwrapping, the extent of this unwrapping is increased for nucleosomes with the H3 tails removed but, surprisingly, decreased in nucleosomes with the H4 tails removed. Studies of salt concentration effects show a minimum amount of DNA unwrapping for all complexes around 50-100mM of monovalent ions. These data exhibit opposite roles for the positively-charged nucleosome tails, with the ability to decrease access (in the case of the H3 histone) or increase access (in the case of the H4 histone) to the DNA surrounding the nucleosome. In the range of salt concentrations studied (0-200mM KCl), the data point to the H4 tail-removed mutant at physiological (50-100mM) monovalent salt concentration as the mononucleosome with the least amount of DNA unwrapping. PMID:24265699
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection, stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts
The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)
2000-01-01
The microphysical parameterization of clouds and rain cells plays a central role in atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the rain drop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 0.55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that in general all but two parameterizations produce calculated T_B's that fall within the observed clear-air minima and maxima. The exceptions are for parameterizations that enhance the scattering characteristics of frozen hydrometeors.
Yoon, Y; Park, M; Kim, H; Kim, K; Kim, J; Morishita, J
2015-06-15
Purpose: This study aims to identify the feasibility of a novel cesium-iodine (CsI)-based flat-panel detector (FPD) for removing scatter radiation in diagnostic radiology. Methods: The indirect FPD comprises three layers: a substrate layer, a scintillation layer, and a thin-film-transistor (TFT) layer. The TFT layer has a matrix structure with pixels. There are ineffective dimensions on the TFT layer, such as the voltage and data lines; therefore, we devised a new FPD system having net-like lead in the substrate layer, matching the ineffective area, to block the scatter radiation so that only primary X-rays could reach the effective dimension. To evaluate the performance of this new FPD system, we conducted a Monte Carlo simulation using MCNPX 2.6.0 software. Scatter fractions (SFs) were acquired using no grid, a parallel grid (8:1 grid ratio), and the new system, and the performances were compared. Two systems having different thicknesses of lead in the substrate layer, 10 and 20 μm, were simulated. Additionally, we examined the effects of different pixel sizes (153 × 153 and 163 × 163 μm) on the image quality, while keeping the effective area of the pixels constant (143 × 143 μm). Results: In the case of 10-μm lead, the SFs of the new system (∼11%) were lower than those of the other systems (∼27% with no grid, ∼16% with parallel grid) at 40 kV. However, as the tube voltage increased, the SF of the new system (∼19%) became higher than that of the parallel grid (∼18%) at 120 kV. In the case of 20-μm lead, the SFs of the new system were lower than those of the other systems over the full range of tube voltage (40-120 kV). Conclusion: The novel CsI-based FPD system for removing scatter radiation is feasible for improving the image contrast but must be optimized with respect to the lead thickness, considering the system's purposes and the ranges of the tube voltage in diagnostic radiology. This study was supported by a grant (K1422651) from the Institute of Health Science, Korea University.
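The scatter fraction used to compare the three configurations is, under its usual definition, the ratio of scattered to total detected counts. A minimal sketch, with illustrative tallies rather than actual MCNPX output:

```python
def scatter_fraction(primary_counts, scatter_counts):
    """Scatter fraction SF = S / (S + P), as commonly tallied from
    Monte Carlo simulations that track whether each detected photon
    has scattered. The counts below are illustrative, not simulation
    results from the study.
    """
    return scatter_counts / (scatter_counts + primary_counts)

# E.g. 27 scattered photons per 73 primaries gives SF = 0.27 (~27%).
assert abs(scatter_fraction(73, 27) - 0.27) < 1e-12
```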
[Characteristics and Parameterization for Atmospheric Extinction Coefficient in Beijing].
Chen, Yi-na; Zhao, Pu-sheng; He, Di; Dong, Fan; Zhao, Xiu-juan; Zhang, Xiao-ling
2015-10-01
In order to study the characteristics of the atmospheric extinction coefficient in Beijing, systematic measurements of atmospheric visibility, PM2.5 concentration, scattering coefficient, black carbon, reactive gases, and meteorological parameters were carried out from 2013 to 2014. Based on these data, we compared several published fitting schemes for the aerosol light-scattering enhancement factor [f(RH)] and discussed the characteristics and key influencing factors of the atmospheric extinction coefficient. A set of parameterization models of the atmospheric extinction coefficient for different seasons and different pollution levels was then established. The results showed that aerosol scattering accounted for more than 94% of total light extinction. In the summer and autumn, aerosol hygroscopic growth caused by high relative humidity increased the aerosol scattering coefficient by 70 to 80 percent. The parameterization models could reflect the influencing mechanism of aerosol and relative humidity upon ambient light extinction and describe the seasonal variations of aerosol light extinction ability. PMID:26841588
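A hedged sketch of how such a parameterization model can combine dry scattering, absorption, and a fitted f(RH). The power-law form is one of several published fitting schemes, and the coefficients below are placeholders, not the values fitted for Beijing:

```python
def ambient_extinction(b_scat_dry, b_abs, rh, a=0.6, b=2.5):
    """Ambient extinction coefficient from dry aerosol scattering,
    aerosol absorption, and a hygroscopic scattering enhancement
    factor f(RH).

    The one-parameter power-law form f(RH) = 1 + a * (RH/100)**b is a
    common fitting choice; a and b here are assumed placeholders.
    Inputs and output share the same units (e.g. Mm^-1).
    """
    f_rh = 1.0 + a * (rh / 100.0) ** b
    return b_scat_dry * f_rh + b_abs

# Hygroscopic growth raises total extinction at high relative humidity.
assert ambient_extinction(300.0, 20.0, rh=80.0) > ambient_extinction(300.0, 20.0, rh=30.0)
```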
Parameterization of sub-grid scale convection
NASA Technical Reports Server (NTRS)
Frank, William; Molinari, John; Kain, Jack; Moncrieff, Mitch; Karyampudi, Mohan; Grell, Georg
1993-01-01
The following topics are discussed: an overview of the cumulus parameterization problem; interactions between explicit and implicit processes in mesoscale models; effects of model grid size on the cumulus parameterization problem; parameterizing convective effects on momentum fields in mesoscale models; differences between slantwise and vertical cumulus parameterization; experiments with different closure hypotheses; and coupling cumulus parameterizations to boundary layer, stable cloud, and radiation schemes.
Parameterization of solar cells
NASA Astrophysics Data System (ADS)
Appelbaum, J.; Chait, A.; Thompson, D.
1992-10-01
The aggregation (sorting) of the individual solar cells into an array is commonly based on a single operating point on the current-voltage (I-V) characteristic curve. An alternative approach for cell performance prediction and cell screening is provided by modeling the cell using an equivalent electrical circuit, in which the parameters involved are related to the physical phenomena in the device. These analytical models may be represented by a double exponential I-V characteristic with seven parameters, by a double exponential model with five parameters, or by a single exponential equation with four or five parameters. In this article we address issues concerning methodologies for the determination of solar cell parameters based on measured data points of the I-V characteristic, and introduce a procedure for screening of solar cells for arrays. We show that common curve fitting techniques, e.g., least squares, may produce many combinations of parameter values while maintaining a good fit between the fitted and measured I-V characteristics of the cell. Therefore, techniques relying on curve fitting criteria alone cannot be directly used for cell parameterization. We propose a consistent procedure which takes into account the entire set of parameter values for a batch of cells. This procedure is based on a definition of a mean cell representing the batch, and takes into account the relative contribution of each parameter to the overall goodness of fit. The procedure is demonstrated on a batch of 50 silicon cells for Space Station Freedom.
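A minimal sketch of the four-parameter single-exponential model mentioned above, with series resistance neglected so the equation becomes explicit; the parameter values are illustrative, not measured cell data:

```python
import numpy as np

def iv_current(v, i_ph, i_0, n, vt=0.0257):
    """Single-exponential (four-parameter) solar-cell model with series
    resistance neglected: I = I_ph - I_0 * (exp(V / (n * Vt)) - 1),
    where I_ph is the photocurrent, I_0 the diode saturation current,
    n the ideality factor, and Vt the thermal voltage (~25.7 mV at 25 C).
    """
    return i_ph - i_0 * (np.exp(v / (n * vt)) - 1.0)

v = np.linspace(0.0, 0.5, 200)
i = iv_current(v, i_ph=1.0, i_0=1e-9, n=1.0)
assert abs(i[0] - 1.0) < 1e-12   # short-circuit current equals I_ph
assert np.all(np.diff(i) < 0)    # current falls monotonically with voltage
```

Because quite different (I_0, n) pairs can reproduce a measured curve almost equally well, fitting this model by least squares alone does not pin down physically meaningful parameters, which is the motivation for the batch-consistent procedure described in the abstract.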
A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals
NASA Technical Reports Server (NTRS)
VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.
2014-01-01
A parameterization is presented that provides the extinction cross section σe, single-scattering albedo ω, and asymmetry parameter g of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave. Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion levels. In the standard geometric optics implementation used here, σe is always twice the particle projected area. It is shown that ω is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of g with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and ω are investigated, and factors are determined to scale the parameterized g to provide values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 micron, revealing absolute differences with reference calculations of ω and g that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences with reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.
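Two ingredients of the scheme can be sketched directly. The first relation (extinction cross section equals twice the projected area) is stated in the abstract; the exact form of the absorption size parameter below is an assumption for illustration, not the paper's definition:

```python
import math

def extinction_cross_section(projected_area):
    """In the standard geometric-optics limit used by the scheme above,
    the extinction cross section is twice the orientation-averaged
    projected area of the crystal (extinction efficiency = 2)."""
    return 2.0 * projected_area

def absorption_size_parameter(volume, projected_area, m_im, wavelength):
    """Illustrative absorption size parameter combining the particle
    volume-to-projected-area ratio with the imaginary refractive index
    m_im; this exact form is an assumption, not the paper's definition."""
    return 4.0 * math.pi * m_im * volume / (wavelength * projected_area)

assert extinction_cross_section(50.0) == 100.0
```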
Assessment of Mixed Layer Mesoscale Parameterization in Eddy Resolving Simulations.
NASA Astrophysics Data System (ADS)
Clayson, C. A.; Luneva, M. V.; Dubovikov, M. S.
2014-12-01
In eddy-resolving simulations we test a mixed layer mesoscale parameterization developed recently by Canuto and Dubovikov (2011). The parameterization yields the horizontal and vertical mesoscale fluxes in terms of coarse-resolution fields and eddy kinetic energy. An expression for the latter in terms of mean fields has also been found, yielding a closed parameterization in terms of the mean fields only. In 40 numerical experiments we simulated two types of flows: idealized flows driven by baroclinic instabilities only, and more realistic flows driven by wind and surface fluxes as well as by inflow-outflow in shallow and narrow straits. The diagnosed quasi-instantaneous horizontal and vertical mesoscale buoyancy fluxes (averaged over 1°-2° and 10 days) demonstrate the strong scatter typical of turbulent flows; however, the fluxes are highly correlated with the parameterization. After averaging over 3-4 months, diffusivities diagnosed from the eddy-resolving simulations are quite consistent with the parameterization for a broad range of parameters. Diagnosed vertical mesoscale fluxes restratify the mixed layer and are in good agreement with the parameterization unless vertical turbulent mixing in the upper layer becomes strong enough to be comparable with mesoscale advection. In the latter case, numerical simulations demonstrate that the deviation of the fluxes from the parameterization is controlled by the dimensionless parameter γ, estimating the ratio of the vertical diffusion term to mesoscale advection. An empirical dependence of the vertical flux on γ is found. An analysis using a modified omega-equation reveals that the effect of the vertical mixing of vorticity is responsible for the two- to threefold amplification of the vertical mesoscale flux. Possible physical mechanisms responsible for this amplification are discussed.
Summary of Cumulus Parameterization Workshop
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Starr, David OC.; Hou, Arthur; Newman, Paul; Sud, Yogesh
2002-01-01
A workshop on cumulus parameterization took place at the NASA Goddard Space Flight Center from December 3-5, 2001. The major objectives of this workshop were (1) to review the problem of representation of moist processes in large-scale models (mesoscale models, Numerical Weather Prediction models and Atmospheric General Circulation Models), (2) to review the state-of-the-art in cumulus parameterization schemes, and (3) to discuss the need for future research and applications. There were a total of 31 presentations and about 100 participants from the United States, Japan, the United Kingdom, France and South Korea. The specific presentations and discussions during the workshop are summarized in this paper.
ARM Data for Cloud Parameterization
Xu, Kuan-Man
2006-10-02
The PI's ARM investigation (DE-IA02-02ER63318) developed a physically based subgrid-scale saturation representation that fully considers the direct interactions of the parameterized subgrid-scale motions with subgrid-scale cloud microphysical and radiative processes. Major accomplishments under the support of that interagency agreement are summarized in this paper.
NASA Astrophysics Data System (ADS)
Su, Jing-Wei; Hsu, Wei-Chen; Tjiu, Jeng-Wei; Chiang, Chun-Pin; Huang, Chao-Wei; Sung, Kung-Bin
2014-07-01
The scattering properties and refractive indices (RI) of tissue are important parameters in tissue optics. These parameters can be determined from quantitative phase images of thin slices of tissue blocks. However, the changes in RI and structure of cells due to fixation and paraffin embedding might result in inaccuracies in the estimation of the scattering properties of tissue. In this study, three-dimensional RI distributions of cells were measured using digital holographic microtomography to obtain total scattering cross sections (TSCS) of the cells based on the first-order Born approximation. We investigated the slight loss of dry mass and drastic shrinkage of cells due to paraformaldehyde fixation and paraffin embedding removal processes. We propose a method to compensate for the correlated changes in volume and RI of cells. The results demonstrate that the TSCS of live cells can be estimated using restored cells. The percentage deviation of the TSCS between restored cells and live cells was only -8%. Spatially resolved RI and scattering coefficients of unprocessed oral epithelium ranged from 1.35 to 1.39 and from 100 to 450 cm-1, respectively, estimated from paraffin-embedded oral epithelial tissue after restoration of RI and volume.
Parameterized Beyond-Einstein Growth
Linder, Eric; Linder, Eric V.; Cahn, Robert N.
2007-09-17
A single parameter, the gravitational growth index gamma, succeeds in characterizing the growth of density perturbations in the linear regime separately from the effects of the cosmic expansion. The parameter is restricted to a very narrow range for models of dark energy obeying the laws of general relativity but can take on distinctly different values in models of beyond-Einstein gravity. Motivated by the parameterized post-Newtonian (PPN) formalism for testing gravity, we analytically derive and extend the gravitational growth index, or Minimal Modified Gravity, approach to parameterizing beyond-Einstein cosmology. The analytic formalism demonstrates how to apply the growth index parameter to early dark energy, time-varying gravity, DGP braneworld gravity, and some scalar-tensor gravity.
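The growth-index idea reduces to a one-line formula: the linear growth rate f = dln(delta)/dln(a) is approximated as Omega_m(a)**gamma, with gamma close to 0.55 for general relativity with a cosmological constant. A minimal sketch:

```python
def growth_rate(omega_m, gamma=0.55):
    """Linear growth rate f = dln(delta)/dln(a), approximated as
    Omega_m(a)**gamma. gamma ~ 0.55 characterizes general relativity
    with dark energy; beyond-Einstein models predict distinctly
    different values (e.g. ~0.68 for DGP braneworld gravity)."""
    return omega_m ** gamma

# Einstein-de Sitter limit: Omega_m = 1 gives f = 1 for any gamma.
assert growth_rate(1.0) == 1.0
# At fixed Omega_m < 1, a larger gamma suppresses growth more strongly.
assert growth_rate(0.3, gamma=0.68) < growth_rate(0.3, gamma=0.55)
```

This separation is what lets growth data test gravity independently of the expansion history: gamma is measured alongside, not instead of, the equation-of-state parameters.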
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization, and use of the Karhunen-Loève transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
Recursive Abstractions for Parameterized Systems
NASA Astrophysics Data System (ADS)
Jaffar, Joxan; Santosa, Andrew E.
We consider a language of recursively defined formulas about arrays of variables, suitable for specifying safety properties of parameterized systems. We then present an abstract interpretation framework which translates a parameterized system into a symbolic transition system which propagates such formulas as abstractions of the underlying concrete states. The main contribution is a proof method for implications between the formulas, which then provides for an implementation of this abstract interpreter.
Fu, Q.; Sun, W.B.; Yang, P.
1998-09-01
An accurate parameterization is presented for the infrared radiative properties of cirrus clouds. For the single-scattering calculations, a composite scheme is developed for randomly oriented hexagonal ice crystals by comparing results from Mie theory, anomalous diffraction theory (ADT), the geometric optics method (GOM), and the finite-difference time domain technique. This scheme employs a linear combination of single-scattering properties from the Mie theory, ADT, and GOM, which is accurate for a wide range of size parameters. Following the approach of Q. Fu, the extinction coefficient, absorption coefficient, and asymmetry factor are parameterized as functions of the cloud ice water content and generalized effective size (D_ge). The present parameterization of the single-scattering properties of cirrus clouds is validated by examining the bulk radiative properties for a wide range of atmospheric conditions. Compared with reference results, the typical relative error in emissivity due to the parameterization is approximately 2.2%. The accuracy of this parameterization guarantees its reliability in applications to climate models. The present parameterization complements the scheme for the solar radiative properties of cirrus clouds developed by Q. Fu for use in numerical models.
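The functional form described (band-mean properties as functions of ice water content and generalized effective size) can be sketched as follows; the coefficients are placeholders for illustration, not the fitted band values of the scheme:

```python
def extinction_coefficient(iwc, d_ge, a0=-2.9e-4, a1=2.52):
    """Band-mean cirrus extinction coefficient following the Fu-type
    functional form beta = IWC * (a0 + a1 / D_ge), with ice water
    content IWC and generalized effective size D_ge. The coefficients
    a0 and a1 are assumed placeholders, not the scheme's fitted values,
    so the output units are only schematic.
    """
    return iwc * (a0 + a1 / d_ge)

# Smaller crystals extinguish more radiation per unit ice mass.
assert extinction_coefficient(0.01, 30.0) > extinction_coefficient(0.01, 100.0)
```

The absorption coefficient and asymmetry factor are parameterized analogously, typically with higher-order polynomial terms in 1/D_ge.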
A Thermal Infrared Radiation Parameterization for Atmospheric Studies
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)
2001-01-01
This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes the absorption due to the major gaseous absorbers (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches of computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed either using the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of the high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
NASA Technical Reports Server (NTRS)
Hong, Byungsik; Maung, Khin Maung; Wilson, John W.; Buck, Warren W.
1989-01-01
The derivations of the Lippmann-Schwinger equation and the Watson multiple-scattering series are given. A simple optical potential is found to be the first term of that series. The harmonic-well and Woods-Saxon number density distribution models of the nucleus are used, without a t-matrix taken from the scattering experiments. The parameterized two-body inputs, namely the kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to the imaginary part of the forward elastic scattering amplitude, are presented. The eikonal approximation was chosen as the solution method to estimate the total and absorptive cross sections for kaon-nucleus scattering.
Quantum Consequences of Parameterizing Geometry
NASA Astrophysics Data System (ADS)
Wanas, M. I.
2002-12-01
The marriage between geometrization and quantization has not been successful so far. It is well known that quantization of gravity, using known quantization schemes, is not satisfactory. It may therefore be of interest to look for another approach to this problem. Recently, it has been shown that geometries with torsion admit quantum paths. Such geometries should be parameterized in order to preserve the quantum properties that appear in the paths. The present work explores the consequences of parameterizing such a geometry. It is shown that the quantum properties appearing in the path equations are transferred to other geometric entities.
Infrared radiation parameterizations in numerical climate models
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Kratz, David P.; Ridgway, William
1991-01-01
This study presents various approaches to parameterizing the broadband transmission functions for use in numerical climate models. One-parameter scaling is applied to approximate a nonhomogeneous path with an equivalent homogeneous path, and the diffuse transmittances are either interpolated from precomputed tables or fit by analytical functions. Two-parameter scaling is applied to parameterizing the carbon dioxide and ozone transmission functions in both the lower and middle atmosphere. Parameterizations are also given for the nitrous oxide and methane diffuse transmission functions.
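One-parameter scaling replaces a nonhomogeneous absorber path by an equivalent homogeneous path at a reference pressure. A hedged sketch of the usual pressure-scaling form; the reference pressure and exponent below are illustrative, since each gas and spectral band has its own fitted values:

```python
def scaled_absorber_amount(layer_amounts, pressures, p_ref=500.0, m=0.8):
    """One-parameter pressure scaling of a nonhomogeneous absorber path:
    w_scaled = sum_k (p_k / p_ref)**m * dw_k, where dw_k is the absorber
    amount in layer k at pressure p_k (hPa). The broadband transmittance
    of the real path is then approximated by that of a homogeneous path
    with amount w_scaled at p_ref. p_ref and m are assumed placeholders.
    """
    return sum((p / p_ref) ** m * dw for dw, p in zip(layer_amounts, pressures))

# Layers at higher pressure contribute more strongly to the scaled path,
# reflecting pressure broadening of the absorption lines.
assert scaled_absorber_amount([1.0], [1000.0]) > scaled_absorber_amount([1.0], [250.0])
```

The scaled amount is then fed to a precomputed transmittance table or analytical fit, as the abstract describes; two-parameter scaling adds a temperature (or second pressure) dependence in the same spirit.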
Independent component analysis of parameterized ECG signals.
Tanskanen, Jarno M A; Viik, Jari J; Hyttinen, Jari A K
2006-01-01
Independent component analysis (ICA) of measured signals yields the independent sources, given that certain requirements are fulfilled. Properly parameterized signals provide a better view of the considered system aspects while reducing the amount of data. It is little acknowledged that appropriately parameterized signals may be subjected to ICA, yielding independent components (ICs) that display more clearly the investigated properties of the sources. In this paper, we propose ICA of parameterized signals and demonstrate the concept with ICA of ST and R parameterizations of electrocardiogram (ECG) signals from ECG exercise test measurements from two coronary artery disease (CAD) patients. PMID:17945912
Parameterization of precipitating shallow convection
NASA Astrophysics Data System (ADS)
Seifert, Axel
2015-04-01
Shallow convective clouds play a decisive role in many regimes of the atmosphere. They are abundant in the trade-wind regions and essential for the radiation budget in the subtropics. They are also an integral part of the diurnal cycle of convection over land, leading to the formation of deeper modes of convection later on. Errors in the representation of these small and seemingly unimportant clouds can lead to misforecasts in many situations. Especially for high-resolution NWP models at 1-3 km grid spacing, which explicitly simulate deeper modes of convection, the parameterization of sub-grid shallow convection is an important issue. Large-eddy simulations (LES) can provide the data to study shallow convective clouds and their interaction with the boundary layer in great detail. In contrast to observations, simulations provide a complete and consistent dataset, which may not be perfectly realistic due to the necessary simplifications, but nevertheless enables us to study many aspects of these clouds in a self-consistent way. Today's supercomputing capabilities make it possible to use domain sizes that not only span several NWP grid boxes, but also allow for mesoscale self-organization of the cloud field, which is an essential behavior of precipitating shallow convection. By coarse-graining the LES data to the grid of an NWP model, the sub-grid fluctuations caused by shallow convective clouds can be analyzed explicitly. These fluctuations can then be parameterized in terms of a PDF-based closure. The necessary choices for such schemes, such as the shape of the PDF and the number of predicted moments, will be discussed. For example, it is shown that a universal three-parameter distribution of total water may exist at scales of O(1 km) but not at O(10 km). In a next step, the variance budgets of moisture and temperature in the cloud-topped boundary layer are studied. What is the role and magnitude of the microphysical correlation terms in these equations, which
A Solar Radiation Parameterization for Atmospheric Studies. Volume 15
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J. (Editor)
1999-01-01
The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes absorption by water vapor, O3, O2, CO2, clouds, and aerosols, and scattering by clouds, aerosols, and gases. Depending upon the nature of the absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.
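The Delta-Eddington step at the end of the scheme rescales the optical properties to fold the strong forward-scattering peak into the direct beam. A minimal sketch of the standard scaling, assuming the usual forward-peak fraction f = g^2 (illustrative only, not code from CLIRAD-SW):

```python
def delta_eddington(tau, omega, g):
    """Delta-Eddington scaling of optical depth tau, single-scattering
    albedo omega, and asymmetry factor g, with forward-peak fraction
    f = g**2 (Joseph-Wiscombe-Weinman form)."""
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s
```

With g = 0 (isotropic scattering) the scaling reduces to the identity, which is a quick sanity check on any implementation.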
New Approaches to Parameterizing Convection
NASA Technical Reports Server (NTRS)
Randall, David A.; Lappen, Cara-Lyn
1999-01-01
Many general circulation models (GCMs) currently use separate schemes for planetary boundary layer (PBL) processes, shallow and deep cumulus (Cu) convection, and stratiform clouds. The conventional distinctions among these processes are somewhat arbitrary. For example, in the stratocumulus-to-cumulus transition region, stratocumulus clouds break up into a combination of shallow cumulus and broken stratocumulus. Shallow cumulus clouds may be considered to reside completely within the PBL, or they may be regarded as starting in the PBL but terminating above it. Deeper cumulus clouds often originate within the PBL but can also originate aloft. To the extent that our models separately parameterize physical processes which interact strongly on small space and time scales, the currently fashionable practice of modularization may be doing more harm than good.
Automated Classification and Stellar Parameterization
NASA Astrophysics Data System (ADS)
Giridhar, S.; Muneer, S.; Goswami, A.
2006-08-01
Different approaches for automated spectral classification are critically reviewed. We also summarize ANN-based methods, which would be very efficient in quickly handling the large volumes of data generated by different surveys. We have obtained medium-resolution spectra for a large sample of stars using the 2.3 m telescope at VBO, Kavalur, India. Our sample contains a uniform distribution of stars in the temperature range 4000 to 8000 K, the log g range of 2.0 to 5.0, and the [Fe/H] range of 0 to -3. We have explored the application of an artificial neural network for the parameterization of these stars. We have used a set of stars with well-determined atmospheric parameters for training the networks for temperature, gravity, and metallicity estimations. We use these trained networks to estimate metallicities for a sample of metal-poor candidate stars.
Parameterization of solar flare dose
Lamarche, A.H.; Poston, J.W.
1996-12-31
A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP).
Visibility Parameterization For Forecasting Model Applications
NASA Astrophysics Data System (ADS)
Gultepe, I.; Milbrandt, J.; Binbin, Z.
2010-07-01
In this study, the visibility parameterizations developed during the Fog Remote Sensing And Modeling (FRAM) projects, conducted in central and eastern Canada, will be summarized and their use for forecasting/nowcasting applications will be discussed. Parameterizations developed during FRAM for reductions in visibility due to 1) fog, 2) rain, 3) snow, and 4) relative humidity (RH) will be given, and uncertainties in the parameterizations will be discussed. Comparisons between the Canadian GEM NWP model (with 1 and 2.5 km horizontal grid spacing) and observations collected during the Science of Nowcasting Winter Weather for Vancouver 2010 (SNOW-V10) project and the FRAM projects, using the new parameterizations, will be given. Observations used in this study were obtained using a fog measuring device (FMD) for the fog parameterization; a Vaisala all-weather precipitation sensor (FD12P) for the rain and snow parameterizations and visibility measurements; and a total precipitation sensor (TPS) and disdrometers (OTT ParSiVel and Laser Precipitation Measurement, LPM) for rain/snow particle spectra. The results from the three SNOW-V10 sites suggested that visibility values given by the GEM model using the new parameterizations were comparable with observed visibility values when model-based input parameters such as liquid water content, RH, and precipitation rate for the visibility parameterizations were predicted accurately.
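As background, most visibility parameterizations start from Koschmieder's relation between visibility and the extinction coefficient. A hedged sketch (the 2% contrast threshold is the conventional choice; the fitted FRAM relations themselves are not reproduced here):

```python
import math

def visibility_koschmieder(beta_ext, contrast_threshold=0.02):
    """Meteorological visibility (m) from the extinction coefficient
    beta_ext (1/m) via Koschmieder's law: V = -ln(eps) / beta_ext.
    With the conventional eps = 0.02 this gives V ~ 3.912 / beta_ext."""
    return math.log(1.0 / contrast_threshold) / beta_ext
```

For example, an extinction coefficient of 3.912e-3 per meter corresponds to roughly 1 km visibility.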
An approach for parameterizing mesoscale precipitating systems
Weissbluth, M.J.; Cotton, W.R.
1991-12-31
A cumulus parameterization laboratory has been described which uses a reference numerical model to fabricate, calibrate and verify a cumulus parameterization scheme suitable for use in mesoscale models. Key features of this scheme include resolution independence and the ability to provide hydrometeor source functions to the host model. Thus far, only convective scale drafts have been parameterized, limiting the use of the scheme to those models which can resolve the mesoscale circulations. As it stands, the scheme could probably be incorporated into models having a grid resolution greater than 50 km with results comparable to the existing schemes for the large-scale models. We propose, however, to quantify the mesoscale circulations through the use of the cumulus parameterization laboratory. The inclusion of these mesoscale drafts in the existing scheme will hopefully allow the correct parameterization of the organized mesoscale precipitating systems.
Parameterized Linear Longitudinal Airship Model
NASA Technical Reports Server (NTRS)
Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph
2010-01-01
A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics
Automated classification and stellar parameterization
NASA Astrophysics Data System (ADS)
Giridhar, Sunetra; Muneer, S.; Goswami, Aruna
Different approaches for automated spectral classification are critically reviewed. We describe in detail ANN-based methods, which are very efficient in quickly handling the large volumes of data generated by different surveys. We summarize the application of ANNs in various surveys covering the UV, visual, and IR spectral regions and the accuracies obtained. We also present the preliminary results obtained with medium-resolution spectra (R ˜ 1000) for a modest sample of stars using the 2.3 m Vainu Bappu Telescope at Kavalur observatory, India. Our sample contains a uniform distribution of stars in the temperature range 4500 to 8000 K, the log g range of 1.5 to 5.0, and the [Fe/H] range of 0 to -3. We have explored the application of an artificial neural network for the parameterization of these stars. We have used a set of stars with well-determined atmospheric parameters for training the networks for temperature, gravity, and metallicity estimations. We could obtain an accuracy of 200 K in temperature, 0.4 in log g, and 0.3 dex in [Fe/H] in our preliminary efforts.
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the effective particle size of a mixture of ice habits, the ice water amount, and the spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits that the parameterization is based on. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of individual ice habits.
A Two-Habit Ice Cloud Optical Property Parameterization for GCM Application
NASA Technical Reports Server (NTRS)
Yi, Bingqi; Yang, Ping; Minnis, Patrick; Loeb, Norman; Kato, Seiji
2014-01-01
We present a novel ice cloud optical property parameterization based on a two-habit ice cloud model that has been shown to be optimal for remote sensing applications. The two-habit ice model is developed with state-of-the-art numerical methods for light-scattering property calculations involving individual columns and column aggregates, with the habit fractions constrained by in-situ measurements from various field campaigns. Band-averaged bulk ice cloud optical properties, including the single-scattering albedo, the mass extinction/absorption coefficients, and the asymmetry factor, are parameterized as functions of the effective particle diameter for the spectral bands involved in broadband radiative transfer models. Compared with other parameterization schemes, the two-habit scheme generally has lower asymmetry factor values (around 0.75 at visible wavelengths). The two-habit parameterization scheme was extensively tested with broadband radiative transfer models (e.g., the Rapid Radiative Transfer Model, GCM version) and general circulation models (GCMs; e.g., the Community Atmosphere Model, version 5). Global ice cloud radiative effects at the top of the atmosphere are also analyzed from the GCM simulation using the two-habit parameterization scheme in comparison with CERES satellite observations.
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean occurs on scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In the oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on the single-scattering optical properties that are pre-computed using an improved geometric optics method, the bulk mass absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the mean effective particle size of a mixture of ice habits. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. Compared to the parameterization for a single habit of hexagonal columns, the solar heating of clouds computed with the parameterization for a mixture of habits is smaller due to a smaller single-scattering co-albedo, whereas the net downward fluxes at the TOA and surface are larger due to a larger asymmetry factor. The maximum difference in the cloud heating rate is approx. 0.2 C per day, which occurs in clouds with an optical thickness greater than 3 and a solar zenith angle less than 45 degrees. The flux difference is less than 10 W per square meter for optical thicknesses ranging from 0.6 to 10 and the entire range of solar zenith angles. The maximum flux difference is approximately 3%, which occurs around an optical thickness of 1 and at high solar zenith angles.
Shortwave radiation parameterization scheme for subgrid topography
NASA Astrophysics Data System (ADS)
Helbig, N.; Löwe, H.
2012-02-01
Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered. The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.
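The limited-sky-view and terrain-reflection terms described above can be sketched to first order as follows; this is a generic first-order form with a single isotropic terrain reflection, not the paper's fitted parameterization:

```python
def terrain_diffuse(sw_diffuse_flat, sw_global_flat, sky_view, albedo):
    """First-order diffuse shortwave (W/m^2) on a slope: sky diffuse
    reduced by the sky view factor, plus a single isotropic reflection
    from the surrounding terrain filling the non-sky fraction.
    (Generic form; coefficients and anisotropy corrections omitted.)"""
    sky_part = sky_view * sw_diffuse_flat
    terrain_part = (1.0 - sky_view) * albedo * sw_global_flat
    return sky_part + terrain_part
```

With a sky view factor of 1 (flat, unobstructed terrain) the expression collapses to the flat-field diffuse flux, which matches the paper's validation limit.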
Parameterization of the three-dimensional room transfer function in horizontal plane.
Bu, Bing; Abhayapala, Thushara D; Bao, Chang-chun; Zhang, Wen
2015-09-01
This letter proposes an efficient parameterization of the three-dimensional room transfer function (RTF) which is robust to position variations of the source and receiver in their respective horizontal planes. Based on azimuth harmonic analysis, the proposed method exploits the underlying properties of the associated Legendre functions to remove the portion of the spherical harmonic coefficients of the RTF which has no contribution in the horizontal plane. This reduction leads to a flexible measuring-point structure consisting of practical concentric circular arrays to extract horizontal-plane RTF coefficients. The accuracy of the above parameterization is verified through numerical simulations. PMID:26428827
POET: Parameterized Optimization for Empirical Tuning
Yi, Q; Seymour, K; You, H; Vuduc, R; Quinlan, D
2007-01-29
The excessive complexity of both machine architectures and applications has made it difficult for compilers to statically model and predict application behavior. This observation motivates the recent interest in performance tuning using empirical techniques. We present a new embedded scripting language, POET (Parameterized Optimization for Empirical Tuning), for parameterizing complex code transformations so that they can be empirically tuned. The POET language aims to significantly improve the generality, flexibility, and efficiency of existing empirical tuning systems. We have used the language to parameterize and to empirically tune three loop optimizations (interchange, blocking, and unrolling) for two linear algebra kernels. We show experimentally that the time required to tune these optimizations using POET, which does not require any program analysis, is significantly shorter than when using a full compiler-based source-code optimizer which performs sophisticated program analysis and optimizations.
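The core idea of parameterizing a transformation for empirical tuning can be sketched in a few lines of Python; note that POET is its own scripting language and this is not its syntax, just an illustration of generating and timing unroll-factor variants:

```python
import timeit

def make_unrolled_sum(unroll):
    """Generate a sum kernel with a given unroll factor: the transformation
    is parameterized by `unroll`, so variants can be produced and timed
    without re-analyzing the program (illustrative stand-in for POET)."""
    body = "\n".join(f"        acc += data[i + {k}]" for k in range(unroll))
    src = (
        "def kernel(data):\n"
        "    acc = 0\n"
        f"    n = len(data) - len(data) % {unroll}\n"
        f"    for i in range(0, n, {unroll}):\n"
        f"{body}\n"
        "    for i in range(n, len(data)):\n"  # remainder loop
        "        acc += data[i]\n"
        "    return acc\n"
    )
    ns = {}
    exec(src, ns)
    return ns["kernel"]

def tune(data, factors=(1, 2, 4, 8)):
    """Empirically pick the fastest unroll factor on this machine/data."""
    timings = {}
    for u in factors:
        kernel = make_unrolled_sum(u)
        timings[u] = timeit.timeit(lambda: kernel(data), number=50)
    return min(timings, key=timings.get)
```

The winning factor depends on the machine and input, which is exactly why such transformations are tuned empirically rather than chosen statically.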
Optical closure of parameterized bio-optical relationships
NASA Astrophysics Data System (ADS)
He, Shuangyan; Fischer, Jürgen; Schaale, Michael; He, Ming-xia
2014-03-01
An optical closure study of bio-optical relationships was carried out using the matrix operator method radiative transfer model developed at Freie Universität Berlin. As a case study, the optical closure of bio-optical relationships empirically parameterized with in situ data for the East China Sea was examined. Remote-sensing reflectance (Rrs) was computed from the inherent optical properties predicted by these bio-optical relationships and compared with published in situ data. It was found that the simulated Rrs was overestimated for turbid water. To achieve optical closure, the bio-optical relationships for the absorption and scattering coefficients of suspended particulate matter were adjusted. Furthermore, the results show that the Fournier-Forand phase functions obtained from the adjusted relationships perform better than the Petzold phase function. Therefore, before bio-optical relationships are used for a local sea area, their optical closure should be examined.
Parameterization of lattice spacings for lipid multilayers in ionic solutions
NASA Astrophysics Data System (ADS)
Petrache, Horia; Johnson, Merrell; Harries, Daniel; Seifert, Soenke
Lipids, which are molecules found in biological cells, form highly regular layered structures called multilamellar lipid vesicles (MLVs). The repeat lattice spacings of MLVs depend on van der Waals and electrostatic forces between neighboring membranes and are sensitive to the presence of salt. For example, the addition of salt ions such as sodium and potassium makes the MLVs swell, primarily due to changes in electrical polarizabilities. However, a more complicated behavior is found in some ionic solutions, such as those containing lithium ions. Using X-ray scattering, we show experimentally how the interactions between membranes depend on the type of monovalent ions and construct parameterizations of MLV swelling curves that can help analyze van der Waals interactions.
Approaches for Subgrid Parameterization: Does Scaling Help?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-04-01
Arguably, scaling behavior is a well-established fact in many geophysical systems. There are already many theoretical studies elucidating this issue. However, the scaling law has been slow to be introduced in "operational" geophysical modelling, notably in weather forecast as well as climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in the atmosphere and the oceans. The PDF approach is intuitively appealing, as it deals with a distribution of variables in the subgrid scale in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally-constant modes for the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. The mode
A uniform parameterization of moment tensors
NASA Astrophysics Data System (ADS)
Tape, C.; Tape, W.
2015-12-01
A moment tensor is a 3 x 3 symmetric matrix that expresses an earthquake source. We construct a parameterization of the five-dimensional space of all moment tensors of unit norm. The coordinates associated with the parameterization are closely related to moment tensor orientations and source types. The parameterization is uniform, in the sense that equal volumes in the coordinate domain of the parameterization correspond to equal volumes of moment tensors. Uniformly distributed points in the coordinate domain therefore give uniformly distributed moment tensors. A Cartesian grid in the coordinate domain can be used to search efficiently over moment tensors. We find that uniformly distributed moment tensors have uniformly distributed orientations (eigenframes), but that their source types (eigenvalue triples) are distributed so as to favor double couples. An appropriate choice of a priori moment tensor probability is a prerequisite for parameter estimation. As a seemingly sensible choice, we consider the homogeneous probability, in which equal volumes of moment tensors are equally likely. We believe that it will lead to improved characterization of source processes.
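One concrete way to realize uniformly distributed unit-norm moment tensors (an illustration, not the authors' coordinate construction) is a GOE-style Gaussian draw followed by normalization, which is uniform on the unit Frobenius sphere of symmetric matrices:

```python
import math
import random

def random_unit_moment_tensor(rng=random):
    """Draw a symmetric 3x3 matrix uniformly on the 5-sphere of
    unit-Frobenius-norm moment tensors. Off-diagonal entries get
    variance 1/2 because each appears twice in the Frobenius norm,
    making the joint density isotropic before normalization."""
    d = [rng.gauss(0.0, 1.0) for _ in range(3)]                    # diagonal
    o = [rng.gauss(0.0, 1.0) / math.sqrt(2.0) for _ in range(3)]   # off-diagonal
    m = [[d[0], o[0], o[1]],
         [o[0], d[1], o[2]],
         [o[1], o[2], d[2]]]
    norm = math.sqrt(sum(m[i][j] ** 2 for i in range(3) for j in range(3)))
    return [[m[i][j] / norm for j in range(3)] for i in range(3)]
```

Samples drawn this way can serve as a baseline "homogeneous probability" ensemble against which grid-based searches over the coordinate domain could be compared.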
Soil processes parameterization in meteorological model.
NASA Astrophysics Data System (ADS)
Mazur, Andrzej; Duniec, Grzegorz
2014-05-01
In August 2012 the Polish Institute of Meteorology and Water Management - National Research Institute (IMWM-NRI) started a collaboration with the Institute of Agrophysics - Polish Academy of Sciences (IA-PAS) in order to improve the soil processes parameterization in the high-resolution COSMO meteorological model (horizontal grid size equal to 2.8 km). This cooperation turned into a project named "New approach to parameterization of physical processes in soil in numerical model". The new set of soil process parameterizations is being developed considering many physical and microphysical processes in soil. Currently, the main effort is focused on the description of bare-soil evaporation, soil water transport, and the runoff from soil layers. Preliminary results from the new mathematical formulation of bare-soil evaporation implemented in the COSMO model will be presented. Moreover, recognizing a constant need for further improvement, the authors would also like to present future plans and topics for further study. It is planned to combine the new approach with the TILE and MOSAIC parameterizations, previously investigated as part of the TERRA-MultiLevel module of the COSMO model, and to use measurement data received from IA-PAS and from the Satellite Remote Sensing Center in soil-related COSMO model numerical experiments.
Empirical parameterization of setup, swash, and runup
Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H., Jr.
2006-01-01
Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a -17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
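The resulting parameterization is commonly quoted in the following form; the coefficients below are the usually cited values from this work and should be treated as approximate in this sketch:

```python
import math

def runup_2pct(H0, L0, beta_f):
    """2% exceedence runup (m) from deep-water wave height H0 (m),
    deep-water wavelength L0 (m), and foreshore beach slope beta_f,
    in the commonly cited form of the Stockdon et al. (2006)
    parameterization: R2 = 1.1 * (setup + swash / 2)."""
    setup = 0.35 * beta_f * math.sqrt(H0 * L0)
    swash = math.sqrt(H0 * L0 * (0.563 * beta_f ** 2 + 0.004))
    return 1.1 * (setup + swash / 2.0)
```

The setup term carries the slope dependence, while the 0.004 constant inside the swash term represents the slope-independent infragravity contribution noted in the abstract.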
A Simple Parameterization of 3 x 3 Magic Squares
ERIC Educational Resources Information Center
Trenkler, Gotz; Schmidt, Karsten; Trenkler, Dietrich
2012-01-01
In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.
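As an illustration of what such a parameterization can look like, the classical three-parameter form often attributed to Lucas generates every 3 x 3 magic square with magic sum 3c (the article's own parameterization may differ in detail):

```python
def magic_square(c, a, b):
    """Lucas's three-parameter form of the 3x3 magic square: center c,
    magic sum 3c; a and b control the arrangement of the other entries."""
    return [[c + a,     c - a - b, c + b],
            [c - a + b, c,         c + a - b],
            [c - b,     c + a + b, c - a]]

def is_magic(sq):
    """Check that all rows, columns, and both diagonals share one sum."""
    s = sum(sq[0])
    rows = all(sum(r) == s for r in sq)
    cols = all(sum(sq[i][j] for i in range(3)) == s for j in range(3))
    diag = sum(sq[i][i] for i in range(3)) == s
    anti = sum(sq[i][2 - i] for i in range(3)) == s
    return rows and cols and diag and anti
```

For instance, (c, a, b) = (5, -1, -3) reproduces the Luoshu mentioned in the abstract.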
Control of shortwave radiation parameterization on tropical climate SST-forced simulation
NASA Astrophysics Data System (ADS)
Crétat, Julien; Masson, Sébastien; Berthet, Sarah; Samson, Guillaume; Terray, Pascal; Dudhia, Jimy; Pinsard, Françoise; Hourdin, Christophe
2016-01-01
SST-forced tropical-channel simulations are used to quantify the control of the shortwave (SW) parameterization on the mean tropical climate compared to other major model settings (convection, boundary layer turbulence, vertical and horizontal resolutions), and to pinpoint the physical mechanisms whereby this control manifests. Analyses focus on the spatial distribution and magnitude of the net SW radiation budget at the surface (SWnet_SFC), latent heat fluxes, and rainfall at the annual timescale. The model skill and sensitivity to the tested settings are quantified relative to observations and using an ensemble approach. Persistent biases include overestimated SWnet_SFC and a too-intense hydrological cycle. However, model skill is mainly controlled by the SW parameterization, especially the magnitude of SWnet_SFC and rainfall and both the spatial distribution and magnitude of latent heat fluxes over the ocean. On the other hand, the spatial distribution of continental rainfall (SWnet_SFC) is mainly influenced by the convection parameterization and horizontal resolution (boundary layer parameterization and orography). Physical understanding of the control of the SW parameterization is addressed by analyzing the thermal structure of the atmosphere and conducting sensitivity experiments on O3 absorption and the SW scattering coefficient. The SW parameterization shapes the stability of the atmosphere in two different ways according to whether the surface is coupled to the atmosphere or not, while O3 absorption has minor effects in our simulations. Over SST-prescribed regions, increasing the amount of SW absorption warms the atmosphere only, because surface temperatures are fixed, resulting in increased atmospheric stability. Over land-atmosphere coupled regions, increasing SW absorption warms both atmospheric and surface temperatures, leading to a shift towards a warmer state and a more intense hydrological cycle. This results in opposite model behavior between land and sea points, with the SW scheme that
Hu, Y.X.; Stamnes, K.
1993-04-01
A new parameterization of the radiative properties of water clouds is presented. Cloud optical properties for both the solar and terrestrial spectra and for cloud equivalent radii in the range 2.5-60 μm are calculated from Mie theory. It is found that cloud optical properties depend mainly on equivalent radius throughout the solar and terrestrial spectrum and are insensitive to the details of the droplet size distribution, such as shape, skewness, width, and modality (single or bimodal). This suggests that in cloud models aimed at predicting the evolution of cloud microphysics with climate change, it is sufficient to determine the third and the second moments of the size distribution (the ratio of which determines the equivalent radius). It also implies that measurements of the cloud liquid water content and the extinction coefficient are sufficient to determine cloud optical properties experimentally (i.e., measuring the complete droplet size distribution is not required). Based on the detailed calculations, the optical properties are parameterized as a function of cloud liquid water path and equivalent cloud droplet radius by using a nonlinear least-squares fitting. The parameterization is performed separately for the ranges of radii 2.5-12 μm, 12-30 μm, and 30-60 μm. Cloud heating and cooling rates are computed from this parameterization by using a comprehensive radiation model. Comparison with similar results obtained from exact Mie scattering calculations shows that this parameterization yields very accurate results and that it is several thousand times faster. This parameterization separates the dependence of cloud optical properties on droplet size and liquid water content, and is suitable for inclusion in climate models. 22 refs., 7 figs., 6 tabs.
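The kind of least-squares fit described above can be sketched generically. The functional form and coefficients below are illustrative assumptions (the a + b/r_e shape mimics the geometric-optics limit τ ≈ 3·LWP/(2ρ·r_e)), not the published band-dependent fits:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical fit form: optical thickness tau as a function of
# liquid water path (LWP, g m^-2) and equivalent droplet radius r_e (um).
def tau_model(X, a, b):
    lwp, r_e = X
    return lwp * (a + b / r_e)

# Synthetic stand-in for detailed Mie-calculation results (illustrative only).
rng = np.random.default_rng(0)
lwp = rng.uniform(10, 200, 100)          # g m^-2
r_e = rng.uniform(2.5, 60, 100)          # um, the paper's full radius range
tau_true = lwp * (0.02 + 1.5 / r_e)      # assumed ground truth
tau_obs = tau_true * (1 + 0.01 * rng.standard_normal(100))

(a_fit, b_fit), _ = curve_fit(tau_model, (lwp, r_e), tau_obs, p0=(0.01, 1.0))
print(a_fit, b_fit)  # close to the assumed 0.02 and 1.5
```

In practice such a fit would be repeated per spectral band and per radius sub-range, as the abstract describes.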
Parameterization of contrail radiative properties for climate studies
NASA Astrophysics Data System (ADS)
Xie, Yu; Yang, Ping; Liou, Kuo-Nan; Minnis, Patrick; Duda, David P.
2012-12-01
The study of contrails and their impact on global climate change requires a cloud model that statistically represents contrail radiative properties. In this study, the microphysical properties of global contrails are statistically analyzed using collocated Moderate Resolution Imaging Spectroradiometer (MODIS) and Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) observations. The MODIS contrail pixels are detected using an automated contrail detection algorithm and a manual technique using the brightness temperature differences between the MODIS 11 and 12 μm channels. The scattering and absorption properties of typical contrail ice crystals are used to determine an appropriate contrail model to minimize the uncertainties arising from the assumptions in a particular cloud model. The depolarization ratio is simulated with a variety of ice crystal habit fractions and matched to the collocated MODIS and CALIOP observations. The contrail habit fractions are determined and used to compute the bulk-scattering properties of contrails. A parameterization of shortwave and longwave contrail optical properties is developed for the spectral bands of the Rapid Radiative Transfer Model (RRTM). The contrail forcing at the top of the atmosphere is investigated using the RRTM and compared with spherical and hexagonal ice cloud models. Contrail forcing is overestimated when spherical ice crystals are used to represent contrails, but if a hexagonal ice cloud model is used, the forcing is underestimated for small particles and overestimated for large particles in comparison to the contrail model developed in this study.
Cloud parameterization for climate modeling - Status and prospects
NASA Technical Reports Server (NTRS)
Randall, David A.
1989-01-01
The current status of cloud parameterization research is reviewed. It is emphasized that the upper tropospheric stratiform clouds associated with deep convection are both physically important and poorly parameterized in current models. Emerging parameterizations are described in general terms, with emphasis on prognostic cloud water and fractional cloudiness, and how these relate to the problem just mentioned.
Parameterization of cloud effects on the absorption of solar radiation
NASA Technical Reports Server (NTRS)
Davies, R.
1983-01-01
A radiation parameterization for the NASA Goddard climate model was developed, tested, and implemented. Interactive and off-line experiments with the climate model to determine the limitations of the present parameterization scheme are summarized. The parameterization of cloud absorption in terms of solar zenith angle, column water vapor above the cloud top, and cloud liquid water content is discussed.
Numerical Archetypal Parameterization for Mesoscale Convective Systems
NASA Astrophysics Data System (ADS)
Yano, J. I.
2015-12-01
Vertical shear tends to organize atmospheric moist convection into multiscale coherent structures. In particular, the counter-gradient vertical transport of horizontal momentum by organized convection can enhance the wind shear and transport kinetic energy upscale. However, this process is not represented by traditional parameterizations. The present paper sets the archetypal dynamical models, originally formulated by the second author, into a parameterization context by utilizing a nonhydrostatic anelastic model with segmentally constant approximation (NAM-SCA). Using a two-dimensional framework as a starting point, NAM-SCA spontaneously generates propagating tropical squall lines in a sheared environment. A high numerical efficiency is achieved through a novel compression methodology. The numerically generated archetypes produce vertical profiles of convective momentum transport that are consistent with the analytic archetype.
Rapid Parameterization Schemes for Aircraft Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu
2012-01-01
A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
Aerosol water parameterization: a single parameter framework
NASA Astrophysics Data System (ADS)
Metzger, S.; Steil, B.; Abdelkader, M.; Klingmüller, K.; Xu, L.; Penner, J. E.; Fountoukis, C.; Nenes, A.; Lelieveld, J.
2015-11-01
We introduce a framework to efficiently parameterize the aerosol water uptake for mixtures of semi-volatile and non-volatile compounds, based on the solute-specific coefficient νi. This coefficient was introduced in Metzger et al. (2012) to accurately parameterize the single-solution hygroscopic growth, considering the Kelvin effect and accounting for the water uptake of concentrated nanometer-sized particles up to dilute solutions, i.e., from the compound's relative humidity of deliquescence (RHD) up to supersaturation (Köhler theory). Here we extend the νi-parameterization from single to mixed solutions. We evaluate our framework at various levels of complexity by considering the full gas-liquid-solid partitioning for a comprehensive comparison with reference calculations using the E-AIM, EQUISOLV II, and ISORROPIA II models as well as textbook examples. We apply our parameterization in EQSAM4clim, the EQuilibrium Simplified Aerosol Model V4 for climate simulations, implemented in a box model and in the global chemistry-climate model EMAC. Our results show: (i) that the νi-approach allows the entire gas-liquid-solid partitioning and the mixed-solution water uptake to be solved analytically with sufficient accuracy, (ii) that, e.g., pure ammonium nitrate and mixed ammonium nitrate-ammonium sulfate mixtures can be solved with a simple method, and (iii) that the aerosol optical depth (AOD) simulations are in close agreement with remote sensing observations for the year 2005. Long-term evaluation of the EMAC results based on EQSAM4clim and ISORROPIA II will be presented separately.
A Survey of Shape Parameterization Techniques
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.
Implicit Shape Parameterization for Kansei Design Methodology
NASA Astrophysics Data System (ADS)
Nordgren, Andreas Kjell; Aoyama, Hideki
Implicit shape parameterization for Kansei design is a procedure that uses 3D models, or concepts, to span a shape space for surfaces in the automotive field. A low-dimensional yet accurate shape descriptor was found by principal component analysis of an ensemble of point clouds, which were extracted from mesh-based surfaces modeled in a CAD program. A theoretical background of the procedure is given along with step-by-step instructions for the required data processing. The results show that complex surfaces can be described very efficiently and encode design features by an implicit approach that does not rely on error-prone explicit parameterizations. This provides a very intuitive way for a designer to explore shapes, because various design features can simply be introduced by adding new concepts to the ensemble. Complex shapes have been difficult to analyze with Kansei methods due to the large number of parameters involved, but implicit parameterization of design features provides a low-dimensional shape descriptor for efficient data collection, model building, and analysis of emotional content in 3D surfaces.
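The PCA step described above can be sketched as follows; the array shapes and function names are assumptions for illustration, with each concept's point cloud flattened to one row of the ensemble matrix:

```python
import numpy as np

# Illustrative PCA shape space: each row of `shapes` is one concept's
# point cloud flattened to a vector (all clouds sampled at corresponding points).
def pca_shape_space(shapes, n_modes):
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered ensemble gives the principal shape modes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]            # mean shape + leading modes

def encode(shape, mean, modes):
    return modes @ (shape - mean)        # low-dimensional descriptor

def decode(coeffs, mean, modes):
    return mean + modes.T @ coeffs       # reconstruct a surface from coefficients

rng = np.random.default_rng(1)
ensemble = rng.standard_normal((20, 300))   # 20 concepts, 100 xyz points each
mean, modes = pca_shape_space(ensemble, n_modes=5)
c = encode(ensemble[0], mean, modes)
approx = decode(c, mean, modes)
print(c.shape)   # (5,)
```

New concepts are simply appended as rows of the ensemble and the modes recomputed, which is the "adding new concepts" exploration the abstract describes.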
Parameterization Impacts on Linear Uncertainty Calculation
NASA Astrophysics Data System (ADS)
Fienen, M. N.; Doherty, J.; Reeves, H. W.; Hunt, R. J.
2009-12-01
Efficient linear calculation of model prediction uncertainty can be an insightful diagnostic metric for decision-making. Specifically, the contributions of parameter uncertainty, or of the location and type of data, to prediction uncertainty can be used to evaluate which types of information are most valuable. Information that most significantly reduces prediction uncertainty can be considered to have greater worth. Prediction uncertainty is commonly calculated including or excluding specific information and compared to a base scenario. The quantitative difference in uncertainty with or without the information is indicative of that information's worth in the decision-making process. These results can be calculated at many hypothetical locations to guide network design (i.e., where to install new wells, stream gages, etc.) or used to indicate which parameters are the most important to understand, and thus likely candidates for future characterization work. We examine a hypothetical case in which an inset model is created from a large regional model in order to better represent a surface stream network and to make predictions of head near, and flux in, a stream due to installation and pumping of a large well near a stream headwater. Although parameterization and edge boundary conditions are inherited from the regional model, the simple act of refining discretization and stream geometry shows improvement in the representation of the streams. Even visual inspection of the simulated head field highlights the need to recalibrate and potentially re-parameterize the inset model. A network of potential head observations is evaluated and contoured in the shallowest two layers of the six-layer model to assess their worth in both predicting flux at a specific gage and head at a specific location near the stream. Three hydraulic conductivity parameterization scenarios are evaluated, the first using a single multiplier on hydraulic conductivity acting on the inherited hydraulic conductivity zonation; the
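The with/without-information comparison described above can be sketched with a first-order (linear) uncertainty calculation; all matrices below are synthetic stand-ins, not the study's groundwater model:

```python
import numpy as np

# Linear (FOSM-style) prediction variance from a Jacobian J (observations x
# parameters), observation error variances cd_diag, prior parameter
# covariance Cp, and prediction sensitivity vector y. Removing a row of J
# and recomputing quantifies that observation's worth.
def prediction_variance(J, cd_diag, Cp, y):
    Cp_inv = np.linalg.inv(Cp)
    # Posterior parameter covariance under the linear model:
    post = np.linalg.inv(J.T @ np.diag(1.0 / cd_diag) @ J + Cp_inv)
    return float(y @ post @ y)

rng = np.random.default_rng(2)
J = rng.standard_normal((8, 3))          # 8 candidate head observations
cd = np.full(8, 0.1)                     # observation error variances
Cp = np.eye(3)                           # prior parameter covariance
y = np.array([1.0, -0.5, 0.2])           # sensitivity of the flux prediction

var_all = prediction_variance(J, cd, Cp, y)
var_without_first = prediction_variance(J[1:], cd[1:], Cp, y)
print(var_without_first >= var_all)      # True: dropping data cannot reduce uncertainty
```

Contouring `var_all - var_without_obs` over many candidate observation locations produces exactly the kind of data-worth map the abstract describes.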
NASA Astrophysics Data System (ADS)
Vidot, Jérôme; Baran, Anthony J.; Brunel, Pascal
2015-07-01
A new ice cloud optical property database in the thermal infrared has been parameterized for the RTTOV radiative transfer model. The Self-Consistent Scattering Model (SCSM) database is based on an ensemble model of ice crystals and a parameterization of the particle size distribution. This combination can predict the radiative properties of cirrus without the need for a priori information on the ice particle shape or an estimate of the ice crystal effective dimension. The ice cloud optical properties are estimated through linear parameterizations in ambient temperature and ice water content. We evaluate the new parameterization against existing parameterizations used in RTTOV. We compare infrared observations from the Imaging Infrared Radiometer, on board CALIPSO, against RTTOV simulations of the observations. The simulations are performed using two different products of ice cloud profiles retrieved from the synergy between space-based radar and lidar observations: the 2C-ICE and DARDAR products. We optimized the parameterization by testing different SCSM databases, derived from different shapes of the particle size distribution, and by weighting the volume extinction coefficient of the ensemble model. By selecting a large global data set of ice cloud profiles with visible optical depths between 0.03 and 4, we found that the simulations based on the optimized SCSM database parameterization reproduce the observations with a mean bias of only 0.43 K and a standard deviation of 6.85 K. The optimized SCSM database parameterization can also be applied to any other radiative transfer model.
New Parameterization of Neutron Absorption Cross Sections
NASA Technical Reports Server (NTRS)
Tripathi, Ram K.; Wilson, John W.; Cucinotta, Francis A.
1997-01-01
A recent parameterization of absorption cross sections for any system of charged-ion collisions, including proton-nucleus collisions, is extended to neutron-nucleus collisions, valid from approximately 1 MeV to a few GeV, thus providing a comprehensive picture of absorption cross sections for any system of collision pairs (charged or uncharged). The parameters are associated with the physics of the problem. At lower energies, the optical potential at the surface is important, and the Pauli operator plays an increasingly important role at intermediate energies. The agreement between the calculated and experimental data is better than earlier published results.
Lightning parameterization in a storm electrification model
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.; Farley, Richard D.; Wu, Gang
1988-01-01
The parameterization of an intracloud lightning discharge has been implemented in our Storm Electrification Model. The initiation, propagation direction, termination, and charge redistribution of the discharge are approximated assuming overall charge neutrality. Various simulations involving differing amounts of charge transferred have been done. The effects of the lightning-produced ions on the hydrometeor charges, electric field components, and electrical energy depend strongly on the charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge shows favorable agreement.
Parameterized BLOSUM Matrices for Protein Alignment.
Song, Dandan; Chen, Jiaxing; Chen, Guang; Li, Ning; Li, Jin; Fan, Jun; Bu, Dongbo; Li, Shuai Cheng
2015-01-01
Protein alignment is a basic step in many molecular biology research workflows. The BLOSUM matrices, especially BLOSUM62, are the de facto standard matrices for protein alignment. However, after the matrices had been widely used for 15 years, programming errors were surprisingly found in the initial version of the source code for their generation. Amazingly, after the bugs were corrected, the "intended" BLOSUM62 matrix performed consistently worse than the "miscalculated" one. In this paper, we find linear relationships among the eigenvalues of the matrices and propose an algorithm to find optimal unified eigenvectors. With them, we can parameterize a matrix BLOSUMx for any given variable x that can change continuously. We compare the effectiveness of our parameterized isentropic matrix with BLOSUM62. Furthermore, an iterative alignment and matrix selection process is proposed to adaptively find the best parameter and globally align two sequences. Experiments are conducted on aligning 13,667 families from the Pfam database and on clustering MHC II protein sequences, whose improved accuracy demonstrates the effectiveness of our proposed method. PMID:26357279
A natural spline interpolation and exponential parameterization
NASA Astrophysics Data System (ADS)
Kozera, R.; Wilkołazka, M.
2016-06-01
We consider here a natural spline interpolation based on reduced data and the so-called exponential parameterization (depending on a parameter λ ∈ [0, 1]). In particular, the latter is studied in the context of trajectory approximation in an arbitrary Euclidean space. The term reduced data refers to an ordered collection of interpolation points without provision of the corresponding knots. The numerical verification of the intrinsic asymptotics α(λ) of the approximation of γ by a natural spline is conducted here for regular and sufficiently smooth curves γ sampled more-or-less uniformly. We select in this paper the substitutes for the missing knots according to the exponential parameterization. The outcomes of the numerical tests manifest sharp linear convergence orders α(λ) = 1 for all λ ∈ [0, 1). In addition, the latter results in an unexpected left-hand-side discontinuity at λ = 1, since, as shown again here, a sharp quadratic order α(1) = 2 prevails. Remarkably, the case α(1) = 2 (derived for reduced data) coincides with the well-known asymptotics established for a natural spline fitted to non-reduced data, determined by the sequence of interpolation points supplemented with the corresponding knots (see e.g. [1]).
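The exponential parameterization itself is simple to state in code: substitute knots are accumulated from the distances between successive interpolation points raised to the power λ (λ = 0 gives uniform knots, λ = 1 cumulative chord length). The sampled curve below is an illustrative example, not the paper's test case:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Exponential parameterization of reduced data:
# t_0 = 0,  t_{i+1} = t_i + |q_{i+1} - q_i|^lam,  lam in [0, 1].
def exponential_knots(points, lam):
    steps = np.linalg.norm(np.diff(points, axis=0), axis=1) ** lam
    return np.concatenate(([0.0], np.cumsum(steps)))

# Reduced data: samples of a circular arc without their knots.
s = np.linspace(0.0, 0.75, 15)
points = np.column_stack((np.cos(2 * np.pi * s), np.sin(2 * np.pi * s)))

t = exponential_knots(points, lam=1.0)              # chord-length substitute knots
spline = CubicSpline(t, points, bc_type='natural')  # natural spline through the data
mid = spline(0.5 * (t[3] + t[4]))                   # evaluate between two samples
print(np.linalg.norm(mid))  # close to 1, the circle's radius
```

Repeating this for several λ values and sampling densities, and measuring the error against the true curve, is how convergence orders like α(λ) = 1 are verified numerically.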
Mixing parameterizations in ocean climate modeling
NASA Astrophysics Data System (ADS)
Moshonkin, S. N.; Gusev, A. V.; Zalesny, V. B.; Byshev, V. I.
2016-03-01
Results of numerical experiments with an eddy-permitting ocean circulation model on the simulation of the climatic variability of the North Atlantic and the Arctic Ocean are analyzed. We compare the quality of the ocean simulation using different subgrid mixing parameterizations. The circulation model is found to be sensitive to the mixing parameterization. The computation of viscosity and diffusivity coefficients by an original splitting algorithm for the evolution equations of the turbulence characteristics is found to be as efficient as traditional Monin-Obukhov parameterizations. At the same time, however, the variability of ocean climate characteristics is simulated more adequately. The simulation of salinity fields in the entire study region improves most significantly. Turbulent processes have a large effect on the circulation in the long term through changes in the density fields. The velocity fields in the Gulf Stream and in the entire North Atlantic Subpolar Cyclonic Gyre are reproduced more realistically. The surface level height in the Arctic Basin is simulated more faithfully, marking the Beaufort Gyre better. The use of the Prandtl number as a function of the Richardson number improves the quality of ocean modeling.
A subgrid parameterization scheme for precipitation
NASA Astrophysics Data System (ADS)
Turner, S.; Brenguier, J.-L.; Lac, C.
2011-07-01
With increasing computing power, the horizontal resolution of numerical weather prediction (NWP) models is improving and today reaches 1 to 5 km. Nevertheless, clouds and precipitation are still subgrid scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed PDF of relative humidity spatial variability within the grid, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II) and fair weather cumulus (RICO) and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
A subgrid parameterization scheme for precipitation
NASA Astrophysics Data System (ADS)
Turner, S.; Brenguier, J.-L.; Lac, C.
2012-04-01
With increasing computing power, the horizontal resolution of numerical weather prediction (NWP) models is improving and today reaches 1 to 5 km. Nevertheless, clouds and precipitation formation are still subgrid scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF) of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II) and fair weather cumulus (RICO) and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
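The rain-fraction diagnostic described above can be sketched as the probability that subgrid cloud water exceeds a collection threshold; the lognormal PDF shape and the numbers below are assumptions for illustration, not the scheme's actual choices:

```python
import numpy as np
from scipy.stats import lognorm

# Diagnose a subgrid rain fraction from a prescribed PDF of cloud water q_c:
# the fraction of the grid box where q_c exceeds the autoconversion threshold.
def rain_fraction(qc_mean, qc_rel_var, qc_crit):
    # Lognormal with the prescribed grid-box mean and relative variance.
    sigma2 = np.log(1.0 + qc_rel_var)
    mu = np.log(qc_mean) - 0.5 * sigma2
    dist = lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))
    return dist.sf(qc_crit)             # P(q_c > q_crit)

# Illustrative values in g kg^-1: mean cloud water below the threshold still
# yields a nonzero rain fraction because of subgrid variability.
f = rain_fraction(qc_mean=0.3, qc_rel_var=0.5, qc_crit=0.5)
print(f)
```

The overlap of this rain fraction with the diagnosed cloud fraction is the additional complication the abstract notes for precipitating rainwater.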
A new parameterization of spectral and broadband ocean surface albedo.
Jin, Zhonghai; Qiao, Yanli; Wang, Yingjian; Fang, Yonghua; Yi, Weining
2011-12-19
A simple yet accurate parameterization of spectral and broadband ocean surface albedo has been developed. To facilitate the parameterization and its applications, the albedo is parameterized for the direct and diffuse incident radiation separately, and each of these is further divided into two components: the contributions from the surface and from the water, respectively. The four albedo components are independent of each other; hence, altering one will not affect the others. This design keeps the parameterization scheme flexible for future updates: users can simply replace any of the adopted empirical formulations (e.g., the relationship between foam reflectance and wind speed) as desired, without needing to change the parameterization scheme. The parameterization is validated by in situ measurements and can be easily implemented in a climate or radiative transfer model. PMID:22274228
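The four-component structure described above can be sketched directly; the component values below are illustrative placeholders (roughly open-ocean magnitudes), and the point is only that each term can be swapped independently:

```python
# Total ocean surface albedo as a weighted sum of four independent components:
# direct and diffuse incidence, each split into a surface contribution
# (specular reflection, foam) and a water contribution (backscattered light).
def ocean_albedo(f_direct, a_surf_dir, a_water_dir, a_surf_dif, a_water_dif):
    """f_direct: fraction of incident flux that is direct beam (0..1)."""
    direct = a_surf_dir + a_water_dir
    diffuse = a_surf_dif + a_water_dif
    return f_direct * direct + (1.0 - f_direct) * diffuse

# Illustrative numbers only.
total = ocean_albedo(0.7, a_surf_dir=0.03, a_water_dir=0.01,
                     a_surf_dif=0.05, a_water_dif=0.01)
print(total)  # about 0.046
```

Because each component enters the sum independently, replacing, say, the foam-reflectance formula only changes the surface terms, mirroring the flexibility the abstract emphasizes.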
Parameterization of Star-Shaped Volumes Using Green's Functions
NASA Astrophysics Data System (ADS)
Xia, Jiazhi; He, Ying; Han, Shuchu; Fu, Chi-Wing; Luo, Feng; Gu, Xianfeng
Parameterizations have a wide range of applications in computer graphics, geometric design, and many other fields of science and engineering. Although surface parameterizations have been widely studied and are well developed, little research exists on volumetric data due to the intrinsic difficulties in extending surface parameterization algorithms to the volumetric domain. In this paper, we present a technique for parameterizing star-shaped volumes using Green's functions. We first show that the Green's function on a star shape has a unique critical point. Then we prove that Green's functions can induce a diffeomorphism between two star-shaped volumes. We develop algorithms to parameterize star shapes to simple domains such as balls and star-shaped polycubes, and also demonstrate volume parameterization applications: volumetric morphing, anisotropic solid texture transfer, and GPU-based volumetric computation.
Extensions and applications of a second-order land surface parameterization
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1983-01-01
Extensions and applications of a second-order land surface parameterization, proposed by Andreou and Eagleson, are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested by using the model. A sensitivity analysis with respect to the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.
Mallia, Rupananda J; McVeigh, Patrick Z; Veilleux, Israel; Wilson, Brian C
2012-07-01
As molecular imaging moves towards lower detection limits, the elimination of endogenous background signals becomes imperative. We present a facile background-suppression technique that specifically segregates the signal from surface-enhanced Raman scattering (SERS)-active nanoparticles (NPs) from the tissue autofluorescence background in vivo. SERS NPs have extremely narrow spectral peaks that do not overlap significantly with endogenous Raman signals. This can be exploited, using specific narrow-band filters, to image picomolar (pM) concentrations of NPs against a broad tissue autofluorescence background in wide-field mode, with short integration times that compare favorably with point-by-point mapping typically used in SERS imaging. This advance will facilitate the potential applications of SERS NPs as contrast agents in wide-field multiplexed biomarker-targeted imaging in vivo. PMID:22894500
Parameterizing mesoscale and large-scale ice clouds in general circulation models
NASA Technical Reports Server (NTRS)
Donner, Leo J.
1990-01-01
The paper discusses GCM parameterizations for two types of ice clouds: (1) ice clouds formed by large-scale lifting, often of limited vertical extent but usually of large-scale horizontal extent; and (2) ice clouds formed as anvils in convective systems, often of moderate vertical extent but of mesoscale size horizontally. It is shown that the former type of clouds can be parameterized with reference to an equilibrium between ice generation by deposition from vapor, and ice removal by crystal settling. The same mechanisms operate in the mesoscale clouds, but the ice content in these cases is considered to be more closely linked to the moisture supplied to the anvil by cumulus towers. It is shown that a GCM can simulate widespread ice clouds of both types.
Optika : a GUI framework for parameterized applications.
Nusbaum, Kurtis L.
2011-06-01
In the field of scientific computing there are many specialized programs designed for specific applications in areas such as biology, chemistry, and physics. These applications are often very powerful and extraordinarily useful in their respective domains. However, some suffer from a common problem: a non-intuitive, poorly designed user interface. The purpose of Optika is to address this problem and provide a simple, viable solution. Using only a list of parameters passed to it, Optika can dynamically generate a GUI. This allows the user to specify parameter values in a fashion that is much more intuitive than the traditional 'input decks' used by some parameterized scientific applications. By leveraging the power of Optika, these scientific applications will become more accessible, allowing their designers to reach a much wider audience with minimal extra development effort.
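The parameter-list-to-GUI idea can be sketched in a few lines; the widget names and the type-to-widget mapping below are invented for illustration and are not Optika's actual (Trilinos-based, C++) API:

```python
# Hypothetical sketch: given only (name, default) parameter pairs, derive a
# widget description for each, instead of hand-writing an input deck.
WIDGET_FOR_TYPE = {
    bool: "checkbox",
    int: "spinbox",
    float: "spinbox",
    str: "textfield",
    list: "dropdown",
}

def build_form(parameters):
    form = []
    for name, default in parameters:
        # Fall back to a free-text field for types without a dedicated widget.
        widget = WIDGET_FOR_TYPE.get(type(default), "textfield")
        form.append({"label": name, "widget": widget, "value": default})
    return form

params = [("iterations", 100), ("tolerance", 1e-6), ("verbose", True)]
form = build_form(params)
print(form[2]["widget"])  # checkbox
```

A real implementation would hand each widget description to a GUI toolkit; the key design point is that the application author supplies only the parameter list.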
The Doppler spread theory and parameterization revisited
NASA Astrophysics Data System (ADS)
Hines, Colin O.
2004-07-01
The author's earlier Doppler Spread Theory (DST) and Doppler Spread Parameterization (DSP) are revisited with a new understanding of the dichotomous roles played by nonlinearity in Eulerian and Lagrangian coordinates, respectively. An embryo Lagrangian DST is introduced and employed to assess the original DST. Earlier results near the Eulerian spectral peak are found to be reasonably valid, whereas those at greater vertical wavenumber are confirmed to have produced too much spreading. The earlier DSP is found to need little if any change, though specific values are suggested for its two most important "fudge factors". In a more general context, the continuing identity of a wave undergoing certain nonlinear interactions with other waves is discussed.
Toward parameterization of the stable boundary layer
NASA Technical Reports Server (NTRS)
Wetzel, P. J.
1982-01-01
Wangara data are used to examine the depth of the nocturnal boundary layer (NBL) and the height to which surface-linked turbulence extends. The virtual temperature profiles are found to be linear through a significant portion of the NBL, diverging where the wind shear rides over the surface-induced turbulence. A series of Richardson numbers is examined for varying degrees of turbulence, and the significant cooling region is observed to be deeper than the layer over which the linear relationship holds. A three-layer parameterization of the thermodynamic structure of the NBL is developed, such that a system of five equations must be solved when the wind velocity profile and the surface temperature are known. The correlation between the bulk Richardson number and the depth of the linear layer was found to be 0.89.
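The bulk Richardson number mentioned above is a standard diagnostic and is straightforward to compute; the nocturnal sounding values below are illustrative, not Wangara data:

```python
# Bulk Richardson number between the surface and height z:
# Ri_b = g * z * (theta_v(z) - theta_v_sfc) / (theta_v_mean * |V(z)|^2),
# a measure of stability versus shear production of turbulence.
G = 9.81  # m s^-2

def bulk_richardson(theta_v_sfc, theta_v_z, z, u_z, v_z):
    theta_mean = 0.5 * (theta_v_sfc + theta_v_z)
    shear2 = u_z**2 + v_z**2
    return G * z * (theta_v_z - theta_v_sfc) / (theta_mean * shear2)

# Stable nocturnal case: air at 50 m is 2 K warmer than the surface,
# with light winds.
ri = bulk_richardson(theta_v_sfc=285.0, theta_v_z=287.0, z=50.0, u_z=3.0, v_z=1.0)
print(ri > 0.25)  # True: above the usual critical value, turbulence suppressed
```

Correlating such a number with the observed linear-layer depth is the kind of relationship behind the 0.89 figure quoted in the abstract.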
Universal Parameterization of Absorption Cross Sections
NASA Technical Reports Server (NTRS)
Tripathi, R. K.; Cucinotta, Francis A.; Wilson, John W.
1997-01-01
This paper presents a simple universal parameterization of total reaction cross sections for any system of colliding nuclei that is valid for the entire energy range from a few AMeV to a few AGeV. The universal picture presented here treats proton-nucleus collision as a special case of nucleus-nucleus collision, where the projectile has charge and mass number of one. The parameters are associated with the physics of the collision system. In general terms, Coulomb interaction modifies cross sections at lower energies, and the effects of Pauli blocking are important at higher energies. The agreement between the calculated and experimental data is better than all earlier published results.
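A schematic of the geometric-plus-Coulomb form that such parameterizations take can be written down; r0, δ_E, and the barrier estimate below are illustrative stand-ins, not the paper's fitted, energy-dependent parameters:

```python
import math

# Schematic reaction cross section:
# sigma_R = pi * r0^2 * (Ap^(1/3) + At^(1/3) + delta_E)^2 * (1 - B / E_cm),
# a geometric overlap term times a low-energy Coulomb suppression factor.
R0 = 1.1  # fm, illustrative radius constant

def reaction_cross_section(a_p, z_p, a_t, z_t, e_cm_mev, delta_e=0.0):
    radius_term = a_p ** (1 / 3) + a_t ** (1 / 3) + delta_e
    # Rough Coulomb barrier estimate (MeV); zero for neutral projectiles.
    barrier = 1.44 * z_p * z_t / (R0 * (a_p ** (1 / 3) + a_t ** (1 / 3)))
    coulomb = max(0.0, 1.0 - barrier / e_cm_mev)
    return math.pi * R0**2 * radius_term**2 * coulomb   # fm^2 (1 fm^2 = 10 mb)

# Neutron (Z = 0) on carbon-12: no Coulomb suppression, pure geometric term.
sigma_n = reaction_cross_section(1, 0, 12, 6, e_cm_mev=100.0)
# Proton on carbon-12 at low energy: Coulomb-suppressed.
sigma_p = reaction_cross_section(1, 1, 12, 6, e_cm_mev=10.0)
print(sigma_n > sigma_p)  # True
```

The published parameterization additionally makes δ_E and the Coulomb and Pauli-blocking terms energy dependent, which is where its predictive power comes from.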
Climate impacts of parameterized Nordic Sea overflows
NASA Astrophysics Data System (ADS)
Danabasoglu, Gokhan; Large, William G.; Briegleb, Bruce P.
2010-11-01
A new overflow parameterization (OFP) of density-driven flows through ocean ridges via narrow, unresolved channels has been developed and implemented in the ocean component of the Community Climate System Model version 4. It represents exchanges from the Nordic Seas and the Antarctic shelves, associated entrainment, and subsequent injection of overflow product waters into the abyssal basins. We investigate the effects of the parameterized Denmark Strait (DS) and Faroe Bank Channel (FBC) overflows on the ocean circulation, showing their impacts on the Atlantic Meridional Overturning Circulation and the North Atlantic climate. The OFP is based on the Marginal Sea Boundary Condition scheme of Price and Yang (1998), but there are significant differences that are described in detail. Two uncoupled (ocean-only) and two fully coupled simulations are analyzed. Each pair consists of one case with the OFP and a control case without this parameterization. In both uncoupled and coupled experiments, the parameterized DS and FBC source volume transports are within the range of observed estimates. The entrainment volume transports remain lower than observational estimates, leading to lower than observed product volume transports. Due to low entrainment, the product and source water properties are too similar. The DS and FBC overflow temperature and salinity properties are in better agreement with observations in the uncoupled case than in the coupled simulation, likely reflecting surface flux differences. The most significant impact of the OFP is the improved North Atlantic Deep Water penetration depth, leading to a much better comparison with the observational data and significantly reducing the chronic, shallow penetration depth bias in level coordinate models. This improvement is due to the deeper penetration of the southward flowing Deep Western Boundary Current. In comparison with control experiments without the OFP, the abyssal ventilation rates increase in the North
A parameterization of cloud droplet nucleation
Ghan, S.J.; Chuang, C.; Penner, J.E.
1993-01-01
Droplet nucleation is a fundamental cloud process. The number of aerosols activated to form cloud droplets influences not only the number of aerosols scavenged by clouds but also the size of the cloud droplets. Cloud droplet size influences the cloud albedo and the conversion of cloud water to precipitation. Global aerosol models are presently being developed with the intention of coupling with global atmospheric circulation models to evaluate the influence of aerosols and aerosol-cloud interactions on climate. If these and other coupled models are to address issues of aerosol-cloud interactions, the droplet nucleation process must be adequately represented. Here we introduce a droplet nucleation parameterization that offers certain advantages over the popular Twomey (1959) parameterization.
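The Twomey (1959) parameterization referenced above activates droplets according to a power law in supersaturation, N = C·s^k. A minimal sketch, assuming illustrative values for the empirical constants C and k (they are air-mass dependent and are not given in the abstract):

```python
# Hedged sketch of the Twomey (1959) power-law activation spectrum,
# N = C * s^k: number of droplets activated at supersaturation s (%).
# C and k below are illustrative assumptions, not values from the paper.

def twomey_activated_number(s_percent, C=600.0, k=0.5):
    """Activated droplet number concentration (cm^-3) at supersaturation s (%)."""
    if s_percent <= 0.0:
        return 0.0  # no activation without supersaturation
    return C * s_percent ** k

# Stronger updrafts (higher supersaturation) activate more droplets:
weak = twomey_activated_number(0.1)
strong = twomey_activated_number(1.0)
assert strong > weak
```

The power-law form is what makes droplet number sensitive to aerosol loading: C typically scales with the available aerosol, which is the coupling pathway the abstract describes.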
A Genus Oblivious Approach to Cross Parameterization
Bennett, J C; Pascucci, V; Joy, K I
2008-06-16
In this paper we present a robust approach to construct a map between two triangulated meshes, M and M′, of arbitrary and possibly unequal genus. We introduce a novel initial alignment scheme that allows the user to identify 'landmark tunnels' and/or a 'constrained silhouette' in addition to the standard landmark vertices. To describe the evolution of non-landmark tunnels we automatically derive a continuous deformation from M to M′ using a variational implicit approach. Overall, we achieve a cross parameterization scheme that is provably robust in the sense that it can map M to M′ without constraints on their relative genus. We provide a number of examples to demonstrate the practical effectiveness of our scheme between meshes of different genus and shape.
Cumulus parameterizations in chemical transport models
NASA Astrophysics Data System (ADS)
Mahowald, Natalie M.; Rasch, Philip J.; Prinn, Ronald G.
1995-12-01
Global three-dimensional chemical transport models (CTMs) are valuable tools for studying processes controlling the distribution of trace constituents in the atmosphere. A major uncertainty in these models is the subgrid-scale parameterization of transport by cumulus convection. This study seeks to define the range of behavior of moist convective schemes and point toward more reliable formulations for inclusion in chemical transport models. The emphasis is on deriving convective transport from meteorological data sets (such as those from the forecast centers) which do not routinely include convective mass fluxes. Seven moist convective parameterizations are compared in a column model to examine the sensitivity of the vertical profile of trace gases to the parameterization used in a global chemical transport model. The moist convective schemes examined are the Emanuel scheme [Emanuel, 1991], the Feichter-Crutzen scheme [Feichter and Crutzen, 1990], the inverse thermodynamic scheme (described in this paper), two versions of a scheme suggested by Hack [Hack, 1994], and two versions of a scheme suggested by Tiedtke (one following the formulation used in the ECMWF (European Centre for Medium-Range Weather Forecasts) and ECHAM3 (European Centre and Hamburg Max-Planck-Institut) models [Tiedtke, 1989], and one formulated as in the TM2 (Transport Model-2) model (M. Heimann, personal communication, 1992)). These convective schemes vary in the closure used to derive the mass fluxes, as well as the cloud model formulation, giving a broad range of results. In addition, two boundary layer schemes are compared: a state-of-the-art nonlocal boundary layer scheme [Holtslag and Boville, 1993] and a simple adiabatic mixing scheme described in this paper. Three tests are used to compare the moist convective schemes against observations. Although the tests conducted here cannot conclusively show that one parameterization is better than the others, the tests are a good measure of the
Parameterization of Heat Transport in a Fjord
NASA Astrophysics Data System (ADS)
Hossainzadeh, S.; Tulaczyk, S. M.
2012-12-01
We aim to improve the coupling in the Regional Arctic System Model (RASM) between the ocean model, Parallel Ocean Program (POP), and the ice sheet model, Community Ice Sheet Model (CISM), by developing a parameterization for the dominant processes in a typical Greenland fjord. The termini of tidewater glaciers and ice shelves may prove to be a critical forcing on outlet glacier mass balance. Recent studies have shown that warm deep water masses have penetrated far up-stream in fjords and sub-ice shelf cavities. We analyze the effects of bottom bathymetry, entrainment rate at the ice face due to freshwater plumes, surface outflow rates, undulating fjord geometries, and open ocean conditions at the fjord mouth on heat transport up-fjord. The fjord is represented as a two-layer (stratified) open channel flow with a substantial and sudden geometric widening at the mouth. Horizontal force balances as well as mass, salt and heat continuity relations of the upper layer provide an analytical solution for the velocity and thickness distribution along-fjord. Subsequently, the sensitivity of the bottom layer's up-fjord flow and heat transport to the ice face is determined and forms the basis of the parameterization of along-fjord processes. Open ocean scenarios (temperature, salinity and velocity profiles), typical of Arctic oceanographic conditions on the Greenland shelf, are prescribed from results of a coupled ocean-sea ice model configured at a regional scale for the pan-Arctic domain. The model was spun up for 48 years and forced by daily averaged atmospheric reanalysis data from the European Centre for Medium-Range Weather Forecasts. We validate these data against several decades-long time series of in situ data from the Gulf of Alaska and West Greenland. Our results provide ice melt rates which agree with current estimates.
Born approximation, scattering, and algorithm
NASA Astrophysics Data System (ADS)
Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun
2015-05-01
In the past few decades, many imaging algorithms were designed under the assumption that multiple scattering is absent. Recently, we discussed an algorithm for removing high-order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.
Parameterized reduced order modeling of misaligned stacked disks rotor assemblies
NASA Astrophysics Data System (ADS)
Ganine, Vladislav; Laxalde, Denis; Michalska, Hannah; Pierre, Christophe
2011-01-01
Light and flexible rotating parts of modern turbine engines operating at supercritical speeds necessitate application of more accurate but rather computationally expensive 3D FE modeling techniques. Stacked disks misalignment due to manufacturing variability in the geometry of individual components constitutes a particularly important aspect to be included in the analysis because of its impact on system dynamics. A new parametric model order reduction algorithm is presented to achieve this goal at affordable computational costs. It is shown that the disks misalignment leads to significant changes in nominal system properties that manifest themselves as additional blocks coupling neighboring spatial harmonics in Fourier space. Consequently, the misalignment effects can no longer be accurately modeled as equivalent forces applied to a nominal unperturbed system. The fact that the mode shapes become heavily distorted by extra harmonic content renders the nominal modal projection-based methods inaccurate and thus numerically ineffective in the context of repeated analysis of multiple misalignment realizations. The significant numerical bottleneck is removed by employing an orthogonal projection onto the subspace spanned by first few Fourier harmonic basis vectors. The projected highly sparse systems are shown to accurately approximate the specific misalignment effects, to be inexpensive to solve using direct sparse methods and easy to parameterize with a small set of measurable eccentricity and tilt angle parameters. Selected numerical examples on an industrial scale model are presented to illustrate the accuracy and efficiency of the algorithm implementation.
ERIC Educational Resources Information Center
Young, Andrew T.
1982-01-01
The correct usage of such terminology as "Rayleigh scattering," "Rayleigh lines," "Raman lines," and "Tyndall scattering" is resolved during an historical excursion through the physics of light scattering by gas molecules. (Author/JN)
Adaptive resolution refinement for high-fidelity continuum parameterizations
Anderson, J.W.; Khamayseh, A.; Jean, B.A.
1996-10-01
This paper describes an algorithm that adaptively samples a parametric continuum so that a fidelity metric is satisfied. Using the divide-and-conquer strategy of adaptive sampling eliminates the guesswork of traditional uniform parameterization techniques. The space and time complexity of parameterization are increased in a controllable manner so that a desired fidelity is obtained.
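The divide-and-conquer strategy can be sketched as recursive interval subdivision driven by a fidelity metric. The metric below, the distance between the true curve point and the chord midpoint, is an illustrative stand-in; the paper's actual metric is not specified in the abstract:

```python
import math

# Sketch of divide-and-conquer adaptive sampling of a parametric curve:
# subdivide an interval until the chord midpoint lies within `tol` of the
# curve point at the parametric midpoint (a simple flatness/fidelity test).

def adaptive_sample(f, t0, t1, tol=1e-3, depth=0, max_depth=20):
    p0, p1 = f(t0), f(t1)
    tm = 0.5 * (t0 + t1)
    pm = f(tm)
    chord_mid = tuple(0.5 * (a + b) for a, b in zip(p0, p1))
    err = math.dist(pm, chord_mid)  # deviation of curve from the chord
    if err <= tol or depth >= max_depth:
        return [t0, t1]
    left = adaptive_sample(f, t0, tm, tol, depth + 1, max_depth)
    right = adaptive_sample(f, tm, t1, tol, depth + 1, max_depth)
    return left[:-1] + right  # merge, dropping the duplicated midpoint

# A straight segment needs only its endpoints; a curved arc is refined:
circle = lambda t: (math.cos(t), math.sin(t))
line = lambda t: (t, 2.0 * t)
assert len(adaptive_sample(line, 0.0, 1.0)) == 2
assert len(adaptive_sample(circle, 0.0, math.pi)) > 2
```

This is the "controllable complexity" trade the abstract mentions: tightening `tol` buys fidelity at the cost of more samples, but only where the continuum actually curves.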
Benchmarking longwave multiple scattering in cirrus environments
NASA Astrophysics Data System (ADS)
Kuo, C.; Feldman, D.; Yang, P.; Flanner, M.; Huang, X.
2015-12-01
Many global climate models currently assume that longwave photons are non-scattering in clouds, and also have overly simplistic treatments of surface emissivity. Multiple scattering of longwave radiation and non-unit emissivity could lead to substantial discrepancies between the actual Earth's radiation budget and its parameterized representation in the infrared, especially at wavelengths longer than 15 µm. The evaluation of the parameterization of longwave spectral multiple scattering in radiative transfer codes for global climate models is critical and will require benchmarking across a wide range of atmospheric conditions with more accurate, though computationally more expensive, multiple scattering models. We therefore present a line-by-line radiative transfer solver that includes scattering, run on a supercomputer at the National Energy Research Scientific Computing Center, which exploits the embarrassingly parallel nature of 1-D radiative transfer solutions with high effective throughput. When paired with an advanced ice-particle optical property database with spectral values ranging from 0.2 to 100 µm, a particle size and habit distribution derived from MODIS Collection 6, and a database for surface emissivity which extends to 100 µm, this benchmarking result can densely sample the thermodynamic and condensate parameter-space, and therefore accelerate the development of an advanced infrared radiative parameterization for climate models, which could help disentangle forcings and feedbacks in CMIP6.
Parameterizing Size Distribution in Ice Clouds
DeSlover, Daniel; Mitchell, David L.
2009-09-25
An outstanding problem that contributes considerable uncertainty to Global Climate Model (GCM) predictions of future climate is the characterization of ice particle sizes in cirrus clouds. Recent parameterizations of ice cloud effective diameter differ by a factor of three, which, for overcast conditions, often translate to changes in outgoing longwave radiation (OLR) of 55 W m-2 or more. Much of this uncertainty in cirrus particle sizes is related to the problem of ice particle shattering during in situ sampling of the ice particle size distribution (PSD). Ice particles often shatter into many smaller ice fragments upon collision with the rim of the probe inlet tube. These small ice artifacts are counted as real ice crystals, resulting in anomalously high concentrations of small ice crystals (D < 100 µm) and underestimates of the mean and effective size of the PSD. Half of the cirrus cloud optical depth calculated from these in situ measurements can be due to this shattering phenomenon. Another challenge is the determination of ice and liquid water amounts in mixed phase clouds. Mixed phase clouds in the Arctic contain mostly liquid water, and the presence of ice is important for determining their lifecycle. Colder high clouds between -20 and -36 °C may also be mixed phase, but in this case their condensate is mostly ice with low levels of liquid water. Rather than affecting their lifecycle, the presence of liquid dramatically affects the cloud optical properties, which affects cloud-climate feedback processes in GCMs. This project has made advancements in solving both of these problems. Regarding the first problem, PSD in ice clouds are uncertain due to the inability to reliably measure the concentrations of the smallest crystals (D < 100 µm), known as the “small mode”. Rather than using in situ probe measurements aboard aircraft, we employed a treatment of ice
Recognizing parameterized three-dimensional objects
NASA Astrophysics Data System (ADS)
Goldberg, Robert R.
1994-10-01
Complex object models require multiple components affixed to each other in specific and variable geometric paths. This paper expands upon earlier research to present a unified approach for relating components' coordinate systems to each other in the same model. Particularly, we show that rather complex relationships such as ball joints and geometric transformations about arbitrary axes are no more complicated than describing the model base in terms of the camera coordinate system. These require only simple rotations and translations about the major axes. This modeling approach was next integrated with a verification module of a model based vision system. We recovered from a single 2D image the original model and camera parameters that would align the projected model edges with the image segments by solving a nonlinear least squares system. A specific example of the theory is implemented. A lamp head is affixed to its base by a ball joint with three parameters of rotational freedom. From a wide range of initial guess error, the numerical system converged to the correct set of model and camera parameters. Thus, the theory of parameterized affixments and the numerical implementation to obtain these values from 2D images will aid in associated recognition tasks and in real-time tracking of complex conglomerate objects.
Brain Surface Conformal Parameterization Using Riemann Surface Structure
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung
2011-01-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
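The assumed-PDF idea can be illustrated with a toy single-layer example: assume a normal subgrid PDF of total water (the actual parameterization predicts a more general multivariate PDF), diagnose cloud fraction analytically, and drive a placeholder condensation term by Monte Carlo sampling of the same PDF, as the abstract describes for the microphysics interface:

```python
import math
import random

# Toy assumed-PDF sketch: total water qt is assumed normally distributed
# within the grid box (mean qt_mean, width qt_std). Cloud fraction is the
# probability that qt exceeds saturation q_sat.

def cloud_fraction(qt_mean, qt_std, q_sat):
    """P(qt > q_sat) for a normal subgrid PDF."""
    z = (q_sat - qt_mean) / qt_std
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def mc_condensate(qt_mean, qt_std, q_sat, n=50000, seed=0):
    """Monte Carlo estimate of mean condensate, E[max(0, qt - q_sat)],
    standing in for the prognostic microphysics coupling."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        qt = rng.gauss(qt_mean, qt_std)
        total += max(0.0, qt - q_sat)  # all-or-nothing condensation
    return total / n

# When the grid mean sits exactly at saturation, half the PDF is cloudy:
cf = cloud_fraction(qt_mean=8.0, qt_std=1.0, q_sat=8.0)
assert abs(cf - 0.5) < 1e-12
# Analytically, mean condensate here is sigma / sqrt(2*pi) ~ 0.399.
mc = mc_condensate(8.0, 1.0, 8.0)
```

All variable names and the normal closure are illustrative assumptions; the point is only the two-step structure (analytic diagnosis from the PDF, Monte Carlo sampling into microphysics) that the abstract describes.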
Carbody structural lightweighting based on implicit parameterized model
NASA Astrophysics Data System (ADS)
Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen
2014-05-01
Most recent research on carbody lightweighting has focused on substitute materials and new processing technologies rather than structures. However, new materials and processing techniques inevitably lead to higher costs. Also, material substitution and processing lightweighting have to be realized through body structural profiles and locations. In the huge conventional workload of lightweight optimization, model modifications involve heavy manual work, which always leads to a large number of iteration calculations. As a new technique in carbody lightweighting, implicit parameterization is used in this paper to optimize the carbody structure and improve the materials utilization rate. Implicit parameterized structural modeling enables automatic modification and rapid multidisciplinary design optimization (MDO) of the carbody structure, which is impossible in the traditional structural finite element method (FEM) without parameterization. The structural SFE parameterized model is built in accordance with the car structural FE model in the concept development stage, and it is validated against structural performance data. The validated SFE structural parameterized model can then be used to rapidly and automatically generate FE models and evaluate different design variable groups in the integrated MDO loop. The lightweighting result for the body-in-white (BIW) after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes an integrated method of implicit parameterized modeling and MDO, which has obvious practical advantages and industrial significance for carbody structural lightweighting design.
Climate and the equilibrium state of land surface hydrology parameterizations
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Eagleson, Peter S.
1991-01-01
For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation for such climatic conditions serves as a measure of the land surface parameterization state under a given forcing. The equilibrium value of this variable for alternate parameterizations of land surface hydrology is determined as a function of climate, and the sensitivity of the surface to shifts and changes in climatic forcing is estimated.
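The equilibrium described here can be illustrated with a minimal bucket-model sketch, assuming the common linear closure in which evaporation scales with relative saturation, E = s·Ep (actual GCM land surface parameterizations are more elaborate; the runoff fraction is also an illustrative knob):

```python
# Bucket-model sketch of the climatic equilibrium: soil-water storage
# settles where moisture input (infiltrated precipitation) balances
# output (evaporation, assumed linear in relative saturation s).

def equilibrium_saturation(P, Ep, runoff_fraction=0.0):
    """Equilibrium relative saturation s* solving P*(1 - runoff) = s* * Ep,
    capped at full saturation (s* = 1)."""
    infiltration = P * (1.0 - runoff_fraction)
    if Ep <= 0.0:
        return 1.0  # no evaporative demand: the bucket fills
    return min(1.0, infiltration / Ep)

# A wet climate (P >= Ep) saturates the surface; a dry one does not:
assert equilibrium_saturation(P=3.0, Ep=2.0) == 1.0
assert equilibrium_saturation(P=1.0, Ep=4.0) == 0.25
```

This captures the abstract's point: different evaporation closures (different beta(s) functions) yield different equilibrium saturations under the same climatic forcing, which is exactly what the equilibrium value diagnoses.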
Parameterization of cloud glaciation by atmospheric dust
NASA Astrophysics Data System (ADS)
Nickovic, Slobodan; Cvetkovic, Bojan; Madonna, Fabio; Pejanovic, Goran; Petkovic, Slavko
2016-04-01
The exponential growth of research interest on ice nucleation (IN) is motivated, inter alia, by needs to improve the generally unsatisfactory representation of cold cloud formation in atmospheric models, and therefore to increase the accuracy of weather and climate predictions, including better forecasting of precipitation. Research shows that mineral dust significantly contributes to cloud ice nucleation. Samples of residual particles in cloud ice crystals collected by aircraft measurements performed in the upper tropopause of regions distant from desert sources indicate that dust particles dominate over other known ice nuclei such as soot and biological particles. In the nucleation process, dust chemical aging had minor effects. The observational evidence on IN processes has substantially improved over the last decade and clearly shows that there is a significant correlation between IN concentrations and the concentrations of coarser aerosol at a given temperature and moisture. Most recently, due to recognition of the dominant role of dust as ice nuclei, parameterizations for immersion and deposition icing specifically due to dust have been developed. Based on these achievements, we have developed a real-time forecasting coupled atmosphere-dust modelling system capable of operationally predicting the occurrence of cold clouds generated by dust. We have thoroughly validated the model simulations against available remote sensing observations. We have used the CNR-IMAA Potenza lidar and cloud radar observations to explore the model's capability to represent vertical features of the cloud and aerosol vertical profiles. We also utilized the MSG-SEVIRI and MODIS satellite data to examine the accuracy of the simulated horizontal distribution of cold clouds. Based on the obtained encouraging verification scores, operational experimental prediction of ice clouds nucleated by dust has been introduced in the Serbian Hydrometeorological Service as a publicly available product.
Parameterization of cirrus optical depth and cloud fraction
Soden, B.
1995-09-01
This research illustrates the utility of combining satellite observations and operational analysis for the evaluation of parameterizations. A parameterization based on ice water path (IWP) captures the observed spatial patterns of tropical cirrus optical depth. The strong temperature dependence of cirrus ice water path in both the observations and the parameterization is probably responsible for the good correlation where it exists. Poorer agreement is found in Southern Hemisphere mid-latitudes where the temperature dependence breaks down. Uncertainties in effective radius limit quantitative validation of the parameterization (and its inclusion into GCMs). Also, it is found that monthly mean cloud cover can be predicted within an RMS error of 10% using ECMWF relative humidity corrected by TOVS Upper Troposphere Humidity. 1 ref., 2 figs.
Some applications of parameterized Picard-Vessiot theory
NASA Astrophysics Data System (ADS)
Mitschi, C.
2016-02-01
This is an expository article describing some applications of parameterized Picard-Vessiot theory. This Galois theory for parameterized linear differential equations was Cassidy and Singer's contribution to an earlier volume dedicated to the memory of Andrey Bolibrukh. The main results we present here were obtained for families of ordinary differential equations with parameterized regular singularities in joint work with Singer. They include parametric versions of Schlesinger's theorem and of the weak Riemann-Hilbert problem as well as an algebraic characterization of a special type of monodromy evolving deformations illustrated by the classical Darboux-Halphen equation. Some of these results have recently been applied by different authors to solve the inverse problem of parameterized Picard-Vessiot theory, and were also generalized to irregular singularities. We sketch some of these results by other authors. The paper includes a brief history of the Darboux-Halphen equation as well as an appendix on differentially closed fields.
Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2007-12-01
This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of hydraulic conductivity distribution is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty, and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in the study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which only accepts models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA tend to prefer highly uncertain parameterization methods due to considering the Fisher information matrix. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.
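The BMA combination itself is simple to state: the ensemble estimate is the posterior-weighted mean over parameterization methods, and the BMA variance is the sum of a within-method term and a between-method term. A minimal sketch with illustrative numbers (not values from the Baton Rouge application):

```python
# Sketch of Bayesian model averaging over parameterization methods.
# means[i], variances[i]: per-method estimate and its variance;
# weights[i]: posterior model probability p(M_i | D), summing to 1.

def bma_combine(means, variances, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "posterior weights must sum to 1"
    mean = sum(w * m for w, m in zip(weights, means))
    within = sum(w * v for w, v in zip(weights, variances))          # per-method uncertainty
    between = sum(w * (m - mean) ** 2 for w, m in zip(weights, means))  # disagreement term
    return mean, within + between

# Two methods that disagree inflate the total variance beyond either
# method's own variance -- the "single method underestimates" effect:
mean, var = bma_combine(means=[10.0, 14.0], variances=[1.0, 1.0],
                        weights=[0.5, 0.5])
assert mean == 12.0
assert var == 1.0 + 4.0  # within (1.0) + between (4.0)
```

The between-method term is exactly why the abstract notes that a single parameterization method always underestimates the uncertainty: with one method, that term is identically zero.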
Faster Parameterized Algorithms for Minor Containment
NASA Astrophysics Data System (ADS)
Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.
The theory of Graph Minors by Robertson and Seymour is one of the deepest and most significant theories in modern Combinatorics. This theory also has a strong impact on the recent development of Algorithms, and several areas, like Parameterized Complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors Theory was mainly of theoretical importance. However, it appears that many deep results from Robertson and Seymour's theory can also be used in the design of practical algorithms. Minor containment testing is one of the algorithmically most important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. In order to implement minor containment testing on graphs of bounded branchwidth, Hicks [Networks 04] described an algorithm that, in time O(3^(k^2) · (h+k-1)! · m), decides if a graph G with m edges and branchwidth k contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [JCTSB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time O(2^((2k+1)·log k) · h^(2k) · 2^(2h^2) · m). Our approach is based on a combinatorial object called a rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing. Namely, it runs in time 2^(O(k)) · h^(2k) · 2^(O(h)) · n, with n = |V(G)|. Finally, we show that slight modifications of our algorithm permit solving some related problems within the same time bounds, like induced minor or contraction minor containment.
Brydegaard, Mikkel
2015-01-01
In recent years, the field of remote sensing of birds and insects in the atmosphere (the aerial fauna) has advanced considerably, and modern electro-optic methods now allow the assessment of the abundance and fluxes of pests and beneficials on a landscape scale. These techniques have the potential to significantly increase our understanding of, and ability to quantify and manage, the ecological environment. This paper presents a concept whereby laser radar observations of atmospheric fauna can be parameterized and table values for absolute cross sections can be catalogued to allow for the study of focal species such as disease vectors and pests. Wing-beat oscillations are parameterized with a discrete set of harmonics, and the spherical scatter function is parameterized by a reduced set of symmetrical spherical harmonics. A first-order spherical model for insect scatter is presented and supported experimentally, showing the angular dependence of wing-beat harmonic content. The presented method promises to give insights into the flight heading directions of species in the atmosphere and has the potential to shed light on the km-range spread of pests and disease vectors. PMID:26295706
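The harmonic parameterization described above, a wing-beat waveform expressed as a discrete set of harmonics of a fundamental frequency, amounts to a linear least-squares fit. A minimal sketch; the function name, the synthetic signal, and the 120 Hz fundamental are illustrative placeholders, not values from the paper:

```python
import numpy as np

def fit_wingbeat_harmonics(t, signal, f0, n_harmonics=4):
    """Least-squares fit of a signal with the truncated harmonic series
    a0 + sum_m [a_m cos(2*pi*m*f0*t) + b_m sin(2*pi*m*f0*t)]."""
    cols = [np.ones_like(t)]
    for m in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * m * f0 * t))
        cols.append(np.sin(2 * np.pi * m * f0 * t))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return coeffs  # ordered [a0, a1, b1, a2, b2, ...]

# synthetic backscatter trace: 120 Hz fundamental plus a weaker 2nd harmonic
t = np.linspace(0.0, 0.1, 2000)
sig = 1.0 + 0.8 * np.sin(2 * np.pi * 120 * t) + 0.3 * np.cos(2 * np.pi * 240 * t)
c = fit_wingbeat_harmonics(t, sig, f0=120.0)
```

Because the synthetic signal lies exactly in the span of the basis, the fit recovers the harmonic coefficients essentially exactly; on measured data the residual indicates how much of the waveform the truncated series captures.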
Paluszkiewicz, T.; Hibler, L.F.; Romea, R.D.
1995-01-01
The current generation of ocean general circulation models (OGCMs) uses a convective adjustment scheme to remove static instabilities and to parameterize shallow and deep convection. In simulations used to examine climate-related scenarios, investigators found that in the Arctic regions, the OGCM simulations did not produce a realistic vertical density structure, did not create the correct quantity of deep water, and did not use a time scale of adjustment that is in agreement with tracer ages or observations. A possible weakness of the models is that the convective adjustment scheme does not represent the process of deep convection adequately. Consequently, a penetrative plume mixing scheme has been developed to parameterize the process of deep open-ocean convection in OGCMs. This new deep convection parameterization was incorporated into the Semtner and Chervin (1988) OGCM. The modified model (with the new parameterization) was run in a simplified Nordic Seas test basin: under a cyclonic wind stress and cooling, stratification of the basin-scale gyre is eroded and deep mixing occurs in the center of the gyre. In contrast, in the OGCM experiment that uses the standard convective adjustment algorithm, mixing is delayed and is widespread over the gyre.
How uncertain are the process parameterizations in our models?
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Gharari, S.; Gupta, H. V.; Fenicia, F.; Matgen, P.; Savenije, H.
2015-12-01
Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), process parameterization, and parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise, the parameter values are the only elements of a model that can vary, while the rest of the modeling elements are often fixed a priori and therefore not subject to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process. The only flexibility comes from the changing parameter values, which enable these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is to what extent the process parameterization and system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model, given its system architecture and data, when little or no assumption has been made about the process parameterization itself? In this study we relax the assumption of a specific predetermined form for the process parameterizations of a rainfall-runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms than would have been chosen otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to the extent to which the data can support it.
How certain are the process parameterizations in our models?
NASA Astrophysics Data System (ADS)
Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard
2016-04-01
Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), process parameterization, and parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise, the parameter values are the only elements of a model that can vary, while the rest of the modeling elements are often fixed a priori and therefore not subject to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process. The only flexibility comes from the changing parameter values, which enable these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is to what extent the process parameterization and system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model, given its system architecture and data, when little or no assumption has been made about the process parameterization itself? In this study we relax the assumption of a specific predetermined form for the process parameterizations of a rainfall-runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms than would have been chosen otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to the extent to which the data can support it.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.
2009-01-01
Increases in computational resources have allowed operational forecast centers to pursue experimental, high-resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. Aircraft, surface, and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of the size distribution and density characteristics of snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) model. Widespread snowfall developed across the region on January 22, 2007, forced by the passage of a midlatitude cyclone, and was observed by the dual-polarimetric C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for the distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present. Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
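The inverse-exponential size distributions mentioned above have the form N(D) = N0 exp(-lambda*D), so the intercept N0 and slope lambda can be recovered by a straight-line fit in log space. A minimal sketch on synthetic data; the diameters and parameter values are illustrative, not observed values:

```python
import numpy as np

def fit_inverse_exponential(diameters, concentrations):
    """Fit N(D) = N0 * exp(-lam * D) by linear regression in log space."""
    slope, intercept = np.polyfit(diameters, np.log(concentrations), 1)
    return np.exp(intercept), -slope  # (N0, lam)

D = np.linspace(0.1e-3, 5e-3, 50)       # particle diameters [m]
N = 8.0e6 * np.exp(-2.0e3 * D)          # synthetic spectrum: N0 = 8e6, lam = 2e3
N0, lam = fit_inverse_exponential(D, N)
```

With noisy observed spectra the same log-linear fit yields the per-level intercept and slope that can be compared against the scheme's specified constants.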
Optimal Aerosol Parameterization for Remote Sensing Retrievals
NASA Technical Reports Server (NTRS)
Newchurch, Michael J.
2004-01-01
discrepancy in the lower stratosphere is attributable to natural variation, and is also seen in comparisons between lidar and ozonesonde measurements. NO2 profiles obtained with our algorithm were compared to those obtained through the SAGE III operational algorithm and exhibited differences of 20-40%. Our retrieved profiles agree with the HALOE NO2 measurements significantly better than those of the operational retrieval. In other work (described below), we are extending our aerosol retrievals into the infrared regime and plan to perform retrievals from combined UV-visible-infrared spectra. This work will allow us to use the spectra to derive the size and composition of aerosols, and we plan to employ our algorithms in the analysis of PSC spectra. We are presently also developing a limb-scattering algorithm to retrieve aerosol data from limb measurements of solar scattered radiation.
Functional parameterization for hydraulic conductivity inversion with uncertainty quantification
NASA Astrophysics Data System (ADS)
Jiao, Jianying; Zhang, Ye
2015-05-01
Functional inversion based on local approximate solutions (LAS) is developed for steady-state flow in heterogeneous aquifers. The method employs a set of LAS of flow to impose spatial continuity of hydraulic head and Darcy fluxes in the solution domain, which are conditioned to limited measurements. Hydraulic conductivity is first parameterized as piecewise continuous, which requires the addition of a smoothness constraint to reduce inversion artifacts. Alternatively, it is formulated as piecewise constant, for which the smoothness constraint is not required, but the data requirement is much higher. Success of the inversion with both parameterizations is demonstrated for both one-dimensional synthetic examples and an oil-field permeability profile. When measurement errors are increased, estimation becomes less accurate but the solution is stable, i.e., estimation errors remain bounded. Compared to piecewise constant parameterization, piecewise continuous parameterization leads to more stable and accurate inversion. Moreover, conductivity variation can also be captured at two spatial scales reflecting sub-facies smooth-varying heterogeneity as well as abrupt changes at facies boundaries. By combining inversion with geostatistical simulation, uncertainty in the estimated conductivity and the hydraulic head field can be quantified. For a given measurement dataset, inversion accuracy and estimation uncertainty with the piecewise continuous parameterization is not sensitive to increasing conductivity contrast.
Parameterizing the power spectrum: Beyond the truncated Taylor expansion
Abazajian, Kevork; Kadota, Kenji; Stewart, Ewan D.; /KAIST, Taejon /Canadian Inst. Theor. Astrophys.
2005-07-01
The power spectrum is traditionally parameterized by a truncated Taylor series: ln P(k) = ln P_* + (n_* - 1) ln(k/k_*) + (1/2) n'_* ln^2(k/k_*). It is reasonable to truncate the Taylor series if |n'_* ln(k/k_*)| << |n_* - 1|, but not if |n'_* ln(k/k_*)| >~ |n_* - 1|. We argue that there is no good theoretical reason to prefer |n'_*| << |n_* - 1|, and show that current observations are consistent with |n'_* ln(k/k_*)| ~ |n_* - 1| even for |ln(k/k_*)| ~ 1. Thus, there are regions of parameter space, both theoretically and observationally relevant, for which the traditional truncated Taylor series parameterization is inconsistent, and hence it can lead to incorrect parameter estimations. Motivated by this, we propose a simple extension of the traditional parameterization that uses no extra parameters but, unlike the traditional approach, covers well-motivated inflationary spectra with |n'_*| ~ |n_* - 1|. Our parameterization therefore covers not only standard slow-roll inflation models but also a much wider class of inflation models. We use this parameterization to perform a likelihood analysis for the cosmological parameters.
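The truncated Taylor parameterization and its truncation criterion can be written out directly. A small sketch; the values of n_* and n'_* are illustrative, chosen so that the running term is comparable to the tilt term:

```python
import numpy as np

def ln_power_taylor(k, k_star, lnP_star, n_star, nprime_star):
    """Truncated Taylor parameterization:
    ln P(k) = ln P* + (n* - 1) ln(k/k*) + 0.5 n'* ln^2(k/k*)."""
    x = np.log(k / k_star)
    return lnP_star + (n_star - 1.0) * x + 0.5 * nprime_star * x**2

# the truncation is suspect when |n'* ln(k/k*)| ~ |n* - 1|
k_star, n_star, nprime = 0.05, 0.96, 0.04
k = k_star * np.e                       # ln(k/k*) = 1
running = abs(nprime * np.log(k / k_star))
tilt = abs(n_star - 1.0)
```

Here running and tilt are both 0.04, i.e. the second-order term contributes as much as the first, which is exactly the regime the abstract argues the truncated form handles poorly.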
Uncertainties in gas exchange parameterization during the SAGE dual-tracer experiment
NASA Astrophysics Data System (ADS)
Smith, Murray J.; Ho, David T.; Law, Cliff S.; McGregor, John; Popinet, Stéphane; Schlosser, Peter
2011-03-01
A dual-tracer experiment was carried out during SAGE using the inert tracers SF6 and 3He in order to determine the gas transfer velocity, k, at high wind speeds in the Southern Ocean. Wind speed/gas exchange parameterization is characterized by significant variability, and we examine the major measurement uncertainties that contribute to that scatter. Correction for the airflow distortion over the research vessel, as determined by computational fluid dynamics (CFD) modelling, had the effect of increasing the calculated value of k by 30%. On the short time scales of such experiments, the spatial variability of the wind field resulted in differences between ship and satellite QuikSCAT winds, which produced significant differences in transfer velocity. With such variability between wind estimates, comparisons between gas exchange parameterizations from diverse experiments should clearly be made on the basis of the same wind product. Uncertainty in mixed layer depth of ~10% arose from mixed layer deepening at high wind speed and the limited resolution of vertical sampling. However, the assumption of equal mixing of the two tracers is borne out by the experiment. Two dual-tracer releases were carried out during SAGE and showed no significant difference in transfer velocities using QuikSCAT winds, despite differences in wind history. In the SAGE experiment, duration limitation on the development of waves was shown to be an important factor for Southern Ocean waves, despite the presence of long fetches.
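The dual-tracer method infers k from the evolution of the SF6/3He ratio in the mixed layer: d/dt ln(SF6/3He) = (k_3He - k_SF6)/h, closed with Schmidt-number scaling between the two gases. A hedged sketch of that calculation; the mixed-layer depth, Schmidt numbers, and ratio time series below are illustrative placeholders, not SAGE values:

```python
import numpy as np

def k_3he_dual_tracer(h, t, ln_ratio, sc_3he, sc_sf6):
    """Transfer velocity of 3He from the time evolution of ln(SF6/3He).

    Mixed-layer budget: d/dt ln(SF6/3He) = (k_3He - k_SF6)/h, with the
    Schmidt-number scaling k_SF6 = k_3He * (Sc_SF6/Sc_3He)**-0.5.
    """
    slope = np.polyfit(t, ln_ratio, 1)[0]          # [1/day]
    return h * slope / (1.0 - np.sqrt(sc_3he / sc_sf6))

# synthetic patch: h = 40 m, ln(SF6/3He) growing at 0.05 per day
t = np.arange(0.0, 6.0)
ln_ratio = 0.05 * t
k = k_3he_dual_tracer(40.0, t, ln_ratio, sc_3he=150.0, sc_sf6=1000.0)  # m/day
```

Because the ratio cancels dilution by patch spreading, only the difference in air-sea loss between the two gases survives, which is why the mixed-layer depth h enters the estimate directly.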
Parameterization of and Brine Storage in MOR Hydrothermal Systems
NASA Astrophysics Data System (ADS)
Hoover, J.; Lowell, R. P.; Cummings, K. B.
2009-12-01
Single-pass parameterized models of high-temperature hydrothermal systems at oceanic spreading centers use observational constraints such as vent temperature, heat output, vent field area, and the area of heat extraction from the sub-axial magma chamber to deduce fundamental hydrothermal parameters such as the total mass flux Q, bulk permeability k, and the thickness of the conductive boundary layer at the base of the system, δ. Of the more than 300 known systems, constraining data are available for less than 10%. Here we use the single-pass model to estimate Q, k, and δ for all the seafloor hydrothermal systems for which the constraining data are available. Mean values of Q, k, and δ are 170 kg/s, 5.0x10-13 m2, and 20 m, respectively, similar to results obtained from the generic model. There is no apparent correlation with spreading rate. Using observed vent field lifetimes, the rate of magma replenishment can also be calculated. Essentially all high-temperature hydrothermal systems at oceanic spreading centers undergo phase separation, yielding a low-chlorinity vapor and a high-salinity brine. Some systems, such as the Main Endeavour Field on the Juan de Fuca Ridge and the 9°50'N sites on the East Pacific Rise, vent low-chlorinity vapor for many years while the high-density brine remains sequestered beneath the seafloor. In an attempt to further understand brine storage at the EPR, we used the mass flux Q determined above, time series of vent salinity and temperature, and the depth of the magma chamber to determine the rate of brine production at depth. We found thicknesses ranging from 0.32 m to ~57 m over a 1 km2 area from 1994-2002. These calculations suggest that brine may be stored within the conductive boundary layer without the need for lateral transport or removal by other means. We plan to use the numerical code FISHES to further test this idea.
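Budget relations of the kind used in single-pass models can be sketched as follows: the heat output H = c_p*Q*T_vent fixes the mass flux, and conduction across the basal boundary layer, H = lam*A_m*(T_magma - T_vent)/delta, fixes delta. This is a back-of-envelope sketch under those assumed balances; all constants below are illustrative placeholders, not values from the study:

```python
def single_pass_estimates(H, T_vent, A_m, T_magma=1200.0, c_p=6000.0, lam=2.0):
    """Back-of-envelope single-pass quantities.

    H       heat output [W]
    T_vent  vent temperature above ambient [K]
    A_m     area of heat extraction from the magma chamber [m^2]
    c_p     specific heat of the circulating fluid [J/(kg K)] (assumed)
    lam     thermal conductivity of rock [W/(m K)] (assumed)
    """
    Q = H / (c_p * T_vent)                          # mass flux [kg/s]
    delta = lam * A_m * (T_magma - T_vent) / H      # boundary layer [m]
    return Q, delta

Q, delta = single_pass_estimates(H=4.0e8, T_vent=350.0, A_m=5.0e5)
```

With these placeholder inputs Q lands near the ~170 kg/s mean quoted in the abstract, while delta is sensitive to the assumed extraction area and conductivity, which is why the observational constraints matter.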
A numerical model of aerosol scavenging: Part 1, Microphysics parameterization
Molenkamp, C.R.; Bradley, M.M.
1991-09-01
We have developed a three-dimensional numerical model (OCTET) to simulate the dynamics and microphysics of clouds and the transport, diffusion, and precipitation scavenging of aerosol particles. In this paper we describe the cloud microphysics and scavenging parameterizations. The representation of cloud microphysics is a bulk-water parameterization which includes water vapor and five types of hydrometeors (cloud droplets, rain drops, ice crystals, snow, and graupel). A parallel parameterization represents the scavenging interactions between pollutant particles and hydrometeors, including collection of particles by condensation nucleation, Brownian and phoretic attachment, and inertial capture; resuspension due to evaporation and sublimation; and transfer interactions, where particles collected by one type of hydrometeor are transferred to another type by freezing, melting, accretion, riming, and autoconversion.
Cloud-radiation interactions and their parameterization in climate models
NASA Technical Reports Server (NTRS)
1994-01-01
This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models, which met on 18-20 October 1993 in Camp Springs, Maryland, USA. It was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth's surface, and to refine the process models which are used to develop advanced cloud parameterizations.
Parameterized reduced-order models using hyper-dual numbers.
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
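Hyper-dual numbers, as referenced above, carry two infinitesimal parts and their product, so a single function evaluation returns the exact first and second derivatives needed for the parameterization, free of truncation or subtractive cancellation error. A minimal sketch of the arithmetic (addition, multiplication, and one overloaded transcendental); this is an illustration of the number system, not the report's Craig-Bampton implementation:

```python
import math

class HyperDual:
    """Minimal hyper-dual number a + b*e1 + c*e2 + d*e1*e2, e1^2 = e2^2 = 0.

    Evaluating f(HyperDual(x, 1.0, 1.0, 0.0)) yields f(x) in .f, f'(x) in
    .e1 and .e2, and f''(x) in .e12, exactly."""
    def __init__(self, f, e1=0.0, e2=0.0, e12=0.0):
        self.f, self.e1, self.e2, self.e12 = f, e1, e2, e12

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f + o.f, self.e1 + o.e1,
                         self.e2 + o.e2, self.e12 + o.e12)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f * o.f,
                         self.f * o.e1 + self.e1 * o.f,
                         self.f * o.e2 + self.e2 * o.f,
                         self.f * o.e12 + self.e1 * o.e2
                         + self.e2 * o.e1 + self.e12 * o.f)
    __rmul__ = __mul__

def sin(x):
    # chain rule through the hyper-dual parts: f' d + f'' b c in the e12 slot
    return HyperDual(math.sin(x.f),
                     math.cos(x.f) * x.e1,
                     math.cos(x.f) * x.e2,
                     math.cos(x.f) * x.e12 - math.sin(x.f) * x.e1 * x.e2)

x = HyperDual(0.5, 1.0, 1.0, 0.0)
y = sin(x)   # y.e1 = cos(0.5), y.e12 = -sin(0.5): exact 1st and 2nd derivatives
```

For example, squaring x returns 2x in the e1 part and 2 in the e12 part, the exact first and second derivatives of x^2, which is the property the parameterized-ROM derivatives rely on.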
Isogeometric analysis for parameterized LSM-based structural topology optimization
NASA Astrophysics Data System (ADS)
Wang, Yingjun; Benson, David J.
2016-01-01
In this paper, we present an accurate and efficient isogeometric topology optimization method that integrates non-uniform rational B-spline (NURBS) based isogeometric analysis and the parameterized level set method for minimal compliance problems. The same NURBS basis functions are used to parameterize the level set function and evaluate the objective function, and therefore the design variables are associated with the control points. The coefficient matrix that parameterizes the level set function is set up by a collocation method that uses the Greville abscissae. The zero-level-set boundary is obtained from the interpolation points corresponding to the vertices of the knot spans. Numerical examples demonstrate the validity and efficiency of the proposed method.
Development of a hybrid cloud parameterization for general circulation models
Kao, C.Y.J.; Kristjansson, J.E.; Langley, D.L.
1995-04-01
We have developed a cloud package with state-of-the-art physical schemes that can parameterize low-level stratus or stratocumulus, penetrative cumulus, and high-level cirrus. Such parameterizations will improve cloud simulations in general circulation models (GCMs). The principal tool in this development comprises the physically based Arakawa-Schubert scheme for convective clouds and the Sundqvist scheme for layered, nonconvective clouds. The term "hybrid" reflects the fact that the generation of high-altitude layered clouds can be associated with preexisting convective clouds. Overall, the cloud parameterization package developed should better determine cloud heating and drying effects in the thermodynamic budget, realistic precipitation patterns, cloud coverage and liquid/ice water content for radiation purposes, and the cloud-induced transport and turbulent diffusion of atmospheric trace gases.
Cloud-radiation interactions and their parameterization in climate models
1994-11-01
This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models, which met on 18-20 October 1993 in Camp Springs, Maryland, USA. It was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth's surface, and to refine the process models which are used to develop advanced cloud parameterizations.
NASA Astrophysics Data System (ADS)
Gladish, James C.; Duncan, Donald D.
2016-05-01
Liquid crystal variable retarders (LCVRs) are computer-controlled birefringent devices that contain nanometer-sized birefringent liquid crystals (LCs). These devices impart retardance effects through a global, uniform orientation change of the LCs, which is based on a user-defined drive voltage input. In other words, the LC structural organization dictates the device functionality. The LC structural organization also produces a spectral scatter component which exhibits an inverse power law dependence. We investigate LC structural organization by measuring the voltage-dependent LC spectral scattering signature with an integrating sphere and then relate this observable to a fractal-Born model based on the Born approximation and a Von Kármán spectrum. We obtain LCVR light scattering spectra at various drive voltages (i.e., different LC orientations) and then parameterize LCVR structural organization with voltage-dependent correlation lengths. The results can aid in determining performance characteristics of systems using LCVRs and can provide insight into interpreting structural organization measurements.
Berg, Larry K.; Shrivastava, ManishKumar B.; Easter, Richard C.; Fast, Jerome D.; Chapman, Elaine G.; Liu, Ying
2015-01-01
A new treatment of cloud-aerosol interactions within parameterized shallow and deep convection has been implemented in WRF-Chem that can be used to better understand the aerosol lifecycle over regional to synoptic scales. The modifications to the model to represent cloud-aerosol interactions include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages and the Kain-Fritsch cumulus parameterization, which has been modified to better represent shallow convective clouds. Preliminary testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS) as well as a high-resolution simulation that does not include parameterized convection. The simulation results are used to investigate the impact of cloud-aerosol interactions on the regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +35% for sulfate in non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem version 3.2.1 are found to account for changes in the cloud drop number concentration (CDNC) and changes in the chemical composition of cloud-drop residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to WRF-Chem version 3.5, and it is anticipated that they
Comparison of Soil Hydraulic Parameterizations for Mesoscale Meteorological Models.
NASA Astrophysics Data System (ADS)
Braun, Frank J.; Schädler, Gerd
2005-07-01
Soil water contents, calculated with seven soil hydraulic parameterizations, that is, soil hydraulic functions together with the corresponding parameter sets, are compared with observational data. The parameterizations include the Campbell/Clapp-Hornberger parameterization that is often used by meteorologists and the van Genuchten/Rawls-Brakensiek parameterization that is widespread among hydrologists. The observations include soil water contents at several soil depths and atmospheric surface data; they were obtained within the Regio Klima Projekt (REKLIP) at three sites in the Rhine Valley in southern Germany and cover up to 3 yr with 10-min temporal resolution. Simulations of 48-h episodes, as well as series of daily simulations initialized anew every 24 h and covering several years, were performed with the “VEG3D” soil-vegetation model in stand-alone mode; furthermore, 48-h episodes were simulated with the model coupled to a one-dimensional atmospheric model. For the cases and soil types considered in this paper, the van Genuchten/Rawls-Brakensiek model gives the best agreement between observed and simulated soil water contents on average. Especially during episodes with medium and high soil water content, the van Genuchten/Rawls-Brakensiek model performs better than the Campbell/Clapp-Hornberger model.
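Both families of soil hydraulic functions compared above are standard closed forms: the van Genuchten retention curve and the Campbell/Clapp-Hornberger power law. A sketch evaluating the two water retention curves side by side; the parameter values are illustrative (roughly sandy-loam-like), not those used in the study:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention curve with m = 1 - 1/n; h is suction [cm]."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def campbell(h, theta_s, h_b, b):
    """Campbell/Clapp-Hornberger curve: theta = theta_s * (h_b/h)^(1/b)
    above the air-entry suction h_b, saturated below it."""
    h = np.asarray(h, dtype=float)
    return np.where(h > h_b, theta_s * (h_b / h) ** (1.0 / b), theta_s)

h = np.logspace(0, 4, 100)                       # suction [cm]
vg = van_genuchten(h, 0.065, 0.41, 0.075, 1.89)  # illustrative sandy-loam values
cb = campbell(h, 0.41, 10.0, 4.9)
```

The key structural difference is visible near saturation: the Campbell form is flat up to the air-entry suction and then a pure power law, while the van Genuchten form bends smoothly, which is one reason the two parameterizations diverge most at medium and high soil water contents.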
IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION IN MM5
The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (~1-km horizontal grid spacing). The UCP accounts for drag ...
Parameterizations in high resolution isopycnal wind-driven ocean models
Jensen, T.G.; Randall, D.A.
1993-01-01
For the CHAMMP project, we proposed to implement and test new numerical schemes and parameterizations of boundary layer flow, and to develop and implement mixed-layer physics in an existing isopycnal model. The objectives for the proposed research were to: implement the Arakawa and Hsu scheme in an existing isopycnal model of the Indian Ocean; recode the new model for a highly parallel architecture; determine the effects of various parameterizations of islands; determine the correct lateral boundary condition for boundary layer currents, such as the Gulf Stream and other western boundary currents; and incorporate an oceanic mixed layer on top of the isopycnal deep layers. This is primarily a model development project, with emphasis on determining the influence and parameterization of narrow flows along continents and through chains of small islands on the large-scale oceanic circulation, which is resolved by climate models. The new model is based on the multi-layer FSU Indian Ocean model. Our research strategy is to: recode a one-layer version of the Indian Ocean model for a highly parallel computer; add thermodynamics to a rectangular-domain version of the new model; implement the irregular domain from the Indian Ocean model into the box model; change the numerical scheme for the continuity equation to the Arakawa and Hsu scheme; and perform parameterization experiments with various coastline and island geometries. This report discusses project progress for the period August 1, 1992 through December 31, 1992.
Validation of an Urban Parameterization in a Mesoscale Model
Leach, M.J.; Chin, H.
2001-07-19
The Atmospheric Science Division at Lawrence Livermore National Laboratory uses the Naval Research Laboratory's Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) for both operations and research. COAMPS is a non-hydrostatic model, designed as a multi-scale simulation system ranging from synoptic down to meso, storm, and local terrain scales. As model resolution increases, the forcing due to small-scale complex terrain features, including urban structures and surfaces, intensifies. An urban parameterization has been added to the Naval Research Laboratory's mesoscale model, COAMPS. The parameterization attempts to incorporate the effects of buildings and urban surfaces without explicitly resolving them, and includes modeling the mean flow to turbulence energy exchange, radiative transfer, the surface energy budget, and the addition of anthropogenic heat. The Chemical and Biological National Security Program's (CBNP) URBAN field experiment was designed to collect data to validate numerical models over a range of length and time scales. The experiment was conducted in Salt Lake City in October 2000. The scales ranged from circulation around single buildings to flow in the entire Salt Lake basin. Data from the field experiment include tracer data as well as observations of mean and turbulent atmospheric parameters. Wind and turbulence predictions from COAMPS are used to drive a Lagrangian particle model, the Livermore Operational Dispersion Integrator (LODI). Simulations with COAMPS and LODI are used to test the sensitivity to the urban parameterization. Data from the field experiment, including the tracer data and the atmospheric parameters, are also used to validate the urban parameterization.
Overview of an Urban Canopy Parameterization in COAMPS
Leach, M J; Chin, H S
2006-02-09
The Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) model (Hodur, 1997) was developed at the Naval Research Laboratory. COAMPS has been used at resolutions as small as 2 km to study the role of complex topography in generating mesoscale circulations (Doyle, 1997). The model has been adapted for use in the Atmospheric Science Division at LLNL for both research and operational use. It is a fully non-hydrostatic model with several options for turbulence parameterization, cloud processes, and radiative transfer. We have recently modified the COAMPS code to include building and other urban surface effects in the mesoscale model by incorporating an urban canopy parameterization (UCP) (Chin et al., 2005). This UCP is a modification of the original parameterization of Brown and Williams (1998), itself based on Yamada's (1982) forest canopy parameterization, and includes modification of the TKE and mean momentum equations, modification of radiative transfer, and an anthropogenic heat source. COAMPS is parallelized for both shared memory (OpenMP) and distributed memory (MPI) architectures.
Parameterization of HONO sources in Mega-Cities
NASA Astrophysics Data System (ADS)
Li, G.; Zhang, R.; Tie, X.; Molina, L. T.
2013-05-01
Nitrous acid (HONO) plays an important role in the photochemistry of the troposphere because the photolysis of HONO is a primary source of the hydroxyl radical (OH) in the early morning. However, the formation and sources of HONO are still poorly understood in the troposphere; hence the representation of HONO sources in chemical transport models (CTMs) lacks comprehensive consideration. In the present study, the observed HONO, NOx, and aerosols at the urban supersite T0 during the MCMA-2006 field campaign in Mexico City are used to interpret HONO formation in association with HONO sources suggested in the literature. HONO source parameterizations are proposed and incorporated into the WRF-CHEM model. Homogeneous sources of HONO include the reaction of NO with OH and of excited NO2 with H2O. Four heterogeneous HONO sources are considered: NO2 reaction with semivolatile organics, NO2 reaction with freshly emitted soot, and NO2 reactions on aerosol and on ground surfaces. Four cases are used in the present study to evaluate the proposed HONO parameterizations during four field campaigns in which HONO measurements are available, including MCMA-2003 and MCMA-2006 (Mexico City Metropolitan Area, Mexico), MIRAGE-2009 (Shanghai, China), and SHARP (Houston, USA). The WRF-CHEM model with the proposed HONO parameterizations performs moderately well in reproducing the observed diurnal variation of HONO concentrations, showing that the HONO parameterizations in this study are reasonable and potentially useful in improving the HONO simulation in CTMs.
CLOUD PARAMETERIZATIONS, CLOUD PHYSICS, AND THEIR CONNECTIONS: AN OVERVIEW.
LIU,Y.; DAUM,P.H.; CHAI,S.K.; LIU,F.
2002-02-12
This paper consists of three parts. The first part is concerned with the parameterization of cloud microphysics in climate models. We demonstrate the crucial importance of spectral dispersion of the cloud droplet size distribution in determining radiative properties of clouds (e.g., effective radius), and underline the necessity of specifying spectral dispersion in the parameterization of cloud microphysics. It is argued that the inclusion of spectral dispersion makes the issue of cloud parameterization essentially equivalent to that of the droplet size distribution function, bringing cloud parameterization to the forefront of cloud physics. The second part is concerned with theoretical investigations into the spectral shape of droplet size distributions in cloud physics. After briefly reviewing the mainstream theories (including entrainment and mixing theories, and stochastic theories), we discuss their deficiencies and the need for a paradigm shift from reductionist approaches to systems approaches. A systems theory that has recently been formulated by utilizing ideas from statistical physics and information theory is discussed, along with the major results derived from it. It is shown that the systems formalism not only easily explains many puzzles that have been frustrating the mainstream theories, but also reveals such new phenomena as scale-dependence of cloud droplet size distributions. The third part is concerned with the potential applications of the systems theory to the specification of spectral dispersion in terms of predictable variables and scale-dependence under different fluctuating environments.
Momentum Transport by Cumulus Clouds and its Parameterization.
NASA Astrophysics Data System (ADS)
Zhang, Guang Jun
The effect of cumulus convection on the large-scale momentum field is examined in this thesis. A parameterization scheme is developed to calculate the vertical transport of momentum by cumulus clouds. The effect of the perturbation pressure field induced by cumulus convection on the cloud momentum and its vertical transport is taken into account for the first time. It is shown that a perturbation pressure field is required to balance the irrotational component of the local Coriolis force produced by the interaction of the large-scale flow field with the cumulus-scale circulation. To facilitate quantitative evaluation of the horizontal pressure gradient force across the cloud, a simple cloud model which specifies the dynamic and thermodynamic structures in the cloud is developed. The parameterization scheme is applied to several convective events in the tropics and the midlatitudes. The first case is the average of six convective periods observed in Phase III of GATE. The second is the numerical simulation of a convective band observed in Phase II of GATE by Soong and Tao (1984). It is shown that the cloud mean wind obtained from the parameterization scheme changes significantly with height if the environmental wind has strong vertical shear. The perturbation pressure gradient force across the cloud plays an important role in changing the cloud mean momentum. The vertical transport of horizontal momentum by cumulus clouds is parameterized and compared to observations and numerical simulations. Good agreement is found between the computed and the observed/simulated cumulus effects on the momentum field in both cases. The third case is a mesoscale convective complex observed in PRE-STORM. The evolution of the storm is analyzed, and the dynamic and thermodynamic budgets are computed. Comparison between the residuals of the momentum budgets and the cumulus effects from the parameterization again shows good agreement. Sensitivity tests are performed to
NASA Astrophysics Data System (ADS)
Liu, J.; Chen, Z.; Horowitz, L. W.; Carlton, A. M. G.; Fan, S.; Cheng, Y.; Ervens, B.; Fu, T. M.; He, C.; Tao, S.
2014-12-01
Secondary organic aerosols (SOA) have a profound influence on air quality and climate, but large uncertainties exist in modeling SOA on the global scale. In this study, five SOA parameterization schemes, including a two-product model (TPM), a volatility basis set (VBS), and three cloud SOA schemes (Ervens et al. (2008, 2014), Fu et al. (2008), and He et al. (2013)), are implemented into the global chemical transport model (MOZART-4). For each scheme, model simulations are conducted with identical boundary and initial conditions. The VBS scheme produces the highest global annual SOA production (close to 35 Tg·y-1), followed by the three cloud schemes (26-30 Tg·y-1) and the TPM (23 Tg·y-1). Though sharing a similar partitioning theory with the TPM scheme, the VBS approach simulates the chemical aging of multiple generations of VOC oxidation products, resulting in a much larger SOA source, particularly from aromatic species, over Europe, the Middle East, and eastern America. The formation of SOA in the VBS, which represents the net partitioning of semi-volatile organic compounds from the vapor to the condensed phase, is highly sensitive to the aging and wet removal processes of vapor-phase organic compounds. The production of SOA from cloud processes (SOAcld) is constrained by the coincidence of liquid cloud water and water-soluble organic compounds; therefore, all cloud schemes resolve a fairly similar spatial pattern over the tropical and mid-latitude continents. The spatiotemporal diversity among SOA parameterizations is largely driven by differences in precursor inputs. Therefore, a deeper understanding of the evolution, wet removal, and phase partitioning of semi-volatile organic compounds, particularly above remote land and oceanic areas, is critical to better constrain the global-scale distribution and related climate forcing of secondary organic aerosols.
Zhang, Yang; Zhang, Xin; Wang, Kai; He, Jian; Leung, Lai-Yung R.; Fan, Jiwen; Nenes, Athanasios
2015-07-22
Aerosol activation into cloud droplets is an important process that governs aerosol indirect effects. The advanced treatment of aerosol activation by Fountoukis and Nenes (2005) and its recent updates, collectively called the FN series, have been incorporated into a newly developed regional coupled climate-air quality model based on the Weather Research and Forecasting model with the physics package of the Community Atmosphere Model version 5 (WRF-CAM5) to simulate aerosol-cloud interactions in both resolved and convective clouds. The model is applied to East Asia for two full years, 2005 and 2010. A comprehensive model evaluation is performed for model predictions of meteorological, radiative, and cloud variables, chemical concentrations, and column mass abundances against satellite data and surface observations from air quality monitoring sites across East Asia. The model performs overall well for major meteorological variables including near-surface temperature, specific humidity, wind speed, precipitation, cloud fraction, precipitable water, downward shortwave and longwave radiation, and column mass abundances of CO, SO2, NO2, HCHO, and O3 in terms of both magnitudes and spatial distributions. Larger biases exist in the predictions of surface concentrations of CO and NOx at all sites and SO2, O3, PM2.5, and PM10 concentrations at some sites, aerosol optical depth, cloud condensation nuclei over ocean, cloud droplet number concentration (CDNC), cloud liquid and ice water path, and cloud optical thickness. Compared with the default Abdul-Razzak and Ghan (2000) parameterization, simulations with the FN series produce ~107–113% higher CDNC, with half of the difference attributable to the higher aerosol activation fraction by the FN series and the remaining half due to feedbacks in subsequent cloud microphysical processes. With the higher CDNC, the FN series are more skillful in simulating cloud water path, cloud optical thickness, downward shortwave radiation
Parameterizing Coefficients of a POD-Based Dynamical System
NASA Technical Reports Server (NTRS)
Kalb, Virginia L.
2010-01-01
A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: a procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system, has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter
Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.
2009-01-01
In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top-layer soil moisture content and observed backscatter coefficients, which has mainly been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetic surface profiles, which investigates how errors in roughness parameters are introduced by standard measurement techniques, and how they propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error in the roughness parameterization, and consequently in soil moisture retrieval, are the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements, and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with a C-band configuration is generally less sensitive to inaccuracies in the roughness parameterization than retrieval with an L-band configuration. PMID:22399956
NASA Astrophysics Data System (ADS)
Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.
2014-06-01
A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Light absorption and scattering analysis can then be performed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from the stochastic and light absorption parameterizations, and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusions of BC/dust significantly reduce snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
A parameterization of effective soil temperature for microwave emission
NASA Technical Reports Server (NTRS)
Choudhury, B. J.; Schmugge, T. J.; Mo, T. (Principal Investigator)
1981-01-01
A parameterization of effective soil temperature is discussed which, when multiplied by the emissivity, gives the brightness temperature in terms of the surface (T_0) and deep (T_inf) soil temperatures as T = T_inf + C(T_0 - T_inf). A coherent radiative transfer model and a large data base of observed soil moisture and temperature profiles are used to calculate the best-fit value of the parameter C. For wavelengths of 2.8, 6.0, 11.0, 21.0, and 49.0 cm, the C values are respectively 0.802 ± 0.006, 0.667 ± 0.008, 0.480 ± 0.010, 0.246 ± 0.009, and 0.084 ± 0.005. The parameterized equation gives results which are generally within one or two percent of the exact values.
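As a quick numerical illustration of the parameterized equation above (a hedged sketch: the C values are the best-fit values quoted in this abstract, while the example surface and deep temperatures are invented):

```python
# Effective soil temperature T = T_inf + C * (T_0 - T_inf), where T_0 is the
# surface soil temperature, T_inf the deep soil temperature, and C a
# wavelength-dependent best-fit parameter (values quoted in the abstract).
BEST_FIT_C = {2.8: 0.802, 6.0: 0.667, 11.0: 0.480, 21.0: 0.246, 49.0: 0.084}

def effective_soil_temperature(t_surface_k, t_deep_k, wavelength_cm):
    c = BEST_FIT_C[wavelength_cm]
    return t_deep_k + c * (t_surface_k - t_deep_k)

# Hypothetical example: warm surface (305 K) over a cooler deep layer (290 K).
for wl in sorted(BEST_FIT_C):
    t_eff = effective_soil_temperature(305.0, 290.0, wl)
    print(f"{wl:5.1f} cm -> T_eff = {t_eff:.2f} K")
```

Note how C shrinks with wavelength: longer wavelengths sense deeper into the soil, so the effective temperature approaches the deep value T_inf.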
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Parameterized neural networks for high-energy physics
NASA Astrophysics Data System (ADS)
Baldi, Pierre; Cranmer, Kyle; Faucett, Taylor; Sadowski, Peter; Whiteson, Daniel
2016-05-01
We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results.
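A minimal sketch of the idea described above (not the authors' code: a toy logistic classifier stands in for a deep network, and the signal/background distributions and the "mass" values are invented). The physics parameter m is simply appended to the input features, so one classifier trained at several m values can interpolate to an unseen intermediate value:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(masses, n_per_mass=1000):
    """Toy data: signal x ~ N(m, 0.5), background x ~ N(0, 1); input = [x, m]."""
    xs, ys = [], []
    for m in masses:
        sig = rng.normal(m, 0.5, n_per_mass)
        bkg = rng.normal(0.0, 1.0, n_per_mass)
        x = np.concatenate([sig, bkg])
        xs.append(np.column_stack([x, np.full(2 * n_per_mass, m)]))
        ys.append(np.concatenate([np.ones(n_per_mass), np.zeros(n_per_mass)]))
    return np.vstack(xs), np.concatenate(ys)

def train(X, y, lr=0.1, steps=2000):
    """Plain logistic regression on [x, m, bias] via gradient descent."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.column_stack([X, np.ones(len(X))])
    return np.mean((Xb @ w > 0) == y)

# Train at m = 2, 3, 4 and evaluate at the intermediate value m = 2.5, which
# the classifier never saw: the parameterized input lets it interpolate.
X_train, y_train = make_data([2.0, 3.0, 4.0])
w = train(X_train, y_train)
X_test, y_test = make_data([2.5])
print(f"accuracy at unseen m = 2.5: {accuracy(w, X_test, y_test):.3f}")
```

The interpolation works here because the optimal decision boundary shifts smoothly with m, which is the situation the parameterized-classifier approach targets.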
An intracloud lightning parameterization scheme for a storm electrification model
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.; Wu, Gang; Farley, Richard D.
1992-01-01
The parameterization of an intracloud lightning discharge has been implemented in the present storm electrification model. The initiation, propagation direction, and termination of the discharge are computed using the magnitude and direction of the electric field vector as the determining criteria. The charge redistribution due to the lightning is approximated assuming the channel to be an isolated conductor with zero net charge over its entire length. Various simulations involving differing amounts of charge transferred and distribution of charges have been done. Values of charge transfer, dipole moment change, and electrical energy dissipation computed in the model are consistent with observations. The effects of the lightning-produced ions on the hydrometeor charges and electric field components depend strongly on the amount of charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge shows favorable agreement. Limitations of the parameterization scheme are discussed.
Validation of parameterization scheme for eddy diffusion from satellite data
NASA Technical Reports Server (NTRS)
Sassi, F.; Visconti, G.; Gille, J. C.
1990-01-01
The eddy diffusion coefficient K(yy) has been calculated using LIMS data for the months of December 1978 and January and February 1979. Two methods have been used. The first implements the suggestion made by Tung (1987) to parameterize the eddy transport as a diffusive process along isentropes. The second method integrates the equation relating the parcel displacements to the eddy velocity fields; it uses filtering in both the space and time domains to isolate transients and is referred to as the 'spectral method'. Results from the first method are shown to be reliable only for quiescent periods, breaking down when the meridional gradient of potential vorticity is negligible. Results from the two methods are in agreement only for very disturbed conditions, when transience is readily isolated. It is concluded that the parameterizations suggested for eddy transport and calculated in this paper may be meaningful for quiet periods, but are not reliable for unsteady and very-large-amplitude disturbances.
Parameterization of wind farms in COSMO-LM
NASA Astrophysics Data System (ADS)
Stuetz, E.; Steinfeld, G.; Heinemann, D.; Peinke, J.
2012-04-01
To examine the impact of wind farms at the mesoscale using numerical simulations, parameterizations of wind farms were implemented in a mesoscale model. In 2008/2009 the first wind farm in the German exclusive economic zone, Alpha Ventus, was built. Since then, more wind farms have been erected in the German exclusive economic zone. Wind farms with up to 80 wind turbines, covering areas of up to 66 square kilometers, are planned, partly only a few kilometers apart from one another. Such large wind farms influence the properties of the atmospheric boundary layer at the mesoscale through a reduction of the wind speed, an enhancement of the turbulent kinetic energy, and also an alteration of the wind direction. Results from models for the calculation of wakes (wake models), idealized mesoscale studies, and observations show that wind farms of this size produce wakes which can extend several tens of kilometers downstream. Mesoscale models provide the possibility to investigate the impact of such large wind farms on the atmospheric flow over a larger area and to examine the effect of wind farms under different weather conditions. For the numerical simulation the mesoscale model COSMO-LM is used. Because the individual wind turbines of a farm cannot be resolved at the large grid spacing, their effects have to be described in the numerical model with the help of a parameterization. Different parameterizations, treating a wind farm either as enhanced surface roughness or as a momentum deficit and turbulence source, are implemented into COSMO. The impact of the different wind farm parameterizations on the simulation of the atmospheric boundary layer is presented, along with first idealized simulations of wind farms; for this purpose both idealized runs and a case study were performed.
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2016-04-01
In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference: P. D. Williams, N. J. Howe, J. M. Gregory, R. S. Smith, and M. M. Joshi (2016), Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies, Journal of Climate, under revision.
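The noise definition described above (zero mean, with a prescribed amplitude and decorrelation time) can be sketched as a red-noise (AR(1)) process added to the temperature tendency. This is a generic illustration, not the authors' code; the amplitude, decorrelation time, and time step below are hypothetical values:

```python
import numpy as np

def red_noise_series(n_steps, dt, amplitude, tau, seed=0):
    """Zero-mean AR(1) ('red') noise with stationary std = amplitude and
    decorrelation time tau, of the kind added to an ocean temperature tendency."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)                        # one-step autocorrelation
    sigma_eps = amplitude * np.sqrt(1.0 - phi**2)  # keeps stationary variance fixed
    r = np.empty(n_steps)
    r[0] = amplitude * rng.standard_normal()
    for i in range(1, n_steps):
        r[i] = phi * r[i - 1] + sigma_eps * rng.standard_normal()
    return r

# Hypothetical settings: 1-day step, 10-day decorrelation time, 0.1 K/day amplitude.
noise = red_noise_series(n_steps=100_000, dt=1.0, amplitude=0.1, tau=10.0)
print(f"mean = {noise.mean():+.4f}, std = {noise.std():.4f}")
# The perturbed tendency would then be dT/dt = deterministic_tendency + noise[i].
```

Scaling the innovation by sqrt(1 - phi**2) is what lets the amplitude and decorrelation time be varied independently, as in the sensitivity suite described above.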
Improved CART Data Products and GCM Parameterization for Clouds
Kenneth Sassen
2004-08-23
Reviewed here is the history of the PI's participation in the Atmospheric Radiation Measurement (ARM) Program, with particular emphasis on research performed between 1999 and 2002, before the PI moved from the University of Utah to the University of Alaska Fairbanks. The research results are divided into the following areas: IOP research; remote sensing algorithm development using datasets and models; cirrus cloud and SCM/GCM parameterizations; student training; and publications.
Parameterization for light ion production from electromagnetic dissociation
NASA Astrophysics Data System (ADS)
Norbury, John
2014-09-01
Light ion (hydrogen and helium isotopes) production from relativistic nucleus-nucleus collisions is important in space radiation protection problems, when galactic cosmic rays interact with spacecraft. In fact, for thick spacecraft shields, such as the International Space Station, light ion and neutron production can dominate the contribution to dose equivalent. Both strong and electromagnetic interactions can contribute to light ion production. The present work extends a previous parameterization of electromagnetically produced light ions, so that particle branching ratios are described more realistically.
Contribution to the cloud droplet effective radius parameterization
Pontikis, C.; Hicks, E.
1992-11-01
An analytic cloud droplet effective radius expression is derived and validated by using field experiment microphysical data. This expression shows that the effective radius depends simultaneously upon the cloud liquid water content, droplet concentration and droplet spectral dispersion. It further suggests that the variability in these parameters present at all scales, due to turbulent mixing and secondary droplet activation, could limit the accuracy of the effective radius parameterizations used in climate models. 12 refs.
A parameterization of the depth of the entrainment zone
NASA Technical Reports Server (NTRS)
Boers, Reinout
1989-01-01
A theory of the parameterization of the entrainment zone depth has been developed based on conservation of energy. This theory suggests that the normalized entrainment zone depth is proportional to the inverse square root of the Richardson number. A comparison of this theory with atmospheric observations indicates excellent agreement. The theory does not adequately predict the laboratory data, although it improves on parcel theory, which is based on a momentum balance.
NASA Astrophysics Data System (ADS)
Savre, J.; Ekman, A. M. L.
2015-05-01
A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
Optimizing EDMF parameterization for stratocumulus-topped boundary layer
NASA Astrophysics Data System (ADS)
Jones, C. R.; Bretherton, C. S.; Witek, M. L.; Suselj, K.
2014-12-01
We present progress in the development of an Eddy Diffusion / Mass Flux (EDMF) turbulence parameterization, with the goal of improving the representation of the cloudy boundary layer in NCEP's Global Forecast System (GFS), as part of a multi-institution Climate Process Team (CPT). Current GFS versions substantially under-predict cloud amount and cloud radiative impact over much of the globe, leading to large biases in the surface and top of atmosphere energy budgets. As part of the effort to correct these biases, the CPT is developing a new EDMF turbulence scheme for GFS, in which local turbulent mixing is represented by an eddy diffusion term while nonlocal shallow convection is represented by a mass flux term. The sum of both contributions provides the total turbulent flux. Our goal is for this scheme to more skillfully simulate cloud radiative properties without negatively impacting other measures of weather forecast skill. One particular challenge faced by an EDMF parameterization is to be able to handle stratocumulus regimes as well as shallow cumulus regimes. In order to isolate the behavior of the proposed EDMF parameterization and aid in its further development, we have implemented the scheme in a portable MATLAB single column model (SCM). We use this SCM framework to optimize the simulation of stratocumulus cloud top entrainment and boundary layer decoupling.
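The EDMF decomposition described in this abstract can be written in one line; this is the generic textbook form, not the GFS implementation:

```python
def edmf_flux(K, dphi_dz, M, phi_updraft, phi_mean):
    """Total turbulent flux in an EDMF scheme:
    w'phi' = -K * dphi/dz  (local eddy-diffusion term)
             + M * (phi_updraft - phi_mean)  (nonlocal mass-flux term)."""
    ed = -K * dphi_dz                  # down-gradient local mixing
    mf = M * (phi_updraft - phi_mean)  # shallow-convective nonlocal transport
    return ed + mf
```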
Parameterization of the influence of organic surfactants on aerosol activation
NASA Astrophysics Data System (ADS)
Abdul-Razzak, Hayder; Ghan, Steven J.
2004-02-01
Surface-active organic compounds, or surfactants, can affect aerosol activation by two mechanisms: lowering surface tension and altering the bulk hygroscopicity of the particles. A numerical model has been developed to predict the activation of aerosol particles consisting of an internally uniform chemical mixture of organic surfactants and inorganic salts in a parcel of air rising adiabatically at constant speed. Equations reflecting the water balance of the air parcel were used together with a modified form of Köhler theory to model droplet nucleation while considering surface effects. We also extend a parametric representation of aerosol activation to the case of a mixture of inorganic salts and organic surfactants by modifying the Raoult term in Köhler theory (assuming additive behavior) and using a simplified relationship between surface tension and surfactant molar concentration to account for surface effects at the critical radius for activation. The close agreement (to within 10% for most and 20% for almost all conditions) between numerical and parametric results validates our modifications. Moreover, the form of the relationship is identical to an empirical relationship between surface tension and organic carbon concentration. Thus the modified form of the parameterization provides a framework that can account for the influence of observed organics on the activation of other salts. The modified form of the parameterization is tested successfully with the Po Valley model, both for a single aerosol size distribution and for three-mode size distributions for marine, rural, and urban aerosols. Further measurements are required to extend the parameterization to other organic surfactants.
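The surfactant effect enters Köhler theory through the surface tension in the Kelvin term; a textbook sketch of that dependence follows. The constants and the omission of the van't Hoff factor are simplifications, not the paper's formulation:

```python
import numpy as np

R = 8.314       # gas constant, J mol^-1 K^-1
MW_W = 0.018    # molar mass of water, kg mol^-1
RHO_W = 1000.0  # density of water, kg m^-3

def kohler_supersaturation(r, T, n_solute, sigma):
    """Equilibrium supersaturation (S - 1) over a droplet of radius r (m),
    in the standard approximate Kohler form S - 1 ~ A/r - B/r**3.
    sigma (N m^-1) is the droplet surface tension, where the surfactant
    effect acts; n_solute (mol) feeds the Raoult term."""
    A = 2.0 * sigma * MW_W / (R * T * RHO_W)           # Kelvin (curvature) term
    B = 3.0 * n_solute * MW_W / (4.0 * np.pi * RHO_W)  # Raoult (solute) term
    return A / r - B / r**3
```

Lowering sigma from the pure-water value (about 0.072 N m^-1) reduces the Kelvin term and hence the critical supersaturation, which is the activation-easing effect the parameterization captures.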
UQ-Guided Selection of Physical Parameterizations in Climate Models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Debusschere, B.; Ghan, S.; Rosa, D.; Bulaevskaya, V.; Anderson, G. J.; Chowdhary, K.; Qian, Y.; Lin, G.; Larson, V. E.; Zhang, G. J.; Randall, D. A.
2015-12-01
Given two or more parameterizations that represent the same physical process in a climate model, scientists are sometimes faced with difficult decisions about which scheme to choose for their simulations and analysis. These decisions are often based on subjective criteria, such as "which scheme is easier to use, is computationally less expensive, or produces results that look better?" Uncertainty quantification (UQ) and model selection methods can be used to objectively rank the performance of different physical parameterizations by increasing the preference for schemes that fit observational data better, while at the same time penalizing schemes that are overly complex or have excessive degrees-of-freedom. Following these principles, we are developing a perturbed-parameter UQ framework to assist in the selection of parameterizations for a climate model. Preliminary results will be presented on the application of the framework to assess the performance of two alternate schemes for simulating tropical deep convection (CLUBB-SILHS and ZM-trigmem) in the U.S. Dept. of Energy's ACME climate model. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, is supported by the DOE Office of Science through the Scientific Discovery Through Advanced Computing (SciDAC), and is released as LLNL-ABS-675799.
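The selection principle described above, rewarding fit to observations while penalizing excess degrees of freedom, is the same trade-off captured by information criteria such as the BIC; the sketch below illustrates the principle only, not the study's actual UQ metric:

```python
import numpy as np

def bic(residuals, n_params):
    """Bayesian information criterion for a Gaussian-error fit:
    lower is better; the log(n) penalty grows with parameter count."""
    n = len(residuals)
    rss = np.sum(np.square(residuals))
    return n * np.log(rss / n) + n_params * np.log(n)
```

Given identical residuals, the scheme with more free parameters scores worse, which is exactly the over-complexity penalty the abstract describes.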
Parameterization of effective ice particle size for high-latitude clouds
NASA Astrophysics Data System (ADS)
Boudala, Faisal S.; Isaac, George A.; Fu, Qiang; Cober, Stewart G.
2002-08-01
A parameterization has been developed for mean effective size Dge in terms of ice water content (IWC) and temperature using in situ measurements of ice crystal spectra, cloud particle shapes and particle cross-sectional area A from four research projects conducted in latitudes north of 45° N. The cloud microphysical measurements were made using PMS 2D optical probes, a PMS forward scattering spectrometer probe (FSSP), and Nevzorov total water and liquid water content probes. The IWCs derived from particle spectra using three different methods were compared with IWC measured with the Nevzorov probe (IWCNev). The contribution of small particles to the total mass was estimated by integrating a gamma distribution function that was fitted to match the measured FSSP concentrations. The Dge was calculated from the derived IWC and total cross-sectional area per unit volume Ac. This analysis indicates that there are significant differences among the schemes used to derive the IWC. It was found that the IWC derived using the Cunningham scheme and IWCNev have the highest correlation: r2 = 0.78. After considering small particles, the derived IWC almost matched the IWCNev. The average estimated contribution of small particles to the Ac was 43%. The average estimated contribution of small particles to the total IWC, however, was 20%. Since Dge is directly proportional to the ratio IWC/Ac, the addition of small particles reduced the derived Dge considerably. The largest changes in Dge associated with small particles, however, occur at the coldest temperature and at low IWC, reaching up to 45% for temperatures less than -25° C. Generally, Dge and IWC increase with increasing temperature. Good agreement between the parameterized Dge and the Dge derived from measurements was found when small particles were included.
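The proportionality Dge ∝ IWC/Ac that drives the small-particle effect above can be sketched directly; the shape constant c is assumed here (close to 3√3/4, a common hexagonal-habit value) and is not taken from the paper:

```python
RHO_ICE = 917.0  # bulk ice density, kg m^-3

def d_ge(iwc, ac, c=1.299):
    """Mean effective size (m) from ice water content iwc (kg m^-3) and
    total cross-sectional area per unit volume ac (m^2 m^-3), via
    Dge = c * IWC / (rho_ice * Ac). The constant c depends on the habit
    definition and is illustrative here."""
    return c * iwc / (RHO_ICE * ac)
```

Adding small particles raises Ac faster than IWC (43% vs. 20% in the abstract), so at fixed IWC a larger Ac yields a smaller Dge, reproducing the reported reduction.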
NASA Technical Reports Server (NTRS)
Yao, Mao-Sung; Stone, Peter H.
1987-01-01
The moist convection parameterization used in the GISS 3-D GCM is adapted for use in a two-dimensional (2-D) zonally averaged statistical-dynamical model. Experiments with different versions of the parameterization show that its impact on the general circulation in the 2-D model does not parallel its impact in the 3-D model unless the effect of zonal variations is parameterized in the moist convection calculations. A parameterization of the variations in moist static energy is introduced in which the temperature variations are calculated from baroclinic stability theory, and the relative humidity is assumed to be constant. Inclusion of the zonal variations of moist static energy in the 2-D moist convection parameterization allows just a fraction of a latitude circle to be unstable and enhances the amount of deep convection. This leads to a 2-D simulation of the general circulation very similar to that in the 3-D model. The experiments show that the general circulation is sensitive to the parameterized amount of deep convection in the subsident branch of the Hadley cell. The more there is, the weaker are the Hadley cell circulations and the westerly jets. The experiments also confirm the effects of momentum mixing associated with moist convection found by earlier investigators and, in addition, show that the momentum mixing weakens the Ferrel cell. An experiment in which the moist convection was removed while the hydrological cycle was retained and the eddy forcing was held fixed shows that moist convection by itself stabilizes the tropics, reduces the Hadley circulation, and reduces the maximum speeds in the westerly jets.
A parameterization method and application in breast tomosynthesis dosimetry
Li, Xinhua; Zhang, Da; Liu, Bob
2013-09-15
Purpose: To present a parameterization method based on singular value decomposition (SVD), and to provide analytical parameterization of the mean glandular dose (MGD) conversion factors from eight references for evaluating breast tomosynthesis dose in the Mammography Quality Standards Act (MQSA) protocol and in the UK, European, and IAEA dosimetry protocols.Methods: MGD conversion factor is usually listed in lookup tables for the factors such as beam quality, breast thickness, breast glandularity, and projection angle. The authors analyzed multiple sets of MGD conversion factors from the Hologic Selenia Dimensions quality control manual and seven previous papers. Each data set was parameterized using a one- to three-dimensional polynomial function of 2–16 terms. Variable substitution was used to improve accuracy. A least-squares fit was conducted using the SVD.Results: The differences between the originally tabulated MGD conversion factors and the results computed using the parameterization algorithms were (a) 0.08%–0.18% on average and 1.31% maximum for the Selenia Dimensions quality control manual, (b) 0.09%–0.66% on average and 2.97% maximum for the published data by Dance et al. [Phys. Med. Biol. 35, 1211–1219 (1990); ibid. 45, 3225–3240 (2000); ibid. 54, 4361–4372 (2009); ibid. 56, 453–471 (2011)], (c) 0.74%–0.99% on average and 3.94% maximum for the published data by Sechopoulos et al. [Med. Phys. 34, 221–232 (2007); J. Appl. Clin. Med. Phys. 9, 161–171 (2008)], and (d) 0.66%–1.33% on average and 2.72% maximum for the published data by Feng and Sechopoulos [Radiology 263, 35–42 (2012)], excluding one sample in (d) that does not follow the trends in the published data table.Conclusions: A flexible parameterization method is presented in this paper, and was applied to breast tomosynthesis dosimetry. The resultant data offer easy and accurate computations of MGD conversion factors for evaluating mean glandular breast dose in the MQSA
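A one-dimensional stand-in for the SVD-based least-squares fitting described in the abstract: a polynomial design matrix is solved through the singular value decomposition (pseudo-inverse). The paper's multi-dimensional functions and variable substitutions are not reproduced:

```python
import numpy as np

def svd_polyfit(x, y, degree):
    """Least-squares polynomial fit of y(x) solved via SVD.
    Returns coefficients in descending powers, as np.vander orders them."""
    V = np.vander(x, degree + 1)                 # design matrix [x^d ... x 1]
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    return Vt.T @ ((U.T @ y) / s)                # pseudo-inverse solution
```

On noise-free data the fit recovers the generating coefficients exactly, which is why such parameterizations can reproduce tabulated conversion factors to sub-percent accuracy.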
Scattering in Quantum Lattice Gases
NASA Astrophysics Data System (ADS)
O'Hara, Andrew; Love, Peter
2009-03-01
Quantum Lattice Gas Automata (QLGA) are of interest for their use in simulating quantum mechanics on both classical and quantum computers. QLGAs are an extension of classical Lattice Gas Automata where the constraint of unitary evolution is added. In the late 1990s, David A. Meyer as well as Bruce Boghosian and Washington Taylor produced similar models of QLGAs. We start by presenting a unified version of these models and study them from the point of view of the physics of wave-packet scattering. We show that the Meyer and Boghosian-Taylor models are actually the same basic model with slightly different parameterizations and limits. We then implement these models computationally using the Python programming language and show that QLGAs are able to replicate the analytic results of quantum mechanics (for example reflected and transmitted amplitudes for step potentials and the Klein paradox).
Adatto, Maurice A; Halachmi, Shlomit; Lapidoth, Moshe
2011-01-01
Over 50,000 new tattoos are placed each year in the United States. Studies estimate that 24% of American college students have tattoos and 10% of male American adults have a tattoo. The rising popularity of tattoos has spurred a corresponding increase in tattoo removal. Not all tattoos are placed intentionally or for aesthetic reasons, though. Traumatic tattoos due to unintentional penetration of exogenous pigments can also occur, as well as the placement of medical tattoos to mark treatment boundaries, for example in radiation therapy. Protocols for tattoo removal have evolved over history. The earliest evidence of tattoo removal attempts comes from Egyptian mummies dating to approximately 4000 BC. Ancient Greek writings describe tattoo removal with salt abrasion or with a paste containing cloves of white garlic mixed with Alexandrian cantharidin. With the advent of Q-switched lasers in the late 1960s, the outcomes of tattoo removal changed radically. The selective absorption of their light by the pigment, together with their extremely short pulse duration, has made Q-switched lasers the gold standard for tattoo removal. PMID:21865802
A stochastic parameterization for deep convection using cellular automata
NASA Astrophysics Data System (ADS)
Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.
2012-12-01
Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept, which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements to the parameterizations (for instance: Plant and Craig, 2008, Khouider et al. 2010, Frenkel et al. 2011, Bengtsson et al. 2011, but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid boxes, and for temporal memory. Thus the CA scheme used in this study contains three interesting components for representation of cumulus convection which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory, and stochasticity. The scheme is implemented in the high resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall-lines. Probabilistic evaluation demonstrates an enhanced spread in
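The three ingredients named above (lateral communication, memory, stochasticity) can be illustrated with a toy one-row cellular automaton; the update rules and probabilities are invented for illustration and bear no relation to the ALARO scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def ca_step(state, trigger, p_birth=0.3, p_survive=0.7):
    """One update of a toy stochastic CA for convection.

    state: 0/1 array of 'active convection' cells (one per grid column).
    trigger: 0/1 array of large-scale instability from the host model.
    Active cells can seed unstable neighbours (lateral communication)
    and can persist between steps (memory); both are stochastic.
    """
    neighbours = np.roll(state, 1) | np.roll(state, -1)
    birth = (neighbours & trigger) & (rng.random(state.shape) < p_birth)
    survive = state & (rng.random(state.shape) < p_survive)
    return (birth | survive).astype(int)
```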
Sensitivity analysis of volume scattering phase functions.
Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael
2016-08-01
To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions from VSF data derived from measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements using three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m^{-3}. PMID:27505819
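A common analytic stand-in when the full phase function is not measured is the Henyey-Greenstein form, one of the kinds of parameterization whose sensitivity such studies probe (its use here is illustrative; the paper's specific approximations may differ):

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein scattering phase function (sr^-1), normalized so
    its integral over the full sphere is 1. g is the asymmetry parameter:
    g = 0 gives isotropic scattering, g -> 1 strongly forward-peaked."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)
```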
Modeling the clouds on Venus: model development and improvement of a nucleation parameterization
NASA Astrophysics Data System (ADS)
Määttänen, Anni; Bekki, Slimane; Vehkamäki, Hanna; Julin, Jan; Montmessin, Franck; Ortega, Ismael K.; Lebonnois, Sébastien
2014-05-01
As both the clouds of Venus and aerosols in the Earth's stratosphere are composed of sulfuric acid droplets, we use the 1-D version of a model [1,4] developed for stratospheric aerosols and clouds to study the clouds on Venus. We have removed processes and compounds related to the stratospheric clouds so that the only species remaining are water and sulfuric acid, corresponding to the stratospheric sulfate aerosols, and we have added some key processes. The model describes microphysical processes including condensation/evaporation, and sedimentation. Coagulation, turbulent diffusion, and a parameterization for two-component nucleation [8] of water and sulfuric acid have been added in the model. Since the model describes explicitly the size distribution with a large number of size bins (50-500), it can handle multiple particle modes. The validity ranges of the existing nucleation parameterization [7] have been improved to cover a larger temperature range, and the very low relative humidity (RH) and high sulfuric acid concentrations found in the atmosphere of Venus. We have made several modifications to improve the 2002 nucleation parameterization [7], most notably ensuring that the two-component nucleation model behaves as predicted by the analytical studies at the one-component limit reached at extremely low RH. We have also chosen to use a self-consistent cluster distribution [9], constrained by scaling it to recent quantum chemistry calculations [3]. First tests of the cloud model have been carried out with temperature profiles from VIRA [2] and from the LMD Venus GCM [5], and with a compilation of water vapor and sulfuric acid profiles, as in [6]. The temperature and pressure profiles do not evolve with time, but the vapour profiles naturally change with the cloud. However, no chemistry is included for the moment, so the vapor concentrations are only dependent on the microphysical processes. The model has been run for several hundreds of Earth days to reach a
Liou, K. N.; Takano, Y.; He, Cenlin; Yang, P.; Leung, Lai-Yung R.; Gu, Y.; Lee, W.-L.
2014-06-27
A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2–5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo is reduced more in the case of multiple inclusions of BC/dust than for an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountains/snow topography.
Mechanistic Parameterization of the Kinomic Signal in Peptide Arrays
Dussaq, Alex; Anderson, Joshua C; Willey, Christopher D; Almeida, Jonas S
2016-01-01
Kinases play a role in every cellular process involved in tumorigenesis ranging from proliferation, migration, and protein synthesis to DNA repair. While genetic sequencing has identified most kinases in the human genome, it does not describe the ‘kinome’ at the level of activity of kinases against their substrate targets. An attempt to address that limitation and give researchers a more direct view of cellular kinase activity is found in the PamGene PamChip® system, which records and compares the phosphorylation of 144 tyrosine or serine/threonine peptides as they are phosphorylated by cellular kinases. Accordingly, the kinetics of this time dependent kinomic signal needs to be well understood in order to transduce a parameter set into an accurate and meaningful mathematical model. Here we report the analysis and mathematical modeling of kinomic time series, which achieves a more accurate description of the accumulation of phosphorylated product than the current model, which assumes first order enzyme-substrate kinetics. Reproducibility of the proposed solution was of particular attention. Specifically, the non-linear parameterization procedure is delivered as a public open source web application where kinomic time series can be accurately decomposed into the model’s two parameter values measuring phosphorylation rate and capacity. The ability to deliver model parameterization entirely as a client side web application is an important result on its own given increasing scientific preoccupation with reproducibility. There is also no need for a potentially transitory and opaque server-side component maintained by the authors, nor of exchanging potentially sensitive data as part of the model parameterization process since the code is transferred to the browser client where it can be inspected and executed. PMID:27601856
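The abstract describes decomposing each kinomic time series into two parameters, a phosphorylation rate and a capacity. A generic saturating curve with exactly those two parameters is sketched below; the published model's precise functional form is not reproduced here, so treat this as an assumed illustration of the rate/capacity idea:

```python
import math

def phospho_signal(t, rate, capacity):
    """Accumulated phosphorylation signal at time t under an assumed
    saturating two-parameter form P(t) = capacity * (1 - exp(-rate * t)).
    'rate' sets how fast the signal rises; 'capacity' sets its plateau."""
    return capacity * (1.0 - math.exp(-rate * t))
```

Unlike a first-order enzyme-substrate fit with a single rate constant, the explicit capacity parameter lets the plateau and the approach speed be estimated independently.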
Parameterization of the Meridional Eddy Heat and Momentum Fluxes.
NASA Astrophysics Data System (ADS)
Zou, Cheng-Zhi; Gal-Chen, Tzvi
1999-06-01
Green's eddy diffusive transfer representation is used to parameterize the meridional eddy heat flux. The structural function obtained by Branscome for the diagonal component Kyy in the tensor of the transfer coefficients is adopted. A least squares method that uses the observed data of eddy heat flux is proposed to evaluate the magnitude of Kyy and the structure of the nondiagonal component Kyz in the transfer coefficient tensor. The optimum motion characteristic at the steering level is used as a constraint for the relationship between Kyy and Kyz. The obtained magnitude of Kyy is two to three times larger than Branscome's, which was obtained in a linear analysis with the assumption of Kyz = 0. Green's vertically integrated expression for the meridional eddy momentum flux is used to test the coefficients obtained in the eddy heat flux. In this parameterization, the eddy momentum flux is related to the eddy fluxes of two conserved quantities: potential vorticity and potential temperature. The transfer coefficient is taken to be the sum of that obtained in the parameterization of eddy heat flux, plus a correction term suggested by Stone and Yao, which ensures that the global net eddy momentum transport is zero. What makes the present method attractive is that, even though only the data of eddy heat flux are used to evaluate the magnitude of the transfer coefficients, the obtained magnitude of the eddy momentum flux is in good agreement with observations. For the annual mean calculation, the obtained peak values of eddy momentum flux are 94% of the observation for the Northern Hemisphere and 101% for the Southern Hemisphere. This result significantly improves on the result of Stone and Yao, who obtained 34% for the Northern Hemisphere and 16% for the Southern Hemisphere in a similar calculation, but in which Kyz = 0 was assumed.
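The diffusive closure with a diagonal and a nondiagonal transfer-tensor component can be written compactly; the sign convention below assumes down-gradient transport, and no attempt is made to reproduce Branscome's structural functions:

```python
def eddy_heat_flux(Kyy, Kyz, dT_dy, dT_dz):
    """Meridional eddy heat flux under a diffusive-tensor closure:
    v'T' = -(Kyy * dT/dy + Kyz * dT/dz).
    Kyy is the diagonal transfer coefficient; Kyz is the nondiagonal
    component whose structure the least-squares method estimates."""
    return -(Kyy * dT_dy + Kyz * dT_dz)
```

With Kyz = 0 the closure reduces to the purely horizontal down-gradient form assumed in the earlier linear analysis.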
Data-driven RBE parameterization for helium ion beams
NASA Astrophysics Data System (ADS)
Mairani, A.; Magro, G.; Dokic, I.; Valle, S. M.; Tessonnier, T.; Galm, R.; Ciocca, M.; Parodi, K.; Ferrari, A.; Jäkel, O.; Haberer, T.; Pedroni, P.; Böhlen, T. T.
2016-01-01
Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose, and the tissue-specific parameter (α/β)_ph of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the ratios RBE_α = α_He/α_ph and R_β = β_He/β_ph as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival (RBE_10) are compared with the experimental ones. Pearson's correlation coefficients were, respectively, 0.85 and 0.84, confirming the soundness of the introduced approach. Moreover, due to the lack of experimental data at low LET, clonogenic experiments have been performed irradiating the A549 cell line with (α/β)_ph = 5.4 Gy at the entrance of a 56.4 MeV u^-1 helium beam at the Heidelberg Ion Beam Therapy Center. The proposed parameterization reproduces the measured cell survival within the experimental uncertainties. An RBE formula which depends only on dose, LET, and (α/β)_ph as input parameters is proposed, allowing a straightforward implementation in a TP system.
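Given LQ parameters for the ion and the reference photon radiation, the RBE at a given ion dose follows from equating survival levels; this is the standard LQ relation, with the ratio expressions for the helium parameters left as inputs rather than reproduced from the paper:

```python
import math

def rbe(d_ion, alpha_ion, beta_ion, alpha_ph, beta_ph):
    """RBE at ion dose d_ion (Gy) under the LQ model: the photon dose D
    producing the same effect, divided by d_ion. Solves
    alpha_ph*D + beta_ph*D**2 = alpha_ion*d + beta_ion*d**2 for D."""
    effect = alpha_ion * d_ion + beta_ion * d_ion**2
    d_ph = (-alpha_ph + math.sqrt(alpha_ph**2 + 4.0 * beta_ph * effect)) / (2.0 * beta_ph)
    return d_ph / d_ion
```

When the ion and photon LQ parameters coincide, the formula returns RBE = 1, a useful sanity check.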
Numerical simulations of snowfall events: Sensitivity analysis of physical parameterizations
NASA Astrophysics Data System (ADS)
Fernández-González, S.; Valero, F.; Sánchez, J. L.; Gascón, E.; López, L.; García-Ortega, E.; Merino, A.
2015-10-01
Accurate estimation of snowfall episodes several hours or even days in advance is essential to minimize risks to transport and other human activities. Every year, these episodes cause severe traffic problems on the northwestern Iberian Peninsula. In order to analyze the influence of different parameterization schemes, 15 snowfall days were analyzed with the Weather Research and Forecasting (WRF) model, defining three nested domains with resolutions of 27, 9, and 3 km. We implemented four microphysical parameterizations (WRF Single-Moment 6-class scheme, Goddard, Thompson, and Morrison) and two planetary boundary layer schemes (Yonsei University and Mellor-Yamada-Janjic), yielding eight distinct combinations. To validate model estimates, a network of 97 precipitation gauges was used, together with dichotomous data of snowfall presence/absence from snowplow requests to the emergency service of Spain and observatories of the Spanish Meteorological Agency. The results indicate that the most accurate setting of WRF for the study area was that using the Thompson microphysical parameterization and Mellor-Yamada-Janjic scheme, although the Thompson and Yonsei University combination had greater accuracy in determining the temporal distribution of precipitation over 1 day. Combining the eight deterministic members in an ensemble average improved results considerably. Further, the root mean square difference decreased markedly using a multiple linear regression as postprocessing. In addition, our method was able to provide mean ensemble precipitation and maximum expected precipitation, which can be very useful in the management of water resources. Finally, we developed an application that allows determination of the risk of snowfall above a certain threshold.
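The multiple-linear-regression postprocessing step mentioned above can be sketched as fitting weights that map ensemble members onto observations; the predictors and targets here are illustrative, not the study's actual setup:

```python
import numpy as np

def mlr_postprocess(member_forecasts, observations):
    """Fit intercept + per-member weights mapping an (n_samples, n_members)
    ensemble onto observations, then return a predictor function.
    This is the generic MLR-calibration idea, not the paper's exact model."""
    X = np.column_stack([np.ones(len(observations)), *member_forecasts.T])
    w, *_ = np.linalg.lstsq(X, observations, rcond=None)

    def predict(members):
        return np.column_stack([np.ones(len(members)), *members.T]) @ w

    return predict
```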
CCPP-ARM Parameterization Testbed Model Forecast Data
Klein, Stephen
2008-01-15
Dataset contains the NCAR CAM3 (Collins et al., 2004) and GFDL AM2 (GFDL GAMDT, 2004) forecast data at locations close to the ARM research sites. These data are generated from a series of multi-day forecasts in which both CAM3 and AM2 are initialized at 00Z every day with the ECMWF reanalysis data (ERA-40) for the years 1997 and 2000, and initialized with both the NASA DAO Reanalyses and the NCEP GDAS data for the year 2004. The DOE CCPP-ARM Parameterization Testbed (CAPT) project assesses climate models using numerical weather prediction techniques in conjunction with high quality field measurements (e.g. ARM data).
Modeling and parameterization of horizontally inhomogeneous cloud radiative properties
NASA Technical Reports Server (NTRS)
Welch, R. M.
1995-01-01
One of the fundamental difficulties in modeling cloud fields is the large variability of cloud optical properties (liquid water content, reflectance, emissivity). The stratocumulus and cirrus clouds, under special consideration for FIRE, exhibit spatial variability on scales of 1 km or less. While it is impractical to model individual cloud elements, the research direction is to model statistical ensembles of cloud elements with mean cloud properties specified. The major areas of this investigation are: (1) analysis of cloud field properties; (2) intercomparison of cloud radiative model results with satellite observations; (3) radiative parameterization of cloud fields; and (4) development of improved cloud classification algorithms.
Ricci Flow-based Spherical Parameterization and Surface Registration.
Chen, X; He, H; Zou, G; Zhang, X; Gu, X; Hua, J
2013-09-01
This paper presents an improved Euclidean Ricci flow method for spherical parameterization. We subsequently develop a scale-space processing method built upon Ricci energy to extract robust surface features for accurate surface registration. Since our method is based on the proposed Euclidean Ricci flow, it inherits the properties of Ricci flow, such as conformality, robustness, and intrinsicness, facilitating efficient and effective surface mapping. Compared with other surface registration methods using curvature or sulcal patterns, our method demonstrates a significant improvement for surface registration. In addition, Ricci energy can capture local differences for surface analysis, as shown in the experiments and applications. PMID:24019739
Improving bulk microphysics parameterizations in simulations of aerosol effects
NASA Astrophysics Data System (ADS)
Wang, Yuan; Fan, Jiwen; Zhang, Renyi; Leung, L. Ruby; Franklin, Charmaine
2013-06-01
To improve the microphysical parameterizations for simulations of aerosol effects in regional and global climate models, the Morrison double-moment bulk microphysical scheme presently implemented in the Weather Research and Forecasting model is modified by replacing the prescribed aerosols in the original bulk scheme (Bulk-OR) with a prognostic double-moment aerosol representation to predict both aerosol number concentration and mass mixing ratio (Bulk-2M). Sensitivity modeling experiments are performed for two distinct cloud regimes: maritime warm stratocumulus clouds (Sc) over the southeast Pacific Ocean from the VOCALS project and continental deep convective clouds in the southeast of China. The results from Bulk-OR and Bulk-2M are compared against atmospheric observations and simulations produced by a spectral bin microphysical scheme (SBM). The prescribed aerosol approach (Bulk-OR) produces unreliable aerosol and cloud properties throughout the simulation period when compared to the results from Bulk-2M and SBM, although all of the model simulations are initialized with the same aerosol concentration on the basis of the field observations. The impacts of the parameterizations of diffusional growth and autoconversion of cloud droplets, and of the selection of the embryonic raindrop radius, on the performance of the bulk microphysical scheme are also evaluated by comparing the results from the modified Bulk-2M with those from SBM simulations. Sensitivity experiments using four different types of autoconversion schemes reveal that the autoconversion parameterization is crucial in determining the raindrop number, mass concentration, and drizzle formation for warm stratocumulus clouds. An embryonic raindrop size of 40 µm is determined as a more realistic setting in the autoconversion parameterization. The saturation adjustment employed in calculating condensation/evaporation in the bulk scheme is identified as the main factor responsible for the large
Longwave radiation parameterization for UCLA/GLAS GCM
NASA Astrophysics Data System (ADS)
Harshvardhan; Corsetti, T.
1984-03-01
This document describes the parameterization of longwave radiation in the UCLA/GLAS general circulation model. Transmittances for water vapor and carbon dioxide have been computed from the work of Arking and Chou, while ozone absorptances are computed using a formula due to Rodgers. Cloudiness has been introduced into the code in a manner in which fractional cover and random or maximal overlap can be accommodated. The entire code has been written in a form that is amenable to vectorization on CYBER and CRAY computers. Sample clear-sky computations for five standard profiles using the 15- and 9-level versions of the model have been included.
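The random/maximal overlap treatment mentioned above reduces, in its simplest form, to two combination rules for layer cloud fractions. A minimal sketch (function names and example values are illustrative, not taken from the GCM code):

```python
def total_cover_random(fractions):
    """Total cloud cover under the random-overlap assumption:
    layers are statistically independent, so clear-sky fractions multiply."""
    clear = 1.0
    for f in fractions:
        clear *= (1.0 - f)
    return 1.0 - clear

def total_cover_maximal(fractions):
    """Total cloud cover under the maximal-overlap assumption:
    layers are vertically aligned, so the largest layer dominates."""
    return max(fractions) if fractions else 0.0

layers = [0.3, 0.5, 0.2]
print(total_cover_random(layers))   # 1 - 0.7 * 0.5 * 0.8, i.e. about 0.72
print(total_cover_maximal(layers))  # 0.5
```

Real schemes, including the one described here, apply such rules band by band inside the radiative transfer sweep rather than to the column as a whole.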
The causal structure of spacetime is a parameterized Randers geometry
NASA Astrophysics Data System (ADS)
Skakala, Jozef; Visser, Matt
2011-03-01
There is a well-established isomorphism between stationary four-dimensional spacetimes and three-dimensional purely spatial Randers geometries—these Randers geometries being a particular case of the more general class of three-dimensional Finsler geometries. We point out that in stably causal spacetimes, by using the (time-dependent) ADM decomposition, this result can be extended to general non-stationary spacetimes—the causal structure (conformal structure) of the full spacetime is completely encoded in a parameterized (t-dependent) class of Randers spaces, which can then be used to define a Fermat principle, and also to reconstruct the null cones and causal structure.
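The stationary half of this isomorphism can be sketched explicitly (a standard construction; the conventions below are illustrative and may differ in sign from the paper):

```latex
% Write a stationary metric in the form
ds^2 = V^2\!\left[-(dt - b_i\,dx^i)^2 + h_{ij}\,dx^i dx^j\right].
% Null curves (ds^2 = 0), taken future-directed, satisfy
dt = b_i\,dx^i + \sqrt{h_{ij}\,dx^i dx^j},
% so the causal (conformal) structure is encoded in the Randers norm
F(x, dx) = \sqrt{h_{ij}\,dx^i dx^j} + b_i\,dx^i .
```

In the stably causal case the abstract describes, $V$, $b_i$, and $h_{ij}$ acquire $t$-dependence through the ADM decomposition, yielding the parameterized ($t$-dependent) family of Randers spaces.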
Improving Bulk Microphysics Parameterizations in Simulations of Aerosol Effects
Wang, Yuan; Fan, Jiwen; Zhang, Renyi; Leung, Lai-Yung R.; Franklin, Charmaine N.
2013-06-05
To improve the microphysical parameterizations for simulations of the aerosol indirect effect (AIE) in regional and global climate models, a double-moment bulk microphysical scheme presently implemented in the Weather Research and Forecasting (WRF) model is modified and the results are compared against atmospheric observations and simulations produced by a spectral bin microphysical scheme (SBM). Rather than using prescribed aerosols as in the original bulk scheme (Bulk-OR), a prognostic double-moment aerosol representation is introduced to predict both the aerosol number concentration and mass mixing ratio (Bulk-2M). The impacts of the parameterizations of diffusional growth and autoconversion and of the selection of the embryonic raindrop radius on the performance of the bulk microphysical scheme are also evaluated. Sensitivity modeling experiments are performed for two distinct cloud regimes: maritime warm stratocumulus clouds (SC) over the southeast Pacific Ocean from the VOCALS project and continental deep convective clouds (DCC) in the southeast of China from the Department of Energy/ARM Mobile Facility (DOE/AMF) - China field campaign. The results from Bulk-2M exhibit much better agreement with those from SBM and field measurements in the cloud number concentration and effective droplet radius in both the SC and DCC cases than those from Bulk-OR. In the SC case particularly, Bulk-2M reproduces the observed drizzle precipitation, which is largely inhibited in Bulk-OR. Bulk-2M predicts enhanced precipitation and invigorated convection with increased aerosol loading in the DCC case, consistent with the SBM simulation, while Bulk-OR predicts the opposite behaviors. Sensitivity experiments using four different types of autoconversion schemes reveal that the autoconversion parameterization is crucial in determining the raindrop number, mass concentration, and drizzle formation for warm stratocumulus clouds. An embryonic raindrop size of 40 μm is determined as a more
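For context on the autoconversion sensitivity discussed above, the simplest member of this family of schemes is the Kessler-type threshold form. A minimal sketch with illustrative coefficient values (the schemes actually compared in such studies are generally more elaborate):

```python
def kessler_autoconversion(qc, qc0=0.5e-3, k=1.0e-3):
    """Kessler-type autoconversion: cloud water converts to rain
    at rate k (1/s) once the cloud-water mixing ratio qc (kg/kg)
    exceeds the threshold qc0. Returns dqr/dt in kg/kg/s.
    Coefficient values here are illustrative placeholders."""
    return k * max(qc - qc0, 0.0)

print(kessler_autoconversion(1.2e-3))  # roughly 7e-07 kg/kg/s
print(kessler_autoconversion(1.0e-4))  # 0.0: below threshold, no rain forms
```

The hard threshold is exactly the kind of behavior that makes raindrop number and drizzle onset so sensitive to the chosen scheme.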
Parameterization of interatomic potential by genetic algorithms: A case study
Ghosh, Partha S.; Arya, A.; Dey, G. K.; Ranawat, Y. S.
2015-06-24
A framework for a Genetic-Algorithm-based methodology is developed to systematically obtain and optimize parameters for interatomic force-field functions for MD simulations by fitting to a reference database. This methodology is applied to the fitting of ThO₂ (CaF₂ prototype) – a representative ceramic-based potential fuel for nuclear applications. The resulting GA-optimized parameterization of ThO₂ is able to capture basic structural, mechanical, and thermo-physical properties and also describes defect structures within the permissible range.
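The GA fitting loop can be illustrated with a toy, self-contained sketch: a Buckingham pair potential is fitted to a synthetic reference set. All parameter ranges, mutation rates, and reference data below are invented for illustration; the actual work fits a full ThO₂ property database.

```python
import math
import random

def buckingham(r, A, rho, C):
    # Buckingham pair potential, A*exp(-r/rho) - C/r**6, a common
    # functional form for ionic ceramics (units are arbitrary here)
    return A * math.exp(-r / rho) - C / r**6

# Hypothetical reference database: pair energies on a grid of separations,
# generated here from "target" parameters purely for illustration.
TARGET = (1000.0, 0.3, 20.0)
R_GRID = [1.8, 2.0, 2.4, 2.8, 3.2]
REF = [buckingham(r, *TARGET) for r in R_GRID]

def cost(params):
    # Sum of squared errors against the reference database
    return sum((buckingham(r, *params) - e) ** 2 for r, e in zip(R_GRID, REF))

def evolve(pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    def rand_ind():
        return (rng.uniform(500, 1500), rng.uniform(0.1, 0.5), rng.uniform(5, 40))
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]  # elitist truncation selection
        children = [tuple(g * (1 + rng.gauss(0, 0.05)) for g in ind)
                    for ind in survivors]  # multiplicative Gaussian mutation
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print("fitted parameters:", best, "residual:", cost(best))
```

Elitism guarantees the best individual's cost never increases, so the residual shrinks monotonically across generations.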
NASA Astrophysics Data System (ADS)
Hall, Carlton Raden
A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(lambda, m-1), diffuse backscatter b(lambda, m-1), beam attenuation alpha(lambda, m-1), and beam-to-diffuse conversion c(lambda, m-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius), and grapefruit (Citrus paradisi), sampled at Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical measurements and physical and chemical measurements of leaf thickness, leaf area, leaf moisture, and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle-adjusted leaf
Presentation covered five topics: arsenic chemistry, best available technology (BAT), surface water technology, ground water technology, and case studies of arsenic removal. The discussion on arsenic chemistry focused on the need for and method of speciation for As(III) and As(V). BAT me...
... remove a splinter, first wash your hands with soap and water. Use tweezers to grab the splinter. Carefully pull it out at the same angle it went in. If the splinter is under the skin or hard to grab: Sterilize a pin or needle by ...
When EPA sets a regulation (a maximum contaminant level) for a contaminant, it must also specify the "best available technology" (BAT) that can be used to remove the contaminant. Because the regulations apply to community water systems, the technologies selected are ones that are c...
Locally isometric and conformal parameterization of image manifold
NASA Astrophysics Data System (ADS)
Bernstein, A. V.; Kuleshov, A. P.; Yanovich, Yu. A.
2015-12-01
Images can be represented as vectors in a high-dimensional Image space with components specifying light intensities at image pixels. To avoid the `curse of dimensionality', the original high-dimensional image data are transformed into lower-dimensional features preserving certain subject-driven data properties. These properties can include `information preservation' when the constructed low-dimensional features are used instead of the original high-dimensional vectors, as well as preservation of the distances and angles between the original high-dimensional image vectors. Under the commonly used Manifold assumption that the high-dimensional image data lie on or near a certain unknown low-dimensional Image manifold embedded in an ambient high-dimensional `observation' space, constructing the lower-dimensional features amounts to constructing an Embedding mapping from the Image manifold to Feature space, which, in turn, determines a low-dimensional parameterization of the Image manifold. We propose a new geometrically motivated Embedding method which constructs a low-dimensional parameterization of the Image manifold and provides the information-preserving property as well as the locally isometric and conformal properties.
Evaluation of six parameterization approaches for the ground heat flux
NASA Astrophysics Data System (ADS)
Liebethal, C.; Foken, T.
2007-01-01
There are numerous approaches to the parameterization of the ground heat flux that use different input data, are valid for different times of the day, and deliver results of different quality. Six of these approaches are tested in this study: three approaches calculating the ground heat flux from net radiation, one approach using the turbulent sensible heat flux, one simplified in situ measurement approach, and the force-restore method. On the basis of a data set recorded during the LITFASS-2003 experiment, the strengths and weaknesses of the approaches are assessed. The quality of the best approaches (simplified measurement and force-restore) approximates that of the measured data set. An approach calculating the ground heat flux from net radiation and the diurnal amplitude of the soil surface temperature also delivers satisfactory daytime results. The remaining approaches all have such serious drawbacks that they should only be applied with care. Altogether, this study demonstrates that ground heat flux parameterization has the potential to produce results matching measured ones very well, if all conditions and restrictions of the respective approaches are taken into account.
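The simplest of the net-radiation approaches evaluated above is a linear regression of ground heat flux on net radiation. A sketch with purely illustrative coefficients (the paper's approaches, including force-restore, additionally involve soil temperature information):

```python
def ground_heat_flux(rn, a=0.3, b=-20.0):
    """Linear net-radiation parameterization of the ground heat flux,
    G = a * Rn + b (all in W/m^2). The coefficients are site- and
    time-of-day-specific; the values here are illustrative only."""
    return a * rn + b

# daytime example: Rn = 400 W/m^2
print(ground_heat_flux(400.0))  # 100.0 W/m^2
```

Because the coefficients are fitted, such a scheme inherits the restrictions the study emphasizes: it is only as good as the conditions it was calibrated under.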
Contrail Cirrus Parameterization in the UK Met Office Climate Model
NASA Astrophysics Data System (ADS)
Rap, A.; Forster, P.; Dobbie, S.
2011-12-01
Air travel and its associated emissions are growing faster than other sectors and they are predicted to contribute a significant warming of climate over the coming century. According to current best estimates, the largest single radiative forcing component associated with aviation is due to aviation-induced cloudiness (AIC), which includes contrail cirrus and changes in the natural cirrus caused by air traffic. However, there is still a high level of uncertainty associated with these, and limited estimates for the forcing of the total effect of aviation induced cloudiness exist. This study, as part of the Contrails Spreading into Cirrus (COSIC) project, aimed to build a physically based parameterization of contrails spreading into cirrus within the UK Met Office Unified Model (UM) and thus to give an independent estimate of the climate impact of AIC. In-situ observations of contrails properties and their spreading have been performed during a series of flights with the UK Facility for Airborne Atmospheric Measurements (FAAM) BAe-146 aircraft. These observations were used in the development of the parameterization, which simulates contrail formation and ageing interactively with the natural cirrus module within the UM. Based on this new parameterisation, estimates of global contrail cirrus coverage, optical depth, and radiative forcing are given, investigating also the contrail effect on the natural cirrus cloud and contrail saturation regional effects of future air traffic growth.
The Reduced RUM as a Logit Model: Parameterization and Constraints.
Chiu, Chia-Yi; Köhn, Hans-Friedrich
2016-06-01
Cognitive diagnosis models (CDMs) for educational assessment are constrained latent class models. Examinees are assigned to classes of intellectual proficiency defined in terms of cognitive skills called attributes, which an examinee may or may not have mastered. The Reduced Reparameterized Unified Model (Reduced RUM) has received considerable attention among psychometricians. Markov Chain Monte Carlo (MCMC) or Expectation Maximization (EM) are typically used for estimating the Reduced RUM. Commercial implementations of the EM algorithm are available in the latent class analysis (LCA) routines of Latent GOLD and Mplus, for example. Fitting the Reduced RUM with an LCA routine requires that it be reparameterized as a logit model, with constraints imposed on the parameters. For models involving two attributes, these have been worked out. However, for models involving more than two attributes, the parameterization and the constraints are nontrivial and currently unknown. In this article, the general parameterization of the Reduced RUM as a logit model involving any number of attributes and the associated parameter constraints are derived. As a practical illustration, the LCA routine in Mplus is used for fitting the Reduced RUM to two synthetic data sets and to a real-world data set; for comparison, the results obtained by using the MCMC implementation in OpenBUGS are also provided. PMID:25838247
Comparison of surface radiative flux parameterizations. Part II. Shortwave radiation
NASA Astrophysics Data System (ADS)
Niemelä, Sami; Räisänen, Petri; Savijärvi, Hannu
This paper presents a comparison of several shortwave (SW) downwelling radiative flux parameterizations with hourly averaged pointwise surface radiation observations made at Jokioinen and Sodankylä, Finland, in 1997. Both clear and cloudy conditions are considered. The clear-sky comparisons included six simple SW parameterizations, which use screen-level input data, and three radiation schemes from numerical weather prediction (NWP) models: the former European Centre for Medium-Range Weather Forecasts (ECMWF) scheme, the Deutscher Wetterdienst (DWD) scheme, and the High Resolution Limited Area Model (HIRLAM) scheme. Atmospheric-sounding profiles were used as input for the NWP schemes. For the cases with clouds, three simple cloud correction methods (mainly dependent on the total cloud cover) were tested. In the SW clear-sky comparisons, the relatively simple scheme by Iqbal provided the best results, surprisingly outperforming even the NWP radiation models. Simple cloud corrections performed poorly in the SW region. Out of these schemes, a new cloud correction method developed using the present data provided the best results.
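A minimal sketch of the two ingredients compared here: a simple clear-sky scheme driven by solar zenith angle and a bulk transmittance, plus a total-cloud-cover correction of the Kasten-Czeplak type. The coefficient values are illustrative; the Iqbal scheme favored by the paper is considerably more detailed.

```python
import math

S0 = 1361.0  # solar constant, W/m^2

def clear_sky_sw(zenith_deg, tau=0.75):
    """Very simple clear-sky downwelling SW: S0 * cos(Z) * tau**m,
    with air mass m approximated by 1/cos(Z); tau is a bulk
    atmospheric transmittance (illustrative value)."""
    mu = math.cos(math.radians(zenith_deg))
    if mu <= 0.0:
        return 0.0  # sun below the horizon
    return S0 * mu * tau ** (1.0 / mu)

def cloud_corrected_sw(sw_clear, n, a=0.75, b=3.4):
    """Kasten-Czeplak-style correction by total cloud cover n in [0, 1]."""
    return sw_clear * (1.0 - a * n ** b)

clear = clear_sky_sw(60.0)
print(clear, cloud_corrected_sw(clear, 0.5))
```

The cloud correction's dependence on total cloud cover alone is precisely why such schemes perform poorly compared with band-resolved NWP radiation codes.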
Evaluation of a New Parameterization for Fair-Weather Cumulus
Berg, Larry K.; Stull, Roland B.
2006-05-25
A new parameterization for boundary layer cumulus clouds, called the cumulus potential (CuP) scheme, is introduced. This scheme uses joint probability density functions (JPDFs) of virtual potential temperature and water-vapor mixing ratio, as well as the mean vertical profiles of virtual potential temperature, to predict the amount and size distribution of boundary layer cloud cover. This model considers the diversity of air parcels over a heterogeneous surface, and recognizes that some parcels rise above their lifting condensation level to become cumulus, while other parcels might rise as clear updrafts. This model has several unique features: 1) surface heterogeneity is represented using the boundary layer JPDF of virtual potential temperature versus water-vapor mixing ratio, 2) clear and cloudy thermals are allowed to coexist at the same altitude, and 3) a range of cloud-base heights, cloud-top heights, and cloud thicknesses is predicted within any one cloud field, as observed. Using data from Boundary Layer Experiment 1996 and a model intercomparison study using large eddy simulation (LES) based on the Barbados Oceanographic and Meteorological Experiment (BOMEX), it is shown that the CuP model does a good job predicting cloud-base height and cloud-top height. The model also shows promise in predicting cloud cover, and is found to give better cloud-cover estimates than three other cumulus parameterizations: one based on relative humidity, a statistical scheme based on the saturation deficit, and a slab model.
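The core CuP idea, deciding cloudiness parcel by parcel from a distribution of surface-layer properties, can be caricatured in a few lines. The distributions, Espy's LCL approximation, and all numbers below are illustrative stand-ins for the scheme's observed JPDFs of virtual potential temperature and mixing ratio.

```python
import random

def cup_cloud_fraction(n=10000, zi=1500.0, seed=0):
    """Toy sketch of the CuP idea: draw parcels from a joint distribution
    of temperature and dewpoint (independent Gaussians here for brevity),
    lift each parcel to its lifting condensation level (LCL), and count
    parcels whose LCL lies below the boundary-layer top zi as cloudy."""
    rng = random.Random(seed)
    cloudy = 0
    for _ in range(n):
        t = rng.gauss(25.0, 1.0)          # parcel temperature, deg C
        td = rng.gauss(14.0, 1.5)         # parcel dewpoint, deg C
        z_lcl = 125.0 * max(t - td, 0.0)  # Espy's approximation, m
        if z_lcl < zi:
            cloudy += 1                   # cloudy thermal
    return cloudy / n                     # remainder rise as clear updrafts

print(cup_cloud_fraction())
```

Because each parcel is classified individually, clear and cloudy thermals naturally coexist, and the spread of sampled LCLs yields a range of cloud-base heights, mirroring features 2) and 3) above.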
Transient Storage Parameterization of Wetland-dominated Stream Reaches
NASA Astrophysics Data System (ADS)
Wilderotter, S. M.; Lightbody, A.; Kalnejais, L. H.; Wollheim, W. M.
2014-12-01
Current understanding of the importance of transient storage in fluvial wetlands is limited. Wetlands that have higher connectivity to the main stream channel are important because they have the potential to retain more nitrogen within the river system than wetlands that receive little direct stream discharge. In this study, we investigated how stream water accesses adjacent fluvial wetlands in New England coastal watersheds to improve parameterization in network-scale models. Breakthrough curves of Rhodamine WT were collected for eight wetlands in the Ipswich and Parker (MA) and Lamprey River (NH) watersheds, USA. The curves were inverse modeled using STAMMT-L to optimize the connectivity and size parameters for each reach. Two approaches were tested: a single dominant storage zone, and a range of storage zones represented using a power-law distribution of storage zone connectivity. Multiple linear regression analyses were conducted to relate transient storage parameters to stream discharge, area, length-to-width ratio, and reach slope. The resulting regressions will enable more accurate parameterization of surface-water transient storage in network-scale models.
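The single-dominant-storage-zone variant tested above corresponds, stripped of advection and dispersion, to a first-order exchange between the channel and the storage zone. A forward-Euler sketch (the parameter values are invented, not fitted values from the study):

```python
def storage_exchange(c0=1.0, alpha=1e-4, ratio=0.2, dt=10.0, steps=5000):
    """Single-storage-zone exchange (OTIS/STAMMT-L-like, with transport
    terms omitted):
        dC/dt  = -alpha * (C - Cs)
        dCs/dt =  alpha * (A/As) * (C - Cs)
    where alpha (1/s) is the exchange coefficient and ratio = A/As is
    the channel-to-storage cross-sectional area ratio."""
    c, cs = c0, 0.0
    for _ in range(steps):
        diff = c - cs
        c += dt * (-alpha * diff)          # tracer leaves the channel...
        cs += dt * (alpha * ratio * diff)  # ...and accumulates in storage
    return c, cs

c, cs = storage_exchange()
print(c, cs)  # the two concentrations relax toward a common equilibrium
```

The quantity C + Cs/(A/As) is conserved by construction, which is a convenient sanity check when fitting alpha and As to breakthrough-curve tails.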
Evaluation of an Urban Canopy Parameterization in a Mesoscale Model
Chin, H S; Leach, M J; Sugiyama, G A; Leone, Jr., J M; Walker, H; Nasstrom, J; Brown, M J
2004-03-18
A modified urban canopy parameterization (UCP) is developed and evaluated in a three-dimensional mesoscale model to assess the urban impact on surface and lower atmospheric properties. This parameterization accounts for the effects of building drag, turbulent production, radiation balance, anthropogenic heating, and building rooftop heating/cooling. USGS land-use data are also utilized to derive the urban infrastructure and urban surface properties needed for driving the UCP. An intensive observational period with clear sky, strong ambient wind and drainage flow, and the absence of a land-lake breeze over the Salt Lake Valley, occurring on 25-26 October 2000, is selected for this study. A series of sensitivity experiments is performed to gain understanding of the urban impact in the mesoscale model. Results indicate that within the selected urban environment, urban surface characteristics and anthropogenic heating play little role in the formation of the modeled nocturnal urban boundary layer. The rooftop effect appears to be the main contributor to this urban boundary layer. Sensitivity experiments also show that for this weak urban heat island case, the model horizontal grid resolution is important in simulating the elevated inversion layer. The root mean square errors of the predicted wind and temperature with respect to surface station measurements exhibit substantially larger discrepancies at the urban locations than at their rural counterparts. However, the close agreement of modeled tracer concentration with observations lends support to the modeled urban impact on the wind direction shift and wind drag effects.
Observational Study and Parameterization of Aerosol-fog Interactions
NASA Astrophysics Data System (ADS)
Duan, J.; Guo, X.; Liu, Y.; Fang, C.; Su, Z.; Chen, Y.
2014-12-01
Studies have shown that human activities such as increased aerosols affect fog occurrence and properties significantly, and accurate numerical fog forecasting depends, to a large extent, on the parameterization of fog microphysics and aerosol-fog interactions. Furthermore, fogs can be considered clouds near the ground, and they offer an advantage that clouds do not: they permit comprehensive long-term in-situ measurements. Knowledge gained from studying aerosol-fog interactions will therefore provide useful insights into aerosol-cloud interactions. To serve the twofold objectives of understanding and improving parameterizations of aerosol-fog and aerosol-cloud interactions, this study examines data collected from fogs, with a focus on, but not limited to, data collected in Beijing, China. Data examined include aerosol particle size distributions measured by a Passive Cavity Aerosol Spectrometer Probe (PCASP-100X), fog droplet size distributions measured by a Fog Monitor (FM-120), Cloud Condensation Nuclei (CCN), and liquid water path measured by radiometers and visibility sensors, along with meteorological variables measured by a Tethered Balloon Sounding System (XLS-II) and an Automatic Weather Station (AWS). The results will be compared with low-level clouds for similarities and differences between fogs and clouds.
A Parameterization for the Triggering of Landscape Generated Moist Convection
NASA Technical Reports Server (NTRS)
Lynn, Barry H.; Tao, Wei-Kuo; Abramopoulos, Frank
1998-01-01
A set of relatively high-resolution three-dimensional (3D) simulations was produced to investigate the triggering of moist convection by landscape-generated mesoscale circulations. The local accumulated rainfall varied monotonically (linearly) with the size of individual landscape patches, demonstrating the need to develop a trigger function that is sensitive to the size of individual patches. A new triggering function that includes the effect of landscape-generated mesoscale circulations over patches of different sizes consists of a parcel's perturbations in vertical velocity (ν₀), temperature (θ₀), and moisture (q₀). Each variable in the triggering function was also sensitive to soil moisture gradients, atmospheric initial conditions, and moist processes. The parcel's vertical velocity, temperature, and moisture perturbations were partitioned into mesoscale and turbulent components. Budget equations were derived for θ₀ and q₀. Of the many terms in this set of budget equations, the turbulent vertical flux of the mesoscale temperature and moisture contributed most to the triggering of moist convection through the impact of these fluxes on the parcel's temperature and moisture profile. These fluxes needed to be parameterized to obtain θ₀ and q₀. The mesoscale vertical velocity also affected the profile of ν₀. We used similarity theory to parameterize these fluxes as well as the parcel's mesoscale vertical velocity.
Convection Parameterization and Double ITCZ in NCAR CCSM3
NASA Astrophysics Data System (ADS)
Zhang, G. J.; Wang, H.
2006-05-01
The appearance of a spurious Inter-Tropical Convergence Zone south of the equator in the eastern and central equatorial Pacific, in addition to the observed one north of the equator, is a common problem in coupled global climate models. Previous theoretical and modeling studies suggest that convection parameterization and the unrealistic simulation of the stratus clouds off Peru are two of the factors that can lead to double ITCZ. The present study investigates this double ITCZ problem in the NCAR CCSM3. It shows that use of a modified convection scheme significantly mitigates the double ITCZ problem in boreal summer. This has a profound impact on the simulated sea surface temperature through cloud radiative forcing feedback. Both the warm bias in the southern ITCZ region and the cold bias in the cold tongue over the equator are reduced. Examination of time series of precipitation, SST and surface energy fluxes shows that depending on the convection parameterization used, double or single ITCZ emerges quickly within the first few months after the model start.
Sensitivity of liquid clouds to homogenous freezing parameterizations
Herbert, Ross J; Murray, Benjamin J; Dobbie, Steven J; Koop, Thomas
2015-01-01
Water droplets in some clouds can supercool to temperatures where homogeneous ice nucleation becomes the dominant freezing mechanism. In many cloud-resolving and mesoscale models, it is assumed that homogeneous ice nucleation in water droplets only occurs below some threshold temperature, typically set at −40°C. However, laboratory measurements show that there is a finite rate of nucleation at warmer temperatures. In this study we use a parcel model with detailed microphysics to show that cloud properties can be sensitive to homogeneous ice nucleation as warm as −30°C. Thus, homogeneous ice nucleation may be more important for cloud development, precipitation rates, and key cloud radiative parameters than is often assumed. Furthermore, we show that cloud development is particularly sensitive to the temperature dependence of the nucleation rate. In order to better constrain the parameterization of homogeneous ice nucleation, laboratory measurements are needed at both high (>−35°C) and low (<−38°C) temperatures. Key Points: (1) homogeneous freezing may be significant as warm as −30°C; (2) homogeneous freezing should not be represented by a threshold approximation; (3) there is a need for an improved parameterization of homogeneous ice nucleation. PMID:26074652
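The contrast between a threshold treatment and a rate-based one can be sketched with the standard Poisson freezing probability P = 1 − exp(−J·V·Δt). The nucleation-rate fit below is a made-up placeholder; real parameterizations (e.g. water-activity-based ones such as Koop et al.) should be used in practice.

```python
import math

def frozen_fraction(temp_c, volume_m3, dt_s):
    """Probability that one droplet freezes homogeneously within dt_s,
    P = 1 - exp(-J(T) * V * dt). J(T) here is a crude placeholder fit:
    log10 J [m^-3 s^-1] = 16 - 2 * (T + 38), i.e. J = 1e16 at -38 C,
    dropping two decades per degree of warming (illustrative only)."""
    log10_j = 16.0 - 2.0 * (temp_c + 38.0)
    j = 10.0 ** log10_j
    # expm1 keeps precision when J*V*dt is tiny
    return -math.expm1(-j * volume_m3 * dt_s)

# a 20-micron-diameter droplet over a one-second time step
v = (4.0 / 3.0) * math.pi * (10e-6) ** 3
print(frozen_fraction(-38.0, v, 1.0))  # essentially 1: below the classic threshold
print(frozen_fraction(-30.0, v, 1.0))  # tiny but finite: no sharp cutoff exists
```

A threshold scheme would return exactly 0 at −30°C; the rate-based form keeps the warm-temperature tail whose importance this study demonstrates.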
Parameterizing Ocean Eddy Transports From Surface to Bottom
NASA Astrophysics Data System (ADS)
Aiki, H.; Jacobson, T.; Yamagata, T.
2004-12-01
To improve the subgrid-scale physics of climate ocean models, in particular near the top and bottom boundaries, we consider new parameterization schemes for the extra transport velocity by waves and eddies in baroclinic instability. These come in the form of previously unreported elliptic equations, which we derive for the eddy-induced overturning stream function. They guarantee a decrease of the mean-field potential energy. Our principal example gives a relationship between the vertical shear of the overturning velocity and the buoyancy torque of the main geostrophic current. Interestingly, the parameterized velocity is nonsingular at the bottom and the sea surface, contrasting with the constant-coefficient Gent and McWilliams (1990) scheme. Idealized two-dimensional numerical experiments successfully reproduce meridional overturning circulation even when the background density gradient is uniform everywhere (the Eady problem) or when the bottom is steeply sloped. We further demonstrate that adding an eddy form drag (wave stress) term in the TRM momentum equations yields overturning of the velocity field.
Parameterization of Vegetation Aerodynamic Roughness of Natural Regions Satellite Imagery
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Crago, Richard; Stewart, Pamela
1998-01-01
Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable, vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.
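The Poisson assumption above gives a closed-form link between satellite-derived fractional cover and canopy area index. A sketch (the shape parameter and its value are illustrative placeholders for the paper's non-dimensional geometric parameter):

```python
import math

def canopy_area_index(fractional_cover):
    """Under Poisson-distributed plant centers, fractional vegetation
    cover saturates as f = 1 - exp(-C), where C is the canopy area
    index; invert to estimate C from satellite-derived cover."""
    if not 0.0 <= fractional_cover < 1.0:
        raise ValueError("fractional cover must be in [0, 1)")
    return -math.log(1.0 - fractional_cover)

def frontal_area_index(canopy_index, shape=0.5):
    """Frontal area index as a shape-parameter multiple of the canopy
    area index; 'shape' encodes plant geometry (hypothetical value)."""
    return shape * canopy_index

c = canopy_area_index(0.6)
print(c, frontal_area_index(c))
```

The frontal area index obtained this way is the quantity fed into Raupach's formulas for momentum roughness and zero-plane displacement.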
NASA Astrophysics Data System (ADS)
Yang, Z.
2011-12-01
Noah-MP, which improves over the standard Noah land surface model, is unique among all land surface models in that it has multi-parameterization options (hence Noah-MP), capable of producing thousands of parameterization schemes, in addition to its improved physical realism (multi-layer snowpack, groundwater dynamics, and vegetation dynamics). All these features are critical for ensemble hydrological simulations and climate predictions at intraseasonal to decadal timescales. This talk will focus on evaluation of the Noah-MP simulations of energy, water and carbon balances for different sub-basins in the Mississippi River in comparison with various observations. The analysis is performed on daily and monthly scales spanning from January 2000 to December 2009. We will show how different runoff schemes in Noah-MP affect the scatter patterns between runoff and water table depth and between gross primary productivity and total water storage change, a type of analysis that would help us identify the relationships between key water storage terms (groundwater, soil moisture, snow) and fluxes (GPP, sensible heat, evapotranspiration, runoff). Similarly, we want to see how other options affect the patterns, such as the beta parameter (i.e. the soil moisture parameter controlling transpiration of plants), the Ball-Berry and Jarvis options for stomatal resistance, and the dynamic vegetation options (on or off). We will compare the water storage simulations from Noah-MP, observations and other model estimates, which would help determine the strengths and limitations of the Noah-MP groundwater and hydrological schemes.
A Coordinated Effort to Improve Parameterization of High-Latitude Cloud and Radiation Processes
J. O. Pinto, A.H. Lynch
2005-12-14
The goal of this project is the development and evaluation of improved parameterization of arctic cloud and radiation processes and implementation of the parameterizations into a climate model. Our research focuses specifically on the following issues: (1) continued development and evaluation of cloud microphysical parameterizations, focusing on issues of particular relevance for mixed phase clouds; and (2) evaluation of the mesoscale simulation of arctic cloud system life cycles.
eblur/dust: a modular python approach for dust extinction and scattering
NASA Astrophysics Data System (ADS)
Corrales, Lia
2016-03-01
I will present a library of python codes -- github.com/eblur/dust -- which calculate dust scattering and extinction properties from the IR to the X-ray. The modular interface allows for custom defined dust grain size distributions, optical constants, and scattering physics. These codes are currently undergoing a major overhaul to include multiple scattering effects, parallel processing, parameterized grain size distributions beyond power law, and optical constants for different grain compositions. I use eblur/dust primarily to study dust scattering images in the X-ray, but they may be extended to applications at other wavelengths.
Properties of a parameterization of radon projection by the reconstruction on circular disc
NASA Astrophysics Data System (ADS)
Tischenko, O.; Schegerer, A.; Xu, Y.; Hoeschen, C.
2010-04-01
An angular parameterization of parallel Radon projections, referred to in this paper as the ψ-parameterization, is discussed in relation to the efficiency of reconstruction from fan data. The fact that the ψ-parameterization coincides with the equiangular fan beam parameterization allows us to develop a simple and efficient approach to reconstruction from fan data. Within this approach parallel projections are approximated by groups of semi-parallel rays. The reconstruction is carried out directly, i.e. without any modification of the original data, at a speed comparable to or even higher than that of the parallel Filtered Back Projection (FBP) algorithm.
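Because the ψ-parameterization coincides with the equiangular fan-beam one, each fan ray maps directly onto parallel-beam coordinates. A minimal sketch of the standard rebinning identity (an illustration of the geometric relationship, not code from the paper):

```python
import math

def fan_to_parallel(beta, gamma, radius):
    """Map an equiangular fan-beam ray (source angle beta, fan angle gamma)
    to its parallel-beam coordinates (projection angle theta, offset s).

    radius is the source-to-isocenter distance; angles are in radians.
    """
    theta = beta + gamma          # parallel projection angle
    s = radius * math.sin(gamma)  # signed distance of the ray from the isocenter
    return theta, s
```

Grouping rays with nearly equal theta yields the "semi-parallel" ray bundles the abstract refers to.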
NASA Astrophysics Data System (ADS)
Shi, X.; Liu, X.; Zhang, K.
2014-07-01
In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmospheric Model version 5.3 (CAM5.3), the effects of preexisting ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of the cirrus cloud rather than in the whole area of the cirrus cloud. With these improvements, the two unphysical limiters used in the representation of ice nucleation are removed. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The preexisting ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations, of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL), are implemented in CAM5.3 for comparison. In-cloud ice crystal number concentration, the percentage contribution from heterogeneous ice nucleation to total ice crystal number, and the preexisting ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial) in global annual mean column ice number concentration from the KL parameterization (3.24 × 10^6 m^-2) is markedly smaller than that from the LP (8.46 × 10^6 m^-2) and BN (5.62 × 10^6 m^-2) parameterizations. As a result, an experiment using the KL parameterization predicts a much smaller anthropogenic aerosol longwave indirect forcing (0.24 W m^-2) than one using the LP parameterization (0.46 W m^-2).
Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai
2015-01-01
In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmospheric Model version 5.3 (CAM5.3), the effects of preexisting ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of the cirrus cloud rather than in the whole area of the cirrus cloud. With these improvements, the two unphysical limiters used in the representation of ice nucleation are removed. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The preexisting ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations, of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL), are implemented in CAM5.3 for comparison. In-cloud ice crystal number concentration, the percentage contribution from heterogeneous ice nucleation to total ice crystal number, and the preexisting ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial) in global annual mean column ice number concentration from the KL parameterization (3.24 × 10^6 m^-2) is markedly smaller than that from the LP (8.46 × 10^6 m^-2) and BN (5.62 × 10^6 m^-2) parameterizations. As a result, an experiment using the KL parameterization predicts a much smaller anthropogenic aerosol longwave indirect forcing (0.24 W m^-2) than one using the LP parameterization (0.46 W m^-2).
New particle-dependent parameterizations of heterogeneous freezing processes.
NASA Astrophysics Data System (ADS)
Diehl, Karoline; Mitra, Subir K.
2014-05-01
For detailed investigations of cloud microphysical processes an adiabatic air parcel model with entrainment is used. It represents a spectral bin model which explicitly solves the microphysical equations. The initiation of the ice phase is parameterized and describes the effects of different types of ice nuclei (mineral dust, soot, biological particles) in immersion, contact, and deposition modes. As part of the research group INUIT (Ice Nuclei research UnIT), existing parameterizations have been modified for the present studies and new parameterizations have been developed, mainly on the basis of the outcome of INUIT experiments. Deposition freezing in the model is dependent on the presence of dry particles and on ice supersaturation. The description of contact freezing combines the collision kernel of dry particles with the fraction of frozen drops as a function of temperature and particle size. A new parameterization of immersion freezing has been coupled to the mass of insoluble particles contained in the drops using measured numbers of ice-active sites per unit mass. Sensitivity studies have been performed with a convective temperature and dew point profile and with two dry aerosol particle number size distributions. Single and coupled freezing processes are studied with different types of ice nuclei (e.g., bacteria, illite, kaolinite, feldspar). The strength of convection is varied so that the simulated cloud reaches different levels of temperature. As a parameter to evaluate the results, the ice water fraction is selected, defined as the ratio of the ice water content to the total water content. Ice water fractions between 0.1 and 0.9 represent mixed-phase clouds; values larger than 0.9 represent ice clouds. The results indicate that the sensitive parameters for the formation of mixed-phase and ice clouds are: 1. a broad particle number size distribution with a high number of small particles, 2. temperatures below -25°C, 3. specific mineral dust particles as ice nuclei such
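The immersion-freezing coupling described above - the frozen fraction tied to the insoluble particle mass through a measured density of ice-active sites per unit mass - can be sketched as follows. The singular-hypothesis form (one insoluble particle per drop) and the example fit constants are illustrative assumptions, not INUIT results:

```python
import math

def frozen_fraction(n_m_per_kg, particle_mass_kg):
    """Fraction of drops frozen when each drop contains one insoluble
    particle of the given mass, using a measured number of ice-active
    sites per unit particle mass, n_m(T) (singular hypothesis):
    f = 1 - exp(-n_m * m).
    """
    return 1.0 - math.exp(-n_m_per_kg * particle_mass_kg)

def n_m_example(temp_c, a=1.0e5, b=0.5):
    """Hypothetical n_m(T) rising exponentially as temperature drops.
    a [sites/kg at 0 degC] and b [1/K] are placeholder constants."""
    return a * math.exp(-b * temp_c)
```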
FSP (Full Space Parameterization), Version 2.0
Fries, G.A.; Hacker, C.J.; Pin, F.G.
1995-10-01
This paper describes the modifications made to FSPv1.0 for the Full Space Parameterization (FSP) method, a new analytical method used to resolve underspecified systems of algebraic equations. The optimized code recursively searches for the number of linearly independent vectors necessary to form the solution space. While doing this, it ensures that all possible combinations of solutions are checked, if needed, and handles complications which arise in particular cases. In addition, two particular cases which cause failure of the FSP algorithm were discovered during testing of this new code. These cases are described in the context of how they are recognized and how they are handled by the new code. Finally, testing was performed on the new code using both isolated movements and complex trajectories for various mobile manipulators.
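The underlying idea - spanning the full solution space of an underspecified system with a particular solution plus linearly independent null-space vectors - can be sketched generically via SVD (this is not the FSP recursive vector search itself; function name and approach are illustrative):

```python
import numpy as np

def full_space(A, b, tol=1e-10):
    """Parameterize all solutions of an underspecified linear system
    A x = b as x(t) = x_p + N t, where x_p is the minimum-norm
    particular solution and the columns of N span the null space of A.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x_p = np.linalg.pinv(A) @ b      # minimum-norm particular solution
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    N = vt[rank:].T                  # orthonormal basis of the null space
    return x_p, N
```

For a redundant manipulator, sweeping the free parameters t then enumerates all joint-space solutions consistent with the task constraints.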
Parameterized modeling and estimation of spatially varying optical blur
NASA Astrophysics Data System (ADS)
Simpkins, Jonathan D.; Stevenson, Robert L.
2015-02-01
Optical blur can display significant spatial variation across the image plane, even for constant camera settings and object depth. Existing solutions for representing this spatially varying blur require a dense sampling of blur kernels across the image, where each kernel is defined independently of its neighbors. This approach requires a large amount of data collection, and the estimation of the kernels is not as robust as it would be if knowledge of the relationship between adjacent kernels could be incorporated. A novel parameterized model is presented which relates the blur kernels at different locations across the image plane. The model is motivated by well-established optical models, including the Seidel aberration model. It is demonstrated that the proposed model can unify a set of hundreds of blur kernel observations across the image plane under a single 10-parameter model, and the accuracy of the model is demonstrated with simulations and measurement data collected by two separate research groups.
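The core idea of relating kernels across the image plane through a few global parameters can be illustrated with a deliberately simple, hypothetical two-parameter radial width model (far simpler than the paper's 10-parameter, Seidel-motivated model):

```python
import math

def blur_sigma(x, y, s0, s2):
    """Hypothetical model: Gaussian blur width grows quadratically with
    distance from the optical axis, loosely mimicking field-curvature/
    defocus terms. s0, s2 are the only global parameters."""
    r2 = x * x + y * y
    return s0 + s2 * r2

def gaussian_kernel(x, y, half=3):
    """Normalized (2*half+1)^2 Gaussian blur kernel at image position (x, y),
    with its width set by the shared parametric model above."""
    sigma = blur_sigma(x, y, s0=1.0, s2=0.05)
    k = [[math.exp(-(i * i + j * j) / (2.0 * sigma * sigma))
          for j in range(-half, half + 1)] for i in range(-half, half + 1)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]
```

Every kernel in the image is then determined by two numbers instead of being estimated independently, which is the robustness argument the abstract makes.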
Parameterization of ion channeling half-angles and minimum yields
NASA Astrophysics Data System (ADS)
Doyle, Barney L.
2016-03-01
An MS Excel program has been written that calculates ion channeling half-angles and minimum yields in cubic bcc, fcc and diamond lattice crystals. All of the tables and graphs in the three Ion Beam Analysis Handbooks that previously had to be looked up and read manually were programmed into Excel as handy lookup tables or, in the case of the graphs, parameterized using rather simple exponential functions with different power functions of the arguments. The program then offers an extremely convenient way to calculate axial and planar half-angles and minimum yields, and the effects of amorphous overlayers on both. The program can calculate these half-angles and minimum yields for [h k l] axes and planes up to (5 5 5). The program is open source and available at
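Such half-angle calculations start from Lindhard's characteristic axial channeling angle. A minimal sketch with the formula in the units commonly quoted in the Ion Beam Analysis handbooks (the handbook correction factors relating this to the measured half-angle are omitted here):

```python
import math

E2 = 14.4  # e^2 in eV·Angstrom (Gaussian units)

def lindhard_psi1(z1, z2, energy_ev, d_angstrom):
    """Lindhard characteristic axial channeling angle psi_1 (radians):
    psi_1 = sqrt(2 * Z1 * Z2 * e^2 / (E * d)), with ion energy E in eV
    and atomic spacing d along the row in Angstroms. The measured
    half-angle psi_1/2 is proportional to this, with a correction factor
    of order unity supplied by the handbook tables/graphs.
    """
    return math.sqrt(2.0 * z1 * z2 * E2 / (energy_ev * d_angstrom))
```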
Piecewise-quartics and exponential parameterization for interpolating reduced data
NASA Astrophysics Data System (ADS)
Kozera, R.
2016-06-01
We examine the asymptotics of a piecewise-quartic Lagrange interpolation used to fit reduced data in arbitrary Euclidean space which are sampled more-or-less uniformly. The unknown interpolation knots are guessed here according to the so-called exponential parameterization, which depends on a single parameter λ ∈ [0, 1]. In this work we demonstrate numerically an abrupt discontinuity in the quality of the discussed interpolation scheme, yielding a slow linear convergence order for all λ ∈ [0, 1). On the other hand, as is well known, the quality of the curve approximation for λ = 1 sharply increases to the fast quartic order, which can be further accelerated for special subfamilies of more-or-less uniform samplings.
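Knot guessing by exponential parameterization follows directly from its definition, t_{i+1} = t_i + |x_{i+1} - x_i|^λ, and can be sketched as:

```python
def exponential_knots(points, lam):
    """Guess interpolation knots for reduced data by the exponential
    parameterization: t_0 = 0 and t_{i+1} = t_i + |x_{i+1} - x_i|**lam,
    lam in [0, 1]. lam = 0 gives uniform knots; lam = 1 gives
    cumulative chord length.
    """
    t = [0.0]
    for p, q in zip(points, points[1:]):
        dist = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        t.append(t[-1] + dist ** lam)
    return t
```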
Parameterization of Aerosol Sinks in Chemical Transport Models
NASA Technical Reports Server (NTRS)
Colarco, Peter
2012-01-01
The modeler's point of view is that the aerosol problem is one of sources, evolution, and sinks. Relative to evolution and sink processes, enormous attention is given to the problem of aerosol sources, whether inventory based (e.g., fossil fuel emissions) or dynamic (e.g., dust, sea salt, biomass burning). On the other hand, aerosol losses in models are a major factor in controlling the aerosol distribution and lifetime. Here we shine some light on how aerosol sinks are treated in modern chemical transport models. We discuss the mechanisms of dry and wet loss processes and the parameterizations for those processes in a single model (GEOS-5). We survey the literature of other modeling studies. We additionally compare the budgets of aerosol losses in several of the ICAP models.
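Schematically, sink parameterizations enter transport models as first-order loss rates whose sum sets the aerosol lifetime. A toy sketch of how dry and wet losses combine (placeholder rates, not the GEOS-5 formulation):

```python
import math

def aerosol_lifetime(k_dry, k_wet):
    """Treat dry deposition and wet scavenging as first-order losses,
    dC/dt = -(k_dry + k_wet) * C, so the e-folding lifetime is
    tau = 1 / (k_dry + k_wet). Rates in 1/day give tau in days."""
    return 1.0 / (k_dry + k_wet)

def remaining_fraction(k_dry, k_wet, t_days):
    """Fraction of an initial burden left after t_days with no sources."""
    return math.exp(-(k_dry + k_wet) * t_days)
```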
A Step Towards an Advanced Parameterization of Cloud Microphysical Processes
NASA Astrophysics Data System (ADS)
Beheng, K. D.
2002-12-01
Consideration of cloud microphysical properties and processes in atmospheric models usually requires reliable and accurate parameterizations. Describing all hydrometeor types by size distribution functions and corresponding budget equations comprising a multitude of processes in an adapted manner is by far too costly. An alternative is to deal only with certain integrals (i.e. moments of the size spectra such as, e.g., water contents) and their tendency equations. Moreover, the parameterization formulae should comply with the natural situation of having smaller (cloud) and larger (precipitation) particles which interact by collisions in a complex way. Many years ago this idea was elaborated by Kessler (1969) for liquid (warm) clouds. Kessler presented a rate equation for the transformation of cloud water content to rainwater mass (autoconversion), which relies largely on intuition, and another one for accretion, i.e. for the increase of rainwater content by the collection of cloud droplets by raindrops, which is based on a simplistic evaluation of the collection integrals of the spectral budget equation for drops. This first approach to parameterizing the evolution of rain water from cloud water is a very important one, since almost all clouds start as liquid clouds. For a long time, and also to date, these so-called Kessler formulae were the only parameterization available for warm cloud processes. In adopting this idea, corresponding formulations have also been derived and extensively applied for mixed and ice cloud microphysics. The drawback of Kessler's formulation is that it only uses (cloud and rain) water contents, such that a differentiation between continental and maritime clouds exhibiting very different size spectra but identical water contents is not possible. To overcome this deficiency and to include typical cloud characteristics, several authors extended Kessler's idea by formulating - in addition to the rates of change of mass contents - rates for the
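Kessler's two rates depend only on the cloud and rain water contents, which is exactly the limitation discussed above. A sketch using commonly quoted textbook constants (assumed illustrative values, not tuned ones):

```python
def kessler_rates(qc, qr, k1=1.0e-3, qc0=5.0e-4, k2=2.2):
    """Kessler (1969)-type warm-rain conversion rates (kg/kg/s).

    autoconversion: k1 * (qc - qc0) once cloud water qc exceeds the
    threshold qc0 [kg/kg]; accretion: k2 * qc * qr**0.875. Note both
    rates see only the water contents qc and qr - no size-spectrum
    information enters, which is the drawback the abstract describes.
    """
    auto = k1 * max(qc - qc0, 0.0)
    accr = k2 * qc * qr ** 0.875 if qr > 0.0 else 0.0
    return auto, accr
```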
Criteria and algorithms for spectrum parameterization of MST radar signals
NASA Technical Reports Server (NTRS)
Rastogi, P. K.
1984-01-01
The power spectra S(f) of MST radar signals contain useful information about the variance of refractivity fluctuations, the mean radial velocity, and the radial velocity variance in the atmosphere. When noise and other contaminating signals are absent, these quantities can be obtained directly from the zeroth, first and second order moments of the spectra. A step-by-step procedure is outlined that can be used effectively to reduce large amounts of MST radar data - averaged periodograms measured in range and time - to a parameterized form. The parameters to which a periodogram can be reduced are outlined, and the steps in the procedure, which may be followed selectively to arrive at the final set of reduced parameters, are given. Examples of the performance of the procedure are given, and its use with other radars is commented on.
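The reduction of an averaged periodogram to its zeroth, first and second moments can be sketched as follows (a uniform noise floor is assumed and subtracted first; in practice the noise level must itself be estimated):

```python
def spectral_moments(freqs, power, noise_level=0.0):
    """Reduce a Doppler power spectrum to its first three moments:
    total signal power (zeroth), power-weighted mean frequency (first,
    a proxy for mean radial velocity), and spectral width (square root
    of the second central moment)."""
    s = [max(p - noise_level, 0.0) for p in power]  # noise subtraction
    m0 = sum(s)
    if m0 == 0.0:
        return 0.0, 0.0, 0.0
    m1 = sum(f * p for f, p in zip(freqs, s)) / m0
    var = sum((f - m1) ** 2 * p for f, p in zip(freqs, s)) / m0
    return m0, m1, var ** 0.5
```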
Parameterized Facial Expression Synthesis Based on MPEG-4
NASA Astrophysics Data System (ADS)
Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos
2002-12-01
In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become accustomed to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human computer interaction, focusing on analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions, compatible with the MPEG-4 standard.
A simple parameterization of aerosol emissions in RAMS
NASA Astrophysics Data System (ADS)
Letcher, Theodore
Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered a lot of attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was the computational expense. In fact, the computational expense associated with this scheme was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it was fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable assumption was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA), and instead treat SA as primary aerosol via short-term WRF-Chem simulations. While SA makes up a substantial portion of the total aerosol burden (much of which is made up of organic material), the representation of this process is highly complex and highly expensive within a numerical
Precisely parameterized experimental and computational models of tissue organization
Sekar, Rajesh B.; Blake, Robert; Park, JinSeok; Trayanova, Natalia A.; Tung, Leslie; Levchenko, Andre
2016-01-01
Patterns of cellular organization in diverse tissues frequently display a complex geometry and topology tightly related to the tissue function. Progressive disorganization of tissue morphology can lead to pathologic remodeling, necessitating the development of experimental and theoretical methods of analysis of the tolerance of normal tissue function to structural alterations. A systematic way to investigate the relationship of diverse cell organization to tissue function is to engineer two-dimensional cell monolayers replicating key aspects of the in vivo tissue architecture. However, it is still not clear how this can be accomplished on a tissue level scale in a parameterized fashion, allowing for a mathematically precise definition of the model tissue organization and properties down to a cellular scale with a parameter dependent gradual change in model tissue organization. Here, we describe and use a method of designing precisely parameterized, geometrically complex patterns that are then used to control cell alignment and communication of model tissues. We demonstrate direct application of this method to guiding the growth of cardiac cell cultures and developing mathematical models of cell function that correspond to the underlying experimental patterns. Several anisotropic patterned cultures spanning a broad range of multicellular organization, mimicking the cardiac tissue organization of different regions of the heart, were found to be similar to each other and to isotropic cell monolayers in terms of local cell–cell interactions, reflected in similar confluency, morphology and connexin-43 expression. However, in agreement with the model predictions, different anisotropic patterns of cell organization, paralleling in vivo alterations of cardiac tissue morphology, resulted in variable and novel functional responses with important implications for the initiation and maintenance of cardiac arrhythmias. We conclude that variations of tissue geometry and topology
Arthrodial joint markerless cross-parameterization and biomechanical visualization.
Marai, G Elisabeta; Grimm, Cindy M; Laidlaw, David H
2007-01-01
Orthopedists invest significant amounts of effort and time trying to understand the biomechanics of arthrodial (gliding) joints. Although new image acquisition and processing methods currently generate richer-than-ever geometry and kinematic data sets that are individual specific, the computational and visualization tools needed to enable the comparative analysis and exploration of these data sets lag behind. In this paper, we present a framework that enables the cross-data-set visual exploration and analysis of arthrodial joint biomechanics. Central to our approach is a computer-vision-inspired markerless method for establishing pairwise correspondences between individual-specific geometry. Manifold models are subsequently defined and deformed from one individual-specific geometry to another such that the markerless correspondences are preserved while minimizing model distortion. The resulting mutually consistent parameterization and visualization allow the users to explore the similarities and differences between two data sets and to define meaningful quantitative measures. We present two applications of this framework to human-wrist data: articular cartilage transfer from cadaver data to in vivo data and cross-data-set kinematics analysis. The method allows our users to combine complementary geometries acquired through different modalities and thus overcome current imaging limitations. The results demonstrate that the technique is useful in the study of normal and injured anatomy and kinematics of arthrodial joints. In principle, the pairwise cross-parameterization method applies to all spherical topology data from the same class and should be particularly beneficial in instances where identifying salient object features is a nontrivial task. PMID:17622690
Precisely parameterized experimental and computational models of tissue organization.
Molitoris, Jared M; Paliwal, Saurabh; Sekar, Rajesh B; Blake, Robert; Park, JinSeok; Trayanova, Natalia A; Tung, Leslie; Levchenko, Andre
2016-02-01
Patterns of cellular organization in diverse tissues frequently display a complex geometry and topology tightly related to the tissue function. Progressive disorganization of tissue morphology can lead to pathologic remodeling, necessitating the development of experimental and theoretical methods of analysis of the tolerance of normal tissue function to structural alterations. A systematic way to investigate the relationship of diverse cell organization to tissue function is to engineer two-dimensional cell monolayers replicating key aspects of the in vivo tissue architecture. However, it is still not clear how this can be accomplished on a tissue level scale in a parameterized fashion, allowing for a mathematically precise definition of the model tissue organization and properties down to a cellular scale with a parameter dependent gradual change in model tissue organization. Here, we describe and use a method of designing precisely parameterized, geometrically complex patterns that are then used to control cell alignment and communication of model tissues. We demonstrate direct application of this method to guiding the growth of cardiac cell cultures and developing mathematical models of cell function that correspond to the underlying experimental patterns. Several anisotropic patterned cultures spanning a broad range of multicellular organization, mimicking the cardiac tissue organization of different regions of the heart, were found to be similar to each other and to isotropic cell monolayers in terms of local cell-cell interactions, reflected in similar confluency, morphology and connexin-43 expression. However, in agreement with the model predictions, different anisotropic patterns of cell organization, paralleling in vivo alterations of cardiac tissue morphology, resulted in variable and novel functional responses with important implications for the initiation and maintenance of cardiac arrhythmias. We conclude that variations of tissue geometry and topology
Parameterization of tree-ring growth in Siberia
NASA Astrophysics Data System (ADS)
Tychkov, Ivan; Popkova, Margarita; Shishov, Vladimir; Vaganov, Eugene
2016-04-01
No doubt, the climate-tree growth relationship is one of the most useful and interesting subjects of study in dendrochronology. It provides information on the dependency of tree growth on the climatic environment, but also gives information about growth conditions and the whole tree-ring growth process over long-term periods. A new parameterization approach for the Vaganov-Shashkin process-based model (VS-model) is developed to describe the critical processes linking climate variables with tree-ring formation. The approach (so-called VS-Oscilloscope) is presented as computer software with a graphical interface. As with most process-based tree-ring models, the VS-model's initial purpose is to describe the variability of tree-ring radial growth due to the variability of climatic factors, but also to determine the principal factors limiting tree-ring growth. The principal factors affecting the growth rate of cambial cells in the VS-model are temperature, day light and soil moisture. Detailed testing of VS-Oscilloscope was done for the semi-arid area of southern Siberia (Khakassian region). Significant correlations between initial tree-ring chronologies and simulated tree-ring growth curves were obtained. Direct natural observations confirm the obtained simulation results, including unique growth characteristics for semi-arid habitats. New results concerning the formation of wide and narrow rings under different climate conditions are considered. By itself the new parameterization approach (VS-Oscilloscope) is a useful instrument for better understanding various processes in tree-ring formation. The work was supported by the Russian Science Foundation (RSF # 14-14-00219).
Research on aerosol profiles and parameterization scheme in Southeast China
NASA Astrophysics Data System (ADS)
Wang, Gang; Deng, Tao; Tan, Haobo; Liu, Xiantong; Yang, Honglong
2016-09-01
The vertical distribution of the aerosol extinction coefficient serves as a basis for evaluating aerosol radiative forcing and for air quality modeling. In this study, MODIS AOD data and ground-based lidar extinction coefficients were employed to verify 6 years (2009-2014) of aerosol extinction data obtained via CALIOP for Southeast China. The objective was mainly to provide a parameterization scheme for the annual and seasonal aerosol extinction profiles. The results showed that the horizontal and vertical distributions of the CALIOP extinction data were highly accurate in Southeast China. The annual average AOD below 2 km accounted for 64% of the total column, with larger proportions observed in winter (80%) and autumn (80%) and lower proportions observed in summer (70%) and spring (59%). The AOD was maximum in spring (0.58), followed by autumn and winter (0.44), and reached a minimum in summer (0.40). The near-surface extinction coefficient increased in the order summer, spring, autumn, winter. The Elterman profile is markedly lower than the profiles observed by CALIOP in Southeast China. The annual average and seasonal aerosol profiles showed an exponential distribution and could be divided into two sections, so a two-section exponential fit was used in the parameterization scheme. In the first section, the aerosol scale height reached 2,200 m, with a maximum (3,500 m) in summer and a minimum (1,230 m) in winter, meaning that the aerosol extinction decreases with height more slowly in summer and more rapidly in winter. In the second section, the aerosol scale height was maximum in spring, indicating that aerosol diffused to greater heights in spring.
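A two-section exponential profile of the kind used in such a parameterization scheme can be sketched as follows; the break height and scale heights below are placeholders, not the paper's fitted values:

```python
import math

def extinction_profile(z_m, sigma0, h1_m, h2_m, z_break_m=2000.0):
    """Two-section exponential aerosol extinction profile:
    sigma(z) = sigma0 * exp(-z / H1) below the break height, continuing
    with scale height H2 above it (continuous at the break). A larger
    scale height means extinction decreases more slowly with height."""
    if z_m <= z_break_m:
        return sigma0 * math.exp(-z_m / h1_m)
    sigma_break = sigma0 * math.exp(-z_break_m / h1_m)
    return sigma_break * math.exp(-(z_m - z_break_m) / h2_m)
```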
Parameterization of Infrared Absorption in Midlatitude Cirrus Clouds
Sassen, Kenneth; Wang, Zhien; Platt, C.M.R.; Comstock, Jennifer M.
2003-01-01
Employing a new approach based on combined Raman lidar and millimeter-wave radar measurements and a parameterization of the infrared absorption coefficient σ_a (km^-1) in terms of retrieved cloud microphysics, we derive a statistical relation between σ_a and cirrus cloud temperature. The relations σ_a = 0.3949 + 5.3886 × 10^-3 T + 1.526 × 10^-5 T^2 for ambient temperature (T, °C), and σ_a = 0.2896 + 3.409 × 10^-3 T_m for midcloud temperature (T_m, °C), are found using a second-order polynomial fit. Comparison with two σ_a versus T_m relations obtained primarily from midlatitude cirrus using the combined lidar/infrared radiometer (LIRAD) approach reveals significant differences. However, we show that this reflects both the previous convention used in curve fitting (i.e., σ_a → 0 at ≈ -80 °C), and the types of clouds included in the datasets. Without such constraints, convergence is found in the three independent remote sensing datasets within the range of conditions considered valid for cirrus (i.e., cloud optical depth ≈ 3.0 and T_m < ≈ -20 °C). Hence for completeness we also provide reanalyzed parameterizations of a visible extinction coefficient versus T_m relation for midlatitude cirrus, and a data sample involving cirrus that evolved into midlevel altostratus clouds with higher optical depths.
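The two fitted relations quoted above can be evaluated directly (coefficients taken verbatim from the abstract):

```python
def sigma_a_ambient(t_c):
    """Infrared absorption coefficient (1/km) versus ambient temperature
    (degC): second-order polynomial fit quoted in the abstract."""
    return 0.3949 + 5.3886e-3 * t_c + 1.526e-5 * t_c ** 2

def sigma_a_midcloud(tm_c):
    """Linear fit versus midcloud temperature (degC) from the same study."""
    return 0.2896 + 3.409e-3 * tm_c
```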
Synthesizing 3D Surfaces from Parameterized Strip Charts
NASA Technical Reports Server (NTRS)
Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri
2004-01-01
We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods to allow for rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows for trends and events to be identified. Strip charts can be overlaid when multiple parameters need to be visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area to illustrate the differences in the parameters and the overlaid strip charts, and it can rapidly be scanned by humans to gain insight. The selection of the third dimension must be a parallel or parameterized homogeneous resource in the target domain, defined using a finite, ordered, enumerated type, and not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark and drilling down to source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.
Mesoscale Eddy Parameterization in an Idealized Primitive Equations Model
NASA Astrophysics Data System (ADS)
Anstey, J.; Zanna, L.
2014-12-01
Large-scale ocean currents such as the Gulf Stream and Kuroshio Extension are strongly influenced by mesoscale eddies, which have spatial scales of order 10-100 km. The effects of these eddies are poorly represented in many state-of-the-art ocean general circulation models (GCMs) due to the inadequate spatial resolution of these models. In this study we examine the response of the large-scale ocean circulation to the rectified effects of eddy forcing - i.e., the role played by surface-intensified mesoscale eddies in sustaining and modulating an eastward jet that separates from an intense western boundary current (WBC). For this purpose a primitive equations ocean model (the MITgcm) in an idealized wind-forced double-gyre configuration is integrated at eddy-resolving resolution to reach a forced-dissipative equilibrium state that captures the essential dynamics of WBC-extension jets. The rectified eddy forcing is diagnosed as a stochastic function of the large-scale state, this being characterized by the manner in which potential vorticity (PV) contours become deformed. Specifically, a stochastic function based on the Laplacian of the material rate of change of PV is examined in order to compare the primitive equations results with those of a quasi-geostrophic model in which this function has shown some utility as a parameterization of eddy effects (Porta Mana and Zanna, 2014). The key question is whether an eddy parameterization based on quasi-geostrophic scaling is able to carry over to a system in which this scaling is not imposed (i.e. the primitive equations), in which unbalanced motions occur.
Nejadgholi, Isar; Caytak, Herschel; Bolic, Miodrag; Batkin, Izmail; Shirmohammadi, Shervin
2015-05-01
In several applications of bioimpedance spectroscopy, the measured spectrum is parameterized by being fitted into the Cole equation. However, the extracted Cole parameters seem to be inconsistent from one measurement session to another, which leads to a high standard deviation of extracted parameters. This inconsistency is modeled with a source of random variations added to the voltage measurement carried out in the time domain. These random variations may originate from biological variations that are irrelevant to the evidence that we are investigating. Yet, they affect the voltage measured by using a bioimpedance device, based on which magnitude and phase of impedance are calculated. By means of simulated data, we showed that Cole parameters are highly affected by this type of variation. We further showed that singular value decomposition (SVD) is an effective tool for parameterizing bioimpedance measurements, which results in more consistent parameters than Cole parameters. We propose to apply SVD as a preprocessing method to reconstruct denoised bioimpedance measurements. In order to evaluate the method, we calculated the relative difference between parameters extracted from noisy and clean simulated bioimpedance spectra. Both the mean and standard deviation of this relative difference are shown to effectively decrease when Cole parameters are extracted from preprocessed data in comparison to being extracted from raw measurements. We evaluated the performance of the proposed method in distinguishing three arm positions, for a set of experiments including eight subjects. It is shown that Cole parameters of different positions are not distinguishable when extracted from raw measurements. However, one arm position can be distinguished based on SVD scores. Moreover, all three positions are shown to be distinguished by two parameters, R0/R∞ and Fc, when Cole parameters are extracted from preprocessed measurements. These results suggest that SVD could be considered as an
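As a rough sketch of the SVD preprocessing idea, repeated noisy spectra can be denoised by truncated SVD reconstruction. The shapes, rank, spectrum model, and noise level below are assumptions for illustration, not the authors' settings.

```python
import numpy as np

# Sketch: denoise a set of repeated bioimpedance magnitude spectra by
# keeping only the dominant SVD component. All numbers are illustrative.
rng = np.random.default_rng(0)
freqs = np.logspace(3, 6, 50)            # 50 frequencies, 1 kHz - 1 MHz
true = 100.0 / (1.0 + freqs / 5e4)       # idealized magnitude curve (assumed)
X = np.tile(true, (8, 1))                # 8 repeated measurement sessions
X_noisy = X + rng.normal(0.0, 1.0, X.shape)

U, s, Vt = np.linalg.svd(X_noisy, full_matrices=False)
k = 1                                    # keep the dominant component only
X_denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

err_noisy = np.linalg.norm(X_noisy - X)
err_denoised = np.linalg.norm(X_denoised - X)
print(err_denoised < err_noisy)          # reconstruction is closer to clean data
```

Parameters (e.g. Cole parameters) extracted from `X_denoised` would then inherit the reduced session-to-session variation, which is the consistency gain the abstract reports.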
SU-E-T-597: Parameterization of the Photon Beam Dosimetry for a Commercial Linear Accelerator
Lebron, S; Lu, B; Yan, G; Kahler, D; Li, J; Barraclough, B; Liu, C
2015-06-15
Purpose: In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS-modelled data, (3) a linear accelerator's (linac) beam characteristics quality assurance process, and (4) establishing a standard data set for data comparison, etc. Parameterization of the photon beam dosimetry creates a portable data set that is easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon percentage depth doses (PDD), profiles, and total scatter output factors (Scp). Methods: Scp, PDDs and profiles for different field sizes (from 2×2 to 40×40 cm²), depths and energies were measured in a linac using a three-dimensional water tank. All data were smoothed and profile data were also centered, symmetrized and geometrically scaled. The Scp and PDD data were analyzed using exponential functions. For modelling of open and wedge field profiles, each side was divided into three regions described by exponential, sigmoid and Gaussian equations. The model's equations were chosen based on the physical principles described by these dosimetric quantities. The equations' parameters were determined using a least-squares optimization method with the minimal amount of measured data necessary. The model's accuracy was then evaluated via the calculation of absolute differences and distance-to-agreement analysis in low-gradient and high-gradient regions, respectively. Results: All differences in the PDDs' buildup and the profiles' penumbra regions were less than 2 mm and 0.5 mm, respectively. Differences in the low-gradient regions were 0.20 ± 0.20% and 0.50 ± 0.35% for PDDs and profiles, respectively. For Scp data, all differences were less than 0.5%. Conclusion: This novel analytical model with minimum measurement requirements proved to accurately
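As a minimal sketch of the curve-fitting step, the exponential falloff region of a PDD can be fitted by linear least squares after a log transform. The single-exponential form and the synthetic, noiseless data are illustrative assumptions; the paper's full model combines exponential, sigmoid, and Gaussian terms.

```python
import numpy as np

# Sketch (assumed functional form): fit D(d) = D0 * exp(-mu * d) to the
# falloff region of a percentage depth dose curve. Data are synthetic.
depth = np.linspace(5.0, 30.0, 26)            # cm, beyond the buildup region
mu_true, D0_true = 0.05, 110.0
pdd = D0_true * np.exp(-mu_true * depth)      # synthetic "measured" PDD

# Linearize ln(D) = ln(D0) - mu * d and solve the least-squares system.
A = np.vstack([np.ones_like(depth), -depth]).T
coef, *_ = np.linalg.lstsq(A, np.log(pdd), rcond=None)
D0_fit, mu_fit = np.exp(coef[0]), coef[1]
print(round(mu_fit, 4))                       # recovers the assumed 0.05
```

With noisy measurements the same normal-equations step yields the best-fit parameters in the least-squares sense, which is the portability benefit the abstract describes: the whole curve reduces to a few fitted constants.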
Parameterizations for convective transport in various cloud-topped boundary layers
NASA Astrophysics Data System (ADS)
Sikma, M.; Ouwersloot, H. G.
2015-09-01
We investigate the representation of convective transport of atmospheric compounds by boundary layer clouds. We focus on three key parameterizations that, when combined, express this transport: the area fraction of transporting clouds, the upward velocity in the cloud cores and the chemical concentrations at cloud base. The first two parameterizations combined represent the kinematic mass flux by clouds. To investigate the key parameterizations under a wide range of conditions, we use large-eddy simulation model data for 10 meteorological situations, characterized by either shallow cumulus or stratocumulus clouds. The parameterizations have not been previously tested with such large data sets. In the analysis, we show that the parameterization of the area fraction of clouds currently used in mixed-layer models is affected by boundary layer dynamics. Therefore, we (i) simplify the independent variable used for this parameterization, Q1, by considering the variability in moisture rather than in the saturation deficit and update the parameters in the parameterization to account for this simplification. We (ii) next demonstrate that the independent variable has to be evaluated locally to capture cloud presence. Furthermore, we (iii) show that the area fraction of transporting clouds is not represented by the parameterization for the total cloud area fraction, as is currently assumed in literature. To capture cloud transport, a novel active cloud area fraction parameterization is proposed. Subsequently, the scaling of the upward velocity in cloud cores by the Deardorff convective velocity scale and the parameterization for the concentration of atmospheric reactants at cloud base from literature are verified and improved by analysing six shallow cumulus cases. For the latter, we additionally discuss how the parameterization is affected by wind conditions. This study contributes to a more accurate estimation of convective transport, which occurs at sub-grid scales.
NASA Astrophysics Data System (ADS)
Scarpa, Riccardo; Thiene, Mara; Hensher, David A.
2012-01-01
Preferences for attributes of complex goods may differ substantially among members of households. Some of these goods, such as tap water, are jointly supplied at the household level. This issue of jointness poses a series of theoretical and empirical challenges to economists engaged in empirical nonmarket valuation studies. While a series of results have already been obtained in the literature, the issue of how to empirically measure these differences, and how sensitive the results are to choice of model specification from the same data, is yet to be clearly understood. In this paper we use data from a widely employed form of stated preference survey for multiattribute goods, namely choice experiments. The salient feature of the data collection is that the same choice experiment was applied to both partners of established couples. The analysis focuses on models that simultaneously handle scale as well as preference heterogeneity in marginal rates of substitution (MRS), thereby isolating true differences between members of couples in their MRS, by removing interpersonal variation in scale. The models employed are different parameterizations of the mixed logit model, including the willingness to pay (WTP)-space model and the generalized multinomial logit model. We find that in this sample there is some evidence of significant statistical differences in values between women and men, but these are of small magnitude and only apply to a few attributes.
High-precision positioning of radar scatterers
NASA Astrophysics Data System (ADS)
Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.
2016-05-01
Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ~66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
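The error-ellipsoid construction can be sketched directly from a variance-covariance matrix via eigendecomposition; the covariance values and confidence factor below are illustrative assumptions chosen to mimic the cigar shape described above, not the study's estimates.

```python
import numpy as np

# Sketch: principal axes of the 3D positioning error ellipsoid from a
# variance-covariance matrix Q. Values are invented: cm-level precision in
# azimuth and range, meter-level in cross-range.
Q = np.diag([0.01**2, 0.02**2, 2.0**2])    # variances in m^2 (assumed)

eigvals, eigvecs = np.linalg.eigh(Q)       # eigenvalues in ascending order
conf_scale = 3.368                         # sqrt of chi-square 99% quantile, 3 DOF
semi_axes = conf_scale * np.sqrt(eigvals)  # semi-axis lengths of the ellipsoid

# The longest axis dwarfs the shortest: a cigar-shaped ellipsoid.
print(semi_axes[2] / semi_axes[0] > 100.0)
```

The eigenvectors give the ellipsoid's orientation in the azimuth/range/cross-range frame; intersecting the scaled ellipsoid with a building model is then a geometric containment test.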
Xia, Xiangao
2015-01-01
Aerosols impact clear-sky surface irradiance () through the effects of scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and have been established to describe the aerosol direct radiative effect on (ADRE). However, considerable uncertainties remain associated with ADRE due to the incorrect estimation of (τa in the absence of aerosols). Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on are thoroughly considered, leading to an effective parameterization of as a nonlinear function of these three quantities. The parameterization is proven able to estimate with a mean bias error of 0.32 W m−2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from , or vice versa, show that the root-mean-square errors were 0.08 and 10.0 Wm−2, respectively. Therefore, this study establishes a straightforward method to derive from τa or estimate τa from measurements if water vapor measurements are available. PMID:26395310
NASA Astrophysics Data System (ADS)
Brown, Steven S.; Dubé, William P.; Fuchs, Hendrik; Ryerson, Thomas B.; Wollny, Adam G.; Brock, Charles A.; Bahreini, Roya; Middlebrook, Ann M.; Neuman, J. Andrew; Atlas, Elliot; Roberts, James M.; Osthoff, Hans D.; Trainer, Michael; Fehsenfeld, Frederick C.; Ravishankara, A. R.
2009-04-01
This paper presents determinations of reactive uptake coefficients for N2O5, γ(N2O5), on aerosols from nighttime aircraft measurements of ozone, nitrogen oxides, and aerosol surface area on the NOAA P-3 during the Second Texas Air Quality Study (TexAQS II). Determinations based on both the steady-state approximation for NO3 and N2O5 and a plume modeling approach yielded γ(N2O5) substantially smaller than current parameterizations used for atmospheric modeling, and generally in the range 0.5-6 × 10-3. Dependence of γ(N2O5) on variables such as relative humidity and aerosol composition was not apparent in the determinations, although there was considerable scatter in the data. Determinations were also inconsistent with current parameterizations of the rate coefficient for homogeneous hydrolysis of N2O5 by water vapor, which may be as much as a factor of 10 too large. Nocturnal halogen activation via conversion of N2O5 to ClNO2 on chloride aerosol was not determinable from these data, although limits based on laboratory parameterizations and maximum nonrefractory aerosol chloride content showed that this chemistry could have been comparable to direct production of HNO3 in some cases.
The CCPP-ARM Parameterization Testbed (CAPT): Where Climate Simulation Meets Weather Prediction
Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J
2003-11-21
To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands, in particular, that the GCM parameterizations of unresolved processes should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations can then be similarly tested. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the USDOE is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM. Numerical weather prediction methods show promise for improving parameterizations in climate GCMs.
Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil
Technology Transfer Automated Retrieval System (TEKTRAN)
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
Parameterization of spectral distributions for pion and kaon production in proton-proton collisions
NASA Technical Reports Server (NTRS)
Schneider, John P.; Norbury, John W.; Cucinotta, Frank A.
1995-01-01
Accurate semi-empirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations depend on both the outgoing meson momentum and the proton energy, and can be reduced to very simple analytical formulas suitable for cosmic-ray transport.
A shallow convection parameterization for the non-hydrostatic MM5 mesoscale model
Seaman, N.L.; Kain, J.S.; Deng, A.
1996-04-01
A shallow convection parameterization suitable for the Pennsylvania State University (PSU)/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) is being developed at PSU. The parameterization is based on parcel perturbation theory developed in conjunction with a 1-D Mellor-Yamada 1.5-order planetary boundary layer scheme and the Kain-Fritsch deep convection model.
Impact of Apex Model parameterization strategy on estimated benefit of conservation practices
Technology Transfer Automated Retrieval System (TEKTRAN)
Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on clay pan soils were developed with the objectives to (1) evaluate model performance of three parameterization strategies on a validation watershed, and (2) compare predictions of water quality benefi...
NASA Astrophysics Data System (ADS)
Chandra, A.; Kollias, P.; Albrecht, B. A.; Zhu, P.; Klein, S. A.; Zhang, Y.
2010-12-01
Shallow cumulus clouds have a significant impact on the vertical distributions of heat and moisture and on surface energy fluxes over land through their effect on incoming shortwave radiation. The present resolutions of general circulation models (GCMs) and numerical weather prediction (NWP) models are not fine enough to simulate shallow clouds directly, leaving little choice other than parameterizations evaluated using either large-eddy simulation (LES) or observations. The representation of these clouds in numerical models is an important and challenging issue in model development because of its potential impacts on near-surface weather and long-term climate simulations. Recent studies using LES have shown that the mass flux is the important parameter for determining the characteristics of cumulus transports within the cloud layer. Based on LES results and scaling arguments, substantial efforts have been made to parameterize the cloud-base mass flux to improve the interactions between the subcloud and cloud layers. Despite these efforts, what factors control the mass flux and how the interaction between subcloud and cloud layers should be parameterized are not fully understood. From the observational perspective, studies have been done using aircraft and remote sensing platforms to address the above issue, but there have been insufficient observations to develop detailed composite studies under different conditions. The Atmospheric Radiation Measurement (ARM) Climate Research Facility (ACRF) in the Southern Great Plains (SGP) offers unique long-term measurements from cloud radars (35 and 94 GHz) along with synergistic measurements to address the above problem for non-precipitating shallow cumulus clouds over the SGP region. Doppler velocities from the cloud radar are processed to remove insect contamination using a fuzzy-logic approach before they are used for the mass-flux calculation. The present observations are used to validate the existing mass-flux relations used in
Nonsmooth optimization approaches to VDA of models with on/off parameterizations: Theoretical issues
NASA Astrophysics Data System (ADS)
Jiang, Zhu; Kamachi, Masafumi; Guangqing, Zhou
2002-05-01
Some variational data assimilation problems of time- and space-discrete models with on/off parameterizations can be regarded as nonsmooth optimization problems. Several theoretical issues related to those problems are systematically addressed. One of the basic concepts in nonsmooth optimization is the subgradient, a generalized notion of the gradient of the cost function. First it is shown that the concept of the subgradient leads to a clear definition of the adjoint variables in the conventional adjoint model at singular points caused by on/off switches. Using an illustrative example of a multi-layer diffusion model with convective adjustment, it is proved that the solution of the conventional adjoint model cannot be interpreted as a Gateaux derivative or directional derivative at singular points, but can be interpreted as a subgradient of the cost function. Two existing smooth optimization approaches used in current data assimilation practice are then reviewed. The first is the conventional adjoint model combined with smooth optimization algorithms; conditions under which this approach can converge to the minimum are discussed. The second is the smoothing and regularization approach, which removes some thresholds in physical parameterizations. Two nonsmooth optimization approaches are also reviewed. One is the subgradient method, which uses the conventional adjoint model; the method is convergent, but very slow. The other, the bundle method, is more efficient. The main idea of the bundle method is to use the minimum-norm vector of the subdifferential, which is the convex hull of all subgradients, as the descent direction. However, finding all subgradients is very difficult in general, so bundle methods are modified to use only one subgradient that can be calculated by the conventional adjoint model. In order to develop an efficient bundle method, a set-valued adjoint model, as a generalization of the conventional adjoint model, is proposed. It
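A minimal sketch of the subgradient method on the textbook nonsmooth cost f(x) = |x| illustrates the convergent-but-slow behaviour noted above; the cost function and step rule are standard illustrations, not the paper's assimilation problem.

```python
# Sketch: subgradient method on f(x) = |x|, which is nonsmooth at x = 0
# (the kink plays the role of an on/off switch). A diminishing step size
# guarantees convergence, but progress near the kink is slow.
def subgradient(x):
    # Any value in [-1, 1] is a valid subgradient of |x| at x = 0;
    # away from 0 the subgradient is the ordinary derivative, sign(x).
    if x > 0.0:
        return 1.0
    if x < 0.0:
        return -1.0
    return 0.0

x = 5.0
for k in range(1, 2001):
    x -= (1.0 / k) * subgradient(x)   # diminishing step 1/k

print(abs(x) < 0.1)   # iterate has crept close to the minimizer at 0
```

After roughly 80 iterations the iterate crosses zero and then oscillates with amplitude on the order of the current step, which is why the abstract characterizes the plain subgradient method as convergent but very slow compared with bundle methods.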
Survey of background scattering from materials found in small-angle neutron scattering
Barker, J. G.; Mildner, D. F. R.
2015-01-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088
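The attenuation measurements discussed above follow the exponential law I = I0 exp(-Σt) for the unscattered beam; the cross-section and thickness values below are assumed illustrative numbers, not data from the survey.

```python
import math

# Sketch: exponential attenuation of an unscattered neutron beam through a
# slab. Sigma and t are invented for illustration.
I0 = 1.0e6        # incident neutron count
Sigma = 0.35      # macroscopic cross section, 1/cm (assumed)
t = 2.0           # slab thickness, cm (assumed)

I = I0 * math.exp(-Sigma * t)
print(0.49e6 < I < 0.50e6)   # roughly half the beam transmitted unscattered
```

Background scattering inflates the apparent transmitted count above this value, which is why the survey's signal-to-noise optimizations (sample thickness, wavelength, TOF filtering) matter for extracting the true attenuation.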
Accuracy of cuticular resistance parameterizations in ammonia dry deposition models
NASA Astrophysics Data System (ADS)
Schrader, Frederik; Brümmer, Christian; Richter, Undine; Fléchard, Chris; Wichink Kruit, Roy; Erisman, Jan Willem
2016-04-01
Accurate representation of total reactive nitrogen (Nr) exchange between ecosystems and the atmosphere is a crucial part of modern air quality models. However, bi-directional exchange of ammonia (NH3), the dominant Nr species in agricultural landscapes, still poses a major source of uncertainty in these models, where especially the treatment of non-stomatal pathways (e.g. exchange with wet leaf surfaces or the ground layer) can be challenging. While complex dynamic leaf surface chemistry models have been shown to successfully reproduce measured ammonia fluxes on the field scale, computational constraints and the lack of necessary input data have so far limited their application in larger scale simulations. A variety of different approaches to modelling dry deposition to leaf surfaces with simplified steady-state parameterizations have therefore arisen in the recent literature. We present a performance assessment of selected cuticular resistance parameterizations by comparing them with ammonia deposition measurements by means of eddy covariance (EC) and the aerodynamic gradient method (AGM) at a number of semi-natural and grassland sites in Europe. First results indicate that using a state-of-the-art uni-directional approach tends to overestimate and using a bi-directional cuticular compensation point approach tends to underestimate cuticular resistance in some cases, consequently leading to systematic errors in the resulting flux estimates. Using the uni-directional model, situations where low ratios of total atmospheric acids to NH3 concentration occur lead to fairly high minimum cuticular resistances, limiting predicted downward fluxes in conditions usually favouring deposition. On the other hand, the bi-directional model used here features a seasonal cycle of external leaf surface emission potentials that can lead to comparably low effective resistance estimates under warm and wet conditions, when in practice an expected increase in the compensation point due to
Parameterization of Fire Injection Height in Large Scale Transport Model
NASA Astrophysics Data System (ADS)
Paugam, R.; Wooster, M.; Atherton, J.; Val Martin, M.; Freitas, S.; Kaiser, J. W.; Schultz, M. G.
2012-12-01
The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP), which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modeled by a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here in our approach the Freitas model is modified; in particular we added (i) an equation for mass conservation, (ii) a scheme to parameterize horizontal entrainment/detrainment, and (iii) a new initialization module which estimates the sensible heat released by the fire on the basis of measured FRP rather than fuel cover type. FRP and Active Fire (AF) area necessary for the initialization of the model are directly derived from a modified version of the Dozier algorithm applied to the MOD14 product. An optimization (using the simulated annealing method) of this new version of the PRM is then proposed, based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set covers the main fire regions (Africa, Siberia, Indonesia, and North and South America) and is set up to (i) retain fires where plume height and FRP can be easily linked (i.e. avoid large fire clusters where individual plumes might interact), (ii) keep fires which show a decrease of FRP and AF area after the MISR overpass (i.e. to minimize the effect of the time period needed for the plume to
Parameterization of Fire Injection Height in Large Scale Transport Model
NASA Astrophysics Data System (ADS)
Paugam, r.; Wooster, m.; Freitas, s.; Gonzi, s.; Palmer, p.
2012-04-01
The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP), which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modelled by a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here in our approach the Freitas model is modified. Major modifications are implemented in its initialisation module: (i) the CHF and the Active Fire area are directly forced from FRP data derived from a modified version of the Dozier algorithm applied to the MOD12 product, and (ii) a new module for the buoyancy flux calculation is implemented instead of the original module based on the Morton, Taylor and Turner equation. Furthermore, the dynamical core of the plume model is also modified with a new entrainment scheme inspired by the latest results from shallow convection parameterization. Optimization and validation of this new version of the Freitas PRM is based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set is (i) built up to keep only fires where plume height and FRP can be easily linked (i.e. avoiding large fire clusters where individual plumes might interact) and (ii) split per fire land cover type to optimize the constants of the buoyancy flux module and the entrainment scheme for different fire regimes. Results show that the new PRM is
An Empirical Cumulus Parameterization Scheme for a Global Spectral Model
NASA Technical Reports Server (NTRS)
Rajendran, K.; Krishnamurti, T. N.; Misra, V.; Tao, W.-K.
2004-01-01
Realistic vertical heating and drying profiles in a cumulus scheme are important for obtaining accurate weather forecasts. A new empirical cumulus parameterization scheme, based on a procedure to improve the vertical distribution of heating and moistening over the tropics, is developed. The empirical cumulus parameterization scheme (ECPS) utilizes profiles of Tropical Rainfall Measuring Mission (TRMM) based heating and moistening derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) analysis. A dimension reduction technique, rotated principal component analysis (RPCA), is applied to the vertical profiles of heating (Q1) and drying (Q2) over the convective regions of the tropics to obtain the dominant modes of variability. Analysis suggests that most of the variance associated with the observed profiles can be explained by retaining the first three modes. The ECPS then applies a statistical approach in which Q1 and Q2 are expressed as linear combinations of the first three dominant principal components, which distinctly explain variance in the troposphere as a function of the prevalent large-scale dynamics. The principal component (PC) score, which quantifies the contribution of each PC to the corresponding loading profile, is estimated through a multiple screening regression method that yields the PC score as a function of the large-scale variables. The profiles of Q1 and Q2 thus obtained match well with the observed profiles. The impact of the ECPS is investigated in a series of short-range (1-3 day) prediction experiments using the Florida State University global spectral model (FSUGSM, T126L14). Comparisons between short-range ECPS forecasts and those with the modified Kuo scheme show a very marked improvement in skill in the ECPS forecasts. This improvement in forecast skill with the ECPS emphasizes the importance of incorporating realistic vertical distributions of heating and drying in the model cumulus scheme. This
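The reconstruction step described in this abstract, expressing Q1 (or Q2) as a linear combination of the leading principal-component loading profiles with scores supplied by regression on large-scale variables, can be sketched as follows. All numbers (levels, loadings, scores, mean profile) are illustrative placeholders, not the TRMM/ECMWF-derived values used in the paper:

```python
# Hypothetical 5-level loading profiles for the first three rotated PCs of Q1
# (illustrative numbers only; the real profiles come from TRMM/ECMWF data).
loadings = [
    [0.1, 0.4, 0.8, 0.5, 0.2],    # PC1: deep convective heating mode
    [0.3, 0.5, 0.1, -0.3, -0.4],  # PC2: shallow/low-level mode
    [-0.2, 0.1, 0.3, 0.1, -0.1],  # PC3: upper-level mode
]
mean_profile = [0.5, 1.0, 2.0, 1.5, 0.5]  # climatological mean Q1 (K/day)

def reconstruct_q1(scores, loadings, mean_profile):
    """Rebuild a heating profile as mean + sum over k of score_k * loading_k."""
    profile = list(mean_profile)
    for score, loading in zip(scores, loadings):
        for i, value in enumerate(loading):
            profile[i] += score * value
    return profile

# In the ECPS the scores come from the multiple screening regression on
# large-scale predictors; here they are just example values.
q1 = reconstruct_q1([1.2, -0.5, 0.3], loadings, mean_profile)
```

The regression step would replace the hard-coded scores with predicted values at each grid point and time.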
Physically-Based Parameterization of Frozen Ground Processes in Watershed Runoff Modeling
NASA Astrophysics Data System (ADS)
Koren, V. I.
2004-05-01
parameters were used at all sites. Solid and liquid soil moisture contents and soil temperature at five layers were simulated for 3-5 years. Test results suggest that a conceptual representation of soil moisture fluxes combined with a physically-based heat transfer model provides reasonable simulations of soil temperature for the entire soil profile. Ignoring soil moisture phase transitions can lead to significant biases in soil temperature. Simulated soil moisture states also agree well with measurements for the research watershed over an 18-year period. A second set of tests was performed for a few river basins in which only outlet hydrographs were evaluated. A priori water balance model parameters were adjusted using automatic or manual calibration. Simulated and observed hydrographs agree better when the frozen ground parameterization is added, specifically during transition periods from spring to summer. More importantly, the un-calibrated model with the frozen ground component outperforms the un-calibrated model with no frozen ground component for all tested basins. Analysis of spring floods also suggests that it is impossible to remove runoff biases without modification of frozen ground hydraulic properties.
Scatter corrections for cone beam optical CT
NASA Astrophysics Data System (ADS)
Olding, Tim; Holmes, Oliver; Schreiner, L. John
2009-05-01
Cone beam optical computed tomography (OptCT) employing the VISTA scanner (Modus Medical, London, ON) has been shown to have significant promise for fast, three-dimensional imaging of polymer gel dosimeters. One distinct challenge with this approach arises from the combination of the cone beam geometry, a diffuse light source, and the scattering polymer gel media, which all contribute scatter signal that perturbs the accuracy of the scanner. Beam stop array (BSA), beam pass array (BPA), and anti-scatter polarizer correction methodologies have been employed to remove scatter signal from OptCT data. These approaches are investigated through the use of well-characterized phantom scattering solutions and irradiated polymer gel dosimeters. BSA-corrected scatter solutions show good agreement in attenuation coefficient with the optically absorbing dye solutions, with considerable reduction of the scatter-induced cupping artifact at high scattering concentrations. The application of BSA scatter corrections to a polymer gel dosimeter led to an overall improvement in agreement, with the fraction of pixels failing the (3%, 3 mm) gamma criterion reduced from 7.8% to 0.15%.
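The beam stop array correction can be illustrated with a minimal one-dimensional sketch: detector pixels shadowed by opaque stops record scatter only, and interpolating those samples across the detector gives a scatter field to subtract from a stop-free scan. The arrays and stop positions below are invented for illustration; the real VISTA correction operates on 2-D projections:

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation with flat extrapolation at the ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for j in range(1, len(xs)):
        if x <= xs[j]:
            t = (x - xs[j - 1]) / (xs[j] - xs[j - 1])
            return (1 - t) * ys[j - 1] + t * ys[j]

def bsa_correct(open_scan, stop_scan, stop_positions):
    """Scatter sampled behind the opaque stops is interpolated across the
    detector and subtracted from the stop-free (open) scan."""
    samples = [stop_scan[p] for p in stop_positions]
    scatter = [interp(i, stop_positions, samples)
               for i in range(len(open_scan))]
    return [o - s for o, s in zip(open_scan, scatter)]
```

In practice the stop array is sparse enough not to shadow the reconstruction, and the interpolated scatter field is smooth because scatter varies slowly across the detector.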
Parameterization of Buoyancy Effects in Generic PWR Boron Dilution Scenarios
Galindo-Garcia, Ivan F.; Cotton, Mark A.; Axcell, Brian P.
2006-07-01
A computational investigation is undertaken into the role of buoyancy in a PWR boron dilution transient following a postulated Small Break Loss of Coolant Accident (SB-LOCA). In the scenario envisaged there is flow of de-borated and relatively high temperature water from a single cold leg into the downcomer; flow rates are typical of natural circulation conditions. The study focuses upon the development of boron concentration distributions in the downcomer and adopts a 3D unsteady formulation of the mean flow equations in combination with the standard high-Reynolds-number k-ε turbulence model. It is found that the Richardson number (Ri = Gr/Re²) is the most important group parameterizing the course of a concentration transient. At Ri values characterizing a 'baseline' scenario the results indicate that there is a stable, circumferentially-uniform descent through the downcomer of a stratified region of low-borated fluid. Qualitatively the same behaviour is found at higher Richardson number, although at Ri values of approximately one-fifth the baseline level there is evidence of large-scale mixing and a consequent absence of concentration stratification. (authors)
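The governing group can be written out directly: with Gr = gβΔT L³/ν² and Re = UL/ν, the kinematic viscosity cancels, leaving Ri = Gr/Re² = gβΔT L/U². A small helper makes the scaling explicit (the argument values below are illustrative, not the paper's downcomer scales):

```python
G = 9.81  # gravitational acceleration, m s^-2

def richardson(beta, delta_t, length, velocity):
    """Ri = Gr/Re^2 = g*beta*dT*L/U^2; the viscosity in Gr and Re cancels,
    so only the buoyancy and inertia scales remain."""
    return G * beta * delta_t * length / velocity ** 2
```

Halving the inflow velocity at fixed temperature contrast quadruples Ri, pushing a transient toward the stably stratified regime described in the abstract.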
Specialized Knowledge Representation and the Parameterization of Context.
Faber, Pamela; León-Araúz, Pilar
2016-01-01
Though instrumental in numerous disciplines, context has no universally accepted definition. In specialized knowledge resources it is timely and necessary to parameterize context with a view to more effectively facilitating knowledge representation, understanding, and acquisition, the main aims of terminological knowledge bases. This entails distinguishing different types of context as well as how they interact with each other. This is not a simple objective to achieve despite the fact that specialized discourse does not have as many contextual variables as those in general language (i.e., figurative meaning, irony, etc.). Even in specialized text, context is an extremely complex concept. In fact, contextual information can be specified in terms of scope or according to the type of information conveyed. It can be a textual excerpt or a whole document; a pragmatic convention or a whole culture; a concrete situation or a prototypical scenario. Although these versions of context are useful for the users of terminological resources, such resources rarely support context modeling. In this paper, we propose a taxonomy of context primarily based on scope (local and global) and further divided into syntactic, semantic, and pragmatic facets. These facets cover the specification of different types of terminological information, such as predicate-argument structure, collocations, semantic relations, term variants, grammatical and lexical cohesion, communicative situations, subject fields, and cultures. PMID:26941674
Comparison of parameterized cloud variability to ARM data.
Klein, Stephen A.; Norris, Joel R.
2003-06-23
Cloud parameterizations in large-scale models often try to predict the amount of sub-grid scale variability in cloud properties to address the significant non-linear effects of radiation and precipitation. Statistical cloud schemes provide an attractive framework to self-consistently predict the variability in radiation and microphysics but require accurate predictions of the width and asymmetry of the distribution of cloud properties. Data from the Atmospheric Radiation Measurement (ARM) program are used to assess the variability in boundary layer cloud properties for a well-mixed stratocumulus observed at the Oklahoma ARM site during the March 2000 Intensive Observing Period. Cloud boundaries, liquid water content, and liquid water path are retrieved from the millimeter wavelength cloud radar and the microwave radiometer. Balloon soundings, aircraft data, and satellite observations provide complementary views on the horizontal cloud inhomogeneity. It is shown that the width of the liquid water path probability distribution function is consistent with a model in which horizontal fluctuations in liquid water content are vertically coherent throughout the depth of the cloud. Variability in cloud base is overestimated by this model, however, perhaps because an additional assumption that the variance of total water is constant with altitude throughout the depth of the boundary layer is incorrect.
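The vertical-coherence model tested in this abstract has a simple quantitative content: if liquid water content fluctuations are perfectly correlated through the cloud depth, the layer standard deviations add linearly in the liquid water path, whereas independent layers would add in quadrature. A sketch under that stated assumption (profile values are arbitrary illustrations):

```python
def lwp_std_coherent(lwc_std_profile, dz):
    """Perfectly vertically coherent fluctuations (correlation 1 between
    layers): layer standard deviations add linearly in the LWP."""
    return sum(s * dz for s in lwc_std_profile)

def lwp_std_uncorrelated(lwc_std_profile, dz):
    """Opposite limit: statistically independent layers add in quadrature."""
    return sum((s * dz) ** 2 for s in lwc_std_profile) ** 0.5

# Hypothetical profile of LWC standard deviation (g m^-3) on 100 m layers
profile = [0.1, 0.2, 0.1]
coherent = lwp_std_coherent(profile, 100.0)         # g m^-2
independent = lwp_std_uncorrelated(profile, 100.0)  # g m^-2
```

The coherent limit always gives the wider LWP distribution, which is what the radar/radiometer comparison in the study tests against.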
Parameterized locally invariant manifolds: A tool for multiscale modeling
NASA Astrophysics Data System (ADS)
Sawant, Aarti
In this thesis two methods for coarse graining in nonlinear ODE systems are demonstrated through analysis of model problems. The basic ideas of a method for model reduction and a method for non-asymptotic time-averaging are presented, using the idea of parameterized locally invariant manifolds (PLIM). New approximation techniques for carrying out this methodology are developed. The work is divided into four categories based on the type of coarse-graining used: reduction of degrees of freedom, spatial averaging, time averaging, and a combination of space and time averaging. Model problems showing complex dynamics are selected and various features of the PLIM method are elaborated. The quality and efficiency of the different coarse-graining approaches are evaluated. From the computational standpoint, it is shown that the method has the potential to serve as a subgrid modeling tool for problems in engineering. The developed ideas are evaluated on the following model problems: the Lorenz system, a 4D Hamiltonian system due to Hald, 1D elastodynamics in a strongly heterogeneous medium, the kinetics of a phase-transforming material with wiggly energy due to Abeyaratne, Chu and James, a 2D gradient system with wiggly energy due to Menon, and the macroscopic stress-strain behavior of an atomic chain based on the Frenkel-Kontorova model.
Parameterization of mires in a numerical weather prediction model
NASA Astrophysics Data System (ADS)
Yurova, Alla; Tolstykh, Mikhail; Nilsson, Mats; Sirin, Andrey
2014-11-01
Mires (peat-accumulating wetlands) occupy 8.1% of Russian territory and are especially numerous in the western Siberian Lowlands, where they can significantly modify atmospheric heat and water balances. They also influence air temperatures and humidity in the boundary layers closest to the earth's surface. The purpose of our study was to incorporate the influence of mires into the SL-AV numerical weather prediction model, which is used operationally in the Hydrometeorological Center of Russia. This was done by adjusting the multilayer soil component (modifying the peat thermal conductivity in the heat diffusion equation and reformulating the lower boundary condition for Richards' equation) and by reformulating both the evapotranspiration and runoff from mires. When evaporation from mires was incorporated into the SL-AV model, the latent heat flux in the areas dominated by mires increased strongly, resulting in surface cooling and hence reductions in the sensible heat flux and outgoing terrestrial long-wave radiation. The results presented show that including mires significantly decreased the bias and RMSE of predictions of temperature and relative humidity 2 m above the ground for lead times of 12, 36, and 60 h from 00 h Coordinated Universal Time (evening conditions), but did not eliminate the bias in forecasts for lead times of 24, 48, and 72 h (morning conditions) in Siberia. Different parameterizations of mire evapotranspiration are also compared.
A Geodesics-Based Surface Parameterization to Assess Aneurysm Progression.
Phan, Ly; Courchaine, Katherine; Azarbal, Amir; Vorp, David; Grimm, Cindy; Rugonyi, Sandra
2016-05-01
Abdominal aortic aneurysm (AAA) intervention and surveillance are currently based on maximum transverse diameter, even though it is recognized that this might not be the best strategy. About 10% of patients with small AAA transverse diameters, for whom intervention is not considered, still experience rupture, while patients with large AAA transverse diameters, for whom intervention would have been recommended, have stable aneurysms that do not rupture. While maximum transverse diameter is easy to measure and track in clinical practice, one of its main drawbacks is that it does not represent the whole AAA, and rupture seldom occurs in the region of maximum transverse diameter. By following maximum transverse diameter alone, clinicians miss information on the shape-change dynamics of the AAA and clues that could lead to better patient care. We propose here a method to register AAA surfaces obtained from the same patient at different time points. Our registration method could be used to track the local changes of the patient-specific AAA. To achieve registration, our procedure uses a consistent parameterization of the AAA surfaces followed by strain relaxation. The main assumption of our procedure is that growth of the AAA occurs in such a way that surface strains are smoothly distributed, while regions of small and large surface growth can be differentiated. The proposed methodology has the potential to unravel different patterns of AAA growth that could be used to stratify patient risks. PMID:27003915
A sea spray aerosol flux parameterization encapsulating wave state
NASA Astrophysics Data System (ADS)
Ovadnevaite, J.; Manders, A.; de Leeuw, G.; Monahan, C.; Ceburnis, D.; O'Dowd, C. D.
2013-09-01
A new sea spray source function (SSSF), termed Oceanflux Sea Spray Aerosol or OSSA, was derived based on in-situ sea spray measurements along with meteorological/physical parameters. Submicron sea spray fluxes derived from particle number concentration measurements at the Mace Head coastal station, on the west coast of Ireland, were used together with open-ocean eddy correlation flux measurements from the Eastern Atlantic (SEASAW cruise). In the overlapping size range, the data for Mace Head and SEASAW were found to be in good agreement, which allowed the new SSSF to be derived from the combined dataset spanning the dry diameter range from 15 nm to 6 μm. The sea spray production was parameterized in terms of five lognormal modes and the Reynolds number instead of the more commonly used wind speed, thereby encapsulating important influences of wave height and history, friction velocity, and viscosity. This formulation accounts for the different flux relationships associated with rising and waning wind speeds, since these are included in the Reynolds number. Furthermore, the Reynolds number incorporates the kinematic viscosity of water, so the SSSF inherently includes a sea surface temperature dependence. The temperature dependence of the resulting SSSF is similar to that of other in-situ derived source functions and results in lower production fluxes for cold waters and enhanced fluxes from warm waters as compared with SSSF formulations that do not include temperature effects.
Factors influencing the parameterization of tropical anvils within GCMs
Bradley, M.M.; Chin, H.N.S.
1994-03-01
The overall goal of this project is to improve the representation of anvil clouds and their effects in general circulation models (GCMs). We have concentrated on an important portion of the overall goal: the evolution of cumulus-generated anvil clouds and their effects on the large-scale environment. Because of the large range of spatial and temporal scales involved, we have been using a multi-scale approach. For the early-time generation and development of the cirrus anvil we are using a cloud-scale model with a horizontal resolution of 1-2 kilometers, while for the transport of anvils by the large-scale flow we are using a mesoscale model with a horizontal resolution of 10-40 kilometers. The eventual goal is to use the information obtained from these simulations, together with available observations, to develop an improved cloud parameterization for use in GCMs. The cloud-scale simulation of a midlatitude squall line case and the mesoscale study of a tropical anvil using an anvil generator were presented at the last ARM science team meeting. This paper concentrates on the cloud-scale study of a tropical squall line. Results are compared with its midlatitude counterparts to further our understanding of the formation mechanism of anvil clouds and the sensitivity of radiation to their optical properties.
A sea spray aerosol flux parameterization encapsulating wave state
NASA Astrophysics Data System (ADS)
Ovadnevaite, J.; Manders, A.; de Leeuw, G.; Ceburnis, D.; Monahan, C.; Partanen, A.-I.; Korhonen, H.; O'Dowd, C. D.
2014-02-01
A new sea spray source function (SSSF), termed Oceanflux Sea Spray Aerosol or OSSA, was derived based on in-situ sea spray aerosol measurements along with meteorological/physical parameters. Submicron sea spray aerosol fluxes derived from particle number concentration measurements at the Mace Head coastal station, on the west coast of Ireland, were used together with open-ocean eddy correlation flux measurements from the Eastern Atlantic Sea Spray, Gas Flux, and Whitecap (SEASAW) project cruise. In the overlapping size range, the data for Mace Head and SEASAW were found to be in good agreement, which allowed the new SSSF to be derived from the combined dataset spanning the dry diameter range from 15 nm to 6 μm. The OSSA source function has been parameterized in terms of five lognormal modes and the Reynolds number instead of the more commonly used wind speed, thereby encapsulating important influences of wave height, wind history, friction velocity, and viscosity. This formulation accounts for the different flux relationships associated with rising and waning wind speeds, since these are included in the Reynolds number. Furthermore, the Reynolds number incorporates the kinematic viscosity of water, so the SSSF inherently includes dependences on sea surface temperature and salinity. The temperature dependence of the resulting SSSF is similar to that of other in-situ derived source functions and results in lower production fluxes for cold waters and enhanced fluxes from warm waters as compared with SSSF formulations that do not include temperature effects.
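The modal form of such a source function can be sketched schematically: the flux spectrum is a sum of lognormal modes whose amplitudes depend on a wave Reynolds number Re = u* Hs / νw rather than on wind speed. The modal parameters and the power-law Re scaling below are placeholders for illustration, not the published OSSA coefficients:

```python
import math

# Illustrative modal parameters, NOT the published OSSA coefficients:
# (geometric mean dry diameter in um, geometric std dev, relative amplitude)
MODES = [(0.018, 1.8, 1.0), (0.04, 1.6, 0.8), (0.1, 1.7, 0.5),
         (0.5, 1.9, 0.2), (2.0, 2.0, 0.05)]

def reynolds(u_star, h_s, nu_w):
    """Re = u* Hs / nu_w: friction velocity times significant wave height
    over the kinematic viscosity of seawater (T- and S-dependent)."""
    return u_star * h_s / nu_w

def sssf(d_um, re, power=0.5):
    """dF/dlogD as a sum of lognormal modes; the Re**power amplitude
    scaling is an assumed stand-in for the published fits."""
    total = 0.0
    for dm, sigma, amp in MODES:
        z = (math.log(d_um) - math.log(dm)) / math.log(sigma)
        total += amp * re ** power * math.exp(-0.5 * z * z)
    return total
```

Because Re, not wind speed, carries the dependence, the same formula yields different fluxes for rising and waning winds at equal wind speed, and the viscosity term brings in the water temperature and salinity dependence noted in the abstract.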
Population models for passerine birds: structure, parameterization, and analysis
Noon, B.R.; Sauer, J.R.
1992-01-01
Population models have great potential as management tools, as they use information about the life history of a species to summarize estimates of fecundity and survival into a description of population change. Models provide a framework for projecting future populations, determining the effects of management decisions on future population dynamics, evaluating extinction probabilities, and addressing a variety of questions of ecological and evolutionary interest. Even when insufficient information exists to allow complete identification of the model, the modelling procedure is useful because it forces the investigator to consider the life history of the species when determining what parameters should be estimated from field studies, and it provides a context for evaluating the relative importance of demographic parameters. Models have been little used in the study of the population dynamics of passerine birds because of: (1) widespread misunderstandings of the model structures and parameterizations, (2) a lack of knowledge of the life histories of many species, (3) difficulties in obtaining statistically reliable estimates of demographic parameters for most passerine species, and (4) confusion about functional relationships among demographic parameters. As a result, studies of passerine demography are often designed inappropriately and fail to provide essential data. We review appropriate models for passerine bird populations and illustrate their possible uses in evaluating the effects of management or other environmental influences on population dynamics. We identify parameters that must be estimated from field data, briefly review existing statistical methods for obtaining valid estimates, and evaluate the present status of knowledge of these parameters.
Marine organics effect on sea-spray light scattering
NASA Astrophysics Data System (ADS)
Vaishya, Aditya; Ovadnevaite, Jurgita; Bialek, Jakub; Jennings, S. G.; Ceburnis, Darius; O'Dowd, Colin
2013-05-01
Primary-produced sea-spray is typically composed of sea-salt, but in biologically-active regions the spray can become enriched with organic matter, which reduces its hygroscopicity and thereby has a potential impact on aerosol scattering. This study shows that the scattering enhancement of marine aerosol as a function of increasing relative humidity is reduced when the aerosol is enriched with organics; these results are used to develop a new hygroscopic growth-factor parameterization for sea-spray enriched in organic matter. The parameterization reveals a dual state that flips from high hygroscopicity and high scattering enhancement to low hygroscopicity and low scattering enhancement as the organic volume fraction increases from below ˜0.55 to above ˜0.55. In terms of organic enrichment, the effect on Top of Atmosphere (TOA) direct radiative forcing (ΔF) is to reduce the cooling contribution of sea-spray by ˜5.5 times compared to pure sea-salt spray. The results presented here highlight a significant coupling between the marine biosphere and the direct radiative budget through alteration of sea-spray chemical composition, potentially leading to accelerated global warming should biological activity increase with future projected temperature increases.
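The dual-state behaviour can be caricatured with a two-regime f(RH) curve of the common (1 - RH/100)**(-gamma) form, with the exponent gamma dropping once the organic volume fraction passes the ~0.55 threshold. Both gamma values here are invented for illustration and are not the fitted coefficients of the study:

```python
def scattering_enhancement(rh, organic_volume_fraction, ovf_crit=0.55):
    """Two-regime f(RH) sketch of the common (1 - RH/100)**-gamma form;
    gamma drops when the organic volume fraction exceeds ~0.55.
    Both gamma values are invented for illustration."""
    gamma = 0.6 if organic_volume_fraction < ovf_crit else 0.2
    return (1.0 - rh / 100.0) ** -gamma
```

At 80% relative humidity this gives a strong enhancement for sea-salt-dominated spray and a much weaker one once organics dominate, mirroring the flip between the two states.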
Global direct radiative forcing by process-parameterized aerosol optical properties
NASA Astrophysics Data System (ADS)
Kirkevåg, Alf; Iversen, Trond
2002-10-01
A parameterization of aerosol optical parameters is developed and implemented in an extended version of the Community Climate Model version 3.2 (CCM3) of the U.S. National Center for Atmospheric Research. Direct radiative forcing (DRF) by monthly averaged calculated concentrations of non-sea-salt sulfate and black carbon (BC) is estimated. Inputs are production-specific BC and sulfate from [2002] and background aerosol size distribution and composition. The scheme interpolates between tabulated values to obtain the aerosol single scattering albedo, asymmetry factor, extinction coefficient, and specific extinction coefficient. The tables are constructed by full calculations of optical properties for an array of aerosol input values, for which size-distributed aerosol properties are estimated from theory for condensation and Brownian coagulation, an assumed distribution of cloud-droplet residuals from aqueous-phase oxidation, and prescribed properties of the background aerosols. Humidity swelling is estimated from the Köhler equation, and Mie calculations finally yield spectrally resolved aerosol optical parameters for 13 solar bands. The scheme is shown to give excellent agreement with nonparameterized DRF calculations for a wide range of situations. Using IPCC emission scenarios for the years 2000 and 2100, calculations with an atmospheric global climate model (AGCM) yield a global net anthropogenic DRF of -0.11 and 0.11 W m-2, respectively, when 90% of BC from biomass burning is assumed anthropogenic. In the 2000 scenario, the individual DRF due to sulfate and BC has separately been estimated at -0.29 and 0.19 W m-2, respectively. Our estimates of DRF by BC per BC mass burden are lower than earlier published estimates. Some sensitivity tests are included to investigate to what extent uncertain assumptions may influence these results.
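The table-interpolation strategy at the heart of such a scheme can be sketched in one dimension: optical properties are precomputed on a grid of input values and linearly interpolated at run time, avoiding on-line Mie calculations. The axis and single-scattering-albedo values below are invented for this sketch:

```python
from bisect import bisect_left

# Illustrative lookup table: single-scattering albedo versus relative
# humidity for one aerosol mixture (values invented for this sketch).
RH_AXIS = [0.0, 50.0, 80.0, 90.0, 95.0, 99.0]
SSA_TAB = [0.88, 0.90, 0.93, 0.95, 0.96, 0.97]

def ssa_lookup(rh):
    """Linear interpolation in the precomputed table, clamped at the ends."""
    if rh <= RH_AXIS[0]:
        return SSA_TAB[0]
    if rh >= RH_AXIS[-1]:
        return SSA_TAB[-1]
    i = bisect_left(RH_AXIS, rh)
    t = (rh - RH_AXIS[i - 1]) / (RH_AXIS[i] - RH_AXIS[i - 1])
    return (1 - t) * SSA_TAB[i - 1] + t * SSA_TAB[i]
```

The full scheme interpolates in several dimensions at once (composition, size parameters, humidity) and for each of the 13 solar bands, but the run-time cost stays that of a table lookup.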
Kuo-Nan Liou
2003-12-29
OAK-B135 (a) We developed a 3D radiative transfer model to simulate the transfer of solar and thermal infrared radiation in inhomogeneous cirrus clouds. The model utilized a diffusion approximation approach (four-term expansion in the intensity) employing Cartesian coordinates. The required single-scattering parameters, including the extinction coefficient, single-scattering albedo, and asymmetry factor, for input to the model, were parameterized in terms of the ice water content and mean effective ice crystal size. The incorporation of gaseous absorption in multiple scattering atmospheres was accomplished by means of the correlated k-distribution approach. In addition, the strong forward diffraction nature in the phase function was accounted for in each predivided spatial grid based on a delta-function adjustment. The radiation parameterization developed herein is applied to potential cloud configurations generated from GCMs to investigate broken clouds and cloud-overlapping effects on the domain-averaged heating rate. Cloud inhomogeneity plays an important role in the determination of flux and heating rate distributions. Clouds with maximum overlap tend to produce less heating than those with random overlap. Broken clouds show more solar heating as well as more IR cooling as compared to a continuous cloud field (Gu and Liou, 2001). (b) We incorporated a contemporary radiation parameterization scheme in the UCLA atmospheric GCM in collaboration with the UCLA GCM group. In conjunction with the cloud/radiation process studies, we developed a physically-based cloud cover formation scheme in association with radiation calculations. The model clouds were first vertically grouped in terms of low, middle, and high types. Maximum overlap was then used for each cloud type, followed by random overlap among the three cloud types. Fu and Liou's 1D radiation code with modification was subsequently employed for pixel-by-pixel radiation calculations in the UCLA GCM. We showed
A dynamical ammonia emission parameterization for use in air pollution models
NASA Astrophysics Data System (ADS)
Gyldenkærne, Steen; Ambelas Skjøth, Carsten; Hertel, Ole; Ellermann, Thomas
2005-04-01
A parameterization of the temporal variation of ammonia (NH3) emission into the atmosphere is proposed. The parameterization relies on several simple submodels reflecting emission from stores and barns, agricultural practice in the application of manure, and emission from growing crops. Some of the submodels depend on a simple crop growth model, which in turn depends on temperature variations throughout the year. The parameterization reflects differences in agricultural practice, differences in climate due to latitude/longitude, and differences in meteorological conditions between years. Measured as well as modeled meteorology can be used as input to the parameterization, which is developed for use at scales ranging from the single-farm level to large-scale physical/statistical Eulerian and Lagrangian air pollution models. The parameterization is based on simple principles and is applied here to northwestern Europe; the simple principles ensure that it may be adapted to other climatic conditions. The proposed parameterization represents a large improvement over previous simpler models with a fixed seasonal variation.
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; et al
2015-06-30
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimates of computational expense and an investigation of sensitivity to the number of subcolumns.
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; et al
2015-12-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Model computational expense is estimated, and sensitivity to the number of subcolumns is investigated. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in shortwave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation.
Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; Goldhaber, Steve; Bogenschutz, Peter; Chen, Chih-Chieh; Morrison, H.; Hoft, Jan; Raut, E.; Griffin, Brian M.; Weber, J. K.; Larson, Vincent E.; Wyant, M. C.; Wang, Minghuai; Guo, Zhun; Ghan, Steven J.
2015-12-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.
NASA Astrophysics Data System (ADS)
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; Weber, J. K.; Larson, V. E.; Wyant, M. C.; Wang, M.; Guo, Z.; Ghan, S. J.
2015-12-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Model computational expense is estimated, and sensitivity to the number of subcolumns is investigated. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in shortwave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation.
Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J
2004-05-06
To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands that the GCM parameterizations of unresolved processes, in particular, should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations then can be tested in the same framework. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the U.S. Department of Energy is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM.
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping. PMID:24976795
NASA Astrophysics Data System (ADS)
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; Weber, J. K.; Larson, V. E.; Wyant, M. C.; Wang, M.; Guo, Z.; Ghan, S. J.
2015-06-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.
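The subcolumn interface described in the abstract above can be illustrated with a minimal sketch: draw Monte Carlo samples of the subgrid cloud state, pass each sample through a (here, toy) microphysics function, and average the tendencies back to a grid-box mean. The Gaussian distribution, the autoconversion threshold, and all numbers below are illustrative assumptions, not CAM's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_autoconversion(q_liq):
    """Hypothetical stand-in for a microphysics scheme: cloud liquid above
    a threshold is converted to rain at a fixed rate (values illustrative)."""
    threshold = 5e-4  # kg/kg
    rate = 1e-3       # s^-1
    return np.maximum(q_liq - threshold, 0.0) * rate

def subcolumn_mean_tendency(mean_qliq, sigma_qliq, n_subcolumns=100):
    """Draw Monte Carlo subcolumn samples of the subgrid cloud-liquid
    distribution and average the nonlinear tendency over the samples."""
    q = np.maximum(rng.normal(mean_qliq, sigma_qliq, n_subcolumns), 0.0)
    return float(toy_autoconversion(q).mean())

# The nonlinearity matters: evaluated at the grid mean the tendency is zero,
# while the subcolumn average "sees" the moist tail of the distribution.
at_mean = float(toy_autoconversion(np.array(4e-4)))
sampled = subcolumn_mean_tendency(4e-4, 3e-4)
```

This is why such an interface is general: the microphysics function is treated as a black box evaluated per subcolumn, so no scheme-specific closure assumptions are needed.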
Winter QPF Sensitivities to Snow Parameterizations and Comparisons to NASA CloudSat Observations
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Haynes, John M.; Jedlovec, Gary J.; Lapenta, William M.
2009-01-01
Steady increases in computing power have allowed numerical weather prediction models to be initialized and run at high spatial resolution, permitting a transition from larger-scale parameterizations of the effects of clouds and precipitation to the simulation of specific microphysical processes and hydrometeor size distributions. Although still relatively coarse in comparison to true cloud-resolving models, these high-resolution forecasts (on the order of 4 km or less) have demonstrated value in the prediction of severe storm mode and evolution and are being explored for use in winter weather events. Several single-moment bulk water microphysics schemes are available within the latest release of the Weather Research and Forecast (WRF) model suite, including the NASA Goddard Cumulus Ensemble, which incorporate assumptions about the size distributions of a small number of hydrometeor classes in order to predict their evolution, advection, and precipitation within the forecast domain. Although many of these schemes produce similar forecasts of events on the synoptic scale, they often differ significantly in their details of precipitation and cloud cover, as well as in the distribution of water mass among the constituent hydrometeor classes. Unfortunately, validating data for cloud-resolving model simulations are sparse. Field campaigns require in-cloud measurements of hydrometeors from aircraft in coordination with extensive and coincident ground-based measurements. Radar remote sensing is utilized to detect the spatial coverage and structure of precipitation. Here, two radar systems characterize the structure of winter precipitation for comparison to equivalent features within a forecast model: a 3 GHz, Weather Surveillance Radar-1988 Doppler (WSR-88D) based in Omaha, Nebraska, and the 94 GHz NASA CloudSat Cloud Profiling Radar, a spaceborne instrument and member of the afternoon or "A-Train" of polar orbiting satellites tasked with cataloguing global cloud
Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.
2002-01-01
Ranft has provided parameterizations of Lorentz invariant differential cross sections for pion and nucleon production in pion-proton collisions that are compared to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formula for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed are for charged pions in the initial state and both charged and neutral pions in the final state.
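The step from a parameterized spectral cross section to a total cross section is a numerical quadrature over the outgoing-particle variable. A minimal sketch using trapezoidal integration; the exponential spectral shape below is an assumption for illustration, not Ranft's parameterization:

```python
import numpy as np

def total_cross_section(spectral_fn, p_min, p_max, n=2000):
    """Trapezoidal integration of a parameterized spectral cross section
    d(sigma)/dp over momentum, yielding a total cross section."""
    p = np.linspace(p_min, p_max, n)
    vals = spectral_fn(p)
    return float(((vals[:-1] + vals[1:]) / 2.0 * np.diff(p)).sum())

# Illustrative spectral shape: exponential fall-off with momentum,
# amplitude 10 mb/(GeV/c), scale 0.5 GeV/c, integrated over 0-5 GeV/c.
sigma = total_cross_section(lambda p: 10.0 * np.exp(-p / 0.5), 0.0, 5.0)
```

In the paper's workflow, the resulting integrals are themselves re-fitted with closed-form parameterizations so transport codes can avoid repeating the quadrature at run time.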
Improvement of the GEOS-5 AGCM upon Updating the Air-Sea Roughness Parameterization
NASA Technical Reports Server (NTRS)
Garfinkel, C. I.; Molod, A.; Oman, L. D.; Song, I.-S.
2011-01-01
The impact of an air-sea roughness parameterization over the ocean that more closely matches recent observations of air-sea exchange is examined in the NASA Goddard Earth Observing System, version 5 (GEOS-5) atmospheric general circulation model. Surface wind biases in the GEOS-5 AGCM are decreased by up to 1.2m/s. The new parameterization also has implications aloft as improvements extend into the stratosphere. Many other GCMs (both for operational weather forecasting and climate) use a similar class of parameterization for their air-sea roughness scheme. We therefore expect that results from GEOS-5 are relevant to other models as well.
Towards a parameterization of convective wind gusts in Sahel
NASA Astrophysics Data System (ADS)
Largeron, Yann; Guichard, Françoise; Bouniol, Dominique; Couvreux, Fleur; Birch, Cathryn; Beucher, Florent
2014-05-01
] who focused on the wet tropical Pacific region, and linked wind gusts to convective precipitation rates alone, here, we also analyse the subgrid wind distribution during convective events, and quantify the statistical moments (variance, skewness and kurtosis) in terms of mean wind speed and convective indexes such as DCAPE. The next step of the work will be to formulate a parameterization of the cold pool convective gust from those probability density functions and analytical formulae obtained from basic energy budget models. References: [Carslaw et al., 2010] A review of natural aerosol interactions and feedbacks within the earth system. Atmospheric Chemistry and Physics, 10(4):1701–1737. [Engelstaedter et al., 2006] North African dust emissions and transport. Earth-Science Reviews, 79(1):73–100. [Knippertz and Todd, 2012] Mineral dust aerosols over the Sahara: Meteorological controls on emission and transport and implications for modeling. Reviews of Geophysics, 50(1). [Marsham et al., 2011] The importance of the representation of deep convection for modeled dust-generating winds over West Africa during summer. Geophysical Research Letters, 38(16). [Marticorena and Bergametti, 1995] Modeling the atmospheric dust cycle: 1. Design of a soil-derived dust emission scheme. Journal of Geophysical Research, 100(D8):16415–16. [Menut, 2008] Sensitivity of hourly Saharan dust emissions to NCEP and ECMWF modeled wind speed. Journal of Geophysical Research: Atmospheres (1984–2012), 113(D16). [Pierre et al., 2012] Impact of vegetation and soil moisture seasonal dynamics on dust emissions over the Sahel. Journal of Geophysical Research: Atmospheres (1984–2012), 117(D6). [Redelsperger et al., 2000] A parameterization of mesoscale enhancement of surface fluxes for large-scale models. Journal of Climate, 13(2):402–421.
Cirrus cloud model parameterizations: Incorporating realistic ice particle generation
NASA Technical Reports Server (NTRS)
Sassen, Kenneth; Dodd, G. C.; Starr, David OC.
1990-01-01
Recent cirrus cloud modeling studies have involved the application of a time-dependent, two dimensional Eulerian model, with generalized cloud microphysical parameterizations drawn from experimental findings. For computing the ice versus vapor phase changes, the ice mass content is linked to the maintenance of a relative humidity with respect to ice (RHI) of 105 percent; ice growth occurs both with regard to the introduction of new particles and the growth of existing particles. In a simplified cloud model designed to investigate the basic role of various physical processes in the growth and maintenance of cirrus clouds, these parametric relations are justifiable. In comparison, the one dimensional cloud microphysical model recently applied to evaluating the nucleation and growth of ice crystals in cirrus clouds explicitly treated populations of haze and cloud droplets, and ice crystals. Although these two modeling approaches are clearly incompatible, the goal of the present numerical study is to develop a parametric treatment of new ice particle generation, on the basis of detailed microphysical model findings, for incorporation into improved cirrus growth models. For example, the relation between temperature and the relative humidity required to generate ice crystals from ammonium sulfate haze droplets, whose probability of freezing through the homogeneous nucleation mode are a combined function of time and droplet molality, volume, and temperature. As an example of this approach, the results of cloud microphysical simulations are presented showing the rather narrow domain in the temperature/humidity field where new ice crystals can be generated. The microphysical simulations point out the need for detailed CCN studies at cirrus altitudes and haze droplet measurements within cirrus clouds, but also suggest that a relatively simple treatment of ice particle generation, which includes cloud chemistry, can be incorporated into cirrus cloud growth.
Evapotranspiration Parameterizations at a Grass Site in Florida, USA
NASA Astrophysics Data System (ADS)
Rizou, M.; Sumner, D. M.; Nnadi, F.
2007-05-01
Although grasslands account for about 40% of the ice-free global terrestrial land cover, their contribution to the surface exchanges of energy and water at local and regional scales remains uncertain. In this study, the sensitivity of evapotranspiration (ET) and other energy fluxes to wetness variables, namely the volumetric Soil Water Content (SWC) and Antecedent Precipitation Index (API), over a non-irrigated grass site in Central Florida, USA (28.049 N, 81.400 W) was investigated. Eddy correlation and soil water content measurements were taken by the USGS (U.S. Geological Survey) at the grass study site, within 100 m of a SFWMD (South Florida Water Management District) weather station. The soil is composed of fine sands and is mainly covered by Paspalum notatum (bahia grass). Variable soil wetness conditions, with API bounds of about 2 to 160 mm and water table levels of 0.03 to 1.22 m below ground surface, respectively, were observed throughout the year 2004. The Bowen ratio averaged about 1, with values larger than 2 during a few dry days. The daytime average ET was classified into two stages, first stage (energy-limited) and second stage (water-limited), based on water availability. The critical values of API and SWC were found to be about 56 mm and 0.17 respectively, with the second one being approximately 33% of the SWC at saturation. The ET values estimated by the simple Priestley-Taylor (PT) method were compared to the actual values. The PT coefficient varied from a low bound of approximately 0.4 to a peak of 1.21. Simple relationships for the PT empirical factor were employed in terms of SWC and API to improve the accuracy of the second stage observations. The results of the ET parameterizations closely match eddy-covariance flux values on daily and longer time steps.
Evapotranspiration parameterizations at a grass site in Florida, USA
Rizou, M.; Sumner, David M.; Nnadi, F.
2007-01-01
Although grasslands account for about 40% of the ice-free global terrestrial land cover, their contribution to the surface exchanges of energy and water at local and regional scales remains uncertain. In this study, the sensitivity of evapotranspiration (ET) and other energy fluxes to wetness variables, namely the volumetric Soil Water Content (SWC) and Antecedent Precipitation Index (API), over a non-irrigated grass site in Central Florida, USA (28.049 N, 81.400 W) was investigated. Eddy correlation and soil water content measurements were taken by the USGS (U.S. Geological Survey) at the grass study site, within 100 m of a SFWMD (South Florida Water Management District) weather station. The soil is composed of fine sands and is mainly covered by Paspalum notatum (bahia grass). Variable soil wetness conditions, with API bounds of about 2 to 160 mm and water table levels of 0.03 to 1.22 m below ground surface, respectively, were observed throughout the year 2004. The Bowen ratio averaged about 1, with values larger than 2 during a few dry days. The daytime average ET was classified into two stages, first stage (energy-limited) and second stage (water-limited), based on water availability. The critical values of API and SWC were found to be about 56 mm and 0.17 respectively, with the second one being approximately 33% of the SWC at saturation. The ET values estimated by the simple Priestley-Taylor (PT) method were compared to the actual values. The PT coefficient varied from a low bound of approximately 0.4 to a peak of 1.21. Simple relationships for the PT empirical factor were employed in terms of SWC and API to improve the accuracy of the second stage observations. The results of the ET parameterizations closely match eddy-covariance flux values on daily and longer time steps.
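The Priestley-Taylor method used in the two studies above is compact enough to sketch: LE = α · Δ/(Δ+γ) · (Rn − G). The linear ramp of α with soil water content below is a hypothetical stand-in for the SWC/API relationships fitted in the paper; only the endpoint values 0.4 and 1.21 and the critical SWC 0.17 come from the abstract.

```python
import math

GAMMA = 0.066  # psychrometric constant, kPa/degC (approximate, near sea level)

def svp_slope(temp_c):
    """Slope of the saturation vapour pressure curve, kPa/degC (FAO-56 form)."""
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    return 4098.0 * es / (temp_c + 237.3) ** 2

def pt_alpha(swc, swc_crit=0.17, alpha_min=0.4, alpha_max=1.21):
    """Hypothetical moisture-limited PT coefficient: ramps linearly with
    soil water content up to the critical value reported in the abstract."""
    frac = min(max(swc / swc_crit, 0.0), 1.0)
    return alpha_min + (alpha_max - alpha_min) * frac

def priestley_taylor_le(net_radiation, soil_heat_flux, temp_c, swc):
    """Latent heat flux (same units as the radiation terms) from the
    Priestley-Taylor formula: LE = alpha * Delta/(Delta+gamma) * (Rn - G)."""
    delta = svp_slope(temp_c)
    return pt_alpha(swc) * delta / (delta + GAMMA) * (net_radiation - soil_heat_flux)

# A moist midday example: Rn = 400 W/m^2, G = 50 W/m^2, 25 degC, SWC = 0.20.
le = priestley_taylor_le(400.0, 50.0, 25.0, 0.20)
```

With wet soil the moisture limitation vanishes and the estimate reduces to the classic energy-limited PT value; under the critical SWC, α scales ET down for the water-limited second stage.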
Parameterized Spectral Bathymetric Roughness Using the Nonequispaced Fast Fourier Transform
NASA Astrophysics Data System (ADS)
Fabre, David Hanks
The ocean and acoustic modeling community has specifically asked for roughness from bathymetry. An effort has been undertaken to provide what can be thought of as the high frequency content of bathymetry. By contrast, the low frequency content of bathymetry is the set of contours. The two-dimensional amplitude spectrum calculated with the nonequispaced fast Fourier transform (Kunis, 2006) is exploited as the statistic to provide several parameters of roughness following the method of Fox (1996). When an area is uniformly rough, it is termed isotropically rough. When an area exhibits lineation effects (like in a trough or a ridge line in the bathymetry), the term anisotropically rough is used. A predominant spatial azimuth of lineation summarizes anisotropic roughness. The power law model fit produces a roll-off parameter that also provides insight into the roughness of the area. These four parameters give rise to several derived parameters. Algorithmic accomplishments include reviving Fox's method (1985, 1996) and improving the method with the possibly geophysically more appropriate nonequispaced fast Fourier transform. A new composite parameter, simply the overall integral length of the nonlinear parameterizing function, is used to make within-dataset comparisons. A synthetic dataset and six multibeam datasets covering practically all depth regimes have been analyzed with the tools that have been developed. Data-specific contributions include possibly discovering an aspect ratio isotropic cutoff level (less than 1.2), and showing a range of spectral fall-off values between about -0.5 for a sandy-bottomed Gulf of Mexico area and about -1.8 for a coral reef area just outside of the Saipan harbor. We also rank the targeted type of dataset, the best resolution gridded datasets, from smoothest to roughest using a factor based on the kernel dimensions, a percentage from the windowing operation, all multiplied by the overall integration length.
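The power-law roll-off parameter mentioned above is, in essence, the least-squares slope of the log amplitude spectrum against log wavenumber. A minimal sketch on an equispaced synthetic spectrum (a real analysis would use spectra from the nonequispaced FFT, which is not reproduced here):

```python
import numpy as np

def spectral_rolloff(wavenumbers, amplitudes):
    """Least-squares slope of log|A(k)| versus log k: the power-law
    roll-off parameter used as a roughness statistic."""
    slope, _intercept = np.polyfit(np.log(wavenumbers), np.log(amplitudes), 1)
    return float(slope)

# Synthetic spectrum with a known fall-off of k^-1.8, the coral-reef-like
# value quoted in the abstract; the fit should recover the exponent.
k = np.linspace(0.01, 1.0, 200)
amp = k ** -1.8
rolloff = spectral_rolloff(k, amp)
```

Smoother seafloors have steeper spectra (more negative roll-off), so the fitted exponent separates the sandy-bottomed from the reef-like areas in the ranking described above.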
A New Non-Iterative Scheme for Surface Fluxes Parameterization
NASA Astrophysics Data System (ADS)
Gao, Z.
2015-12-01
In weather or climate models, the earth's surface is the boundary that needs to be resolved physically. The condition of atmosphere aloft (e.g., wind, temperature and humidity) is highly dependent on the momentum, sensible heat and latent heat fluxes at surface. However, parameterization of surface turbulent fluxes under unstably/stably stratified conditions has always been a challenge. Currently, the exchanges of momentum and heat fluxes between the earth's surface and the atmosphere are usually calculated with various schemes based on Monin-Obukhov similarity theory. These schemes either need iterations or suffer low accuracy, which might consume excessive CPU time or could lead to unrealistic simulation results. In this paper, a non-iterative scheme is proposed to approach the classic iterative computation results using multiple regressions. The range -5 ≤ RiB ≤ 2.5, 10 ≤ z/z0m ≤ 10^5 and -0.5 ≤ ln(z0m/z0h) ≤ 30 is divided into several regions, and in each of the regions, multiple linear regression is performed to obtain non-iterative solutions for surface fluxes. As compared to the other most recent non-iterative schemes, we show that the suggested scheme has the smallest bias. The maximum relative errors of turbulent transfer coefficients for momentum (CM) and sensible heat (CH), as compared to those obtained from the classic iterative method, are always smaller than 2% in unstable condition and 12% in stable condition from our new non-iterative scheme.
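The "classic iterative computation" that such regression schemes approximate can be sketched as a fixed-point solve for ζ = z/L from the bulk Richardson number. The sketch below uses the standard Businger-Dyer unstable-side stability corrections as an assumed example; the paper's exact stability functions and regression regions are not reproduced.

```python
import math

KAPPA = 0.4  # von Karman constant

def psi_m(zeta):
    """Businger-Dyer stability correction for momentum (unstable side, zeta <= 0)."""
    x = (1.0 - 16.0 * zeta) ** 0.25
    return (2.0 * math.log((1.0 + x) / 2.0) + math.log((1.0 + x * x) / 2.0)
            - 2.0 * math.atan(x) + math.pi / 2.0)

def psi_h(zeta):
    """Businger-Dyer stability correction for heat (unstable side, zeta <= 0)."""
    x = (1.0 - 16.0 * zeta) ** 0.25
    return 2.0 * math.log((1.0 + x * x) / 2.0)

def transfer_coefficients(ri_b, z_over_z0m, z_over_z0h, n_iter=50):
    """Fixed-point iteration for zeta = z/L from the bulk Richardson number,
    then the momentum and heat transfer coefficients CM and CH."""
    lm, lh = math.log(z_over_z0m), math.log(z_over_z0h)
    zeta = ri_b  # first guess
    for _ in range(n_iter):
        zeta = ri_b * (lm - psi_m(zeta)) ** 2 / (lh - psi_h(zeta))
    cm = KAPPA ** 2 / (lm - psi_m(zeta)) ** 2
    ch = KAPPA ** 2 / ((lm - psi_m(zeta)) * (lh - psi_h(zeta)))
    return cm, ch
```

The loop is what a GCM would otherwise execute at every surface grid point and time step; replacing it with per-region multiple linear regressions in (RiB, z/z0m, ln(z0m/z0h)) is the computational saving the abstract targets.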
Generating Single-sided Subduction with Parameterized Mantle Wedge
NASA Astrophysics Data System (ADS)
Lin, C. J.; Tan, E.; Ma, K. F.
2015-12-01
Subduction on Earth is one-sided: one oceanic plate sinks beneath the overriding plate. However, subduction zones in most numerical models tend to develop two-sided subduction, in which both plates sink into the mantle. In this study, we use numerical models to find out how the existence of a low-viscosity wedge (LVW) can enable single-sided subduction and how it affects the flow in the subduction system. At the mantle wedge, water released from the dehydrating oceanic crust serpentinizes the mantle, forming the LVW. The LVW is an important part of the subduction system and provides efficient lubrication between the subducting slab and the overriding lithosphere. Single-sided subduction can be generated in numerical models by different techniques, including prescribed plate velocities, non-Newtonian rheology, and a free surface. These techniques either require kinematic boundary conditions, which produce mantle flow inconsistent with the buoyancy, or cost a great amount of computational resources when solving nonlinear equations. In this study, we tried to generate single-sided subduction with Newtonian viscosity and a free-slip surface. A set of tracers representing hydrated oceanic crust is placed near the surface. As the tracers subduct with the lithosphere, we assume that the oceanic crust becomes dehydrated and serpentinizes the mantle wedge above, so a parameterized LVW is placed above the subducted tracers in the models. We test different upper/lower depth limits of the LVW and different LVW viscosities. The surface velocities of both the overriding and subducting plates relative to the trench are calculated in order to determine whether the subduction is one-sided. Results of our numerical models show that not only is the low-viscosity wedge above the slab essential for the formation of one-sided subduction, but a low-viscosity layer between the two tectonic plates is also needed to lubricate the slab efficiently after subduction has started. On the other hand, the plate's age, which
Search for subgrid scale parameterization by projection pursuit regression
NASA Technical Reports Server (NTRS)
Meneveau, C.; Lund, T. S.; Moin, Parviz
1992-01-01
The dependence of subgrid-scale stresses on variables of the resolved field is studied using direct numerical simulations of isotropic turbulence, homogeneous shear flow, and channel flow. The projection pursuit algorithm, a promising new regression tool for high-dimensional data, is used to systematically search through a large collection of resolved variables, such as components of the strain rate, vorticity, velocity gradients at neighboring grid points, etc. For the case of isotropic turbulence, the search algorithm recovers the linear dependence on the rate of strain (which is necessary to transfer energy to subgrid scales) but is unable to determine any other more complex relationship. For shear flows, however, new systematic relations beyond eddy viscosity are found. For the homogeneous shear flow, the results suggest that products of the mean rotation rate tensor with both the fluctuating strain rate and fluctuating rotation rate tensors are important quantities in parameterizing the subgrid-scale stresses. A model incorporating these terms is proposed. When evaluated with direct numerical simulation data, this model significantly increases the correlation between the modeled and exact stresses, as compared with the Smagorinsky model. In the case of channel flow, the stresses are found to correlate with products of the fluctuating strain and rotation rate tensors. The mean rates of rotation or strain do not appear to be important in this case, and the model determined for homogeneous shear flow does not perform well when tested with channel flow data. Many questions remain about the physical mechanisms underlying these findings, about possible Reynolds number dependence, and, given the low level of correlations, about their impact on modeling. Nevertheless, demonstration of the existence of causal relations between sgs stresses and large-scale characteristics of turbulent shear flows, in addition to those necessary for energy transfer, provides important
Parameterizations for shielding electron accelerators based on Monte Carlo studies
P. Degtyarenko; G. Stapleton
1996-10-01
Numerous recipes for designing lateral slab neutron shielding for electron accelerators are available, and each generally produces rather similar results for shield thicknesses of about 2 m of concrete and for electron beams with energy in the 1 to 10 GeV region. For thinner or much thicker shielding the results tend to diverge, and the standard recipes require modification. Likewise, for geometries other than lateral to the beam direction, further corrections are required, so that calculated results are less reliable and hence additional and costly conservatism is needed. With the adoption of Monte Carlo (MC) methods of transporting particles, a much more powerful way of calculating radiation dose rates outside shielding becomes available. This method is not constrained by geometry, although deep-penetration problems need special statistical treatment, and it is an excellent approach to solving any radiation transport problem, provided the method has been properly checked against measurements and is free from the well-known errors common to such computer methods. The present paper utilizes the results of MC calculations based on a nuclear fragmentation model named DINREG, using the MC transport code GEANT, and models them with the normal two-parameter shielding expressions. Because the parameters can change with electron beam energy, angle to the electron beam direction, and target material, the parameters are expressed as functions of some of these variables to provide universal equations for shielding electron beams that can be used rather simply for deep-penetration problems in simple geometry, without the time-consuming computations needed in the original MC programs. A particular problem with using simple parameterizations based on the uncollided flux is that approximations based on spherical geometry might not apply to the more common cylindrical cases used for accelerator shielding. This source of error has been discussed at length by Stevenson and others. To study
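The "normal two-parameter shielding expression" referred to above combines an exponential attenuation term with inverse-square geometric spreading. A minimal sketch; the numerical values of H0 and the attenuation length below are illustrative assumptions (in practice they are fitted from the MC results as functions of energy, angle, and target material):

```python
import math

def dose_rate(h0, attenuation_length, thickness_cm, density, distance_m):
    """Two-parameter point-source shielding expression:
    H = H0 * exp(-rho * d / lambda) / r^2, with H0 the fitted source term
    (dose rate at 1 m with no shield) and lambda the fitted attenuation
    length in g/cm^2."""
    areal_density = density * thickness_cm  # g/cm^2
    return h0 * math.exp(-areal_density / attenuation_length) / distance_m ** 2

# Illustrative numbers only: lambda ~ 100 g/cm^2 for high-energy neutrons
# in ordinary concrete (rho = 2.35 g/cm^3); 200 cm of shield attenuates by
# a factor exp(-4.7) relative to the unshielded case.
ratio = dose_rate(1.0, 100.0, 200.0, 2.35, 1.0) / dose_rate(1.0, 100.0, 0.0, 2.35, 1.0)
```

Expressing H0 and λ as smooth functions of energy and angle is what turns a table of MC runs into the "universal equations" the abstract describes.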
Impact of Roughness Parameterization on Mistral and Tramontane Simulations
NASA Astrophysics Data System (ADS)
Obermann, Anika; Edelmann, Benedikt; Ahrens, Bodo
2016-04-01
The Mistral and Tramontane are mesoscale winds in the Mediterranean region that travel through valleys in southern France. The cold and dry Mistral blows from the north to northwest and travels down the Rhône valley, between the Alps and Massif Central. The Tramontane travels the Aude valley between the Massif Central and Pyrenees. Over the sea, these winds cause deep-water generation and thus impact the hydrological cycle of the Mediterranean Sea. The occurrence and characteristics of the Mistral and Tramontane depend on the synoptic situation, the channeling effects through mountain barriers, and land and sea surface characteristics. We evaluate Mistral and Tramontane wind speed and direction patterns in several regional climate models from the MedCORDEX framework with respect to these challenges for modeling. The effect of sea surface roughness parameterization on the quality of wind speed and direction modeling is evaluated. Emphasis is on spatial patterns in the areas of the Mistral and Tramontane as well as the overlapping zone. The wind speed development and error propagation along the wind tracks are evaluated. Windy days (with Mistral and Tramontane) are distinguished from non-windy days. A Bayesian network is used to classify days on which the modeled sea level pressure fields do or do not show a Mistral/Tramontane pattern. Furthermore, time series of Mistral and Tramontane events in historical and projection runs are derived from sea level pressure patterns. The development of Mistral and Tramontane days per year and the average length of such events are studied, as well as the development of wind speeds.
A parameterization of nuclear track profiles in CR-39 detector
NASA Astrophysics Data System (ADS)
Azooz, A. A.; Al-Nia'emi, S. H.; Al-Jubbori, M. A.
2012-11-01
In this work, the empirical parameterization describing the alpha particles’ track depth in CR-39 detectors is extended to describe longitudinal track profiles against etching time for protons and alpha particles. MATLAB based software is developed for this purpose. The software calculates and plots the depth, diameter, range, residual range, saturation time, and etch rate versus etching time. The software predictions are compared with other experimental data and with results of calculations using the original software, TRACK_TEST, developed for alpha track calculations. The software related to this work is freely downloadable and performs calculations for protons in addition to alpha particles. Program summary Program title: CR39 Catalog identifier: AENA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENA_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Copyright (c) 2011, Aasim Azooz Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met • Redistributions of source code must retain the above copyright, this list of conditions and the following disclaimer. • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution This software is provided by the copyright holders and contributors “as is” and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and
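The longitudinal track profile grows with etching time and saturates; a toy saturating-growth model of that behavior (the exponential form and the parameter values are assumptions for illustration, not the paper's actual parameterization or the CR39/TRACK_TEST fit):

```python
import math

def track_depth(t, depth_max=10.0, tau=5.0):
    """Toy saturating model of etched track depth (um) vs. etching time t (h).

    depth_max and tau are illustrative fit parameters, not values from
    the CR39 program or TRACK_TEST.
    """
    return depth_max * (1.0 - math.exp(-t / tau))
```

A real profile code additionally tracks diameter, residual range, and etch rate as coupled functions of time, as the abstract describes.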
Multiphonon scattering from surfaces
NASA Astrophysics Data System (ADS)
Manson, J. R.; Celli, V.; Himes, D.
1994-01-01
We consider the relationship between several different formalisms for treating the multiphonon inelastic scattering of atomic projectiles from surfaces. Starting from general principles of formal scattering theory, the trajectory approximation to the scattering intensity is obtained. From the trajectory approximation, the conditions leading to the fast-collision approximation for multiquantum inelastic scattering are systematically derived.
Parameterizing Aggregation Rates: Results of cold temperature ice-ash hydrometeor experiments
NASA Astrophysics Data System (ADS)
Courtland, L. M.; Dufek, J.; Mendez, J. S.; McAdams, J.
2014-12-01
Recent advances in the study of tephra aggregation have indicated that (i) far-field effects of tephra sedimentation are not adequately resolved without accounting for aggregation processes that preferentially remove the fine ash fraction of volcanic ejecta from the atmosphere as constituent pieces of larger particles, and (ii) the environmental conditions (e.g. humidity, temperature) prevalent in volcanic plumes may significantly alter the types of aggregation processes at work in different regions of the volcanic plume. The current research extends these findings to explore the role of ice-ash hydrometeor aggregation in various plume environments. Laboratory experiments utilizing an ice nucleation chamber allow us to parameterize tephra aggregation rates under the cold (0 to -50 C) conditions prevalent in the upper regions of volcanic plumes. We consider the interaction of ice-coated tephra of variable thickness grown in a controlled environment. The ice-ash hydrometeors interact collisionally, and the interaction is recorded by a number of instruments, including high-speed video, to determine if aggregation occurs. The electric charge on individual particles is examined before and after collision to assess the role of electrostatics in the aggregation process and to examine the charge exchange process. We are able to examine how sticking efficiency is related to both the relative abundance of ice on a particle and the magnitude of the charge carried by the hydrometeor. We present here preliminary results of these experiments, the first to constrain the aggregation efficiency of ice-ash hydrometeors, a parameter that will allow tephra dispersion models to use near-real-time meteorological data to better forecast particle residence time in the atmosphere.
Parameterized Cross Sections for Pion Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.
2000-01-01
An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and knowledge of the output particle spectrum is required given the input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of cross section parameterizations, and parameterizations of neutral and charged pion cross sections are provided that give a very accurate description of the experimental data. Lorentz-invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.
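In practice, fitting such a parameterization to measured cross sections is a least-squares problem; a generic sketch that fits a simple power law σ = A·E^n in log-log space (the functional form and the synthetic data are illustrative, not the parameterization of this paper):

```python
import math

def fit_power_law(energies, sigmas):
    """Least-squares fit of sigma = A * E**n in log-log space (stdlib only).

    Returns the fitted (A, n). Linear regression on (log E, log sigma)
    turns the power law into a straight-line fit.
    """
    xs = [math.log(e) for e in energies]
    ys = [math.log(s) for s in sigmas]
    n_pts = len(xs)
    xbar = sum(xs) / n_pts
    ybar = sum(ys) / n_pts
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), slope

# Synthetic data drawn from sigma = 2 * E**0.7 (illustrative only)
E = [1.0, 2.0, 5.0, 10.0]
sig = [2.0 * e ** 0.7 for e in E]
A, n = fit_power_law(E, sig)
```

Published pion parameterizations use more elaborate functional forms, but the fitting workflow is the same.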
Parameterized cross sections for Coulomb dissociation in heavy-ion collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Cucinotta, F. A.; Townsend, L. W.; Badavi, F. F.
1988-01-01
Simple parameterizations of Coulomb dissociation cross sections for use in heavy-ion transport calculations are presented and compared to available experimental dissociation data. The agreement between calculation and experiment is satisfactory considering the simplicity of the calculations.
NASA Astrophysics Data System (ADS)
Genthon, C.; Le Treut, H.; Sadourny, R.; Jouzel, J.
1990-11-01
A Charney-Branscome based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a General Circulation Model (GCM). The parameterized transports in the ZADM are gaged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.
NASA Technical Reports Server (NTRS)
Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean
1990-01-01
A Charney-Branscome based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gaged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.
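Parameterized eddy sensible heat transport in a zonally averaged model is commonly closed as a down-gradient diffusion, F = −K ∂T/∂y, with K tied to baroclinic growth rates; a minimal down-gradient sketch (the constant K and the grid values are illustrative, and this is a simplification of the actual Charney-Branscome closure):

```python
def eddy_heat_flux(T, dy, K):
    """Down-gradient eddy heat flux F_j = -K * dT/dy at the interior
    points of a 1-D meridional temperature profile T (central differences).

    T  -- temperatures (K) on a uniform meridional grid
    dy -- grid spacing (m)
    K  -- eddy diffusivity (m^2/s); in Charney-Branscome-type closures
          this would depend on the local baroclinicity, not be constant
    """
    return [-K * (T[j + 1] - T[j - 1]) / (2.0 * dy)
            for j in range(1, len(T) - 1)]

# Equator-to-pole cooling profile (illustrative): flux comes out positive,
# i.e. heat is carried poleward, down the temperature gradient
T = [300.0, 290.0, 280.0, 270.0, 260.0]
flux = eddy_heat_flux(T, dy=1.0e6, K=1.0e6)
```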
Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model
This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.
The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...
Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...
An improved parameterization of electron heating, with application to an X17 flare
NASA Astrophysics Data System (ADS)
Smithtro, C. G.; Solomon, S. C.
2007-12-01
Ionospheric models typically rely on parameterizations to account for the effects of secondary ionization and heating by photoelectrons. These parameterizations rely on an assumed form for the input solar irradiance; however, during solar flares the shape of the ionizing spectrum can change dramatically. Solomon and Qian [2005] recently updated the parameterization of secondary ionization to account for spectral changes. In this work, we describe a similar improvement to the parameterization of electron heating. The new algorithm is included in a simple ionospheric model and applied to the X17 flare of 28 Oct 2003. With these changes the modeled electron temperature and neutral gas heating rate are shown to increase significantly over previous results. This has particular relevance to the calculation of flare-induced satellite drag.
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional
Parameterized spectral distributions for meson production in proton-proton collisions
NASA Technical Reports Server (NTRS)
Schneider, John P.; Norbury, John W.; Cucinotta, Francis A.
1995-01-01
Accurate semiempirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations, which depend on both the outgoing meson parallel momentum and the incident proton kinetic energy, are able to be reduced to very simple analytical formulas suitable for cosmic ray transport through spacecraft walls, interstellar space, the atmosphere, and meteorites.
NASA Astrophysics Data System (ADS)
Thayer-Calder, K.; Larson, V. E.; Gettelman, A.; Craig, C.; Goldhaber, S.; Schanen, D.
2013-12-01
Global climate models (GCMs) have long had trouble representing climate variability that is highly dependent on convective variability. Convective clouds operate on scales far too small to actually simulate on a large GCM grid. To rectify these issues, GCM development is moving in several directions simultaneously. While much work is focusing on improved convective parameterizations, some modelers are increasing resolution to the point where deep convective clouds can be resolved on the grid scale. Others are using a super-parameterized approach, where small-scale models are embedded within the large-scale grid. Our study utilizes a new approach to modeling convective variability that attempts to model coupled convective and microphysics processes more explicitly than traditional parameterizations. Using the new Community Atmosphere Model (CAM) subcolumn framework, we create several instances of local cloudy or clear air profiles within the large-scale GCM grid. Each sub-column is instantiated through Latin-Hypercube sampling of double-Gaussian PDFs predicted by a higher-order closure cloud parameterization known as CLUBB (Cloud Layers Unified By Binormals). The CAM microphysics code then runs with each instance, and the resulting heat and moisture tendencies are averaged and returned to the GCM in the same way as traditional parameterizations. Here, we present results from single-column simulations of CAM using this sub-column approach to coupling the moist turbulence parameterization to the microphysics scheme.
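Latin-hypercube sampling of a double-Gaussian (two-component Gaussian mixture) PDF can be sketched with the standard library alone: stratify the [0, 1] probability axis and invert the mixture CDF. The weights and moments below are illustrative, not CLUBB output:

```python
import random
from statistics import NormalDist

def lhs_mixture(n, w1, mu1, s1, mu2, s2, seed=0):
    """Latin-hypercube samples from the mixture w1*N(mu1,s1) + (1-w1)*N(mu2,s2).

    Stratifies [0, 1] into n bins, draws one probability per bin, and
    inverts the mixture CDF by bisection (the mixture has no closed-form
    inverse CDF).
    """
    rng = random.Random(seed)

    def cdf(x):
        return (w1 * NormalDist(mu1, s1).cdf(x)
                + (1.0 - w1) * NormalDist(mu2, s2).cdf(x))

    def inv_cdf(p, lo=-50.0, hi=50.0):
        for _ in range(80):          # bisection to ample precision
            mid = 0.5 * (lo + hi)
            if cdf(mid) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    strata = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(strata)              # decorrelate sample order
    return [inv_cdf(p) for p in strata]

samples = lhs_mixture(200, w1=0.7, mu1=0.0, s1=1.0, mu2=5.0, s2=0.5)
# the sample mean should sit near the mixture mean 0.7*0 + 0.3*5 = 1.5
```

In the subcolumn framework each sample would seed one cloudy/clear profile passed to the microphysics.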
Parameterization and Monte Carlo solutions to PDF evolution equations
NASA Astrophysics Data System (ADS)
Suciu, Nicolae; Schüler, Lennart; Attinger, Sabine; Knabner, Peter
2015-04-01
The probability density function (PDF) of the chemical species concentrations transported in random environments is governed by unclosed evolution equations. The PDF is transported in the physical space by drift and diffusion processes described by coefficients derived by standard upscaling procedures. Its transport in the concentration space is described by a drift determined by reaction rates, in a closed form, as well as a term accounting for the sub-grid mixing process due to molecular diffusion and local scale hydrodynamic dispersion. Sub-grid mixing processes are usually described by models of the conditionally averaged diffusion flux or models of the conditional dissipation rate. We show that in certain situations mixing terms can also be derived, in the form of an Itô process, from simulated or measured concentration time series. Monte Carlo solutions to PDF evolution equations are usually constructed with systems of computational particles, which are well suited for highly dimensional advection-dominated problems. Such solutions require the fulfillment of specific consistency conditions relating the statistics of the random concentration field, function of both space and time, to that of the time random function describing an Itô process in physical and concentration spaces which governs the evolution of the system of particles. We show that the solution of the Fokker-Planck equation for the concentration-position PDF of the Itô process coincides with the solution of the PDF equation only for constant density flows in spatially statistically homogeneous systems. We also find that the solution of the Fokker-Planck equation is still equivalent to the solution of the PDF equation weighted by the variable density or by other conserved scalars. We illustrate the parameterization of the sub-grid mixing by time series and the Monte Carlo solution for a problem of contaminant transport in groundwater. The evolution of the system of computational particles whose
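A particle-based Monte Carlo solution advances an ensemble through the Itô SDE equivalent of the Fokker-Planck equation, dX = a dt + √(2D) dW; a minimal Euler-Maruyama sketch with constant drift and diffusion (the coefficients are illustrative, and a real PDF solver would also carry concentration coordinates and mixing terms):

```python
import random

def evolve_particles(n, steps, dt, drift, diff, seed=1):
    """Euler-Maruyama integration of dX = drift*dt + sqrt(2*diff)*dW
    for an ensemble of n particles started at X = 0."""
    rng = random.Random(seed)
    sigma = (2.0 * diff * dt) ** 0.5   # std dev of one Wiener increment
    x = [0.0] * n
    for _ in range(steps):
        x = [xi + drift * dt + sigma * rng.gauss(0.0, 1.0) for xi in x]
    return x

xs = evolve_particles(n=5000, steps=100, dt=0.01, drift=1.0, diff=0.5)
# after t = 1: ensemble mean ~ drift*t = 1.0, variance ~ 2*diff*t = 1.0
```

The consistency conditions discussed in the abstract concern exactly this correspondence between the particle ensemble's statistics and the PDF being solved for.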
Selection and parameterization of cortical neurons for neuroprosthetic control
NASA Astrophysics Data System (ADS)
Wahnoun, Remy; He, Jiping; Helms Tillery, Stephen I.
2006-06-01
When designing neuroprosthetic interfaces for motor function, it is crucial to have a system that can extract reliable information from available neural signals and produce an output suitable for real life applications. Systems designed to date have relied on establishing a relationship between neural discharge patterns in motor cortical areas and limb movement, an approach not suitable for patients who require such implants but who are unable to provide proper motor behavior to initially tune the system. We describe here a method that allows rapid tuning of a population vector-based system for neural control without arm movements. We trained highly motivated primates to observe a 3D center-out task as the computer played it very slowly. Based on only 10-12 s of neuronal activity observed in M1 and PMd, we generated an initial mapping between neural activity and device motion that the animal could successfully use for neuroprosthetic control. Subsequent tunings of the parameters led to improvements in control, but the initial selection of neurons and estimated preferred direction for those cells remained stable throughout the remainder of the day. Using this system, we have observed that the contribution of individual neurons to the overall control of the system is very heterogeneous. We thus derived a novel measure of unit quality and an indexing scheme that allowed us to rate each neuron's contribution to the overall control. In offline tests, we found that fewer than half of the units made positive contributions to the performance. We tested this experimentally by having the animals control the neuroprosthetic system using only the 20 best neurons. We found that performance in this case was better than when the entire set of available neurons was used. Based on these results, we believe that, with careful task design, it is feasible to parameterize control systems without any overt behaviors and that subsequent control system design will be enhanced with
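The population-vector mapping itself is simple: each unit's preferred direction is weighted by its rate modulation and the results are summed; a minimal 3D sketch (the rates, baselines, and preferred directions are illustrative):

```python
def population_vector(rates, baselines, pref_dirs):
    """Sum of preferred directions weighted by rate modulation (rate - baseline).

    rates, baselines -- firing rates (Hz) per unit
    pref_dirs        -- unit 3-vectors, one per unit
    Returns the decoded 3-D movement direction (unnormalized).
    """
    out = [0.0, 0.0, 0.0]
    for r, b, d in zip(rates, baselines, pref_dirs):
        w = r - b
        for k in range(3):
            out[k] += w * d[k]
    return out

# Two illustrative units pulling along +x and +y
v = population_vector(rates=[30.0, 20.0], baselines=[10.0, 10.0],
                      pref_dirs=[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
# v == [20.0, 10.0, 0.0]
```

With real recordings the rates would be binned spike counts and the preferred directions regression estimates; the paper's contribution is obtaining those estimates from observation alone and then ranking units by their contribution.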
Mantle Dynamics Studied with Parameterized Prescription From Mineral Physics Database
NASA Astrophysics Data System (ADS)
Tosi, N.; Yuen, D.; Wentzcovich, R.; deKoker, N.
2012-04-01
The incorporation of important thermodynamic and transport properties into mantle convection models has taken a long time for the community to appreciate, even though it was first spurred by the high-pressure experimental work at Mainz a quarter of a century ago and the experimental work at Bayreuth and St. Louis. The two quantities whose effects have yet to be widely appreciated are thermal expansivity α and thermal conductivity k, which are shown to impact mantle dynamics and thermal history in more ways than geoscientists have previously imagined. We have constructed simple parameterization schemes, which are cast analytically for describing α and k over a wide range of temperatures and pressures corresponding to the Earth's mantle. This approach employs the thermodynamics data set drawn from the VLAB at the University of Minnesota based on first-principles density functional theory [1] and also recent laboratory data from the Bayreuth group [2]. Using analytical formulae to determine α and k increases the computational speed of the convection code with respect to employing pre-calculated look-up tables and allows us to sweep out a wide parameter space. Our results, which also incorporate temperature and pressure dependent viscosity show the following prominent features: 1) The temperature-dependence of α is important in the upper mantle. It enhances strongly the rising hot plumes and inhibits the cold downwellings, thus making subduction more difficult for young slabs. 2) The pressure dependence of α is dominant in the lower mantle. It focuses upwellings and speeds them up during their upward rise. 3) The temperature-dependence of the thermal conductivity helps to homogenize the lateral thermal anomalies in cold downwellings and helps to maintain the heat in the upwellings, thus, in concert with alpha, helps to encourage fast hot plumes. 4) The lattice thermal conductivity of post-perovskite plays an important role in heat-transfer in the lower mantle and
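An analytical parameterization of α over mantle temperatures and pressures might take a separable form; a toy sketch (the functional form and constants below are assumptions for illustration, not the authors' fitted expressions from the VLAB data set):

```python
import math

def alpha(T, P, a0=3.0e-5, aT=1.0e-8, bP=0.01):
    """Toy thermal expansivity (1/K): grows linearly with temperature T (K)
    and decays exponentially with pressure P (GPa).

    a0, aT, bP are illustrative constants, not fitted mantle values.
    The qualitative behavior matches the abstract: T-dependence matters
    most at low pressure (upper mantle), P-dependence dominates at depth.
    """
    return (a0 + aT * T) * math.exp(-bP * P)
```

An analytic form like this is what lets the convection code avoid look-up tables, as the abstract notes.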
NASA Astrophysics Data System (ADS)
Wróbel, Iwona; Piskozub, Jacek
2016-04-01
Wind speed plays a disproportionate role in shaping the climate and is an important input for calculating air-sea interaction, through which we can study climate change. It influences mass, momentum, and energy fluxes, and the standard way of parameterizing those fluxes is in terms of this variable. However, the very functions used to calculate fluxes from winds have evolved over time and still differ considerably (especially in the case of the aerosol source function). As we showed last year at the EGU conference (PICO presentation EGU2015-11206-1) and in a recent published article (OSD 12, C1262-C1264, 2015), there are many uncertainties in the case of air-sea CO2 fluxes. In this study we calculated regional and global mass and momentum fluxes based on several wind speed climatologies. To do this we used satellite wind speed data in the FluxEngine software created within the OceanFlux GHG Evolution project. Our main area of interest is the European Arctic because of its interesting air-sea interaction physics (a six-monthly cycle, strong winds, and ice cover), but because of better data coverage we chose the North Atlantic as the study region, making it possible to compare the calculated fluxes to measured ones. An additional reason was the importance of the area for the Northern Hemisphere climate, and especially for Europe. The study is related to the ESA-funded OceanFlux GHG Evolution project and is meant to be part of a PhD thesis (of I.W.) funded by the Centre of Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). We used a modified version of FluxEngine, a tool created within an earlier ESA-funded project (OceanFlux Greenhouse Gases) for calculating trace gas fluxes, to derive two purely wind-driven (at least in the simplified form used in their parameterizations) fluxes. The modifications included removing the gas transfer velocity formula from the toolset and replacing it with the respective formulas for momentum transfer and mass (aerosol production
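The momentum flux derived from wind speed follows the standard bulk formula τ = ρ_air·C_D·U₁₀²; a minimal sketch with a Large-and-Pond-style wind-speed-dependent drag coefficient (an illustrative, commonly cited choice, not necessarily the exact formula substituted into FluxEngine):

```python
def momentum_flux(u10, rho_air=1.22):
    """Bulk air-sea momentum flux tau = rho * C_D * U10^2 (N/m^2).

    u10     -- 10 m wind speed (m/s)
    rho_air -- air density (kg/m^3)
    The piecewise drag coefficient mimics Large-and-Pond-type fits:
    constant at low winds, rising linearly above ~11 m/s (illustrative).
    """
    cd = 1.2e-3 if u10 < 11.0 else (0.49 + 0.065 * u10) * 1e-3
    return rho_air * cd * u10 ** 2

tau = momentum_flux(10.0)   # ~0.146 N/m^2 for a 10 m/s wind
```

Swapping the gas transfer velocity formula for an expression like this is, in spirit, the FluxEngine modification the abstract describes.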
Factors Influencing Light Scattering in the Eye
NASA Astrophysics Data System (ADS)
Ikaunieks, G.; Ozolinsh, M.; Stepanovs, A.; Lejiete, V.; Reva, N.
2009-01-01
Our vision in twilight or darkness is strongly affected by intraocular light scattering (straylight). It is especially important to assess this phenomenon in view of night driving. The authors have studied the spectral dependence of retinal straylight and estimated the possibility of reducing it with yellow filters and small apertures. The direct compensation flicker method was used for the measurements. The results show that this spectral dependence is close to Rayleigh scattering (∝ λ⁻⁴). As could be expected from the known data, a yellow filter should reduce retinal straylight, especially for blue light. However, in the experiments this scattering was not removed by such a filter but instead slightly increased. Optical apertures reduced light scattering in the eye, especially for red light.
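Under a pure Rayleigh λ⁻⁴ dependence, the relative straylight at two wavelengths follows (λ_ref/λ)⁴; a one-function sketch (the reference wavelength is an illustrative choice):

```python
def relative_scatter(lam, lam_ref=550.0):
    """Scattered intensity at wavelength lam relative to lam_ref (nm),
    assuming a pure Rayleigh lambda^-4 dependence."""
    return (lam_ref / lam) ** 4

# Blue light is scattered (650/450)^4 ~ 4.35 times more than red,
# which is why a yellow (blue-blocking) filter is expected to help
blue_vs_red = relative_scatter(450.0) / relative_scatter(650.0)
```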
Thyroid gland removal
Thyroid gland removal is surgery to remove all or ...
Gallbladder removal - laparoscopic
Laparoscopic gallbladder removal is surgery to remove the gallbladder using a medical device called a laparoscope. ... lets the doctor see inside your belly. Gallbladder removal surgery is done while you are under general ...
DESIGN MANUAL: PHOSPHORUS REMOVAL
This manual summarizes process design information for the best developed methods for removing phosphorus from wastewater. This manual discusses several proven phosphorus removal methods, including phosphorus removal obtainable through biological activity as well as chemical precip...
Gao, Weigang; Wesely, M.L.
1994-01-01
The removal of gaseous substances from the atmosphere by dry deposition represents an important sink in the atmospheric budget for many trace gases. The surface removal rate therefore needs to be described quantitatively in modeling atmospheric transport and chemistry with regional- and global-scale models. Because the uptake capability of a terrestrial surface is strongly influenced by the type and condition of its vegetation, the seasonal and spatial changes in vegetation should be described in considerable detail in large-scale models. The objective of the present study is to develop a model that links remote sensing data from satellites with the RADM dry deposition module to provide a parameterization of dry deposition over large scales with improved temporal and spatial coverage. This paper briefly discusses the modeling methods and initial results obtained by applying the improved dry deposition module to a tallgrass prairie, for which measurements of O₃ dry deposition and simultaneously obtained satellite remote sensing data are available.
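Dry deposition modules such as RADM's build the deposition velocity from a resistance-in-series analogy, V_d = 1/(R_a + R_b + R_c); a minimal sketch in which a satellite-derived leaf area index scales the canopy resistance (the LAI scaling and all resistance values are assumptions for illustration):

```python
def deposition_velocity(ra, rb, rc_min, lai, lai_ref=3.0):
    """Resistance-analogy dry deposition velocity (m/s).

    ra     -- aerodynamic resistance (s/m)
    rb     -- quasi-laminar boundary layer resistance (s/m)
    rc_min -- canopy (surface) resistance at a reference LAI (s/m)
    lai    -- leaf area index, e.g. from satellite vegetation indices;
              denser canopy -> lower canopy resistance (illustrative scaling)
    """
    rc = rc_min * lai_ref / max(lai, 0.1)
    return 1.0 / (ra + rb + rc)

vd = deposition_velocity(ra=30.0, rb=10.0, rc_min=60.0, lai=3.0)  # 0.01 m/s
```

Linking lai to remote sensing data is the kind of coupling the abstract proposes, though the actual RADM module resolves rc into many parallel vegetation pathways.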
NASA Astrophysics Data System (ADS)
Zhang, Yang; McMurry, Peter H.; Yu, Fangqun; Jacobson, Mark Z.
2010-10-01
Large uncertainty exists in the nucleation parameterizations that may be propagated into climate change predictions through affecting aerosol direct and indirect effects. These parameterizations are derived either empirically from laboratory/field measurements or from theoretical models for nucleation rates. A total of 12 nucleation parameterizations (7 binary, 3 ternary, and 2 power laws) that are currently used in three-dimensional air quality models are examined comparatively under a variety of atmospheric conditions from polluted surface to very clean mesosphere environments and evaluated using observations from several laboratory experiments and a field campaign conducted in a sulfate-rich urban environment in the southeastern United States (i.e., Atlanta, Georgia). Significant differences (by up to 18 orders of magnitude) are found among the nucleation rates calculated with different parameterizations under the same meteorological and chemical conditions. All parameterizations give nucleation rates that increase with the number concentrations of sulfuric acid but differ in terms of the magnitude of such increases. Differences exist in their dependencies on temperature, relative humidity, and the mixing ratios of ammonia in terms of both trends and magnitudes. Among the 12 parameterizations tested, the parameterizations of Kuang et al. (2008), Sihto et al. (2006), and Harrington and Kreidenweis (1998) give the best agreement with the observed nucleation rates in most laboratory studies and in Atlanta during a summer season field campaign and either do not exceed or rarely exceed the upper limits of the nucleation rates (i.e., the dimer formation rate) and new particle formation rates (i.e., the formation rate of particles with 2 nm diameter). They are thus the most plausible nucleation parameterizations for applications in the planetary boundary layer of polluted sulfate-rich urban areas. A limitation of the two power laws is that they were derived
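The power-law parameterizations referred to here take the form J = k·[H₂SO₄]^p; a one-function sketch (the k and p values below are made up for illustration, not the published fits):

```python
def nucleation_rate(h2so4, k, p):
    """Power-law nucleation rate J (cm^-3 s^-1) as a function of the
    sulfuric acid number concentration h2so4 (cm^-3).

    k and p are empirical fit constants; published fits typically have
    p near 1 (activation-type) or 2 (kinetic-type).
    """
    return k * h2so4 ** p

# Two illustrative parameter sets: different (k, p) pairs can agree at
# one concentration yet diverge rapidly away from it, which is one way
# the order-of-magnitude spreads in the abstract arise
j_kinetic = nucleation_rate(1.0e7, k=1.0e-14, p=2.0)
j_activation = nucleation_rate(1.0e7, k=1.0e-7, p=1.0)
```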
A new SERS: scattering enhanced Raman scattering
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Yakovlev, Vladislav V.
2014-03-01
Raman spectroscopy is a powerful technique that can be used to obtain detailed chemical information about a system without the need for chemical markers. It has been widely used for a variety of applications such as cancer diagnosis and material characterization. However, Raman scattering is a highly inefficient process, where only one in 10¹¹ scattered photons carries the needed information. Several methods have been developed to enhance this inherently weak effect, including surface enhanced Raman scattering and coherent anti-Stokes Raman scattering. These techniques suffer from drawbacks limiting their commercial use, such as the need for spatial localization of target molecules to a `hot spot', or the need for complex laser systems. Here, we present a simple instrument to enhance spontaneous Raman scattering using elastic light scattering. Elastic scattering is used to substantially increase the interaction volume. Provided that the scattering medium exhibits very low absorption in the spectral range of interest, a large enhancement factor can be attained in a simple and inexpensive setting. In our experiments, we demonstrate an enhancement of 10⁷ in Raman signal intensity. The proposed novel device is equally applicable for analyzing solids, liquids, and gases.
The beam stop array method to measure object scatter in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook
2014-03-01
Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities involves measuring the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by the BSA includes not only the scattered radiation within the object (object scatter), but also external scatter sources, including the X-ray tube, detector, collimator, x-ray filter, and the BSA itself. Once the background scattered radiation is excluded, the method can be applied to different scanner geometries by simple parameter adjustments, without prior knowledge of the scanned object. In this study, a method using the BSA to differentiate scatter in the phantom (object scatter) from the external background was used. Furthermore, this method was applied within the BSA algorithm to correct the object scatter. In order to confirm the background scattered radiation, we obtained the scatter profiles and scatter fraction (SF) profiles in the directions perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall. This result indicated that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter can be used to remove background scatter, and can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.
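As a sketch of the idea, the scatter signal sampled under the beam stops can be interpolated across the detector and subtracted from the projection. The Python sketch below assumes the external background scatter has already been estimated separately (e.g., from a measurement without the object); the function name and the cubic interpolation choice are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def bsa_scatter_correction(image, stop_xy, background_scatter=0.0):
    """Correct a projection image for object scatter using beam-stop-array samples.

    image: 2D projection image (primary + scatter).
    stop_xy: (N, 2) pixel coordinates (x, y) of the lead-disc shadows, where
        the detected signal is (almost) pure scatter.
    background_scatter: external scatter level (tube, collimator, BSA itself)
        to exclude, estimated from a separate measurement.
    """
    # Scatter sampled under the beam stops, minus the external background
    samples = np.array([image[y, x] for x, y in stop_xy]) - background_scatter

    # Interpolate the sparse scatter samples over the full detector grid
    ny, nx = image.shape
    gx, gy = np.meshgrid(np.arange(nx), np.arange(ny))
    scatter = griddata(stop_xy, samples, (gx, gy), method="cubic")
    # Outside the convex hull of the stops, fall back to the mean scatter level
    scatter = np.nan_to_num(scatter, nan=float(np.mean(samples)))

    # Subtract the estimated object scatter to approximate the primary signal
    return image - np.clip(scatter, 0.0, None)
```

In a synthetic test with a flat primary of 100 and a flat scatter field of 20, the corrected open-field pixels recover the primary value and the beam-stop pixels fall to zero, as expected.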
SCAP. Point Kernel Single or Albedo Scatter
Disney, R.K.; Bevan, S.E.
1982-08-05
SCAP solves for radiation transport in complex geometries using the single or albedo-scatter point kernel method. The program is designed to calculate the neutron or gamma-ray radiation level at detector points located within or outside a complex radiation scatter source geometry or a user-specified discrete scattering volume. The geometry is described by zones bounded by intersecting quadratic surfaces with an arbitrary maximum number of boundary surfaces per zone. The anisotropic point sources are described as point-wise energy dependent distributions of polar angles on a meridian; isotropic point sources may be specified also. The attenuation function for gamma rays is an exponential function on the primary source leg and the scatter leg with a buildup factor approximation to account for multiple scatter on the scatter leg. The neutron attenuation function is an exponential function using neutron removal cross sections on the primary source leg and scatter leg. Line or volumetric sources can be represented as distributions of isotropic point sources, with uncollided line-of-sight attenuation and buildup calculated between each source point and the detector point.
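The gamma-ray attenuation function described above can be sketched as an exponential line-of-sight kernel multiplied by a buildup factor that accounts for multiple scatter. The sketch below uses a Taylor-form buildup factor with illustrative placeholder coefficients (real values depend on the material and photon energy); the function names are hypothetical, not SCAP's.

```python
import math

def taylor_buildup(mu_r, A=24.0, a1=-0.069, a2=0.061):
    """Taylor-form buildup factor B(mu*r) = A*exp(-a1*mu*r) + (1-A)*exp(-a2*mu*r).
    Coefficients here are illustrative; B(0) = 1 by construction."""
    return A * math.exp(-a1 * mu_r) + (1.0 - A) * math.exp(-a2 * mu_r)

def point_kernel_flux(source, mu, r, buildup=True):
    """Uncollided gamma-ray point-kernel flux at distance r (cm) from an
    isotropic point source of strength `source` (photons/s), attenuated by a
    medium with linear attenuation coefficient mu (1/cm), with an optional
    buildup correction for multiple scatter on the leg."""
    phi = source * math.exp(-mu * r) / (4.0 * math.pi * r**2)
    return phi * taylor_buildup(mu * r) if buildup else phi
```

A line or volumetric source would then be represented, as in the abstract, by summing this kernel over a distribution of isotropic point sources.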
Turbulence parameterizations for the random displacement method (RDM) version of ADPIC
Nasstrom, J.S.
1995-05-01
This document describes the algorithms that are used in the new random displacement method (RDM) option in the ADPIC model to parameterize atmospheric boundary layer turbulence through an eddy diffusivity, K. Both the new RDM version and the previous gradient version of ADPIC use eddy diffusivities, and, as before, several parameterization options are available. The options used in the RDM are similar to the options for the existing Gradient method in ADPIC, but with some changes. Preferred parameterizations are based on boundary layer turbulence scaling parameters and measured turbulent velocity statistics. Simpler parameterizations, based solely on Pasquill stability class, are also available. When eddy diffusivities are based on boundary layer turbulence scaling parameters (i.e., u*, h, z, and L), "turbulence parameterization" is an appropriate term. In other cases, this term is used loosely to describe "sigma curves". These are semi-empirical relationships between the standard deviations, σ_z(x) and σ_y(x), of concentration from a point source and downwind distance. Separate sigma curves are used for each of six Pasquill stability classes, which are used to categorize the diffusive properties of the atmospheric surface layer. Consequently, sigma curves are more than parameterizations of turbulence, since they also prescribe the final concentration distribution (for a point source) given a Pasquill stability class. In the ADPIC model, sigma curves can be used to calculate the eddy diffusivities, K_Z and K_H. Thus, they can be used to "back out" parameterizations for K which are consistent with the dispersion associated with the particular sigma curve. This results in eddy diffusivities which are spatially homogeneous but travel-time dependent.
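A minimal sketch of a single random-displacement-method step, assuming a user-supplied eddy diffusivity profile K(z) and its vertical gradient (the drift term that keeps an initially well-mixed distribution well mixed in inhomogeneous turbulence):

```python
import numpy as np

def rdm_step(z, K, dKdz, dt, rng):
    """One random-displacement-method step for particle heights z given an
    eddy diffusivity K(z): deterministic drift by the diffusivity gradient
    plus a Gaussian random displacement with variance 2*K*dt."""
    return z + dKdz(z) * dt + np.sqrt(2.0 * K(z) * dt) * rng.standard_normal(np.shape(z))
```

For constant K the ensemble variance grows as 2*K*t, which provides a simple consistency check on the step.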
NASA Astrophysics Data System (ADS)
Zhang, Jicai; Lu, Xianqing; Wang, Ping; Wang, Ya Ping
2011-04-01
A data assimilation technique (the adjoint method) is applied to study the similarities and differences between the Ekman (linear) and the Quadratic (nonlinear) bottom friction parameterizations for a two-dimensional tidal model. Two methods are used to treat the bottom friction coefficient (BFC). The first method assumes that the BFC is a constant over the entire computation domain, while the second applies spatially varying BFCs. The adjoint expressions for the linear and the nonlinear parameterizations and the optimization formulae for the two BFC methods are derived based on the typical Lagrangian multiplier method. By assimilating model-generated 'observations', identical twin experiments are performed to test and validate the inversion ability of the presented methodology. Four experiments, which employ the linear parameterization, the nonlinear parameterization, the constant BFC, and the spatially varying BFC, are carried out to simulate the M2 tide in the Bohai Sea and the Yellow Sea by assimilating TOPEX/Poseidon altimetry and tidal gauge data. After the assimilation, the misfit between model-produced and observed data is significantly decreased in all four experiments. The simulation results indicate that the nonlinear Quadratic parameterization is more accurate than the linear Ekman parameterization if the traditional constant BFC is used. However, when the spatially varying BFCs are used, the differences between the Ekman and the Quadratic approaches diminish, the reason for which is analyzed from the viewpoint of the dissipation rate caused by bottom friction. Generally speaking, linear bottom friction parameterizations are often used in global tidal models. This study indicates that they are also applicable in regional ocean tidal models when combined with spatially varying parameters and the adjoint method.
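The two bottom friction parameterizations compared above take simple standard forms: linear stress proportional to velocity, and quadratic stress proportional to speed times velocity. The sketch below uses illustrative coefficient values, not those inverted in the study.

```python
import numpy as np

def bottom_stress_linear(u, v, r=5e-4):
    """Ekman (linear) bottom friction: kinematic stress proportional to
    velocity. r is a linear friction coefficient (m/s); value illustrative."""
    return r * u, r * v

def bottom_stress_quadratic(u, v, cd=2.5e-3):
    """Quadratic bottom friction: kinematic stress proportional to
    speed * velocity. cd is a dimensionless drag coefficient; value
    illustrative."""
    speed = np.hypot(u, v)
    return cd * speed * u, cd * speed * v
```

The quadratic form dissipates more strongly where currents are fast, which is one reason spatially varying coefficients can make the two forms behave similarly after inversion.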
ERIC Educational Resources Information Center
di Francia, Giuliano Toraldo
1973-01-01
The art of deriving information about an object from the radiation it scatters was once limited to visible light. Now, owing to new techniques, much modern physical science research utilizes radiation scattering. (DF)
A Fast Radiative Transfer Parameterization Under Cloudy Condition in Solar Spectral Region
NASA Astrophysics Data System (ADS)
Yang, Q.; Liu, X.; Yang, P.; Wang, C.
2014-12-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) system, which is proposed and developed by NASA, will directly measure the Earth's thermal infrared spectrum (IR), the spectrum of solar radiation reflected by the Earth and its atmosphere (RS), and radio occultation (RO). IR, RS, and RO measurements provide information on the most critical but least understood climate forcings, responses, and feedbacks associated with the vertical distribution of atmospheric temperature and water vapor, broadband reflected and emitted radiative fluxes, cloud properties, surface albedo, and surface skin temperature. To perform Observing System Simulation Experiments (OSSE) for long-term climate observations, accurate and fast radiative transfer models are needed. The principal component-based radiative transfer model (PCRTM) is one of the efforts devoted to the development of fast radiative transfer models for simulating radiances and reflectances observed by various hyperspectral instruments. Retrieval algorithms based on the PCRTM forward model have been developed for AIRS, NAST, IASI, and CrIS. It is very fast and very accurate relative to the training radiative transfer model. In this work, we extend PCRTM to the UV-VIS-near IR spectral region. To implement faster cloudy radiative transfer calculations, we carefully investigated the radiative transfer process under cloudy conditions. The cloud bidirectional reflectance was parameterized based on off-line 36-stream multiple scattering calculations, while a few other lookup tables were generated to describe the effective transmittance and reflectance of the cloud-clear-sky coupling system in the solar spectral region. The bidirectional reflectance or the irradiance measured by satellite may be calculated using a simple fast radiative transfer model, given the cloud type (ice or water), the optical depth of the cloud, the optical depths of atmospheric trace gases above and below the cloud, the particle size of the cloud, as well
NASA Astrophysics Data System (ADS)
Wiston, Modise; McFiggans, Gordon; Schultz, David
2015-04-01
In this study, we simulate the spatial distributions of particle and gas concentrations from a large pollution event during the dry season in southern Africa and their interactions with cloud processes. Specific focus is on the extent to which cloud-aerosol interactions are affected by various inputs (i.e. emissions), parameterizations, and feedback mechanisms in a coupled mesoscale chemistry-meteorology model, herein the Weather Research and Forecasting model with chemistry (WRF-Chem). The southern African dry season (May-Sep) is characterised by biomass burning (BB) pollution. During this period, BB particles are frequently observed over the subcontinent, while a persistent deck of stratocumulus covers the southwest African coast, favouring long-range transport of aerosols above clouds over the Atlantic Ocean. While anthropogenic pollutants tend to spread over the entire domain, biomass pollutants are concentrated around the burning areas, especially the savannah and tropical rainforest of the Congo Basin. BB is linked to agricultural practice at latitudes south of 10° N. During an intense burning event, there is a clear signal of strong interactions between aerosols and cloud microphysics. These species interfere with the radiative budget, and directly affect the amount of solar radiation reflected and scattered back to space and partly absorbed by the atmosphere. Aerosols also affect cloud microphysics by acting as cloud condensation nuclei (CCN), modifying the precipitation pattern and the cloud albedo. A key aim is to understand the role of pollution in convective cloud processes and its impact on cloud dynamics. The hypothesis is that an environment of potentially high pollution increases the probability of interactions between co-located aerosol and cloud layers. To investigate this hypothesis, we outline an approach to integrate three elements: i) focusing on regime(s) where there are strong indications of
Parameterized signal calibration for NMR cryoporometry experiment without external standard
NASA Astrophysics Data System (ADS)
Stoch, Grzegorz; Krzyżak, Artur T.
2016-08-01
In cryoporometric experiments, non-linear effects associated with the sample and the probehead introduce unwanted contributions to the total signal as the temperature changes. These influences are often eliminated with the help of an intermediate measurement of a separate liquid sample. In this paper we suggest an alternative approach, valid under certain assumptions, based solely on data from the target experiment. To obtain the calibration parameters, the method uses all of the raw data points; its reliability is therefore enhanced compared to other methods based on a smaller number of data points. The presented approach is automatically valid for the desired temperature range. The need for an intermediate measurement is removed, and the calibration parameters are naturally adapted to the individual sample-probehead combination.
Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM
NASA Technical Reports Server (NTRS)
Yao, Mao-Sung; Cheng, Ye
2013-01-01
The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report (AR5). Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or with only the local turbulence parameterization, these two quantities improve their vertical structures, but the vertical transport of water vapor is still weak in the planetary boundary layers (PBLs); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures at all latitudes due to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of the turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in computing nonlocal transports, turbulent length scale, and PBL height, and shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over oceans. This suggests that a cloud-producing scheme needs to be constructed in a framework that also takes the turbulence into consideration.
Albedo of coastal landfast sea ice in Prydz Bay, Antarctica: Observations and parameterization
NASA Astrophysics Data System (ADS)
Yang, Qinghua; Liu, Jiping; Leppäranta, Matti; Sun, Qizhen; Li, Rongbin; Zhang, Lin; Jung, Thomas; Lei, Ruibo; Zhang, Zhanhai; Li, Ming; Zhao, Jiechen; Cheng, Jingjing
2016-05-01
The snow/sea-ice albedo was measured over coastal landfast sea ice in Prydz Bay, East Antarctica (off Zhongshan Station) during the austral spring and summer of 2010 and 2011. The variation of the observed albedo was a combination of a gradual seasonal transition from spring to summer and abrupt changes resulting from synoptic events, including snowfall, blowing snow, and overcast skies. The measured albedo ranged from 0.94 over thick fresh snow to 0.36 over melting sea ice. It was found that snow thickness was the most important factor influencing the albedo variation, while synoptic events and overcast skies could increase the albedo by about 0.18 and 0.06, respectively. The in-situ measured albedo and related physical parameters (e.g., snow thickness, ice thickness, surface temperature, and air temperature) were then used to evaluate four different snow/ice albedo parameterizations used in a variety of climate models. The parameterized albedos showed substantial discrepancies compared to the observed albedo, particularly during the summer melt period, even though more complex parameterizations yielded more realistic variations than simple ones. A modified parameterization was developed, which further considered synoptic events, cloud cover, and the local landfast sea-ice surface characteristics. The resulting parameterized albedo showed very good agreement with the observed albedo.
Organic aerosol volatility parameterizations and their impact on atmospheric composition and climate
NASA Astrophysics Data System (ADS)
Tsigaridis, K.; Bauer, S.
2015-12-01
Despite their importance and ubiquity in the atmosphere, organic aerosols are still very poorly parameterized in global models. This can be explained by two reasons: first, a very large number of unconstrained parameters are involved in accurate parameterizations, and second, a detailed description of semi-volatile organics is computationally very expensive. Even organic aerosol properties that are known to play a major role in the atmosphere, namely volatility and aging, are poorly resolved in global models, if at all. Studies with different models and different parameterizations have not been conclusive on whether the additional complexity improves model simulations, but the added diversity of the different host models used adds an unnecessary degree of variability in the evaluation of results that obscures solid conclusions. Here we will present a thorough study of the most popular organic aerosol parameterizations with regard to volatility in global models, studied within the same host global model, the GISS ModelE2: primary and secondary organic aerosols both being non-volatile, secondary organic aerosols semi-volatile (2-product model), and all organic aerosols semi-volatile (volatility-basis set). We will also present results on the role aerosol microphysical calculations play on organic aerosol concentrations. The changes in aerosol distribution as a result of the different parameterizations, together with their role on gas-phase chemistry and climate, will be presented.
Mesoscale modeling of optical turbulence (C2n) utilizing a novel physically-based parameterization
NASA Astrophysics Data System (ADS)
He, Ping; Basu, Sukanta
2015-09-01
In this paper, we propose a novel parameterization for optical turbulence (C2n) simulations in the atmosphere. In this approach, C2n is calculated from the output of atmospheric models using a high-order turbulence closure scheme. An important feature of this parameterization is that, in the free atmosphere (i.e., above the boundary layer), it is consistent with a well-established C2n formulation by Tatarskii. Furthermore, it approaches a Monin-Obukhov similarity-based relationship in the surface layer. To test the performance of the proposed parameterization, we conduct mesoscale modeling and compare the simulated C2n values with those measured during two field campaigns over Hawaii Island. A popular regression-based approach proposed by Trinquet and Vernin (2007) is also used for comparison. The predicted C2n values, obtained from both the physically based and the statistically based parameterizations, agree reasonably well with the observational data. However, in the presence of a large-scale atmospheric phenomenon (a breaking mountain wave), the physically based parameterization outperforms the statistically based one.
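At optical wavelengths, C2n is conventionally obtained from the temperature structure parameter CT^2 through a standard conversion involving pressure and temperature. A minimal sketch of that conversion (humidity contributions neglected, which is a good approximation at optical wavelengths; the function name is illustrative):

```python
def cn2_from_ct2(ct2, pressure_hpa, temperature_k):
    """Refractive-index structure parameter Cn^2 (m^-2/3) from the
    temperature structure parameter CT^2 (K^2 m^-2/3), using the standard
    optical relation Cn^2 = (79e-6 * P / T^2)^2 * CT^2, with P in hPa
    and T in K."""
    return (79e-6 * pressure_hpa / temperature_k**2) ** 2 * ct2
```

A model-based parameterization such as the one described above would supply CT^2 (or an equivalent) from the closure scheme's resolved and subgrid temperature statistics.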
The Measurement and Parameterization of Effective Radius of Droplets in Warm Stratocumulus Clouds.
NASA Astrophysics Data System (ADS)
Martin, G. M.; Johnson, D. W.; Spice, A.
1994-07-01
Observations from the Meteorological Research Flight's Hercules C-130 aircraft of the microphysical characteristics of warm stratocumulus clouds have been analyzed to investigate the variation of the effective radius of cloud droplets in layer clouds. Results from experiments in the eastern Pacific, South Atlantic, subtropical regions of the North Atlantic, and the sea areas around the British Isles are presented. In situations where entrainment effects are small, the (effective radius)^3 is found to be a linear function of the (volume-averaged radius)^3 in a given cloud and can thus be parameterized with respect to the liquid water content and the droplet number concentration in the cloud. However, the shape of the droplet size spectrum is very dependent on the cloud condensation nuclei (CCN) characteristics below cloud base, and the relationship between effective radius and volume-averaged radius varies between maritime air masses and continental air masses. This study also details comparisons that have been made in stratocumulus between the droplet number concentrations and (a) aerosol concentrations below cloud base in the size range 0.1 to 3.0 μm and (b) CCN supersaturation spectra in the boundary layer. A parameterization relating droplet concentration and aerosol concentration is suggested. The effects of nonadiabatic processes on the parameterization of effective radius are discussed. Drizzle is found to have little effect near cloud top, but in precipitating stratocumulus clouds the parameterization breaks down near cloud base. Comparisons are made between this parameterization of effective radius and others used currently or in the past.
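The linear relation between the cubed radii leads to a simple effective-radius parameterization from bulk cloud quantities. The sketch below uses the commonly cited proportionality constants for this parameterization (k of roughly 0.80 for maritime and 0.67 for continental air masses), treated here as representative rather than exact:

```python
import math

def effective_radius(lwc, n_droplet, k=0.80):
    """Effective radius (m) from liquid water content lwc (kg/m^3) and
    droplet number concentration n_droplet (m^-3), using the cubed-radius
    proportionality: r_v^3 = 3*lwc / (4*pi*rho_w*N) and r_e^3 = r_v^3 / k.
    k ~ 0.80 (maritime) or ~ 0.67 (continental) are representative values."""
    rho_w = 1000.0  # density of liquid water, kg/m^3
    rv3 = 3.0 * lwc / (4.0 * math.pi * rho_w * n_droplet)
    return (rv3 / k) ** (1.0 / 3.0)
```

For a maritime stratocumulus with 0.3 g/m^3 of liquid water and 100 droplets per cm^3, this gives an effective radius near 10 μm, a physically reasonable value.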
On parameterization of the inverse problem for estimating aquifer properties using tracer data
Kowalsky, M. B.; Finsterle, Stefan A.; Williams, Kenneth H.; Murray, Christopher J.; Commer, Michael; Newcomer, Darrell R.; Englert, Andreas L.; Steefel, Carl I.; Hubbard, Susan
2012-06-11
We consider a field-scale tracer experiment conducted in 2007 in a shallow uranium-contaminated aquifer at Rifle, Colorado. In developing a reliable approach for inferring hydrological properties at the site through inverse modeling of the tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance. We present an approach for hydrological inversion of the tracer data and explore, using a 2D synthetic example at first, how parameterization affects the solution, and how additional characterization data could be incorporated to reduce uncertainty. Specifically, we examine sensitivity of the results to the configuration of pilot points used in a geostatistical parameterization, and to the sampling frequency and measurement error of the concentration data. A reliable solution of the inverse problem is found when the pilot point configuration is carefully implemented. In addition, we examine the use of a zonation parameterization, in which the geometry of the geological facies is known (e.g., from geophysical data or core data), to reduce the non-uniqueness of the solution and the number of unknown parameters to be estimated. When zonation information is only available for a limited region, special treatment in the remainder of the model is necessary, such as using a geostatistical parameterization. Finally, inversion of the actual field data is performed using 2D and 3D models, and results are compared with slug test data.
A generalized grid connectivity-based parameterization for subsurface flow model calibration
NASA Astrophysics Data System (ADS)
Bhark, Eric W.; Jafarpour, Behnam; Datta-Gupta, Akhil
2011-06-01
We develop a novel method of parameterization for spatial hydraulic property characterization to mitigate the challenges associated with the nonlinear inverse problem of subsurface flow model calibration. The parameterization is performed by the projection of the estimable hydraulic property field onto an orthonormal basis derived from the grid connectivity structure. The basis functions represent the modal shapes or harmonics of the grid, are defined by a modal frequency, and converge to special cases of the discrete Fourier series under certain grid geometries and boundary assumptions; therefore, hydraulic property updates are performed in the spectral domain and merge with Fourier analysis in ideal cases. Dependence on the grid alone implies that the basis may characterize any grid geometry, including corner point and unstructured, is model independent, and is constructed off-line and only once prior to flow data assimilation. We apply the parameterization in an adaptive multiscale model calibration workflow for three subsurface flow models. Several different grid geometries are considered. In each case the prior hydraulic property model is updated using a parameterized multiplier field that is superimposed onto the grid and assigned an initial value of unity at each cell. The special case corresponding to a constant multiplier is always applied through the constant basis function. Higher modes are adaptively employed during minimization of data misfit to resolve multiscale heterogeneity in the geomodel. The parameterization demonstrates selective updating of heterogeneity at locations and spatial scales sensitive to the available data, otherwise leaving the prior model unchanged as desired.
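On a regular 1D grid, the connectivity-based basis described above reduces to the eigenvectors of the chain-graph Laplacian, which (with free boundaries) are discrete cosine modes; the lowest modes capture smooth, large-scale heterogeneity. A minimal sketch under that simplification (function names are illustrative, not the authors' code):

```python
import numpy as np

def laplacian_basis_1d(n, n_modes):
    """Orthonormal basis from the grid connectivity (graph Laplacian) of a
    1D grid of n cells. Eigenvectors are returned in order of ascending
    eigenvalue, i.e., ascending modal frequency."""
    # Graph Laplacian of the chain graph: degree matrix minus adjacency
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0  # free (Neumann-like) boundaries
    _, v = np.linalg.eigh(L)
    return v[:, :n_modes]  # columns are orthonormal basis functions

def project(field, basis):
    """Spectral-domain coefficients of a property (multiplier) field."""
    return basis.T @ field
```

The first column is the constant mode, matching the abstract's special case of a constant multiplier applied through the constant basis function; higher modes would be adaptively added during the misfit minimization.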
NASA Astrophysics Data System (ADS)
Reid, Jeffrey S.; Brooks, Barbara; Crahan, Katie K.; Hegg, Dean A.; Eck, Thomas F.; O'Neill, Norm; de Leeuw, Gerrit; Reid, Elizabeth A.; Anderson, Kenneth D.
2006-01-01
In August/September of 2001, the R/P FLIP and CIRPAS Twin Otter research aircraft were deployed to the eastern coast of Oahu, Hawaii, as part of the Rough Evaporation Duct (RED) experiment. Goals included the study of the air/sea exchange, turbulence, and sea-salt aerosol particle characteristics at the subtropical marine Pacific site. Here we examine coarse mode particle size distributions. Similar to what has been shown for airborne dust, optical particle counters such as the Forward Scattering Spectrometer Probe (FSSP), Classical Scattering Aerosol Spectrometer Probe (CSASP) and the Cloud Aerosol Spectrometer (CAS) within the Cloud Aerosol and Precipitation Spectrometer (CAPS) instrument systematically overestimate particle size, and consequently volume, for sea salt particles. Ground-based aerodynamic particle sizers (APS) and AERONET inversions yield much more reasonable results. A wing pod mounted APS gave mixed results and may not be appropriate for marine boundary layer studies. Relating our findings to previous studies does much to explain the bulk of the differences in the literature and leads us to conclude that the largest uncertainty facing flux and airborne cloud/aerosol interaction studies is likely due to the instrumentation itself. To our knowledge, there does not exist an in situ aircraft system that adequately measures the ambient volume distribution of coarse mode sea salt particles. Most empirically based sea salt flux parameterizations can trace their heritage to a clearly biased measurement technique. The current "state of the art" in this field prevents any true form of clear sky radiative "closure" for clean marine environments.
Cloud forcing in Arctic polynyas: Climatology, parameterization, and modeling
NASA Astrophysics Data System (ADS)
Key, Erica
Cloud and radiation data gathered in four polynyas across the Western Arctic span a decade of extreme environmental variability that culminated in the furthest retreat of sea ice cover on satellite record. These polynyas, oases of open water within the pack ice, are areas of intense surface exchange and serve as small-scale natural models of all active polar processes. Each of the studied polynyas is uniquely forced and maintained, resulting in an ensemble which representatively samples pan-Arctic variability. Cloud amount in each polynya, as analyzed to WMO standards by a meteorologist from time-lapse imagery collected using a hemispheric mirror, exceeded previous observational estimates of 80%. Calculations of surface cloud radiative forcing point to Arctic clouds' tendency toward scattering incoming shortwave radiation over re-emission of radiation in the longwave from cloud base. Sensitivity of this cloud forcing to variations in albedo, aerosol loading, and cloud microphysics, calculated with a polar-optimized radiative transfer model, indicate that small changes in snow and ice cover elicit stronger responses than heavy aerosol loading, changing particle effective radius, or liquid water content, especially at small solar zenith angles. Results obtained locally within polynyas are given regional relevance through the use of CASPR (Cloud and Surface Parameter Retrieval) algorithms and AVHRR Polar Pathfinder data.
NASA Astrophysics Data System (ADS)
Croft, B.; Lohmann, U.; Martin, R. V.; Stier, P.; Wurzler, S.; Feichter, J.; Hoose, C.; Heikkilä, U.; van Donkelaar, A.; Ferrachat, S.
2010-02-01
A diagnostic cloud nucleation scavenging scheme, which determines stratiform cloud scavenging ratios for both aerosol mass and number distributions based on cloud droplet and ice crystal number concentrations, is introduced into the ECHAM5-HAM global climate model. This scheme is coupled with a size-dependent in-cloud impaction scavenging parameterization for both cloud droplet-aerosol and ice crystal-aerosol collisions. The aerosol mass scavenged in stratiform clouds is found to be primarily (>90%) scavenged by cloud nucleation processes for all aerosol species, except for dust (50%). The aerosol number scavenged is primarily (>90%) attributed to impaction. 99% of this impaction scavenging occurs in clouds with temperatures less than 273 K. Sensitivity studies are presented, which compare aerosol concentrations, burdens, and deposition for a variety of in-cloud scavenging approaches: prescribed fractions, a more computationally expensive prognostic aerosol cloud processing treatment, and the new diagnostic scheme, also with modified assumptions about in-cloud impaction and nucleation scavenging. Our results show that while uncertainties in the representation of in-cloud scavenging processes can lead to differences in the range of 20-30% for the predicted annual, global mean aerosol mass burdens, and nearly 50% for the accumulation mode aerosol number burden, the differences in predicted aerosol mass concentrations can be up to one order of magnitude, particularly for regions of the middle troposphere with temperatures below 273 K where mixed and ice phase clouds exist. Different parameterizations for impaction scavenging changed the predicted global, annual mean number removal attributed to ice clouds sevenfold, and the global, annual dust mass removal attributed to impaction by two orders of magnitude. Closer agreement with observations of black carbon profiles from aircraft (increases near to one order of magnitude for mixed phase clouds), mid
NASA Astrophysics Data System (ADS)
Landry, Guillaume; Seco, Joao; Gaudreault, Mathieu; Verhaegen, Frank
2013-10-01
spectral method. The tissue substitutes were well fitted by the TSM with R^2 = 0.9930. Residuals on Zeff for the phantoms were similar between the TSM and spectral methods for Zeff < 8, while they were improved by the TSM for higher Zeff. The RTM fitted the reference tissue dataset well with R^2 = 0.9999. Comparing the Zeff extracted from the TSM and the more complex RTM to the known values from the reference tissue dataset yielded errors of up to 0.3 and 0.15 units of Zeff, respectively. The parameterization approach yielded standard deviations which were up to 0.3 units of Zeff higher than those observed with the spectral method for Zeff around 7.5. Procedures for the DECT estimation of Zeff removing the need for estimates of the CT scanner spectra have been presented. Both the TSM and the more complex RTM performed better than the spectral method. The RTM yielded the best results for the reference human tissue dataset, reducing errors from up to 0.3 to 0.15 units of Zeff compared to the simpler TSM. Both TSM and RTM are simpler to implement than the spectral method, which requires estimates of the CT scanner spectra.
Electromagnetic inverse scattering
NASA Technical Reports Server (NTRS)
Bojarski, N. N.
1972-01-01
A three-dimensional electromagnetic inverse scattering identity, based on the physical optics approximation, is developed for the monostatic scattered far field cross section of perfect conductors. Uniqueness of this inverse identity is proven. This identity requires complete scattering information for all frequencies and aspect angles. A nonsingular integral equation is developed for the arbitrary case of incomplete frequency and/or aspect angle scattering information. A general closed-form solution to this integral equation is developed, which yields the shape of the scatterer from such incomplete information. A specific practical radar solution is presented. The resolution of this solution is developed, yielding short-pulse target resolution radar system parameter equations. The special cases of two- and one-dimensional inverse scattering and the special case of a priori knowledge of scatterer symmetry are treated in some detail. The merits of this solution over the conventional radar imaging technique are discussed.
Explicit cloud-top entrainment parameterization in the global climate model ECHAM5-HAM
NASA Astrophysics Data System (ADS)
Siegenthaler-Le Drian, C.; Spichtinger, P.; Lohmann, U.
2011-01-01
New developments in the turbulence parameterization in the general circulation model ECHAM5-HAM are presented. They consist mainly of an explicit entrainment closure at the top of stratocumulus-capped boundary layers and the addition of an explicit contribution of the radiative divergence in the buoyancy production term. The impact of the new implementations on a single column model study and on the global scale is presented here. The parameterization has a "smoothing" effect: the abnormally high values of turbulence kinetic energy are reduced, both in the single column and in the Californian stratocumulus region. A sensitivity study with prescribed droplet concentration shows a reduction in the sensitivity of liquid water path to increasing cloud aerosol optical depth. We also study the effect of the new implementation on a Pacific cross-section. The entrainment parameterization leads to an enhanced triggering of the convective activity.
NASA Astrophysics Data System (ADS)
Zhang, G. P.; Fenicia, F.; Rientjes, T. H. M.; Reggiani, P.; Savenije, H. H. G.
A rainfall-runoff model has been developed for the Geer river catchment based on the Representative Elementary Watershed (REW) approach. The approach takes into account five dominant hydrological processes. The entire river catchment is discretized into a finite number of sub-catchments, or REWs. To describe these processes, five flow zones within each REW are distinguished. Within each zone, averaged values for state variables and model parameters are used. In this research, some new model parameterizations are introduced. A consistency analysis has been carried out with respect to the effects of the subsurface parameterization on runoff generation processes for the purpose of evaluating model behavior. In addition, a new approach for representing the relation between topography and the variable source area for the saturation overland flow within a REW is proposed. Results show that the improved model parameterization produces better simulations of river discharge and that the REW approach is an appropriate tool for investigating rainfall-runoff relations.
A numerical method for parameterization of atmospheric chemistry - Computation of tropospheric OH
NASA Technical Reports Server (NTRS)
Spivakovsky, C. M.; Wofsy, S. C.; Prather, M. J.
1990-01-01
An efficient and stable computational scheme for parameterization of atmospheric chemistry is described. The 24-hour-average concentration of OH is represented as a set of high-order polynomials in variables such as temperature, densities of H2O, CO, O3, and NO(t) (defined as NO + NO2 + NO3 + 2N2O5 + HNO2 + HNO4) as well as variables determining solar irradiance: cloud cover, density of the overhead ozone column, surface albedo, latitude, and solar declination. This parameterization of OH chemistry was used in the three-dimensional study of global distribution of CH3CCl3. The proposed computational scheme can be used for parameterization of rates of chemical production and loss or of any other output of a full chemical model.
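The scheme's core idea, representing the output of an expensive chemical model as a cheap polynomial in its input variables, can be sketched as follows. The variables, normalizations and coefficients below are hypothetical placeholders, not the paper's fitted high-order polynomials.

```python
# Sketch of a polynomial surrogate for an expensive chemistry model.
# Here mean [OH] is approximated by a low-order polynomial in
# temperature and H2O density (coefficients are made-up values).
def oh_surrogate(coeffs, temperature, h2o):
    """Evaluate a low-order polynomial parameterization of mean [OH]."""
    t = (temperature - 273.15) / 50.0   # crude normalization
    w = h2o / 1e17
    terms = [1.0, t, w, t * t, t * w, w * w]
    return sum(c * x for c, x in zip(coeffs, terms))

coeffs = [1.0e6, 2.0e5, 5.0e5, -1.0e4, 3.0e4, -2.0e4]  # hypothetical fit
oh = oh_surrogate(coeffs, 288.15, 3.0e17)
```

Once the coefficients are fitted offline against the full model, each evaluation costs only a handful of multiplications, which is what makes the approach attractive inside a three-dimensional transport model.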
A review of recent research on improvement of physical parameterizations in the GLA GCM
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.
1990-01-01
A systematic assessment of the effect of a series of improvements in physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with those of the earlier slab soil hydrology GCM (SSH-GCM). In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. The perpetual July simulations with and without interactive soil moisture show that 30 to 40 day oscillations may be a natural mode of the simulated earth-atmosphere system.
Developing a unified parameterization of diabatic heating for regional climate modeling simulations
NASA Astrophysics Data System (ADS)
Beltran-Przekurat, A. B.; Pielke, R. A., Sr.; Leoncini, G.; Gabriel, P.
2009-12-01
Conventionally, turbulence fluxes, short- and longwave radiative fluxes, and convective and stratiform cloud precipitation atmospheric processes are separately parameterized as a one-dimensional problem. Most of these physical effects occur at spatial scales too small to be explicitly resolved in the models. However, such a separation is not realistic as those processes are three-dimensional and interact with each other. Results from numerical weather prediction and climate models strongly suggest that subgrid-scale parameterizations represent a large source of model errors and sensitivities at a large computational cost. Improving the physical parameterizations and, in addition, reducing the fraction of the total computational time that they require is critical for improving the predictive skill of atmospheric models for both individual model realizations and for ensemble predictions. Our preliminary work presents a new methodology to incorporate parameterizations for use in atmospheric models. The effects of the parameterized physics on the diabatic heating and moistening/drying are incorporated into unified transfer functions, called Universal Look-Up Table (ULUT). The ULUT accepts as inputs the dependent variables and other information that are traditionally inserted into the parameterizations and produces the equivalent temperature and moisture changes that result from summing each parameterization. A similar concept using remotely-sensed data was proposed by Pielke Sr. et al. (2007) [Satellite-based model parameterization of diabatic heating. EOS, 88, 96-97]. The major goal is to create a ULUT for the diabatic heating that would be able to reproduce the meteorological fields with the same accuracy as in the original model configuration but at a fraction of the cost. This effort is similar, although much broader in scope, to that of Leoncini et al. (2008, From model based parameterizations to Lookup Tables: An EOF approach. Wea. Forecasting, 23, 1127
Kogan, Y.L.; Kogan, Z.N.; Lilly, D.K.; Khairoutdinov, M.F.
1995-04-01
Stratocumulus clouds in the marine boundary layer exert a tremendous impact on the planetary radiation balance because of their persistence and large cover. Even small biases in the representation of their radiative parameters can produce large errors in the simulated planetary radiation balance. General circulation models (GCMs) and climate models most commonly use two parameterizations of cloud optical depth. The first employs as input parameters the climatological or in some other way averaged cloud droplet effective radius and liquid water path. The second uses as input parameters the droplet concentration, mean droplet radius and cloud geometrical thickness. Both parameterizations are obtained from a general theoretical expression for cloud optical depth. This paper contrasts these two parameterizations with the general theoretical definition, using a set of cloud drop distribution functions generated by the CIMMS three-dimensional large-eddy simulation (LES) stratocumulus cloud microphysical model.
A second-order Budyko-type parameterization of land-surface hydrology
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1982-01-01
A simple, second-order parameterization of the water fluxes at a land surface, for use as the appropriate boundary condition in general circulation models of the global atmosphere, was developed. The derived parameterization incorporates the high nonlinearities in the relationship between the near-surface soil moisture and the evaporation, runoff and percolation fluxes. Based on the one-dimensional statistical-dynamical derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. The suggested parameterization is compared with other existing techniques and available measurements. A thermodynamic coupling is applied in order to obtain estimates of the surface ground temperature.
Organic Aerosol Volatility Parameterizations and Their Impact on Atmospheric Composition and Climate
NASA Technical Reports Server (NTRS)
Tsigaridis, Kostas; Bauer, Susanne E.
2015-01-01
Despite their importance and ubiquity in the atmosphere, organic aerosols are still very poorly parameterized in global models. This can be explained by two reasons: first, a very large number of unconstrained parameters are involved in accurate parameterizations, and second, a detailed description of semi-volatile organics is computationally very expensive. Even organic aerosol properties that are known to play a major role in the atmosphere, namely volatility and aging, are poorly resolved in global models, if at all. Studies with different models and different parameterizations have not been conclusive on whether the additional complexity improves model simulations, but the added diversity of the different host models used adds an unnecessary degree of variability in the evaluation of results that obscures solid conclusions. Aerosol microphysics do not significantly alter the mean OA vertical profile or comparison with surface measurements. This might not be the case for semi-volatile OA with microphysics.
ARSENIC REMOVAL BY IRON REMOVAL PROCESSES
Presentation will discuss the removal of arsenic from drinking water using iron removal processes that include oxidation/filtration and the manganese greensand processes. Presentation includes results of U.S. EPA field studies conducted in Michigan and Ohio on existing iron remo...
Parameterization of Forest Canopies with the PROSAIL Model
NASA Astrophysics Data System (ADS)
Austerberry, M. J.; Grigsby, S.; Ustin, S.
2013-12-01
Particularly in forested environments, arboreal characteristics such as Leaf Area Index (LAI) and Leaf Inclination Angle have a large impact on the spectral characteristics of reflected radiation. The reflected spectrum can be measured directly with satellites or airborne instruments, including the MASTER and AVIRIS instruments. This particular project dealt with spectral analysis of reflected light as measured by AVIRIS compared to tree measurements taken from the ground. Chemical properties of leaves, including pigment concentrations and moisture levels, were also measured. The leaf data were combined with the chemical properties of three separate trees and served as input data for a sequence of simulations with the PROSAIL Model, a combination of PROSPECT and Scattering by Arbitrarily Inclined Leaves (SAIL) simulations. The output was a computed reflectivity spectrum, which corresponded to the spectra directly measured by AVIRIS at the three trees' exact locations within a 34-meter pixel resolution. The input data that produced the best-correlating spectral output were then cross-referenced with LAI values that had been obtained through two entirely separate methods, NDVI extraction and use of the Beer-Lambert law with airborne LiDAR. Examination with regressive techniques between the measured and modeled spectra then enabled a determination of the trees' probable structure and leaf parameters. Highly correlated spectral output corresponded well to specific values of LAI and Leaf Inclination Angle. Interestingly, it appears that varying the Leaf Angle Distribution has little or no noticeable effect on the PROSAIL model. This project not only evaluates the effectiveness and accuracy of the PROSAIL model, but also serves as a precursor to direct measurement of vegetative indices exclusively from airborne or satellite observation.
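As a minimal sketch of one of the two independent LAI methods mentioned, the Beer-Lambert inversion can be written in a single line. The extinction coefficient k below is an assumed placeholder; in practice it depends on the leaf angle distribution and the view geometry.

```python
import math

# Beer-Lambert retrieval: invert P_gap = exp(-k * LAI) for LAI,
# given a canopy gap fraction estimated from LiDAR returns.
# k = 0.5 is a common illustrative value, not a fitted one.
def lai_beer_lambert(gap_fraction, k=0.5):
    return -math.log(gap_fraction) / k

lai = lai_beer_lambert(0.3)  # canopy transmitting 30% of the beam
```

Denser canopies (smaller gap fractions) map to larger LAI, diverging as the gap fraction approaches zero, which is why LiDAR-based estimates saturate in very dense stands.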
Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Hälg, Roger A.; Baiocco, Giorgio; Lomax, Tony
2016-08-01
The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate, besides the dose, also the energy spectra, in order to obtain quantities which could be a measure of the biological effectiveness and test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was Monte Carlo simulated using GEANT. Based on the simulated neutron spectra for three different proton beam energies a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with a satisfying precision up to 85 cm away from the proton pencil beam when compared to the results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factors and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy. The pencil beam algorithm has
Engelmann spruce site index models: a comparison of model functions and parameterizations.
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem analyzed Engelmann spruce site trees sampled across the Engelmann Spruce - Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criteria and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use as the best fitting methods have the most barriers in their application in terms of data and software requirements. PMID:25853472
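For illustration, the Chapman-Richards form named above can be sketched directly. The parameter values below are invented for the example, not the fitted Engelmann spruce estimates from the study.

```python
import math

# Chapman-Richards growth curve, one of the three model forms
# compared in the study: H = A * (1 - exp(-r * t))^c,
# with asymptote A, rate r and shape c (illustrative values).
def chapman_richards(age, asymptote, rate, shape):
    return asymptote * (1.0 - math.exp(-rate * age)) ** shape

heights = [chapman_richards(t, asymptote=30.0, rate=0.04, shape=1.3)
           for t in (25, 50, 100)]
```

The indicator-variable, mixed-effects and GADA parameterizations compared in the paper all fit site-specific versions of a base curve like this one; they differ in which of A, r and c are allowed to vary by site.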
Saa, Pedro; Nielsen, Lars K.
2015-01-01
Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. Particularly, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The former integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > -2 kJ/mol), a transition region (-2 > ΔGr > -20 kJ/mol) and a constant elasticity region (ΔGr < -20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only the kinetic
Algorithmic scatter correction in dual-energy digital mammography
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei
2013-11-15
Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with breast tissue equivalent phantom and calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: image without scatter correction, image with scatter correction using the pinhole-array interpolation method, and image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of
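A one-dimensional sketch of the key assumption, that scatter varies slowly in space and can therefore be estimated with a heavy low-pass filter and removed via a scatter fraction, is shown below. All values are hypothetical; the actual method operates on dual-energy mammogram pairs, not a single 1-D signal.

```python
# Estimate the slowly-varying component of a signal with a wide
# moving average and subtract the scatter-fraction-weighted
# background from the signal (illustrative sketch only).
def moving_average(signal, window):
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def remove_scatter(signal, scatter_fraction, window=21):
    background = moving_average(signal, window)
    return [s - scatter_fraction * b for s, b in zip(signal, background)]

corrected = remove_scatter([100.0] * 101, scatter_fraction=0.2)
```

Because calcifications are small and sparse, they barely perturb the low-pass background estimate, which is why the majority-noncalcification assumption makes a smoothing-based scatter estimate workable.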
Matching Pursuit with Asymmetric Functions for Signal Decomposition and Parameterization
Spustek, Tomasz; Jedrzejczak, Wiesław Wiktor; Blinowska, Katarzyna Joanna
2015-01-01
The method of adaptive approximations by Matching Pursuit makes it possible to decompose signals into basic components (called atoms). The approach relies on fitting, in an iterative way, functions from a large predefined set (called dictionary) to an analyzed signal. Usually, symmetric functions coming from the Gabor family (sine modulated Gaussian) are used. However Gabor functions may not be optimal in describing waveforms present in physiological and medical signals. Many biomedical signals contain asymmetric components, usually with a steep rise and slower decay. For the decomposition of this kind of signal we introduce a dictionary of functions of various degrees of asymmetry – from symmetric Gabor atoms to highly asymmetric waveforms. The application of this enriched dictionary to Otoacoustic Emissions and Steady-State Visually Evoked Potentials demonstrated the advantages of the proposed method. The approach provides more sparse representation, allows for correct determination of the latencies of the components and removes the "energy leakage" effect generated by symmetric waveforms that do not sufficiently match the structures of the analyzed signal. Additionally, we introduced a time-frequency-amplitude distribution that is more adequate for representation of asymmetric atoms than the conventional time-frequency-energy distribution. PMID:26115480
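The greedy Matching Pursuit iteration itself is compact. Below is a sketch with a two-atom dictionary containing one symmetric Gabor-like atom and one steep-rise/slow-decay asymmetric atom; all shapes and sizes are illustrative, not the paper's dictionary.

```python
import math

def unit(v):
    """Normalize a vector to unit energy."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def matching_pursuit(signal, dictionary, n_iter=3):
    """Greedy decomposition: repeatedly pick the unit-norm atom with
    the largest inner product with the residual, record it, and
    subtract its projection from the residual."""
    residual = list(signal)
    chosen = []
    for _ in range(n_iter):
        name, coef = max(((k, sum(r * a for r, a in zip(residual, atom)))
                          for k, atom in dictionary.items()),
                         key=lambda kc: abs(kc[1]))
        chosen.append((name, coef))
        residual = [r - coef * a for r, a in zip(residual, dictionary[name])]
    return chosen, residual

N = 64
# one symmetric Gabor-like atom, one steep-rise/slow-decay atom
gabor = unit([math.exp(-((i - 20) / 4.0) ** 2) * math.cos(0.8 * i)
              for i in range(N)])
asym = unit([(i - 40) * math.exp(-(i - 40) / 6.0) if i >= 40 else 0.0
             for i in range(N)])
dictionary = {"gabor": gabor, "asym": asym}
chosen, residual = matching_pursuit([2.0 * a for a in asym],
                                    dictionary, n_iter=1)
```

With an asymmetric structure in the signal, the asymmetric atom wins the correlation in one iteration and the residual vanishes; forcing the same signal onto symmetric atoms would instead spread energy across several atoms, the "energy leakage" effect the paper describes.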
NASA Technical Reports Server (NTRS)
Glaessgen, Edward H.; Saether, Erik; Phillips, Dawn R.; Yamakov, Vesselin
2006-01-01
A multiscale modeling strategy is developed to study grain boundary fracture in polycrystalline aluminum. Atomistic simulation is used to model fundamental nanoscale deformation and fracture mechanisms and to develop a constitutive relationship for separation along a grain boundary interface. The nanoscale constitutive relationship is then parameterized within a cohesive zone model to represent variations in grain boundary properties. These variations arise from the presence of vacancies, interstitials, and other defects in addition to deviations in grain boundary angle from the baseline configuration considered in the molecular dynamics simulation. The parameterized cohesive zone models are then used to model grain boundaries within finite element analyses of aluminum polycrystals.
The role of polar regions in global climate, and a new parameterization of global heat transport
NASA Technical Reports Server (NTRS)
Lindzen, R. S.; Farrell, B.
1980-01-01
The effects of the transport of heat between polar regions and other latitudes on climate sensitivity and stability are examined within the framework of simple energy balance models. New heat transport parameterizations adjust radiative equilibrium distributions of temperature with latitude on the basis of Hadley cells and baroclinically unstable eddies; including the effects of static stability changes with latitude eliminates the possible error in estimating the pole-equator temperature difference. It is found that climate sensitivity and stability for the new transport parameterizations can differ from other models, and is capable of simulating the sensitivity required by existing climate data.
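In the spirit of the simple energy balance models discussed, a two-box sketch with linearized outgoing longwave radiation and a diffusive-style transport term might look as follows. The Budyko-style constants are illustrative placeholders, not the authors' parameterization.

```python
# Two-box (low-latitude / polar) energy-balance sketch.
# Outgoing longwave is linearized as A + B*T; meridional heat
# transport is D*(T_low - T_pole). Solving the 2x2 linear system:
#   S_low  = A + B*T_low  + D*(T_low - T_pole)
#   S_pole = A + B*T_pole - D*(T_low - T_pole)
def two_box_equilibrium(S_low, S_pole, A=203.3, B=2.09, D=3.8):
    det = B * B + 2.0 * B * D
    t_low = ((B + D) * (S_low - A) + D * (S_pole - A)) / det
    t_pole = (D * (S_low - A) + (B + D) * (S_pole - A)) / det
    return t_low, t_pole

t_low, t_pole = two_box_equilibrium(300.0, 180.0)
```

Increasing D shrinks the equator-pole temperature contrast, which is exactly the quantity the paper argues is mis-estimated when static stability changes with latitude are ignored.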
Dam removal increases American eel abundance in distant headwater streams
Hitt, Nathaniel P.; Eyler, Sheila; Wofford, John E.B.
2012-01-01
American eel Anguilla rostrata abundances have undergone significant declines over the last 50 years, and migration barriers have been recognized as a contributing cause. We evaluated eel abundances in headwater streams of Shenandoah National Park, Virginia, to compare sites before and after the removal of a large downstream dam in 2004 (Embrey Dam, Rappahannock River). Eel abundances in headwater streams increased significantly after the removal of Embrey Dam. Observed eel abundances after dam removal exceeded predictions derived from autoregressive models parameterized with data prior to dam removal. Mann–Kendall analyses also revealed consistent increases in eel abundances from 2004 to 2010 but inconsistent temporal trends before dam removal. Increasing eel numbers could not be attributed to changes in local physical habitat (i.e., mean stream depth or substrate size) or regional population dynamics (i.e., abundances in Maryland streams or Virginia estuaries). Dam removal was associated with decreasing minimum eel lengths in headwater streams, suggesting that the dam previously impeded migration of many small-bodied individuals (<300 mm TL). We hypothesize that restoring connectivity to headwater streams could increase eel population growth rates by increasing female eel numbers and fecundity. This study demonstrated that dams may influence eel abundances in headwater streams up to 150 river kilometers distant, and that dam removal may provide benefits for eel management and conservation at the landscape scale.
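The Mann-Kendall S statistic behind the trend analyses can be sketched in a few lines. This is a bare-bones version without tie corrections or the variance-based significance test; the counts are invented for illustration.

```python
# Mann-Kendall S statistic: count concordant minus discordant pairs
# over all ordered pairs (i < j) in a time series. Positive S
# indicates a monotonic upward trend.
def mann_kendall_s(series):
    s = 0
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

s_up = mann_kendall_s([3, 5, 6, 9, 12, 15, 18])  # hypothetical annual counts
```

A strictly increasing series of length n yields the maximum S of n(n-1)/2, which is the pattern the post-removal eel counts approached.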
NASA Astrophysics Data System (ADS)
Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badełek, B.; Beaufays, J.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I.; Blum, D.; Böhm, E.; de Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S.; Brück, H.; Calen, H.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Ftáčnik, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Grafström, P.; Grard, F.; Haas, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Jaffré, M.; Jachołkowska, A.; Janata, F.; Jancsó, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Korbel, V.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Malecki, P.; Manz, A.; Maselli, S.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pawlik, B.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pettinghale, J.; Pietrzyk, B.; Pietrzyk, U.; Pönsgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Sandacz, A.; Scheer, M.; Schlagböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Schneider, A.; Scholz, M.; Schröder, T.; Schultze, K.; Sloan, T.; Stier, H. E.; Studt, M.; Taylor, G. N.; Thénard, J. M.; Thompson, J. C.; de La Torre, A.; Toth, J.; Urban, L.; Urban, L.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.
1987-09-01
The multiplicity distributions of charged hadrons produced in deep inelastic muon-proton scattering at 280 GeV are analysed in various rapidity intervals, as a function of the total hadronic centre of mass energy W ranging from 4 to 20 GeV. Multiplicity distributions for the backward and forward hemispheres are also analysed separately. The data are well parameterized by binomial distributions, extending their range of applicability to lepton-proton scattering. The energy and rapidity dependence of the parameters is presented, and a smooth transition from the negative binomial distribution via a Poissonian to the ordinary binomial is observed.
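The negative binomial parameterization referred to above can be written down and checked numerically. The mean and shape values below (n̄ = 8, k = 4) are illustrative only, not EMC fit results; as k grows the distribution tends to a Poissonian.

```python
from math import lgamma, exp, log

def nbd(n, nbar, k):
    """Negative binomial probability P(n) with mean nbar and shape k.
    k -> infinity recovers a Poissonian; small k gives broader distributions."""
    r = nbar / k
    return exp(lgamma(n + k) - lgamma(k) - lgamma(n + 1)
               + n * log(r) - (n + k) * log(1.0 + r))

# Normalization and mean check for illustrative values nbar = 8, k = 4
probs = [nbd(n, 8.0, 4.0) for n in range(200)]
total = sum(probs)
mean = sum(n * p for n, p in zip(range(200), probs))
```

The sum over n should be unity and the first moment should reproduce n̄, which is a quick sanity check on any implementation of the fit function.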
Parameterization and analysis of 3-D radiative transfer in clouds
Varnai, Tamas
2012-03-16
This report provides a summary of major accomplishments from the project. The project examines the impact of radiative interactions between neighboring atmospheric columns, for example clouds scattering extra sunlight toward nearby clear areas. While most current cloud models don't consider these interactions and instead treat sunlight in each atmospheric column separately, the resulting uncertainties have remained unknown. This project has provided the first estimates of how average solar heating is affected by interactions between nearby columns. These estimates were obtained by combining several years of cloud observations at three DOE Atmospheric Radiation Measurement (ARM) Climate Research Facility sites (in Alaska, Oklahoma, and Papua New Guinea) with simulations of solar radiation around the observed clouds. The importance of radiative interactions between atmospheric columns was evaluated by contrasting simulations that included the interactions with those that did not. This study provides lower-bound estimates for radiative interactions: it cannot consider interactions in the cross-wind direction, because it uses two-dimensional vertical cross sections through clouds that were observed by instruments looking straight up as the clouds drifted aloft. Data from new DOE scanning radars will allow future radiative studies to consider the full three-dimensional nature of radiative processes. The results reveal that two-dimensional radiative interactions increase overall day-and-night average solar heating by about 0.3, 1.2, and 4.1 watts per square meter at the three sites, respectively. This increase grows further if one considers that most large-domain cloud simulations have resolutions too coarse to resolve small-scale cloud variability. For example, the increases in solar heating mentioned above roughly double for a fairly typical model resolution of 1 km. The study also examined the factors that shape radiative interactions between atmospheric columns and
NASA Astrophysics Data System (ADS)
Heeb, Peter; Tschanun, Wolfgang; Buser, Rudolf
2012-03-01
A comprehensive and completely parameterized model is proposed to determine the related electrical and mechanical dynamic system response of a voltage-driven capacitive coupled micromechanical switch. As an advantage over existing parameterized models, the model presented in this paper returns within a few seconds all relevant system quantities necessary to design the desired switching cycle. Moreover, a sophisticated and detailed guideline is given on how to engineer a MEMS switch. An analytical approach is used throughout the modelling, providing representative coefficients in a set of two coupled time-dependent differential equations. This paper uses an equivalent mass moving along the axis of acceleration and a momentum absorption coefficient. The model describes all the energies transferred: the energy dissipated in the series resistor that models the signal attenuation of the bias line, the energy dissipated in the squeezed film, the stored energy in the series capacitor that represents a fixed separation in the bias line and stops the dc power in the event of a short circuit between the RF and dc path, the energy stored in the spring mechanism, and the energy absorbed by mechanical interaction at the switch contacts. Further, the model determines the electrical power fed back to the bias line. The calculated switching dynamics are confirmed by the electrical characterization of the developed RF switch. The fabricated RF switch performs well, in good agreement with the modelled data, showing a transition time of 7 µs followed by a sequence of bounces. Moreover, the scattering parameters exhibit an isolation in the off-state of >8 dB and an insertion loss in the on-state of <0.6 dB up to frequencies of 50 GHz. The presented model is intended to be integrated into standard circuit simulation software, allowing circuit engineers to design the switch bias line, to minimize induced currents and cross actuation, as well as to find the mechanical structure dimensions
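The coupled electromechanical dynamics described above can be illustrated with a much simpler lumped model than the authors' (no squeeze-film, series-capacitor, or bias-line feedback terms): a single-degree-of-freedom electrostatic actuator integrated with a semi-implicit Euler scheme. Every component value below is assumed for illustration and is not taken from the paper.

```python
# Semi-implicit Euler integration of a 1-DOF electrostatic actuator:
#   m*x'' = eps*A*V^2 / (2*(g0 - x)^2) - k*x - b*x'
eps = 8.854e-12   # vacuum permittivity, F/m
A   = 1e-8        # electrode area, m^2 (100 um x 100 um, assumed)
g0  = 2e-6        # initial gap, m
k   = 10.0        # spring constant, N/m
m   = 4.66e-11    # equivalent moving mass, kg
b   = 4.3e-5      # damping coefficient (~critical), N*s/m
V   = 10.0        # drive voltage, below the ~16 V static pull-in

x, v, dt = 0.0, 0.0, 1e-8
for _ in range(10000):            # 0.1 ms of motion, well past settling
    f = eps * A * V**2 / (2.0 * (g0 - x)**2) - k * x - b * v
    v += dt * f / m               # update velocity first (semi-implicit)
    x += dt * v
```

With near-critical damping the plate settles at the static equilibrium (about 0.13 µm here) rather than bouncing; lowering b reproduces the bounce sequence the abstract mentions.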
NASA Astrophysics Data System (ADS)
Piskozub, Jacek; Wróbel, Iwona
2016-04-01
The North Atlantic is a crucial region for both ocean circulation and the carbon cycle. Most of the ocean's deep water is produced in this basin, making it a large CO2 sink. The region, close to the major oceanographic centres, has been well covered with cruises. This is why we performed a study of the dependence of net CO2 flux on the choice of gas transfer velocity (k) parameterization for this very region: the North Atlantic including the European Arctic seas. The study is part of the ESA-funded OceanFlux GHG Evolution project and, at the same time, a PhD thesis (of I.W.) funded by the Centre of Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). Early results were presented last year at EGU 2015 as PICO presentation EGU2015-11206-1. We used FluxEngine, a tool created within an earlier ESA-funded project (OceanFlux Greenhouse Gases), to calculate North Atlantic and global fluxes with different gas transfer velocity formulas. During the processing of the data, we noticed that the North Atlantic results for different k formulas are more similar (in the sense of relative error) than the global ones. This was true both for parameterizations using the same power of wind speed and when comparing wind-squared and wind-cubed parameterizations. This result was interesting because North Atlantic winds are stronger than the global average. Was the similarity of the flux results caused by the parameterizations having been tuned to the North Atlantic, where many of the early cruises measuring CO2 fugacities were performed? A closer look at the parameterizations and their history showed that not all of them were based on North Atlantic data. Some were tuned to the Southern Ocean, with even stronger winds, while some were based on global budgets of 14C. However, we have found two reasons, not reported before in the literature, for North Atlantic fluxes being more similar than global ones for different gas transfer velocity parameterizations
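The wind-speed dependence discussed above can be illustrated with two commonly quoted k(U) forms. The coefficients (0.31·U² and 0.0283·U³, giving k in cm/h for U in m/s) are Wanninkhof (1992)-style and Wanninkhof & McGillis (1999)-style values used here as assumptions, not necessarily the exact formulas compared in the study:

```python
def k_quadratic(u10):
    """Wind-squared gas transfer velocity, cm/h (Wanninkhof-1992-style)."""
    return 0.31 * u10**2

def k_cubic(u10):
    """Wind-cubed gas transfer velocity, cm/h (Wanninkhof&McGillis-1999-style)."""
    return 0.0283 * u10**3

# At North-Atlantic-like winds (~11 m/s) the two forms nearly coincide,
# while at a global-mean-like wind (~7 m/s) they differ by ~50%.
k2_na, k3_na = k_quadratic(11.0), k_cubic(11.0)
k2_gl, k3_gl = k_quadratic(7.0), k_cubic(7.0)
```

This is one plausible mechanism for the abstract's observation that North Atlantic fluxes agree across parameterizations better than global fluxes do: the formulas cross near the region's typical wind speeds.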
NASA Astrophysics Data System (ADS)
Zhang, Yang; Liu, Ping; Liu, Xiao-Huan; Jacobson, Mark Z.; McMurry, Peter H.; Yu, Fangqun; Yu, Shaocai; Schere, Kenneth L.
2010-10-01
Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster-activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality (CMAQ) modeling system version 4.4. The 12-28 June 1999 Southern Oxidants Study episode is selected as a testbed to evaluate simulated particulate matter (PM) number and size predictions of CMAQ with different nucleation parameterizations. The evaluation shows that simulated domain-wide maximum PM2.5 number concentrations with different nucleation parameterizations can vary by 3 orders of magnitude. All parameterizations overpredict (by a factor of 1.4 to 1.7) the total number concentrations of accumulation-mode PM and significantly underpredict (by factors of 1.3 to 65.7) those of Aitken-mode PM, resulting in a net underprediction (by factors of 1.3 to 13.7) of the total number concentrations of PM2.5 under a polluted urban environment at a downtown station in Atlanta. The predicted number concentrations for Aitken-mode PM at this site can vary by up to 3 orders of magnitude, and those for accumulation-mode PM can vary by up to a factor of 3.2, with the best predictions by the power law of Sihto et al. (2006) (NMB of -31.7%) and the worst predictions by the ternary nucleation parameterization of Merikanto et al. (2007) (NMB of -93.1%). The ternary nucleation parameterization of Napari et al. (2002) gives relatively good agreement with observations but for a wrong reason. The power law of Kuang et al. (2008) and the binary nucleation parameterization of Harrington and Kreidenweis (1998) give better agreement than the remaining parameterizations. All the parameterizations fail to reproduce the observed temporal variations of PM number, volume, and surface area concentrations. The significant variation in the performance of these parameterizations is caused by their different theoretical
Riley, David G.; Gill, Clare A.; Herring, Andy D.; Riggs, Penny K.; Sawyer, Jason E.; Sanders, James O.
2014-01-01
Gestation length, birth weight, and weaning weight of F2 Nelore-Angus calves (n = 737) with designed extensive full-sibling and half-sibling relatedness were evaluated for association with 34,957 SNP markers. In analyses of birth weight, random relatedness was modeled three ways: 1) none, 2) random animal, pedigree-based relationship matrix, or 3) random animal, genomic relationship matrix. Detected birth weight-SNP associations were 1,200, 735, and 31 for those parameterizations respectively; each additional model refinement removed associations that apparently were a result of the built-in stratification by relatedness. Subsequent analyses of gestation length and weaning weight modeled genomic relatedness; there were 40 and 26 trait-marker associations detected for those traits, respectively. Birth weight associations were on BTA14 except for a single marker on BTA5. Gestation length associations included 37 SNP on BTA21, 2 on BTA27 and one on BTA3. Weaning weight associations were on BTA14 except for a single marker on BTA10. Twenty-one SNP markers on BTA14 were detected in both birth and weaning weight analyses. PMID:25249774
NASA Astrophysics Data System (ADS)
Sun, J.; Fen, J.; Ungar, R. K.
2013-10-01
The lifetime of atmospheric aerosols is strongly affected by in-cloud scavenging processes. Aerosol mass conversion from aerosols embedded in cloud droplets to aerosols embedded in raindrops is a pivotal pathway for wet removal of aerosols in clouds. In bulk microphysics parameterizations, the aerosol mass conversion rate is always assumed to be linearly related to the precipitation production rate, which comprises the cloud water autoconversion rate and the cloud water accretion rate. The ratio of the aerosol mass conversion rate to the in-cloud aerosol mass concentration has typically been taken to equal the ratio of the precipitation production rate to the cloud droplet mass concentration. However, the mass of aerosol embedded in a cloud droplet is not linearly proportional to the mass of the droplet, so no simple linear relationship can be drawn between the precipitation production rate and the aerosol mass conversion rate. In this paper, we studied the evolution of aerosol mass conversion rates during warm rain formation with a 1.5-dimensional non-hydrostatic convective cloud and aerosol interaction model with bin microphysics. We found that the ratio of the aerosol mass conversion rate to the in-cloud aerosol mass concentration can be statistically expressed as an exponential function of the ratio of the precipitation production rate to the cloud droplet mass concentration. We further give regression equations to determine aerosol conversion in warm rain formation under different threshold raindrop radii and different aerosol size distributions.
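The kind of statistical regression described above can be sketched generically: a log-linear least-squares fit of y = a·exp(b·x). The synthetic data below stand in for the model-derived conversion-rate ratios; the actual fitted coefficients would come from the bin-model output.

```python
import math

def fit_exponential(xs, ys):
    """Least-squares fit of y = a*exp(b*x) via linear regression on log(y)."""
    n = len(xs)
    lys = [math.log(y) for y in ys]
    sx, sy = sum(xs), sum(lys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * ly for x, ly in zip(xs, lys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy - b * sx) / n)
    return a, b

# Synthetic, noise-free data with known a = 2, b = 3
xs = [0.01 * i for i in range(1, 50)]
ys = [2.0 * math.exp(3.0 * x) for x in xs]
a, b = fit_exponential(xs, ys)
```

Fitting in log space keeps the problem linear; with noisy data one would weight the residuals or use a nonlinear solver instead.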
Anthony Prenni; Kreidenweis, Sonia M.
2012-09-28
Clouds play an important role in weather and climate. In addition to their key role in the hydrologic cycle, clouds scatter incoming solar radiation and trap infrared radiation from the surface and lower atmosphere. Despite their importance, feedbacks involving clouds remain one of the largest sources of uncertainty in climate models. Better simulation of cloud processes requires better characterization of cloud microphysical processes, which can affect the spatial extent, optical depth and lifetime of clouds. To this end, we developed a new parameterization for numerical models that describes how the number concentrations of ice nuclei (IN) active in forming ice crystals under mixed-phase (water droplets and ice crystals co-existing) cloud conditions depend on existing aerosol properties and temperature. The parameterization is based on data collected using the Colorado State University continuous flow diffusion chamber in aircraft and ground-based campaigns over a 14-year period, including data from the DOE-supported Mixed-Phase Arctic Cloud Experiment. The resulting relationship is shown to represent the variability of ice nuclei distributions in the atmosphere more accurately than currently used parameterizations based on temperature alone. When implemented in one global climate model, the new parameterization predicted more realistic annually averaged cloud water and ice distributions, and cloud radiative properties, especially for sensitive higher latitude mixed-phase cloud regions. As a test of the new global IN scheme, it was compared to independent data collected during the 2008 DOE-sponsored Indirect and Semi-Direct Aerosol Campaign (ISDAC). Good agreement with this new data set suggests the broad applicability of the new scheme for describing general (non-chemically specific) aerosol influences on IN number concentrations feeding mixed-phase Arctic stratus clouds. Finally, the parameterization was implemented into a regional
Phenol removal pretreatment process
Hames, Bonnie R.
2004-04-13
A process for removing phenols from an aqueous solution is provided, which comprises the steps of contacting a mixture comprising the solution and a metal oxide, forming a phenol metal oxide complex, and removing the complex from the mixture.
Cleaner coating removal technologies are developing rapidly to meet a variety of industrial needs to replace solvent strippers having toxic properties. This guide describes cleaner technologies that can be used to reduce waste in coating removal operations. Information is presented...
... Remover Panscol Paplex Ultra PediaPatch Sal-Acid Sal-Plant Salacid Salactic Film Trans-Plantar Trans-Ver-Sal Vergo Verukan Viranol Wart Remover Other products may also contain salicylates and other acids.
Multiple scattering tomography.
Modregger, Peter; Kagias, Matias; Peter, Silvia; Abis, Matteo; Guzenko, Vitaliy A; David, Christian; Stampanoni, Marco
2014-07-11
Multiple scattering represents a challenge for numerous modern tomographic imaging techniques. In this Letter, we derive an appropriate line integral that allows for the tomographic reconstruction of angular resolved scattering distributions, even in the presence of multiple scattering. The line integral is applicable to a wide range of imaging techniques utilizing various kinds of probes. Here, we use x-ray grating interferometry to experimentally validate the framework and to demonstrate additional structural sensitivity, which exemplifies the impact of multiple scattering tomography. PMID:25062159
Environment scattering in GADRAS.
Thoreson, Gregory G.; Mitchell, Dean James; Theisen, Lisa Anne; Harding, Lee T.
2013-09-01
Radiation transport calculations were performed to compute the angular tallies for scattered gamma rays as a function of distance, height, and environment. Green's functions were then used to encapsulate the results in a reusable transformation function. The calculations represent the transport of photons through the scattering surfaces that surround sources and detectors, such as the ground and walls. Utilization of these calculations in GADRAS (Gamma Detector Response and Analysis Software) enables accurate computation of environmental scattering for a variety of environments and source configurations. This capability, which agrees well with numerous experimental benchmark measurements, is now deployed with GADRAS version 18.2 as the basis for the computation of scattered radiation.
Weakly supervised glasses removal
NASA Astrophysics Data System (ADS)
Wang, Zhicheng; Zhou, Yisu; Wen, Lijie
2015-03-01
Glasses removal is an important task in face recognition. In this paper, we provide a weakly supervised method to automatically remove eyeglasses from an input face image. We choose sparse coding as the face reconstruction method and optical flow to find the exact shape of the glasses, and we combine the two processes iteratively to remove glasses more accurately. The experimental results reveal that our method works much better than either algorithm alone, and that it can remove various glasses to obtain natural-looking glassless facial images.
Presentation will discuss the state-of-art technology for removal of arsenic from drinking water. Presentation includes results of several EPA field studies on removal of arsenic from existing arsenic removal plants and key results from several EPA sponsored research studies. T...
ERIC Educational Resources Information Center
Beretvas, S. Natasha; Cawthon, Stephanie W.; Lockhart, L. Leland; Kaye, Alyssa D.
2012-01-01
This pedagogical article is intended to explain the similarities and differences between the parameterizations of two multilevel measurement model (MMM) frameworks. The conventional two-level MMM that includes item indicators and models item scores (Level 1) clustered within examinees (Level 2) and the two-level cross-classified MMM (in which item…
Energy-dependent parameterization of heavy-ion absorption cross sections
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Wilson, J. W.
1986-01-01
An energy-dependent parameterization of the total absorption (reaction) cross sections for heavy ion (Z equal to or greater than 2) collisions at energies above 25 MeV per nucleon is presented. The formula will be especially useful in heavy-ion transport applications.
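The Townsend-Wilson formula itself is not reproduced in the abstract; a Bradt-Peters-style geometric form sketches the general idea, with the energy dependence entering through an overlap parameter delta (assumed constant here for simplicity). The numerical values of r0 and delta below are illustrative assumptions.

```python
import math

def sigma_abs(ap, at, r0=1.26, delta=1.0):
    """Bradt-Peters-style geometric absorption cross section, in millibarns.
    ap, at: projectile/target mass numbers; r0 in fm; delta is an
    overlap parameter (energy-dependent in realistic parameterizations)."""
    radius = r0 * (ap ** (1 / 3) + at ** (1 / 3) - delta)  # effective sum radius, fm
    return 10.0 * math.pi * radius ** 2                     # 1 fm^2 = 10 mb
```

The cross section grows with the cube-root sum of the mass numbers and shrinks as the overlap parameter increases, which is the structure an energy-dependent fit modulates.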
Computer program for parameterization of nucleus-nucleus electromagnetic dissociation cross sections
NASA Technical Reports Server (NTRS)
Norbury, John W.; Townsend, Lawrence W.; Badavi, Forooz F.
1988-01-01
A computer subroutine parameterization of electromagnetic dissociation cross sections for nucleus-nucleus collisions is presented that is suitable for implementation in a heavy ion transport code. The only inputs required are the projectile kinetic energy and the projectile and target charge and mass numbers.
SPACS: A semi-empirical parameterization for isotopic spallation cross sections
NASA Astrophysics Data System (ADS)
Schmitt, C.; Schmidt, K.-H.; Kelić-Heil, A.
2014-12-01
A new semi-empirical parameterization for residue cross sections in spallation reactions is presented. The prescription, named SPACS for spallation cross sections, permits calculating the fragment production in proton- and neutron-induced collisions with light up to heavy non-fissile partners, from the Fermi regime to ultra-relativistic energies. The model is fully analytical, based on a new parameterization of the mass yields that accounts for the dependence on bombarding energy. The formalism for the isobaric distribution consists of a commonly used functional form, borrowed from the empirical parameterization of fragmentation cross sections EPAX, with suitable adjustments for spallation, and extended to the charge-pickup channel. Structural and even-odd staggering related to the last stage of the primary-residue deexcitation process is additionally introduced explicitly with a new prescription. Calculations are benchmarked against recent data collected at GSI, Darmstadt, as well as against previous measurements employing various techniques. The dependences observed experimentally on collision energy, reaction-partner mass, and proton-neutron asymmetry are well described. A fast analytical parameterization such as SPACS is well suited for implementation in complex simulations used for practical issues at nuclear facilities and plants. Its predictive power also makes it useful for cross-section estimates in astrophysics and biophysics.
A Comparison of Cumulus Parameterizations in Idealized Sea-Breeze Simulations
NASA Technical Reports Server (NTRS)
Cohen, Charles; Arnold, James E. (Technical Monitor)
2001-01-01
Four cumulus parameterizations in the Penn State-NCAR model MM5 are compared in idealized sea-breeze simulations, with the aim of discovering why they work as they do. The most realistic results appear to be those using the Kain-Fritsch scheme. Rainfall is significantly delayed with the Betts-Miller-Janjic scheme, due to the method of computing the reference sounding. This method can be corrected, but downdrafts should be added in a physically realistic manner. Even without downdrafts, a corrected version of the BMJ scheme produces nearly the same timing and location of deep convection as the KF scheme, despite the very different physics. In order to simulate the correct timing of the rainfall, a minimum amount of mass is required in the layer that is the source of a parameterized updraft. The Grell parameterization, in the present simulation, always derives the updraft from the top of the mixed layer, where vertical advection predominates over horizontal advection in increasing the moist static energy. This makes the application of the quasi-equilibrium closure more correct than it would be if the updrafts were always derived from the most unstable layer, but it evades the question of whether or not horizontal advection generates instability. Using different physics, the parameterizations produce significantly different cloud-top heights.
Parameterization of bedform morphology and defect density with fingerprint analysis techniques
NASA Astrophysics Data System (ADS)
Skarke, Adam; Trembanis, Arthur C.
2011-10-01
A novel method for parameterizing the morphology of seafloor ripples with fingerprint analysis numerical techniques is presented. This fully automated analysis tool identifies rippled areas in two-dimensional imagery of the seafloor, and returns ripple orientation and wavelength as well as a new morphological parameter, the spatial density of ripple defects. In contrast to widely used manual and spectral parameterization methods, this new technique yields a unique probability distribution for each derived parameter, which describes its spatial variability across the sampled domain. Here we apply this new analysis technique to synthetic and field-collected side-scan sonar seafloor images in order to assess the method's capacity to define bed geometry across a wide range of simulated and observed morphological conditions. The resulting orientation and wavelength values compare favorably with those of the existing manual and spectral parameterization methods, and are superior under environmental conditions characterized by low signal-to-noise ratios as well as high planform ripple sinuosity. Furthermore, the resulting ripple defect density values demonstrate correlation with ripple orientation, wave direction, and the Shields parameter, which is consistent with recent investigations that have theoretically linked this parameter to hydrodynamic forcing conditions. The presented fingerprint analysis method surpasses the capacity of existing methods for ripple parameterization and promises to yield greater insight into theoretical and applied problems associated with the temporal and spatial variability of ripple morphology across a wide spectrum of marine environments.
Effects of cumulus parameterizations on predictions of summer flood in the Central United States
NASA Astrophysics Data System (ADS)
Qiao, Fengxue; Liang, Xin-Zhong
2015-08-01
This study comprehensively evaluates the effects of twelve cumulus parameterization (CUP) schemes on simulations of the 1993 and 2008 Central US summer floods using the regional climate-weather research and forecasting model. The CUP schemes have distinct skills in predicting the summer mean pattern, daily rainfall frequency and precipitation diurnal cycle. Most CUP schemes largely underestimate the magnitude of the Central US floods, but three schemes, the ensemble cumulus parameterization (ECP), the Grell-3 ensemble cumulus parameterization (G3) and the Zhang-McFarlane-Liang cumulus parameterization (ZML), show clear advantages over the others in reproducing both the location and amount of the floods. In particular, the ECP scheme, with a moisture convergence closure over land and a cloud-base vertical velocity closure over oceans, not only reduces the wet biases of the G3 and ZML schemes along the US coastal oceans, but also accurately reproduces the Central US daily precipitation variation and frequency distribution. The Grell (GR) scheme shows superiority in reproducing the Central US nocturnal rainfall maxima, where others generally fail. This advantage of the GR scheme is primarily due to its closure assumption, in which the convection is determined by the tendency of large-scale instability. Future study will attempt to incorporate the large-scale tendency assumption as a trigger function in the ECP scheme to improve its prediction of the Central US rainfall diurnal cycle.
Parameterization of the inertial gravity waves and generation of the quasi-biennial oscillation
NASA Astrophysics Data System (ADS)
Xue, X.-H.; Liu, H.-L.; Dou, X.-K.
2012-03-01
In this work we extend the gravity wave parameterization scheme currently used in the Whole Atmosphere Community Climate Model (WACCM), which is based upon Lindzen's linear saturation theory, by including the Coriolis effect to better describe the inertia-gravity waves (IGW). We perform WACCM simulations to study the generation of equatorial oscillations of the zonal mean zonal winds by including a spectrum of IGWs, and the parametric dependence of the wind oscillation on the IGWs and the effect of the new scheme. These simulations demonstrate that the parameterized IGW forcing from the standard and the new scheme are both capable of generating equatorial wind oscillations with a downward phase progression in the stratosphere using the standard spatial resolution settings in the current model. The period of the oscillation is dependent on the strength of the IGW forcing, and the magnitude of the oscillation is dependent on the width of the wave spectrum. The new parameterization affects the wave breaking level and acceleration rates mainly through changing the critical level. The quasi-biennial oscillations (QBO) can be internally generated with the proper selection of the parameters of the scheme. The characteristics of the wind oscillations thus generated are compared with the observed QBO. These experiments demonstrate the need to parameterize IGWs for generating the QBO in General Circulation Models (GCMs).
ERIC Educational Resources Information Center
Alku, Paavo; Vilkman, Erkki; Laukkanen, Anne-Maria
1998-01-01
A new method is presented for the parameterization of glottal volume velocity waveforms that have been estimated by inverse filtering acoustic speech pressure signals. The new technique combines two features of voice production: the AC value and the spectral decay of the glottal flow. Testing found the new parameter correlates strongly with the…
A Dynamically Computed Convective Time Scale for the Kain–Fritsch Convective Parameterization Scheme
Many convective parameterization schemes define a convective adjustment time scale τ as the time allowed for dissipation of convective available potential energy (CAPE). The Kain–Fritsch scheme defines τ based on an estimate of the advective time period for deep con...
Technology Transfer Automated Retrieval System (TEKTRAN)
Hydrological models have become essential tools for environmental assessments. This study’s objective was to evaluate a best professional judgment (BPJ) parameterization of the Agricultural Policy and Environmental eXtender (APEX) model with soil-survey data against the calibrated model with either ...
NASA Astrophysics Data System (ADS)
Zhang, M.; Zou, W.; Chen, T.; Kim, L.; Khan, A.; Haffty, B.; Yue, N. J.
2014-01-01
A common approach to implementing the Monte Carlo method for the calculation of brachytherapy radiation dose deposition is to use a phase space file containing information on particles emitted from a brachytherapy source. However, loading the phase space file during the dose calculation consumes a large amount of computer random access memory, imposing high demands on computer hardware. In this study, we propose a method to parameterize the information (e.g., particle location, direction and energy) stored in the phase space file by using several probability distributions. This method was implemented for dose calculations of a commercial Ir-192 high dose rate source. Dose calculation accuracy of the parameterized source was compared to the results obtained using the full phase space file in a simple water phantom and in a clinical breast cancer case. The results showed that the 200 kB parameterized source was as accurate as the source represented by the 1.1 GB phase space file. By using the parameterized source representation, a compact Monte Carlo job can be designed, which allows an easy setup for parallel computing in brachytherapy planning.
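The parameterization idea, replacing a bulky phase-space file with sampled probability distributions, can be sketched as follows. The line energies and intensities below are a few prominent Ir-192 gamma lines quoted for illustration; the paper's fitted distributions for particle location, direction, and energy would replace these simple tables.

```python
import random, math

# Illustrative discrete line spectrum: (energy in keV, relative intensity).
# A fitted continuous distribution would replace this lookup table.
lines = [(296.0, 0.29), (308.5, 0.30), (316.5, 0.83), (468.1, 0.48)]
total = sum(w for _, w in lines)
cdf, acc = [], 0.0
for e, w in lines:
    acc += w / total
    cdf.append([acc, e])
cdf[-1][0] = 1.0        # guard against floating-point shortfall

def sample_particle(rng=random):
    """Draw (energy, direction) from the parameterized source:
    inverse-CDF lookup for energy, isotropic unit vector for direction."""
    u = rng.random()
    energy = next(e for c, e in cdf if u <= c)
    mu = 2.0 * rng.random() - 1.0          # uniform cos(theta)
    phi = 2.0 * math.pi * rng.random()     # uniform azimuth
    s = math.sqrt(1.0 - mu * mu)
    return energy, (s * math.cos(phi), s * math.sin(phi), mu)
```

A few kilobytes of distribution parameters reproduce what the gigabyte-scale phase-space file encodes particle by particle, which is exactly the memory saving the abstract reports.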
PARAMETERIZATION OF SUBSURFACE HEATING FOR SOIL AND CONCRETE USING NET RADIATION DATA
The variability of surface sensible heat flux depends strongly on the rate of heating of the underlying surfaces. The variability is expected to be large in urban areas where the surfaces are layered with a variety of man-made materials. Parameterization of the ground heat storage...
Evans, J.L.; Frank, W.M.; Young, G.S.
1996-04-01
Successful simulations of the global circulation and climate require accurate representation of the properties of shallow and deep convective clouds, stable-layer clouds, and the interactions between various cloud types, the boundary layer, and the radiative fluxes. Each of these phenomena plays an important role in the global energy balance, and each must be parameterized in a global climate model. These processes are highly interactive. One major problem limiting the accuracy of parameterizations of clouds and other processes in general circulation models (GCMs) is that most of the parameterization packages are not linked with a common physical basis. Further, these schemes have not, in general, been rigorously verified against observations adequate to the task of resolving subgrid-scale effects. To address these problems, we are designing a new Integrated Cumulus Ensemble and Turbulence (ICET) parameterization scheme, installing it in a climate model (CCM2), and evaluating the performance of the new scheme using data from Atmospheric Radiation Measurement (ARM) Program Cloud and Radiation Testbed (CART) sites.
IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION FOR FINE-SCALE SIMULATIONS
The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations ( 1 - km horizontal grid spacing ). The UCP accounts for dr...
Parameterization of the GPFARM-Range model for simulating rangeland productivity
Technology Transfer Automated Retrieval System (TEKTRAN)
One of the major limitations to rangeland model usage is the lack of parameter values appropriate for reliable simulations at different locations and times. In this chapter we seek to show how the GPFARM-Range, a rangeland model, which has been previously parameterized, tested and validated for the ...
Lidar measurements of the atmospheric boundary layer height, the entrainment zone, wind speed and direction, ancillary temperature profiles and surface flux data were used to test current parameterized entrainment models of mixed layer growth rate. Six case studies under clear ai...
Creating a parameterized model of a CMOS transistor with a gate of enclosed layout
NASA Astrophysics Data System (ADS)
Vinogradov, S. M.; Atkin, E. V.; Ivanov, P. Y.
2016-02-01
The method of creating a parameterized SPICE model of an N-channel transistor with a gate of enclosed layout is considered. Formulas and examples of engineering calculations for the use of the models in the Cadence Virtuoso computer-aided design environment are presented. Calculations are made for CMOS technology with the 180 nm design rules of UMC.
Ab initio parameterization of YFF1, a universal force field for drug-design applications.
Yakovenko, Olexandr Ya; Li, Yvonne Y; Oliferenko, Alexander A; Vashchenko, Ganna M; Bdzhola, Volodymyr G; Jones, Steven J M
2012-02-01
The YFF1 is a new universal molecular mechanics force field designed for drug discovery purposes. The electrostatic part of YFF1 has already been parameterized to reproduce ab initio calculated dipole and quadrupole moments. Here we report a parameterization of the van der Waals (vdW) interactions for the same atom types that were previously defined. The 6-12 Lennard-Jones potential terms were parameterized against homodimerization energies calculated at the MP2/6-31G level of theory. The Boys-Bernardi counterpoise correction was employed to account for the basis-set superposition error. As a source of structural information we used about 2,400 neutral compounds from the ZINC2007 database. About 6,600 homodimeric configurations were generated from this dataset. A special "closure" procedure was designed to accelerate the parameter fitting. As a result, dimerization energies of small organic compounds are reproduced with an average unsigned error of 1.1 kcal mol⁻¹. Although the primary goal of this work was to parameterize nonbonded interactions, bonded parameters were also derived by fitting to PM6 semiempirically optimized geometries of approximately 20,000 compounds. PMID:21562826
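The 6-12 Lennard-Jones form fitted in the abstract above can be sketched as below. The epsilon and sigma values in the usage check are illustrative (argon-like), not YFF1 parameters.

```python
def lj_energy(r, epsilon, sigma):
    """6-12 Lennard-Jones pair energy: E(r) = 4*epsilon*[(sigma/r)**12 - (sigma/r)**6].

    epsilon is the well depth (the minimum sits at r = 2**(1/6)*sigma, where
    E = -epsilon), and sigma is the separation at which the energy crosses zero.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

With epsilon = 0.24 kcal/mol and sigma = 3.4 Å, E(3.4) = 0 and E(2**(1/6) * 3.4) = -0.24 kcal/mol. A vdW parameterization of the kind described fits one (epsilon, sigma) pair per atom type so that sums of such pair energies reproduce the ab initio dimerization energies.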
Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization
NASA Astrophysics Data System (ADS)
Teixeira, J.
2015-12-01
Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flows. Because of this, turbulence and convection in the atmosphere have to be parameterized, i.e. equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently, a variety of models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem, however, is that parameterizations are in general not explicitly aware of the resolution: the scale-awareness problem. In this context, we present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, which is not only a multi-scale parameterization in itself but is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.
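As a reference for the ED-plus-MF decomposition described above, the EDMF closure for the subgrid vertical flux of a conserved variable is conventionally written as follows (this is the generic form from the EDMF literature, not a formula quoted from this abstract):

```latex
\overline{w'\phi'} \;=\; -\,K\,\frac{\partial \overline{\phi}}{\partial z}
\;+\; M\left(\phi_u - \overline{\phi}\right)
```

Here the eddy-diffusivity term (with diffusivity $K$) represents small-scale local turbulence, while the mass-flux term (updraft mass flux $M$, updraft property $\phi_u$) represents the nonlocal transport by coherent plumes.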
Exploring Mechanisms of Biofilm Removal
Sahni, Karan; Khashai, Fatemeh; Forghany, Ali; Krasieva, Tatiana; Wilder-Smith, Petra
2016-01-01
exposed to air/water spray alone showed some disruption of the biofilm, leaving residual patches of biofilm that varied considerably in size. Test agent dip treatment followed by air/water spray broke up the continuous layer of biofilm, leaving only very small, thin scattered islands of biofilm. Finally, the dynamic test agent spray followed by air/water spray removed the biofilm almost entirely, with evidence of only very few small, thin residual biofilm islands. Conclusion: These studies demonstrate that the test agent desiccant effect alone causes some disruption of dental biofilm. Additional dynamic rinsing is needed to achieve complete removal of dental biofilm. PMID:27413588
NASA Astrophysics Data System (ADS)
Basarab, B.; Fuchs, B.; Rutledge, S. A.
2013-12-01
Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare
NASA Astrophysics Data System (ADS)
Xie, Xiaoning; Liu, Xiaodong
2013-04-01
The autoconversion process is an important bridge between aerosols, clouds, and precipitation: aerosol-induced changes in cloud microphysical properties can influence the spatial and temporal distribution of surface precipitation as well as the total precipitation amount. Three types of autoconversion parameterization are considered in our study: the Kessler scheme (Kessler, 1969), the KK scheme (Khairoutdinov and Kogan, 2000), and the Dispersion scheme (Liu et al., 2005). The Kessler scheme does not account for the aerosol indirect effect; the KK scheme does; and the Dispersion scheme accounts for both the aerosol indirect effect and the influence of cloud droplet spectral dispersion. In this study, the aerosol effects on clouds and precipitation in mesoscale convective systems are investigated using the Weather Research and Forecasting model (WRF) with the Morrison two-moment bulk microphysics scheme. For the three autoconversion parameterization schemes, a suite of sensitivity experiments is performed using initial sounding data from the deep convective cloud system of 31 March 2005 in Beijing under different aerosol concentrations (varying from 50 cm-3 to 10,000 cm-3). The numerical experiments show that the aerosol-induced precipitation change is strongly dependent on the autoconversion parameterization. For the Kessler scheme, the average cumulative precipitation is enhanced slightly with increasing aerosols. In contrast, precipitation is reduced significantly with increasing aerosols for the KK scheme. The surface precipitation varies nonmonotonically for the Dispersion scheme, increasing with aerosols at lower concentrations and decreasing at higher concentrations. These distinct trends in aerosol-induced precipitation are mainly due to the rain water content change under the different autoconversion
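The contrast between the aerosol-blind Kessler scheme and the aerosol-aware KK scheme can be made concrete with a minimal sketch. The KK exponents below are as commonly cited from Khairoutdinov and Kogan (2000); the Kessler rate constant and cloud-water threshold are illustrative textbook defaults, not necessarily the values used in this study.

```python
def kessler_autoconversion(qc, k=1.0e-3, qc_crit=5.0e-4):
    """Kessler (1969)-type rate [kg kg^-1 s^-1]: linear in cloud water qc above a
    threshold, with no dependence on droplet number (hence no aerosol effect)."""
    return max(0.0, k * (qc - qc_crit))

def kk_autoconversion(qc, nc):
    """Khairoutdinov & Kogan (2000) rate [kg kg^-1 s^-1]:
    1350 * qc**2.47 * nc**(-1.79), with nc the droplet number in cm^-3.
    Higher aerosol loading -> larger nc -> slower rain formation."""
    return 1350.0 * qc ** 2.47 * nc ** (-1.79)
```

The negative exponent on nc is what lets the KK scheme suppress precipitation as aerosol (and hence droplet) concentrations increase, while the Kessler rate is unchanged.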
Sensitivity of the recent methane budget to LMDz sub-grid-scale physical parameterizations
NASA Astrophysics Data System (ADS)
Locatelli, R.; Bousquet, P.; Saunois, M.; Chevallier, F.; Cressot, C.
2015-09-01
With the densification of surface observing networks and the development of remote sensing of greenhouse gases from space, estimations of methane (CH4) sources and sinks by inverse modeling are gaining additional constraining data but facing new challenges. The chemical transport model (CTM) linking the flux space to methane mixing ratio space must be able to represent these different types of atmospheric constraints to provide consistent flux estimations. Here we quantify the impact of sub-grid-scale physical parameterization errors on the global methane budget inferred by inverse modeling. We use the same inversion setup but different physical parameterizations within one CTM. Two different schemes for vertical diffusion, two others for deep convection, and one additional for thermals in the planetary boundary layer (PBL) are tested. Different atmospheric methane data sets are used as constraints (surface observations or satellite retrievals). At the global scale, methane emissions differ, on average, by 4.1 Tg CH4 per year due to the use of different sub-grid-scale parameterizations. Inversions using satellite total-column mixing ratios retrieved by GOSAT are less impacted, at the global scale, by errors in physical parameterizations. Focusing on large-scale atmospheric transport, we show that inversions using the deep convection scheme of Emanuel (1991) derive smaller interhemispheric gradients in methane emissions, indicating a slower interhemispheric exchange. At regional scale, the use of different sub-grid-scale parameterizations induces uncertainties ranging from 1.2 % (2.7 %) to 9.4 % (14.2 %) of methane emissions when using only surface measurements from a background (or an extended) surface network. Moreover, the spatial distribution of methane emissions at regional scale can be very different, depending on both the physical parameterizations used for the modeling of the atmospheric transport and the observation data sets used to constrain the inverse
NASA Astrophysics Data System (ADS)
Aronson, E. L.; Helliker, B. R.; Strode, S. A.; Pawson, S.
2011-12-01
Global soil methane consumption was estimated using multiple regression-based parameterizations by vegetation type from a meta-dataset created from 780 published methane flux measurements. The average global estimates for soil consumption by extrapolation, without taking snow cover into account, totaled 54-60 Tg annually. The parameterizations were based on air temperature and precipitation output variables reported in the literature and gathered in the meta-dataset. These variables were matched to similar ones reported in the Goddard Earth Observing System (GEOS) global climate model. The methane uptake response to increasing precipitation and temperature varied between vegetation types. The parameterizations for methane fluxes by vegetation type were included in a 20-year, free-running, tagged-methane run of the GEOS-5 model constrained by real observations of sea surface temperature. Snow cover was assumed to block methane diffusion into the soil and therefore to result in zero consumption of methane in snow-covered soils. The parameterization estimate was slightly higher than previous estimates of global methane consumption, at around 37 Tg annually. The resultant global surface methane concentration was then compared to observed methane concentrations from NOAA Global Monitoring Division sites worldwide, with varying agreement. The parameterization for the vegetation type "Needleleaf Trees" predicted methane consumption in a study site located in the NJ Pinelands, which was studied in 2009. The estimate of methane consumption by the vegetation type "Broadleaf Evergreen Trees" was found to have the greatest error, which may indicate that the factors on which the parameterization was based are of minor importance in regulating methane flux within this vegetation type. The results were compared to offline runs of the parameterizations without the snow-cover compensation, which resulted in global rates of almost double the methane consumption. Since there have been
Technology Transfer Automated Retrieval System (TEKTRAN)
Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...
NASA Astrophysics Data System (ADS)
Titos, G.; Cazorla, A.; Zieger, P.; Andrews, E.; Lyamani, H.; Granados-Muñoz, M. J.; Olmo, F. J.; Alados-Arboledas, L.
2016-09-01
Knowledge of the scattering enhancement factor, f(RH), is important for an accurate description of direct aerosol radiative forcing. This factor is defined as the ratio of the scattering coefficient at enhanced relative humidity, RH, to a reference (dry) scattering coefficient. Here, we review the different experimental designs used to measure the scattering coefficient at dry and humidified conditions as well as the procedures followed to analyze the measurements. Several empirical parameterizations for the relationship between f(RH) and RH have been proposed in the literature. These parameterizations have been reviewed and tested using experimental data representative of different hygroscopic growth behavior, and a new parameterization is presented. The potential sources of error in f(RH) are discussed. A Monte Carlo method is used to investigate the overall measurement uncertainty, which is found to be around 20-40% for moderately hygroscopic aerosols. The main factors contributing to this uncertainty are the uncertainty in RH measurement, the dry reference state and the nephelometer uncertainty. A literature survey of nephelometry-based f(RH) measurements is presented as a function of aerosol type. In general, the highest f(RH) values were measured in clean marine environments, with pollution having a major influence on f(RH). Dust aerosol tended to have the lowest reported hygroscopicity of any of the aerosol types studied. Major open questions and suggestions for future research priorities are outlined.
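One widely used single-parameter form of the f(RH)-RH relationship reviewed in such studies is the power law in (1 - RH); the gamma value in the usage check is illustrative, not one fitted in this work.

```python
def f_rh(rh, gamma, rh_ref=0.0):
    """Scattering enhancement factor f(RH) = [(1 - rh) / (1 - rh_ref)] ** (-gamma).

    rh and rh_ref are fractional relative humidities (0-1); rh_ref is the dry
    reference state. gamma controls hygroscopic growth: gamma = 0 gives f = 1
    (no enhancement), and larger gamma gives stronger enhancement at high RH.
    """
    return ((1.0 - rh) / (1.0 - rh_ref)) ** (-gamma)
```

For example, with gamma = 0.5, f(RH) rises from 1 at the dry reference toward roughly 2.6 at RH = 0.85, illustrating why the dry reference state and the RH measurement dominate the uncertainty budget.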
Effect of physical parameterization schemes on track and intensity of cyclone LAILA using WRF model
NASA Astrophysics Data System (ADS)
Kanase, Radhika D.; Salvekar, P. S.
2015-08-01
The objective of the present study is to investigate in detail the sensitivity of cumulus parameterization (CP), planetary boundary layer (PBL) parameterization, and microphysics parameterization (MP) in the numerical simulation of severe cyclone LAILA over the Bay of Bengal using the Weather Research & Forecasting (WRF) model. The initial and boundary conditions are supplied from GFS data of 1° × 1° resolution, and the model is integrated in three 'two-way' interactive nested domains at resolutions of 60 km, 20 km and 6.6 km. Four sets of experiments are performed. The first set covers the sensitivity to CP schemes, while the second and third sets check the sensitivity to different PBL and MP schemes. The fourth set contains initial-condition sensitivity experiments. For the first three sets of experiments, 0000 UTC 17 May 2010 is used as the initial condition. In the CP sensitivity experiments, the track and intensity are well simulated by the Betts-Miller-Janjic (BMJ) scheme. The track and intensity of LAILA are very sensitive to the representation of large-scale environmental flow in the CP scheme as well as to the initial vertical wind shear values. The intensity of the cyclone is well simulated by the YSU scheme and depends upon the mixing treatment in and above the PBL. The concentration of frozen hydrometeors, such as graupel in the WSM6 MP scheme, and the latent heat released during auto-conversion of hydrometeors may be responsible for storm intensity. An additional set of experiments with different initial vortex intensities shows that small differences in the initial wind fields have a profound impact on both the track and intensity of the cyclone. The representation of mid-tropospheric heating in WSM6 is mainly controlled by the amount of graupel hydrometeor and thus might be one of the possible causes modulating the storm's intensity.
Parameterized reduced order models from a single mesh using hyper-dual numbers
NASA Astrophysics Data System (ADS)
Brake, M. R. W.; Fike, J. A.; Topping, S. D.
2016-06-01
In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, could necessitate the development of thousands of nearly identical solid geometries that must be meshed and analyzed separately, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is twofold: the derivatives are accurate to machine precision, and only a single mesh of the system of interest needs to be generated. The theory is applied to a stepped beam system as a proof of concept. The results demonstrate that the hyper-dual number multivariate parameterization of geometric variations, which is largely neglected in the literature, is accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the ability to create a parameterized reduced order model from a single mesh is expected to dramatically reduce the time needed to analyze multiple realizations of a component's possible geometry.
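The machine-precision derivatives mentioned above can be illustrated with first-order dual numbers, which carry a value and a derivative through arithmetic with no step-size (truncation or cancellation) error; hyper-dual numbers extend the same idea with a second epsilon component so that exact second derivatives are obtained as well. This is a generic sketch, not the paper's implementation.

```python
class Dual:
    """Minimal forward-mode dual number: val + eps * der, with eps**2 = 0."""

    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        # Product rule falls out of (a + a'eps)(b + b'eps) with eps**2 = 0.
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)

    __rmul__ = __mul__

def derivative(f, x):
    """Exact first derivative of f at x via one dual-number evaluation."""
    return f(Dual(x, 1.0)).der
```

Unlike a finite difference (f(x+h) - f(x)) / h, no step size h appears, so there is nothing to tune and nothing to cancel: the derivative of x³ at x = 3 comes out as exactly 27.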
Graphitic packing removal tool
Meyers, K.E.; Kolsun, G.J.
1997-11-11
Graphitic packing removal tools for removal of the seal rings in one piece are disclosed. The packing removal tool has a cylindrical base ring the same size as the packing ring with a surface finish, perforations, knurling or threads for adhesion to the seal ring. Elongated leg shanks are mounted axially along the circumferential center. A slit or slits permit insertion around shafts. A removal tool follower stabilizes the upper portion of the legs to allow a spanner wrench to be used for insertion and removal. 5 figs.
Graphitic packing removal tool
Meyers, Kurt Edward; Kolsun, George J.
1997-01-01
Graphitic packing removal tools for removal of the seal rings in one piece. The packing removal tool has a cylindrical base ring the same size as the packing ring with a surface finish, perforations, knurling or threads for adhesion to the seal ring. Elongated leg shanks are mounted axially along the circumferential center. A slit or slits permit insertion around shafts. A removal tool follower stabilizes the upper portion of the legs to allow a spanner wrench to be used for insertion and removal.
Graphitic packing removal tool
Meyers, K.E.; Kolsun, G.J.
1996-12-31
Graphitic packing removal tools are described for removal of the seal rings in one piece from valves and pumps. The packing removal tool has a cylindrical base ring the same size as the packing ring with a surface finish, perforations, knurling or threads for adhesion to the seal ring. Elongated leg shanks are mounted axially along the circumferential center. A slit or slits permit insertion around shafts. A removal tool follower stabilizes the upper portion of the legs to allow a spanner wrench to be used for insertion and removal.
Limitations in scatter propagation
NASA Astrophysics Data System (ADS)
Lampert, E. W.
1982-04-01
A short description of the main scatter propagation mechanisms is presented; troposcatter, meteor burst communication and chaff scatter. For these propagation modes, in particular for troposcatter, the important specific limitations discussed are: link budget and resulting hardware consequences, diversity, mobility, information transfer and intermodulation and intersymbol interference, frequency range and future extension in frequency range for troposcatter, and compatibility with other services (EMC).
Hartemann, F V
2008-12-01
An overview of linear and nonlinear Compton scattering is presented, along with a comparison with Thomson scattering. Two distinct processes play important roles in the nonlinear regime: multi-photon interactions, leading to the generation of harmonics, and radiation pressure, yielding a downshift of the radiated spectral features. These mechanisms, their influence on the source brightness, and different modeling strategies are also briefly discussed.
Berger, E.L.; Collins, J.C.; Soper, D.E.; Sterman, G.
1986-03-01
I discuss events in high energy hadron collisions that contain a hard scattering, in the sense that very heavy quarks or high p_T jets are produced, yet are diffractive, in the sense that one of the incident hadrons is scattered with only a small energy loss. 8 refs.
Mismatch removal via coherent spatial relations
NASA Astrophysics Data System (ADS)
Chen, Jun; Ma, Jiayi; Yang, Changcai; Tian, Jinwen
2014-07-01
We propose a method for removing mismatches from the given putative point correspondences in image pairs based on "coherent spatial relations." Under the Bayesian framework, we formulate our approach as a maximum likelihood problem and solve for a coherent spatial relation between the putative point correspondences using an expectation-maximization (EM) algorithm. Our approach associates each point correspondence with a latent variable indicating it as being either an inlier or an outlier, and alternately estimates the inlier set and recovers the coherent spatial relation. It can handle not only image pairs with rigid motions but also image pairs with nonrigid motions. To parameterize the coherent spatial relation, we choose two-view geometry and the thin-plate spline as models for the rigid and nonrigid cases, respectively. The mismatches can be successfully removed via the coherent spatial relations after the EM algorithm converges. The quantitative results on various experimental data demonstrate that our method outperforms many state-of-the-art methods; it is not affected by low initial correct-match percentages and is robust to most geometric transformations, including large viewing angles, image rotation, and affine transformation.
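The latent inlier/outlier formulation can be illustrated with a simplified 1D analogue: a Gaussian-inlier / uniform-outlier mixture over scalar match residuals, fitted by EM. The uniform width `a`, the initialization, and the fixed iteration count are illustrative assumptions; the paper's actual models are two-view geometry and the thin-plate spline, not this scalar mixture.

```python
import math

def em_inlier_outlier(residuals, a=10.0, iters=50):
    """EM for a Gaussian-inlier N(0, sigma^2) / uniform-outlier (density 1/a)
    mixture over scalar match residuals.

    Returns (posterior inlier probabilities, sigma, inlier fraction gamma).
    """
    sigma2, gamma = 1.0, 0.5  # illustrative initialization
    p = [0.5] * len(residuals)
    for _ in range(iters):
        # E-step: responsibility of the inlier component for each residual.
        p = []
        for r in residuals:
            g = gamma * math.exp(-r * r / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)
            u = (1.0 - gamma) / a
            p.append(g / (g + u))
        # M-step: re-estimate variance and mixing weight from responsibilities.
        s = sum(p)
        sigma2 = max(sum(pi * r * r for pi, r in zip(p, residuals)) / s, 1e-12)
        gamma = s / len(residuals)
    return p, math.sqrt(sigma2), gamma
```

On residuals like [0.1, -0.2, 0.05, 0.15, -0.1, 8.0], the gross outlier receives a near-zero inlier posterior while the small residuals score near one, which is the mechanism by which thresholding the posteriors removes mismatches.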
Aureolegraph internal scattering correction.
DeVore, John; Villanucci, Dennis; LePage, Andrew
2012-11-20
Two methods of determining instrumental scattering for correcting aureolegraph measurements of particulate solar scattering are presented. One involves subtracting measurements made with and without an external occluding ball and the other is a modification of the Langley Plot method and involves extrapolating aureolegraph measurements collected through a large range of solar zenith angles. Examples of internal scattering correction determinations using the latter method show similar power-law dependencies on scattering, but vary by roughly a factor of 8 and suggest that changing aerosol conditions during the determinations render this method problematic. Examples of corrections of scattering profiles using the former method are presented for a range of atmospheric particulate layers from aerosols to cumulus and cirrus clouds. PMID:23207299
Berg, L. K.; Shrivastava, M.; Easter, R. C.; Fast, J. D.; Chapman, E. G.; Liu, Y.; Ferrare, R. A.
2015-02-24
A new treatment of cloud effects on aerosol and trace gases within parameterized shallow and deep convection, and aerosol effects on cloud droplet number, has been implemented in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) version 3.2.1 that can be used to better understand the aerosol life cycle over regional to synoptic scales. The modifications to the model include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain–Fritsch (KF) cumulus parameterization that has been modified to better represent shallow convective clouds. Testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS). The simulation results are used to investigate the impact of cloud–aerosol interactions on regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as –50% when cloud–aerosol interactions are considered (due largely to wet removal), or as large as +40% for sulfate under non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem are found to account for changes in the cloud droplet number concentration (CDNC) and changes in the chemical composition of cloud droplet residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to the latest version of WRF-Chem, and it
Sensitivity of the recent methane budget to LMDz sub-grid scale physical parameterizations
NASA Astrophysics Data System (ADS)
Locatelli, R.; Bousquet, P.; Saunois, M.; Chevallier, F.; Cressot, C.
2015-04-01
With the densification of surface observing networks and the development of remote sensing of greenhouse gases from space, estimations of methane (CH4) sources and sinks by inverse modelling face new challenges. Indeed, the chemical transport model used to link the flux space with the mixing ratio space must be able to represent these different types of constraints to provide consistent flux estimations. Here we quantify the impact of sub-grid scale physical parameterization errors on the global methane budget inferred by inverse modelling, using the same inversion set-up but different physical parameterizations within one chemical-transport model. Two different schemes for vertical diffusion, two others for deep convection, and one additional for thermals in the planetary boundary layer are tested. Different atmospheric methane datasets are used as constraints (surface observations or satellite retrievals). At the global scale, methane emissions differ, on average, by 4.1 Tg CH4 per year due to the use of different sub-grid scale parameterizations. Inversions using total columns retrieved by the GOSAT satellite are less impacted, at the global scale, by errors in physical parameterizations. Focusing on large-scale atmospheric transport, we show that inversions using the deep convection scheme of Emanuel (1991) derive smaller interhemispheric gradients in methane emissions. At regional scale, the use of different sub-grid scale parameterizations induces uncertainties ranging from 1.2% (2.7%) to 9.4% (14.2%) of methane emissions in Africa and Boreal Eurasia, respectively, when using only surface measurements from the background (extended) surface network. When using only satellite data, we show that the small biases found in inversions using GOSAT-CH4 data and a coarser version of the transport model were actually masking a poor representation of the stratosphere-troposphere gradient in the model. Improving the stratosphere-troposphere gradient reveals a larger
Impact of model structure and parameterization on Penman-Monteith type evaporation models
NASA Astrophysics Data System (ADS)
Ershadi, A.; McCabe, M. F.; Evans, J. P.; Wood, E. F.
2015-06-01
The impact of model structure and parameterization on the estimation of evaporation is investigated across a range of Penman-Monteith type models. To examine the role of model structure on flux retrievals, three different retrieval schemes are compared. The schemes include a traditional single-source Penman-Monteith model (Monteith, 1965), a two-layer model based on Shuttleworth and Wallace (1985) and a three-source model based on Mu et al. (2011). To assess the impact of parameterization choice on model performance, a number of commonly used formulations for aerodynamic and surface resistances were substituted into the different formulations. Model response to these changes was evaluated against data from twenty globally distributed FLUXNET towers, representing a cross-section of biomes that include grassland, cropland, shrubland, evergreen needleleaf forest and deciduous broadleaf forest. Scenarios based on 14 different combinations of model structure and parameterization were ranked based on their mean value of Nash-Sutcliffe Efficiency. Results illustrated considerable variability in model performance both within and between biome types. Indeed, no single model consistently outperformed any other when considered across all biomes. For instance, in grassland and shrubland sites, the single-source Penman-Monteith model performed the best. In croplands it was the three-source Mu model, while for evergreen needleleaf and deciduous broadleaf forests, the Shuttleworth-Wallace model rated highest. Interestingly, these top ranked scenarios all shared the simple lookup-table based surface resistance parameterization of Mu et al. (2011), while a more complex Jarvis multiplicative method for surface resistance produced lower ranked simulations. The highly ranked scenarios mostly employed a version of the Thom (1975) formulation for aerodynamic resistance that incorporated dynamic values of roughness parameters. This was true for all cases except over deciduous broadleaf
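For reference, the single-source combination equation underlying all three Penman-Monteith type structures compared above is:

```latex
\lambda E \;=\; \frac{\Delta\,(R_n - G) \;+\; \rho_a c_p\,\dfrac{e_s - e_a}{r_a}}
                     {\Delta \;+\; \gamma\left(1 + \dfrac{r_s}{r_a}\right)}
```

where $\lambda E$ is the latent heat flux, $\Delta$ the slope of the saturation vapor pressure curve, $R_n - G$ the available energy (net radiation minus ground heat flux), $\rho_a c_p$ the volumetric heat capacity of air, $e_s - e_a$ the vapor pressure deficit, $\gamma$ the psychrometric constant, and $r_a$ and $r_s$ the aerodynamic and surface resistances. The parameterization choices evaluated in the abstract are precisely the formulations substituted for $r_a$ and $r_s$.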
Evaluation and Improvement of the Turbulence Parameterization in Deep Convective Clouds.
NASA Astrophysics Data System (ADS)
Ricard, D.; Verrelle, A.; Lac, C.
2015-12-01
Although turbulence processes have been extensively studied for the boundary layer, few studies evaluate the turbulence parameterization inside convective clouds in atmospheric models. Yet turbulence can be strong inside cumulus and cumulonimbus and can affect the structure and dynamics of these clouds. This study aims at evaluating and improving the parameterization of subgrid turbulence in deep convective clouds simulated by a kilometer-scale cloud-resolving model. First, we characterized the turbulence representation in deep convective clouds. For that, a Large-Eddy Simulation (LES) using simplified atmospheric conditions was performed with the Meso-NH model to serve as a reference simulation of deep convection. This LES, with a 50-m grid spacing, is used to compute the turbulent fluxes at coarser horizontal resolutions (500 m, 1 km, and 2 km). Vertical turbulent fluxes of liquid-water potential temperature and non-precipitating total water mixing ratio have counter-gradient structures, indicative of nonlocal turbulence. Second, a diagnostic assessment, from the reference fields, of the current Meso-NH turbulence parameterization (a subgrid scheme with a 1.5-order closure and diagnostic equations for the fluxes) at these coarser resolutions shows that turbulent kinetic energy is largely underestimated in the clouds, owing to an underestimation of thermal production. The counter-gradient structures of the vertical turbulent fluxes are not reproduced because the local K-gradient formulation is not suitable. Alternative parameterizations of some turbulent fluxes proposed in the literature are then tested. In particular, a parameterization based on horizontal gradients gives a better representation of the thermal production of turbulence in the clouds, with a good representation of counter-gradient areas. Third, the on-line evaluation from model runs with 2-km, 1-km, and 500-m horizontal grid
NASA Astrophysics Data System (ADS)
Xia, X.; Che, H.; Zhu, J.; Chen, H.; Cong, Z.; Deng, X.; Fan, X.; Fu, Y.; Goloub, P.; Jiang, H.; Liu, Q.; Mai, B.; Wang, P.; Wu, Y.; Zhang, J.; Zhang, R.; Zhang, X.
2016-01-01
Spatio-temporal variations of aerosol optical properties and aerosol direct radiative effects (ADRE) are studied based on high-quality aerosol data from 21 sunphotometer stations, each with at least 4 months of measurements, in mainland China and Hong Kong. A parameterization is proposed to describe the relationship of the ADREs to aerosol optical depth at 550 nm (AOD) and single scattering albedo at 550 nm (SSA). In middle-east and south China, the maximum AOD is always observed in the biomass burning season, indicating a significant contribution of biomass burning to AOD. Dust aerosols contribute to AOD significantly in spring, and their influence decreases from the source regions to the downwind regions. The occurrence frequencies of background-level AOD (AOD < 0.10) in middle-east, south and northwest China are very limited (0.4%, 1.3% and 2.8%, respectively), but reach 15.7% in north China. The atmosphere is pristine in the Tibetan Plateau, where 92.0% of AODs are <0.10. Regional mean SSAs at 550 nm are 0.89-0.90, although SSAs show substantial site and season dependence. ADREs at the top and bottom of the atmosphere for a solar zenith angle of 60 ± 5° are -16 to -37 W m-2 and -66 to -111 W m-2, respectively. The ADRE efficiency shows slight regional dependence. AOD and SSA together account for more than 94% and 87% of the ADRE variability at the bottom and top of the atmosphere, respectively. The overall picture of the ADRE in China is that aerosols cool the climate system, reduce surface solar radiation and heat the atmosphere.
Calculates Thermal Neutron Scattering Kernel.
1989-11-10
Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.
A discrete variable representation for electron-hydrogen atom scattering
NASA Astrophysics Data System (ADS)
Gaucher, Lionel Francis
1994-08-01
A discrete variable representation (DVR) suitable for treating the quantum scattering of a low energy electron from a hydrogen atom is presented. The benefits of DVR techniques (e.g. the removal of the requirement of calculating multidimensional potential energy matrix elements and the availability of iterative sparse matrix diagonalization/inversion algorithms) have for many years been applied successfully to studies of quantum molecular scattering. Unfortunately, the presence of a Coulomb singularity at the electrically unshielded center of a hydrogen atom requires high radial grid point densities in this region of the scattering coordinate, while the presence of finite kinetic energy in the asymptotic scattering electron also requires a sufficiently large radial grid point density at moderate distances from the nucleus. The constraints imposed by these two length scales have made application of current DVR methods to this scattering event difficult.
NASA Astrophysics Data System (ADS)
Kubo, S.; Nishiura, M.; Tanaka, K.; Moseev, D.; Ogasawara, S.; Shimozuma, T.; Yoshimura, Y.; Igami, H.; Takahashi, H.; Tsujimura, T. I.; Makino, R.
2016-06-01
High-power gyrotrons prepared for electron cyclotron heating at 77 GHz have been used for a collective Thomson scattering (CTS) study in LHD. Because of the difficulty of removing the fundamental and/or second-harmonic resonance from the viewing line of sight, the background ECE was subtracted from the measured signal by modulating the probe beam power from a gyrotron. The scattering component was separated from the background successfully by taking into account the difference in response time between the high-energy and bulk components. A further separation was attempted by rapidly scanning the viewing beam across the probing beam. It is found that the intensity of the scattered spectrum corresponding to the bulk and high-energy components was almost proportional to the calculated scattering volume in the relatively low density region, while an appreciable background scattered component remained even in the off-volume position in some high-density cases. The ray-tracing code TRAVIS is used to estimate the change in the scattering volume due to the deflection of the probing and receiving beams.
FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data
Richard C. J. Somerville
2009-02-27
Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
NASA Technical Reports Server (NTRS)
Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.
2000-01-01
The objective of this investigation was to study the role of shallow convection in the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study, and the latter two have been improved significantly to extend their capabilities.
Acoustic bubble removal method
NASA Technical Reports Server (NTRS)
Trinh, E. H.; Elleman, D. D.; Wang, T. G. (Inventor)
1983-01-01
A method is described for removing bubbles from a liquid bath such as a bath of molten glass to be used for optical elements. Larger bubbles are first removed by applying acoustic energy resonant to a bath dimension to drive the larger bubbles toward a pressure well where the bubbles can coalesce and then be more easily removed. Thereafter, submillimeter bubbles are removed by applying acoustic energy of frequencies resonant to the small bubbles to oscillate them and thereby stir liquid immediately about the bubbles to facilitate their breakup and absorption into the liquid.
Langston, Cathy; Gisselman, Kelly; Palma, Douglas; McCue, John
2010-06-01
Multiple techniques exist to remove uroliths from each section of the urinary tract. Minimally invasive methods for removing lower urinary tract stones include voiding urohydropropulsion, retrograde urohydropropulsion followed by dissolution or removal, catheter retrieval, cystoscopic removal, and cystoscopy-assisted laser lithotripsy and surgery. Laparoscopic cystotomy is less invasive than surgical cystotomy. Extracorporeal shock wave lithotripsy can be used for nephroliths and ureteroliths. Nephrotomy, pyelotomy, or urethrotomy may be recommended in certain situations. This article discusses each technique and gives guidance for selecting the most appropriate technique for an individual patient. PMID:20949423
OPTIMIZING ARSENIC REMOVAL DURING IRON REMOVAL PROCESSES
The recently promulgated Arsenic rule will require that many new drinking water systems treat their water to remove arsenic. Many groundwaters that have arsenic in their source water also have iron in their water. As a result, arsenic treatment at these sites will most likely b...
NASA Astrophysics Data System (ADS)
Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur
2015-03-01
Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies aimed specifically at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments, hereafter called elements) and a distributed (1 × 1 km2 grid) setup. We evaluated representations of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods of up to 0.84 and 0.86 respectively, and similarly up to 0.85 and 0.90 for the log-transformed streamflow. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or subgrid heterogeneities provided simulation performance equivalent to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from denser precipitation stations than are required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in identifying parameterizations based only on calibration to catchment-integrated streamflow observations and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data. In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the
NASA Technical Reports Server (NTRS)
Lappan, Cara-Lyn; Randall, David A.
2001-01-01
The dissipation parameterizations developed for higher-order closure are used to parameterize lateral entrainment and detrainment in a mass-flux model. In addition, a subplume-scale turbulence scheme is included to represent fluxes not captured in the conventional mass-flux framework. These new parameterizations are tested by simulating trade wind cumulus from the Barbados Oceanographic and Meteorological Experiment (BOMEX).
NASA Technical Reports Server (NTRS)
Fritsch, J. Michael; Kain, John S.
1997-01-01
Research efforts during the second year have centered on improving the manner in which convective stabilization is achieved in the Penn State/NCAR mesoscale model MM5. Ways of improving this stabilization have been investigated by (1) refining the partitioning between the Kain-Fritsch convective parameterization scheme and the grid scale by introducing a form of moist convective adjustment; (2) using radar data to define locations of subgrid-scale convection during a dynamic initialization period; and (3) parameterizing deep-convective feedbacks as subgrid-scale sources and sinks of mass. These investigations were conducted by simulating a long-lived convectively-generated mesoscale vortex that occurred during 14-18 Jul. 1982 and the 10-11 Jun. 1985 squall line that occurred over the Kansas-Oklahoma region during the PRE-STORM experiment. The long-lived vortex tracked across the central Plains states and was responsible for multiple convective outbreaks during its lifetime.
The interpretation of remotely sensed cloud properties from a model parameterization perspective
1995-09-01
The goals of ISCCP and FIRE are, broadly speaking, to provide methods for the retrieval of cloud properties from satellites, and to improve cloud radiation models and the parameterization of clouds in GCMs. This study suggests a direction for GCM cloud parameterizations based on analysis of Landsat and ISCCP satellite data. For low-level single-layer clouds it is found that the mean retrieved liquid water path in cloudy pixels is essentially invariant to the cloud fraction, at least in the range 0.2 - 0.8. This result is very important since it allows the cloud fraction to be estimated if the mean liquid water path of cloud in a general circulation model gridcell is known.
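The diagnostic implied by this result can be written in one line: if the in-cloud mean liquid water path (LWP) is independent of cloud fraction, then the gridcell-mean LWP is cloud_fraction × in-cloud LWP. A minimal sketch with made-up numbers (the function name and values are illustrative, not from the report):

```python
def cloud_fraction_estimate(grid_mean_lwp, in_cloud_lwp):
    """Diagnose cloud fraction from a gridcell-mean LWP, assuming the mean
    in-cloud LWP is invariant to cloud fraction (capped at overcast)."""
    if in_cloud_lwp <= 0:
        raise ValueError("in-cloud LWP must be positive")
    return min(grid_mean_lwp / in_cloud_lwp, 1.0)

# Example: gridcell-mean LWP of 30 g/m^2 with a 75 g/m^2 in-cloud mean
print(cloud_fraction_estimate(30.0, 75.0))  # -> 0.4
```

The abstract supports this only in the 0.2 - 0.8 cloud-fraction range, so values diagnosed outside that range should be treated with caution.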
Parameterizing large-scale dynamics with the weak pressure gradient approximation
NASA Astrophysics Data System (ADS)
Edman, J. P.; Romps, D. M.
2013-12-01
Cloud-resolving and single-column models are useful tools for understanding the dynamics of convection and developing convective parameterizations. However, these tools are severely limited by their inherent inability to simulate the dynamics of the environment in which they are imagined to be immersed. Previous attempts to solve this problem have resulted in various 'supra-domain scale' parameterizations, which allow the model to prescribe its own vertical velocity profile based on some limited information about the external environment (e.g. pressure and potential temperature profiles). Here we present a new implementation of one of these schemes, the weak pressure gradient approximation (WPG), which is shown to reproduce both the transient and steady state dynamics of a 3D atmosphere in a single column. Further, we demonstrate the skill of this new WPG method at replicating observed time series of precipitation and vertical velocity in a series of cloud-resolving simulations.
Parameterizing the Simplest Grassmann-Gaussian Relations for Pachner Move 3-3
NASA Astrophysics Data System (ADS)
Korepanov, Igor G.; Sadykov, Nurlan M.
2013-08-01
We consider relations in Grassmann algebra corresponding to the four-dimensional Pachner move 3-3, assuming that there is just one Grassmann variable on each 3-face, and a 4-simplex weight is a Grassmann-Gaussian exponent depending on these variables on its five 3-faces. We show that there exists a large family of such relations; the problem is in finding their algebraic-topologically meaningful parameterization. We solve this problem in part, providing two nicely parameterized subfamilies of such relations. For the second of them, we further investigate the nature of some of its parameters: they turn out to correspond to an exotic analogue of middle homologies. In passing, we also provide the 2-4 Pachner move relation for this second case.
NASA Astrophysics Data System (ADS)
Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.
A systematic improvement of parametric quantum methods (PQM) is performed by considering: (a) a new application of the parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPF) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized, and a comparison with MOPAC-2007 (PM6) and MINDO/SR was performed for a set of molecules with C-C, C-H, and H-H bonds. Results showed better accuracy than MINDO/SR and MOPAC-2007 for the selected trial set of molecules.
Sensitivity of hurricane forecasts to cumulus parameterizations in the HWRF model
NASA Astrophysics Data System (ADS)
Biswas, Mrinal K.; Bernardet, Ligia; Dudhia, Jimy
2014-12-01
The Developmental Testbed Center used the Hurricane Weather Research and Forecasting (HWRF) system to test the sensitivity of tropical cyclone track and intensity forecasts to different convective schemes. A control configuration that employed the HWRF Simplified Arakawa-Schubert (SAS) scheme was compared with the Kain-Fritsch and Tiedtke schemes, as well as with a newer implementation of the SAS. A comprehensive test for Atlantic and Eastern North Pacific storms shows that the SAS scheme produces the best track forecasts. Even though the convective parameterization was absent on the inner 3-km nest, the intensity forecasts are sensitive to the choice of cumulus scheme on the outer grids. The impact of convective-scale heating on the environmental flow accumulates in time because the hurricane vortex is cycled in the HWRF model initialization. This study shows that, for a given forecast, the sensitivity to cumulus parameterization combines the influence of physics and initial conditions.
Dana E. Veron
2012-04-09
This project had two primary goals: (1) development of stochastic radiative transfer as a parameterization that could be employed in an AGCM environment, and (2) exploration of the stochastic approach as a means for representing shortwave radiative transfer through mixed-phase layer clouds. To achieve these goals, climatology of cloud properties was developed at the ARM CART sites, an analysis of the performance of the stochastic approach was performed, a simple stochastic cloud-radiation parameterization for an AGCM was developed and tested, a statistical description of Arctic mixed phase clouds was developed and the appropriateness of stochastic approach for representing radiative transfer through mixed-phase clouds was assessed. Significant progress has been made in all of these areas and is detailed in the final report.
NASA Technical Reports Server (NTRS)
Boers, R.; Eloranta, E. W.; Coulter, R. L.
1984-01-01
Ground based lidar measurements of the atmospheric mixed layer depth, the entrainment zone depth and the wind speed and wind direction were used to test various parameterized entrainment models of mixed layer growth rate. Six case studies under clear air convective conditions over flat terrain in central Illinois are presented. It is shown that surface heating alone accounts for a major portion of the rise of the mixed layer on all days. A new set of entrainment model constants was determined which optimized height predictions for the dataset. Under convective conditions, the shape of the mixed layer height prediction curves closely resembled the observed shapes. Under conditions when significant wind shear was present, the shape of the height prediction curve departed from the data suggesting deficiencies in the parameterization of shear production. Development of small cumulus clouds on top of the layer is shown to affect mixed layer depths in the afternoon growth phase.
Veron, Dana E
2009-03-12
This project had two primary goals: 1) development of stochastic radiative transfer as a parameterization that could be employed in an AGCM environment, and 2) exploration of the stochastic approach as a means for representing shortwave radiative transfer through mixed-phase layer clouds. To achieve these goals, an analysis of the performance of the stochastic approach was performed, a simple stochastic cloud-radiation parameterization for an AGCM was developed and tested, a statistical description of Arctic mixed phase clouds was developed and the appropriateness of stochastic approach for representing radiative transfer through mixed-phase clouds was assessed. Significant progress has been made in all of these areas and is detailed below.
NASA Astrophysics Data System (ADS)
Romps, David M.
2016-03-01
Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.
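The purity-grouping idea in this abstract can be illustrated with a toy calculation. The sketch below is a hypothetical simplification, not the SPM itself: it assumes a fixed dilution factor per entrainment event and a constant Poisson entrainment rate, and shows only the key computational trick, namely that once parcels are binned by purity, advancing all of them one height step is a single matrix multiplication on the bin populations.

```python
import numpy as np

lam, dz, dilution = 1e-3, 100.0, 0.8       # assumed rate [1/m], step [m], dilution/event
p_ent = 1.0 - np.exp(-lam * dz)            # Poisson probability of entraining in one step

n_bins = 6
purities = dilution ** np.arange(n_bins)   # bin k holds parcels of purity dilution**k
T = np.zeros((n_bins, n_bins))             # column-stochastic transition matrix
for k in range(n_bins):
    T[min(k + 1, n_bins - 1), k] += p_ent  # entrained: drop one purity bin (capped)
    T[k, k] += 1.0 - p_ent                 # not entrained: stay in place

n = np.zeros(n_bins)
n[0] = 1.0                                 # all parcel mass starts undiluted
for _ in range(20):                        # 20 height steps = 20 matrix multiplies
    n = T @ n

mean_purity = float(purities @ n)          # mass-weighted mean purity aloft
print(round(mean_purity, 3))
```

The deterministic limit of infinitely many stochastic parcels falls out for free: the bin populations `n` are expectations, so no Monte Carlo sampling is needed.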
A cumulus parameterization scheme designed for nested grid meso-β scale models
Weissbluth, M.J.; Cotton, W.R.
1991-12-31
A generalized cumulus parameterization based upon higher order turbulence closure has been incorporated into one dimensional simulations. The scheme consists of a level 2.5w turbulence closure scheme mated with a convective adjustment scheme. The convective adjustment scheme includes a gradient term which can be interpreted as either a subsidence term when the scheme is used in large scale models or a mesoscale compensation term when the scheme is used in mesoscale models. The scheme also includes a convective adjustment term which is interpreted as a detrainment term in large scale models. In mesoscale models, the mesoscale compensation term and the advection by the mean vertical motions combine to yield no net advection which is desirable since the convective moistening and heating is now wholly accomplished by the convective adjustment term; double counting is then explicitly eliminated. One dimensional simulations indicate satisfactory performance of the cumulus parameterization scheme for a non-entraining updraft.
Effective Tree Scattering at L-Band
NASA Technical Reports Server (NTRS)
Kurum, Mehmet; O'Neill, Peggy E.; Lang, Roger H.; Joseph, Alicia T.; Cosh, Michael H.; Jackson, Thomas J.
2011-01-01
For routine microwave Soil Moisture (SM) retrieval through vegetation, the tau-omega [1] model [zero-order Radiative Transfer (RT) solution] is attractive due to its simplicity and ease of inversion and implementation. It is the model used in baseline retrieval algorithms for several planned microwave space missions, such as ESA's Soil Moisture Ocean Salinity (SMOS) mission (launched November 2009) and NASA's Soil Moisture Active Passive (SMAP) mission (to be launched 2014/2015) [2 and 3]. These approaches are adapted for vegetated landscapes with effective vegetation parameters tau and omega by fitting experimental data or simulation outputs of a multiple scattering model [4-7]. The model has been validated over grasslands, agricultural crops, and generally light to moderate vegetation. As the density of vegetation increases, sensitivity to the underlying SM begins to degrade significantly and errors in the retrieved SM increase accordingly. The zero-order model also loses its validity when dense vegetation (i.e. forest, mature corn, etc.) includes scatterers, such as branches and trunks (or stalks in the case of corn), which are large with respect to the wavelength. The tau-omega model (when applied over moderately to densely vegetated landscapes) will need modification (in terms of form or effective parameterization) to enable accurate characterization of vegetation parameters with respect to specific tree types, anisotropic canopy structure, presence of leaves and/or understory. More scattering terms (at least up to first-order at L-band) should be included in the RT solutions for forest canopies [8]. Although not really suitable for forests, a zero-order tau-omega model might be applied to such vegetation canopies with large scatterers, but equivalent or effective parameters would have to be used [4]. This requires that the effective values (vegetation opacity and single scattering albedo) need to be evaluated (compared) with theoretical definitions of
Rayleigh Scattering Diagnostics Workshop
NASA Technical Reports Server (NTRS)
Seasholtz, Richard (Compiler)
1996-01-01
The Rayleigh Scattering Diagnostics Workshop was held July 25-26, 1995 at the NASA Lewis Research Center in Cleveland, Ohio. The purpose of the workshop was to foster timely exchange of information and expertise acquired by researchers and users of laser based Rayleigh scattering diagnostics for aerospace flow facilities and other applications. This Conference Publication includes the 12 technical presentations and transcriptions of the two panel discussions. The first panel was made up of 'users' of optical diagnostics, mainly in aerospace test facilities, and its purpose was to assess areas of potential applications of Rayleigh scattering diagnostics. The second panel was made up of active researchers in Rayleigh scattering diagnostics, and its purpose was to discuss the direction of future work.
NASA Astrophysics Data System (ADS)
Piwinski, A.
Intra-beam scattering is analysed and the rise times or damping times of the beam dimensions are derived. The theoretical results are compared with experimental values obtained on the CERN AA and SPS machines.
Electron scattering from pyrimidine
NASA Astrophysics Data System (ADS)
Colmenares, Rafael; Fuss, Martina C.; Oller, Juan C.; Muñoz, Antonio; Blanco, Francisco; Almeida, Diogo; Limão-Vieira, Paulo; García, Gustavo
2014-04-01
Electron scattering from pyrimidine (C4H4N2) was investigated over a wide range of energies. Following different experimental and theoretical approaches, total, elastic and ionization cross sections as well as electron energy loss distributions were obtained.
Cosmic Ray Scattering Radiography
NASA Astrophysics Data System (ADS)
Morris, C. L.
2015-12-01
Cosmic ray muons are ubiquitous, are highly penetrating, and can be used to measure material densities by either measuring the stopping rate or by measuring the scattering of transmitted muons. The Los Alamos team has studied scattering radiography for a number of applications. Some results will be shown of scattering imaging for a range of practical applications, and estimates will be made of the utility of scattering radiography for nondestructive assessments of large structures and for geological surveying. Results of imaging the core of the Toshiba Nuclear Critical Assembly (NCA) Reactor in Kawasaki, Japan and simulations of imaging the damaged cores of the Fukushima nuclear reactors will be presented. Below is an image made using muons of a core configuration for the NCA reactor.
Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
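The Tikhonov scheme mentioned in this abstract has a compact linear-algebra core that is easy to demonstrate. The sketch below is not PEST: it is a minimal illustration, on synthetic data, of the idea PEST's Tikhonov mode implements: augment the misfit objective with a penalty pulling parameters toward preferred values, which stabilizes estimation when parameters outnumber informative observations. All names and values here are invented for the example.

```python
import numpy as np

# Minimize ||J p - d||^2 + alpha * ||p - p0||^2 via an augmented least squares.
rng = np.random.default_rng(0)
n_obs, n_par = 8, 20                       # deliberately under-determined
J = rng.normal(size=(n_obs, n_par))        # sensitivity (Jacobian) matrix
p_true = rng.normal(size=n_par)
d = J @ p_true                             # synthetic observations
p0 = np.zeros(n_par)                       # preferred ("prior") parameter values

alpha = 0.1                                # regularization weight
A = np.vstack([J, np.sqrt(alpha) * np.eye(n_par)])
b = np.concatenate([d, np.sqrt(alpha) * p0])
p_est, *_ = np.linalg.lstsq(A, b, rcond=None)

# The regularized solution fits the data well while staying near p0,
# even though a plain inversion of this problem would be non-unique.
print(np.linalg.norm(J @ p_est - d), np.linalg.norm(p_est - p0))
```

In PEST the weight on the regularization term is adjusted automatically against a target measurement objective; here `alpha` is simply fixed for illustration.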
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-08-16
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
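A minimal sketch of the spherical (Cholesky) parameterization of Pinheiro and Bates (1996) that the method above builds on; the angle values here are arbitrary, and the paper's cosine row-wise formula (which maps physical predictors to these angles) is not reproduced.

```python
import numpy as np

def corr_from_angles(theta):
    """Correlation matrix from angles in (0, pi), via the spherical
    parameterization of the Cholesky factor (Pinheiro and Bates, 1996).
    theta[k] holds the k+1 angles for row k+1 of the factor."""
    n = len(theta) + 1
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    for i in range(1, n):
        prod = 1.0
        for j, ang in enumerate(theta[i - 1]):
            L[i, j] = np.cos(ang) * prod
            prod *= np.sin(ang)
        L[i, i] = prod          # each row of L has unit norm by construction
    return L @ L.T

# Arbitrary angles always yield a valid correlation matrix (unit diagonal,
# positive semi-definite), so the correlations are mutually consistent no
# matter how the angles themselves are parameterized.
R = corr_from_angles([[0.5], [1.0, 2.0]])
print(np.allclose(np.diag(R), 1.0))   # True
```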
Large eddy simulation for evaluating scale-aware subgrid cloud parameterizations
NASA Astrophysics Data System (ADS)
Huang, Wei; Chen, Baode; Bao, Jian-Wen
2016-04-01
We present results from an ongoing project that uses a Large-Eddy Simulation (LES) model to simulate deep organized convection in the extratropics for the purpose of evaluating scale-aware subgrid convective parameterizations. The simulation is carried out for a classical idealized supercell thunderstorm (Weisman and Klemp, 1982), using a total of 1201 × 1201 × 200 grid points at 100 m spacing in both the horizontal and vertical directions. The characteristics of simulated clouds exhibit a multi-mode vertical distribution ranging from deep to shallow clouds, which is similar to that observed in the real world. To use the LES dataset for evaluating scale-aware subgrid cloud parameterizations, the same case is also run with progressively larger grid sizes of 200 m, 400 m, 600 m, 1 km and 3 km. These simulations show a reasonable agreement with the benchmark LES in statistics such as convective available potential energy, convective inhibition, cloud fraction and precipitation rates. They provide useful information about the effect of horizontal grid resolution on the subgrid convective parameterizations. All these simulations reveal a similar multi-mode cloud distribution in the vertical direction. However, there are differences in the updraft-core cloud statistics, and convergence of statistical properties is found only between the LES benchmark and the simulation with grid size smaller than 400 m. Analysis of the LES results indicates that (1) the average subgrid mass flux increases as the horizontal grid size increases; (2) the vertical scale of subgrid transport varies spatially, suggesting a system dependence; and (3) even at 1 km, subgrid convective transport is still large enough that it must be accounted for through parameterization.
Limitations of one-dimensional mesoscale PBL parameterizations in reproducing mountain-wave flows
Munoz-Esparza, Domingo; Sauer, Jeremy A.; Linn, Rodman R.; Kosovic, Branko
2015-12-08
In this study, mesoscale models are considered to be the state of the art in modeling mountain-wave flows. Herein, we investigate the role and accuracy of planetary boundary layer (PBL) parameterizations in handling the interaction between large-scale mountain waves and the atmospheric boundary layer. To that end, we use recent large-eddy simulation (LES) results of mountain waves over a symmetric two-dimensional bell-shaped hill [Sauer et al., J. Atmos. Sci. (2015)], and compare them to four commonly used PBL schemes. We find that one-dimensional PBL parameterizations produce reasonable agreement with the LES results in terms of vertical wavelength, amplitude of velocity and turbulent kinetic energy distribution in the downhill shooting flow region. However, the assumption of horizontal homogeneity in PBL parameterizations does not hold in the context of these complex flow configurations. This inappropriate modeling assumption results in a vertical wavelength shift producing errors of ≈ 10 m s⁻¹ at downstream locations due to the presence of a coherent trapped lee wave that does not mix with the atmospheric boundary layer. In contrast, horizontally-integrated momentum flux derived from these PBL schemes displays a realistic pattern. Therefore, results from mesoscale models using ensembles of one-dimensional PBL schemes can still potentially be used to parameterize drag effects in general circulation models. Nonetheless, three-dimensional PBL schemes must be developed in order for mesoscale models to accurately represent complex-terrain and other types of flows where one-dimensional PBL assumptions are violated.
NASA Astrophysics Data System (ADS)
Litta, A. J.; Chakrapani, B.; Mohankumar, K.
2007-07-01
Heavy rainfall events become significant in human affairs when they are combined with hydrological elements. The problem of forecasting heavy precipitation is especially difficult, since it involves making a quantitative precipitation forecast, a problem well recognized as challenging. Chennai (13.04°N, 80.17°E) received incessant and heavy rain of about 27 cm in the 24 hours ending at 8:30 a.m. on 27 October 2005, which completely threw life out of gear. This torrential rain was caused by a deep depression that lay 150 km east of Chennai city in the Bay of Bengal; the system intensified, moved in a west-northwest direction, and crossed the north Tamil Nadu and south Andhra Pradesh coast on the morning of the 28th. In the present study, we investigate the predictability of the MM5 mesoscale model using different cumulus parameterization schemes for the heavy rainfall event over Chennai. MM5 Version 3.7 (PSU/NCAR) is run with two-way triply nested grids using Lambert Conformal Coordinates (LCC) with a nest ratio of 3:1 and 23 vertical layers. Grid sizes of 45, 15 and 5 km are used for domains 1, 2 and 3 respectively. The cumulus parameterization schemes used in this study are the Anthes-Kuo scheme (AK), the Betts-Miller scheme (BM), the Grell scheme (GR) and the Kain-Fritsch scheme (KF). The present study shows that the prediction of heavy rainfall is sensitive to the cumulus parameterization scheme. In the time series of rainfall, the Grell scheme is in good agreement with observations. The ideal combination of nesting domains, horizontal resolution and cloud parameterization is able to simulate the heavy rainfall event both qualitatively and quantitatively.
A parameterization of respiration in frozen soils based on substrate availability
NASA Astrophysics Data System (ADS)
Schaefer, K.; Jafarov, E.
2015-07-01
Respiration in frozen soils is limited to thawed substrate within the thin water films surrounding soil particles. As temperatures decrease and the films become thinner, the available substrate also decreases, with respiration effectively ceasing at -8 °C. Traditional exponential scaling factors to model this effect do not account for substrate availability and do not work at the century to millennial time scales required to model the fate of the nearly 1700 Gt of carbon in permafrost regions. The exponential scaling factor produces a false, continuous loss of simulated permafrost carbon in the 20th century and biases in estimates of potential emissions as permafrost thaws in the future. Here we describe a new frozen biogeochemistry parameterization that separates the simulated carbon into frozen and thawed pools to represent the effects of substrate availability. We parameterized the liquid water fraction as a function of temperature based on observations and use this to transfer carbon between frozen pools and thawed carbon in the thin water films. The simulated volumetric water content (VWC) as a function of temperature is consistent with observed values and the simulated respiration fluxes as a function of temperature are consistent with results from incubation experiments. The amount of organic matter was the single largest influence on simulated VWC and respiration fluxes. Future versions of the parameterization should account for additional, non-linear effects of substrate diffusion in thin water films on simulated respiration. Controlling respiration in frozen soils based on substrate availability allows us to maintain a realistic permafrost carbon pool by eliminating the continuous loss caused by the original exponential scaling factors. The frozen biogeochemistry parameterization is a useful way to represent the effects of substrate availability on soil respiration in model applications that focus on century to millennial time scales in permafrost regions.
A parameterization of respiration in frozen soils based on substrate availability
NASA Astrophysics Data System (ADS)
Schaefer, Kevin; Jafarov, Elchin
2016-04-01
Respiration in frozen soils is limited to thawed substrate within the thin water films surrounding soil particles. As temperatures decrease and the films become thinner, the available substrate also decreases, with respiration effectively ceasing at -8 °C. Traditional exponential scaling factors to model this effect do not account for substrate availability and do not work at the century to millennial timescales required to model the fate of the nearly 1100 Gt of carbon in permafrost regions. The exponential scaling factor produces a false, continuous loss of simulated permafrost carbon in the 20th century and biases in estimates of potential emissions as permafrost thaws in the future. Here we describe a new frozen biogeochemistry parameterization that separates the simulated carbon into frozen and thawed pools to represent the effects of substrate availability. We parameterized the liquid water fraction as a function of temperature based on observations and use this to transfer carbon between frozen pools and thawed carbon in the thin water films. The simulated volumetric water content (VWC) as a function of temperature is consistent with observed values and the simulated respiration fluxes as a function of temperature are consistent with results from incubation experiments. The amount of organic matter was the single largest influence on simulated VWC and respiration fluxes. Future versions of the parameterization should account for additional, non-linear effects of substrate diffusion in thin water films on simulated respiration. Controlling respiration in frozen soils based on substrate availability allows us to maintain a realistic permafrost carbon pool by eliminating the continuous loss caused by the original exponential scaling factors. The frozen biogeochemistry parameterization is a useful way to represent the effects of substrate availability on soil respiration in model applications that focus on century to millennial timescales in permafrost regions.
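The pool-splitting idea described above can be sketched as follows; the linear ramp used here for the liquid water fraction is a hypothetical stand-in for the observation-based function fitted in the paper, with only the -8 °C cutoff taken from the abstract.

```python
def liquid_fraction(t_c, t_min=-8.0):
    """Liquid water fraction in thin films vs. soil temperature (degC).
    Hypothetical linear ramp: 1 at 0 degC, 0 at t_min, where respiration
    effectively ceases (the paper fits this curve to observations)."""
    if t_c >= 0.0:
        return 1.0
    if t_c <= t_min:
        return 0.0
    return 1.0 - t_c / t_min

def partition_carbon(total_c, t_c):
    """Split a soil carbon pool into thawed (available substrate) and
    frozen (unavailable) parts based on the liquid water fraction."""
    thawed = liquid_fraction(t_c) * total_c
    return thawed, total_c - thawed

# At -4 degC half the substrate is available; at -8 degC respiration stops,
# so the frozen pool is preserved instead of decaying continuously.
print(partition_carbon(10.0, -4.0))   # (5.0, 5.0)
print(partition_carbon(10.0, -8.0))   # (0.0, 10.0)
```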
Global Simulations from CAM with a Unified Convection Parameterization using CLUBB and Subcolumns
NASA Astrophysics Data System (ADS)
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P.; Chen, C. C.; Morrison, H.; Hoft, J.; Raut, E.; Griffin, B. M.; Weber, J. K.; Larson, V. E.; Wyant, M. C.; Wang, M.; Ghan, S.; Guo, Z.
2015-12-01
The newest version of the Community Atmosphere Model (CAM) will support subcolumns as a method to better couple sub-grid-scale convective and microphysical processes. We utilize this feature and samples from a PDF-based moist turbulence parameterization to produce a version of CAM where all convection (shallow, stratiform, and deep) is simulated with a single set of dynamic and microphysical equations. We call this version of the model CAM-CLUBB-SILHS, where CLUBB (Cloud Layers Unified By Binormals) is our higher-order closure convection and turbulence parameterization and SILHS (Subgrid Importance Latin Hypercube Sampler) is our sampler and the basis for our subcolumn generation. At each physics timestep in this model, the CLUBB parameterization runs to calculate convective tendencies. In order to close the higher order moments, CLUBB calculates a new multi-variate PDF describing the subgrid distribution of moisture and temperature at each level. SILHS samples from that PDF and creates profiles of vapor, temperature, vertical velocity, cloud water and ice, and cloud water and ice number concentration. The microphysics scheme runs on each subcolumn separately. The resulting tendencies are averaged together and returned to the model as a grid mean tendency. This use of subcolumns allows us to explicitly represent subgrid scale clouds and moisture distributions for microphysical calculations. Using this framework and no other convective parameterizations, we are able to produce stable, realistic, global atmospheric simulations in CAM. This study will present results from long-term atmospheric simulations, discuss the impact of subcolumns on the model, and show improvements in the model's tropical wave simulation.
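Why subcolumn sampling matters for nonlinear microphysics can be shown with a toy example; the quadratic process rate and the Gaussian subgrid PDF below are illustrative stand-ins for SILHS samples drawn from the CLUBB PDF, not the actual scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def process_rate(qc):
    """Toy nonlinear microphysical rate (quadratic, autoconversion-like)."""
    return qc ** 2

# Hypothetical grid-mean cloud water and subgrid spread.
qc_mean, qc_std = 0.5, 0.2

# "SILHS-like" step: draw subcolumn samples from the assumed subgrid PDF,
# run the scheme on each sample, and average the tendencies back to the grid.
subcols = rng.normal(qc_mean, qc_std, size=1000).clip(min=0.0)
tend_subcol = process_rate(subcols).mean()

# Running the scheme once on the grid mean misses subgrid variability:
# for a quadratic rate, <qc^2> = <qc>^2 + var(qc) > <qc>^2.
tend_mean = process_rate(qc_mean)
print(tend_subcol > tend_mean)   # True
```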
Parameterizing the Transport Pathways for Cell Invasion in Complex Scaffold Architectures.
Ashworth, Jennifer C; Mehr, Marco; Buxton, Paul G; Best, Serena M; Cameron, Ruth E
2016-05-01
Interconnecting pathways through porous tissue engineering scaffolds play a vital role in determining nutrient supply, cell invasion, and tissue ingrowth. However, the global use of the term "interconnectivity" often fails to describe the transport characteristics of these pathways, giving no clear indication of their potential to support tissue synthesis. This article uses new experimental data to provide a critical analysis of reported methods for the description of scaffold transport pathways, ranging from qualitative image analysis to thorough structural parameterization using X-ray Micro-Computed Tomography. In the collagen scaffolds tested in this study, it was found that the proportion of pore space perceived to be accessible dramatically changed depending on the chosen method of analysis. Measurements of % interconnectivity as defined in this manner varied as a function of direction and connection size, and also showed a dependence on measurement length scale. As an alternative, a method for transport pathway parameterization was investigated, using percolation theory to calculate the diameter of the largest sphere that can travel to infinite distance through a scaffold in a specified direction. As proof of principle, this approach was used to investigate the invasion behavior of primary fibroblasts in response to independent changes in pore wall alignment and pore space accessibility, parameterized using the percolation diameter. The result was that both properties played a distinct role in determining fibroblast invasion efficiency. This example therefore demonstrates the potential of the percolation diameter as a method of transport pathway parameterization, to provide key structural criteria for application-based scaffold design. PMID:26888449
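A sketch of the percolation-diameter idea on a voxelized pore space. This is a simplified stand-in for the Micro-CT analysis: 6-connectivity, spanning along one axis as a proxy for "travel to infinite distance", and open boundaries outside the sample are all simplifying assumptions.

```python
import numpy as np
from collections import deque

def erode(pore, r):
    """Voxels where a sphere of radius r (in voxels) fits inside the pore
    space; out-of-grid voxels are treated as open (simplifying assumption)."""
    nx, ny, nz = pore.shape
    offs = [(i, j, k) for i in range(-r, r + 1) for j in range(-r, r + 1)
            for k in range(-r, r + 1) if i * i + j * j + k * k <= r * r]
    out = np.zeros_like(pore)
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                out[x, y, z] = pore[x, y, z] and all(
                    not (0 <= x + i < nx and 0 <= y + j < ny and 0 <= z + k < nz)
                    or pore[x + i, y + j, z + k] for i, j, k in offs)
    return out

def spans(open_vox):
    """True if open voxels connect the x=0 face to the x=nx-1 face
    (6-connectivity), used as a proxy for travel to infinite distance."""
    nx, ny, nz = open_vox.shape
    seen = np.zeros_like(open_vox)
    q = deque((0, j, k) for j in range(ny) for k in range(nz) if open_vox[0, j, k])
    for v in q:
        seen[v] = True
    while q:
        x, y, z = q.popleft()
        if x == nx - 1:
            return True
        for d in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + d[0], y + d[1], z + d[2])
            if all(0 <= n[i] < open_vox.shape[i] for i in range(3)) \
                    and open_vox[n] and not seen[n]:
                seen[n] = True
                q.append(n)
    return False

def percolation_radius(pore, max_r):
    """Largest sphere radius able to traverse the sample along x;
    the percolation diameter is 2 * radius + 1 in voxel units."""
    best = -1
    for r in range(max_r + 1):
        if spans(erode(pore, r)):
            best = r
        else:
            break
    return best

# A straight 3x3-voxel channel along x admits a diameter-3 sphere (r = 1).
pore = np.zeros((7, 7, 7), dtype=bool)
pore[:, 2:5, 2:5] = True
print(percolation_radius(pore, 3))   # 1
```

Repeating the calculation along each axis separately would reproduce the directional dependence of accessibility reported above.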
NASA Astrophysics Data System (ADS)
Qin, Jun; Tang, Wenjun; Yang, Kun; Lu, Ning; Niu, Xiaolei; Liang, Shunlin
2015-05-01
Surface solar irradiance (SSI) is required in a wide range of scientific researches and practical applications. Many parameterization schemes are developed to estimate it using routinely measured meteorological variables, since SSI is directly measured at a very limited number of stations. Even so, meteorological stations are still sparse, especially in remote areas. Remote sensing can be used to map spatiotemporally continuous SSI. Considering the huge amount of satellite data, coarse-resolution SSI has been estimated for reducing the computational burden when the estimation is based on a complex radiative transfer model. On the other hand, many empirical relationships are used to enhance the retrieval efficiency, but the accuracy cannot be guaranteed out of regions where they are locally calibrated. In this study, an efficient physically based parameterization is proposed to balance computational efficiency and retrieval accuracy for SSI estimation. In this parameterization, the transmittances for gases, aerosols, and clouds are all handled in full band form and the multiple reflections between the atmosphere and surface are explicitly taken into account. The newly proposed parameterization is applied to estimate SSI with both Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric and land products as inputs. These retrievals are validated against in situ measurements at the Surface Radiation Budget Network and at the North China Plain on an instantaneous basis, and moreover, they are validated and compared with Global Energy and Water Exchanges-Surface Radiation Budget and International Satellite Cloud Climatology Project-flux data SSI estimates at radiation stations of China Meteorological Administration on a daily mean basis. The estimation results indicate that the newly proposed SSI estimation scheme can effectively retrieve SSI based on MODIS products, with mean root-mean-square errors of about 100 W m⁻² and 35 W m⁻² on an instantaneous and daily mean basis, respectively.
NASA Astrophysics Data System (ADS)
Laurent, A.; Fennel, K.; Wilson, R.; Lehrter, J.; Devereux, R.
2016-01-01
Diagenetic processes are important drivers of water column biogeochemistry in coastal areas. For example, sediment oxygen consumption can be a significant contributor to oxygen depletion in hypoxic systems, and sediment-water nutrient fluxes support primary productivity in the overlying water column. Moreover, nonlinearities develop between bottom water conditions and sediment-water fluxes due to loss of oxygen-dependent processes in the sediment as oxygen becomes depleted in bottom waters. Yet, sediment-water fluxes of chemical species are often parameterized crudely in coupled physical-biogeochemical models, using simple linear parameterizations that are only poorly constrained by observations. Diagenetic models that represent sediment biogeochemistry are available, but rarely are coupled to water column biogeochemical models because they are computationally expensive. Here, we apply a method that efficiently parameterizes sediment-water fluxes of oxygen, nitrate and ammonium by combining in situ measurements, a diagenetic model and a parameter optimization method. As a proof of concept, we apply this method to the Louisiana Shelf where high primary production, stimulated by excessive nutrient loads from the Mississippi-Atchafalaya River system, promotes the development of hypoxic bottom waters in summer. The parameterized sediment-water fluxes represent nonlinear feedbacks between water column and sediment processes at low bottom water oxygen concentrations, which may persist for long periods (weeks to months) in hypoxic systems such as the Louisiana Shelf. This method can be applied to other systems and is particularly relevant for shallow coastal and estuarine waters where the interaction between sediment and water column is strong and hypoxia is prone to occur due to land-based nutrient loads.
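The flux-parameterization-by-optimization idea above can be sketched with synthetic data. The Michaelis-Menten-like dependence of sediment oxygen consumption on bottom-water oxygen and all parameter values below are hypothetical, and the paper optimizes against a diagenetic model and in situ measurements rather than this toy curve.

```python
import numpy as np

rng = np.random.default_rng(2)

def soc(o2, vmax, k):
    """Hypothetical saturating sediment oxygen consumption vs. bottom-water
    O2 (Michaelis-Menten form), capturing the loss of oxygen-dependent
    sediment processes as bottom waters approach hypoxia."""
    return vmax * o2 / (k + o2)

# Synthetic "diagenetic model output" standing in for the real target.
o2_obs = np.linspace(5.0, 250.0, 30)                 # mmol m^-3
soc_obs = soc(o2_obs, 20.0, 60.0) + rng.normal(0.0, 0.3, o2_obs.size)

# Parameter optimization by brute-force least squares over a grid
# (the paper uses a formal optimization method; this is a stand-in).
best = (np.inf, None, None)
for vmax in np.linspace(10.0, 30.0, 81):
    for k in np.linspace(20.0, 120.0, 101):
        err = np.sum((soc(o2_obs, vmax, k) - soc_obs) ** 2)
        if err < best[0]:
            best = (err, vmax, k)

_, vmax_fit, k_fit = best
print(round(vmax_fit, 2), round(k_fit, 1))   # close to the true (20, 60)
```

The fitted curve, unlike a linear parameterization, flattens at high oxygen and collapses near zero oxygen, which is the nonlinear feedback the abstract emphasizes.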
NASA Astrophysics Data System (ADS)
McCormack, J. P.; Allen, D. R.; Coy, L.; Eckermann, S. D.; Stajner, I.
2005-12-01
The Ozone Mapping and Profiler Suite (OMPS) will deliver real-time ozone data for assimilation in numerical weather prediction (NWP) models. This information will benefit forecasts by improving the modeled stratospheric heating rates and providing better first-guess temperature profiles needed for infrared satellite radiance retrieval algorithms. Operational ozone data assimilation for NWP requires a fast, accurate treatment of stratospheric ozone photochemistry. We present results from the new NRL CHEM2D Ozone Photochemistry Parameterization (CHEM2D-OPP), which is based on output from the zonally averaged NRL-CHEM2D middle atmosphere photochemical-transport model. CHEM2D-OPP is a linearized parameterization of gas-phase stratospheric ozone photochemistry developed for NOGAPS-ALPHA, the Navy's prototype global high altitude NWP model. A recent study of NOGAPS-ALPHA ozone simulations found that a preliminary version of the CHEM2D-based photochemistry parameterization generally performed better than other current photochemistry schemes that are now widely used in operational NWP and data assimilation systems. A new, improved version of CHEM2D-OPP is now available. Here we report the first quantitative performance assessments of the updated CHEM2D-OPP package in the NRL Global Ozone Assimilation Testing System (GOATS). This study compares the mean differences between GOATS ozone analyses and SBUV/2 ozone measurements (both vertical profile and total column) during September 2002 using several different ozone photochemistry schemes. We find that CHEM2D-OPP generally delivers the best performance out of all the photochemistry schemes we tested. Future development plans for CHEM2D-OPP, such as interfacing it with a "cold tracer" parameterization for heterogeneous ozone-hole chemistry, will also be presented.
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing the subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing as it deals with a distribution of variables in subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as their mode-decomposition basis. However, the methodology can easily be generalized into any decomposition basis. Among those, wavelet is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally-constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that the subgrid parameterization may be re-interpreted as a type of mesh-refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
The length scale for sub-grid-scale parameterization with anisotropic resolution
NASA Technical Reports Server (NTRS)
Lilly, Douglas K.
1989-01-01
Use of the Smagorinsky eddy-viscosity formulation and related schemes for subgrid-scale parameterization of large eddy simulation models requires specification of a single length scale, earlier related by Lilly to the scale of filtering and/or numerical resolution. An anisotropic integration of the Kolmogoroff enstrophy spectrum allows generalization of that relationship to anisotropic resolution. It is found that the Deardorff assumption is reasonably accurate for small anisotropies and can be simply improved for larger values.
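A sketch of the length scales involved: the geometric-mean scale follows the Deardorff assumption discussed above, while the explicit anisotropy correction factor shown is the closed form later given by Scotti, Meneveau and Lilly (1993), included here only for illustration of how the small-anisotropy accuracy degrades.

```python
import math

def deardorff_scale(dx, dy, dz):
    """Isotropic-equivalent SGS length scale: the geometric mean of the
    grid spacings (the Deardorff assumption discussed in the abstract)."""
    return (dx * dy * dz) ** (1.0 / 3.0)

def anisotropy_correction(dx, dy, dz):
    """Multiplicative correction f >= 1 for anisotropic grids; this closed
    form is the one later derived by Scotti, Meneveau and Lilly (1993)
    from an anisotropic integration of the Kolmogorov spectrum."""
    a = sorted((dx, dy, dz))
    a1, a2 = a[0] / a[2], a[1] / a[2]       # aspect ratios <= 1
    l1, l2 = math.log(a1), math.log(a2)
    return math.cosh(math.sqrt(4.0 / 27.0 * (l1 * l1 - l1 * l2 + l2 * l2)))

# Isotropic grid: no correction. A 10:1 pancake grid needs f well above 1,
# i.e. the plain geometric mean is only accurate for small anisotropies.
print(round(anisotropy_correction(100.0, 100.0, 100.0), 3))   # 1.0
print(anisotropy_correction(1000.0, 1000.0, 100.0) > 1.3)     # True
```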
NASA Technical Reports Server (NTRS)
Entekhabi, D.; Eagleson, P. S.
1989-01-01
Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
Evaluation of a wetland methane emission parameterization for present-day and Last Glacial Maximum
NASA Astrophysics Data System (ADS)
Basu, A.; Schultz, M. G.; Francois, L.
2012-04-01
Wetlands are the largest natural source of atmospheric methane and presumably contribute ~25-40% to its annual budget (~500 Tg). However, there remain considerable uncertainties in estimation of global wetlands and their methane emissivity, given the large domain of their vegetation and hydrological characteristics. In this study, we describe the development of a wetland methane emission model in conjunction with global wetland parameterization at seasonal resolution. Contrary to most other modeling studies, our model is based on a simple parameterization and is also readily adaptable to different paleoclimatic scenarios, in which the role of methane is still largely unexplored. Wetlands with a strong climatic sensitivity are perceived to be a key factor in past changes of atmospheric methane concentration, e.g. the twofold increase since the Last Glacial Maximum (LGM). The present parameterization is primarily based on CARAIB, a large scale dynamic vegetation model designed to study the role of vegetation in the global carbon cycle. Its hydrological module is adept at simulating soil water and several associated hydrological fluxes over various biome types. Our model parameterization uses three basic drivers from CARAIB: soil water, soil temperature and soil carbon content, along with high resolution terrain slope data. The emission model is included in the chemistry climate model ECHAM5-MOZ for present day and also used in LGM methane simulations. The model results are evaluated in comparison with atmospheric methane observations from the NOAA-CMDL flask network and ice core records for the LGM. We obtained a present day wetland methane source of 153 Tg/year, which lies near the lower edge of model estimates. We also discuss the uncertainties of the present day simulation and the impact of emission scaling on atmospheric concentration. The latitudinal distribution of other major methane sources, uncertainties in their budget and their potential role in
Sensitivity of Tropical Cyclones to Parameterized Convection in the NASA GEOS5 Model
NASA Technical Reports Server (NTRS)
Lim, Young-Kwon; Schubert, Siegfried D.; Reale, Oreste; Lee, Myong-In; Molod, Andrea M.; Suarez, Max J.
2014-01-01
The sensitivity of tropical cyclones (TCs) to changes in parameterized convection is investigated to improve the simulation of TCs in the North Atlantic. Specifically, the impact of reducing the influence of the Relaxed Arakawa-Schubert (RAS) scheme-based parameterized convection is explored using the Goddard Earth Observing System version 5 (GEOS5) model at 0.25° horizontal resolution. The years 2005 and 2006, characterized by very active and inactive hurricane seasons, respectively, are selected for simulation. A reduction in parameterized deep convection results in an increase in TC activity (e.g., TC number and longer life cycle) to more realistic levels compared to the baseline control configuration. The vertical and horizontal structure of the strongest simulated hurricane shows the maximum lower-level (850-950 hPa) wind speed greater than 60 m s⁻¹ and the minimum sea level pressure reaching 940 mb, corresponding to a category 4 hurricane, a category never achieved by the control configuration. The radius of maximum wind of 50 km, the location of the warm core exceeding 10 °C, and the horizontal compactness of the hurricane center are all quite realistic, without negatively affecting the atmospheric mean state. This study reveals that an increase in the threshold of minimum entrainment suppresses parameterized deep convection by entraining more dry air into the typical plume. This leads to cooling and drying at the mid- to upper-troposphere, along with positive latent heat flux and moistening in the lower-troposphere. The resulting increase in conditional instability provides an environment that is more conducive to TC vortex development and upward moisture flux convergence by dynamically resolved moist convection, thereby increasing TC activity.
NASA Technical Reports Server (NTRS)
Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.
1990-01-01
A 5-state close-coupling calculation (5s-5p-4d-6s-6p) was carried out for positron-Rb scattering in the energy range 3.7 to 28.0 eV. In contrast to the results of similar close-coupling calculations for positron-Na and positron-K scattering, the (effective) total integrated cross section has an energy dependence which is contrary to recent experimental measurements.
Aerosol effects on stratocumulus water paths in a PDF-based parameterization
NASA Astrophysics Data System (ADS)
Guo, H.; Golaz, J.-C.; Donner, L. J.
2011-09-01
Successful simulation of aerosol indirect effects in climate models requires parameterizations that capture the full range of cloud-aerosol interactions, including positive and negative liquid water path (LWP) responses to increasing aerosol concentrations, as suggested by large eddy simulations (LESs). A parameterization based on multi-variate probability density functions with dynamics (MVD PDFs) has been incorporated into the single-column version of GFDL AM3, extended to treat aerosol activation, and coupled with a two-moment microphysics scheme. We use it to explore cloud-aerosol interactions. In agreement with LESs, our single-column simulations produce both positive and negative LWP responses to increasing aerosol concentrations, depending on precipitation and free-atmosphere relative humidity. We have conducted sensitivity tests with respect to vertical resolution and the droplet sedimentation parameterization. The dependence of sedimentation on cloud droplet size is essential to capture the full LWP responses to aerosols. Further analyses reveal that the MVD PDFs are able to represent changes in buoyancy profiles induced by sedimentation, as well as the enhanced entrainment efficiency with aerosols, comparably to LESs.
NASA Astrophysics Data System (ADS)
Charles, T. K.; Paganin, D. M.; Dowd, R. T.
2016-08-01
Intrinsic emittance is often the limiting factor for brightness in fourth-generation light sources and, as such, a good understanding of the factors affecting intrinsic emittance is essential in order to be able to decrease it. Here we present a parameterization model describing the proportional increase in emittance induced by cathode surface roughness. One major benefit of the parameterization approach presented here is that it takes the complexity of a Monte Carlo model and reduces the results to a straightforward empirical model. The resulting models describe the proportional increase in transverse momentum introduced by surface roughness, and are applicable to various metal types, photon wavelengths, applied electric fields, and cathode surface terrains. The analysis includes the increase in emittance due to changes in the electric field induced by roughness, as well as the increase in transverse momentum resulting from the spatially varying surface normal. We also compare the results of the Parameterization Model to an Analytical Model, which employs various approximations to produce a more compact expression at the cost of a reduction in accuracy.
NASA Astrophysics Data System (ADS)
Remesan, R.; Bellerby, T.
2012-04-01
Operational real-time flood forecasting and warning systems increasingly rely on high-resolution mesoscale models coupled to hydrological models. It is therefore essential to assess how predictions vary with the choice of cumulus and microphysical parameterization schemes, in order to quantify the uncertainties associated with mesoscale downscaling. This study investigates the role of physical parameterizations in mesoscale model simulations of the unprecedented heavy rainfall over Yorkshire-Humberside in the United Kingdom during 1-14 March 1999. The study used the Advanced Research Weather Research and Forecasting (WRF) model (version 3.3), a widely used mesoscale numerical weather prediction model developed at the National Center for Atmospheric Research (NCAR) in the USA. It comprises a comprehensive evaluation of three cumulus parameterization schemes (CPSs) [Kain-Fritsch (KF), Betts-Miller-Janjic (BMJ), and Grell-Devenyi ensemble (GD)] and five microphysical schemes [the Lin et al. scheme, the older Thompson scheme, the new Thompson scheme, the WRF Single-Moment 6-class scheme, and the WRF Single-Moment 5-class scheme] to identify how their inclusion influences the mesoscale model's estimates of meteorological parameters and the associated prediction uncertainties. The case study was carried out for the Upper River Derwent catchment in North Yorkshire, England, using both ERA-40 reanalysis data and land-based observations.
Lu, Chunsong; Liu, Yangang; Zhang, Guang J.; Wu, Xianghua; Endo, Satoshi; Cao, Le; Li, Yueqing; Guo, Xiaohao
2016-02-01
This work examines the relationships of entrainment rate to vertical velocity, buoyancy, and turbulent dissipation rate by applying stepwise principal component regression to observational data from shallow cumulus clouds collected during the Routine AAF [Atmospheric Radiation Measurement (ARM) Aerial Facility] Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) field campaign over the ARM Southern Great Plains (SGP) site near Lamont, Oklahoma. The cumulus clouds during the RACORO campaign simulated using a large eddy simulation (LES) model are also examined with the same approach. The analysis shows that a combination of multiple variables can better represent entrainment rate in both the observations and LES than any single-variable fitting. Three commonly used parameterizations are also tested on the individual cloud scale. A new parameterization is therefore presented that relates entrainment rate to vertical velocity, buoyancy and dissipation rate; the effects of treating clouds as ensembles and humid shells surrounding cumulus clouds on the new parameterization are discussed. Physical mechanisms underlying the relationships of entrainment rate to vertical velocity, buoyancy and dissipation rate are also explored.
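The core regression idea above can be sketched numerically: a synthetic entrainment-rate proxy that truly depends on vertical velocity, buoyancy, and dissipation rate is fitted once with a single predictor and once with all three, and the multi-variable fit explains more variance. All data and coefficients below are invented, and plain least squares stands in for the stepwise principal component regression used in the study.

```python
import random

random.seed(42)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Ordinary least squares with intercept via the normal equations."""
    Xa = [[1.0] + row for row in X]
    p = len(Xa[0])
    A = [[sum(r[i] * r[j] for r in Xa) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(p)]
    return solve(A, b)

def r_squared(X, y, coef):
    """Coefficient of determination of the fitted model."""
    pred = [coef[0] + sum(c * v for c, v in zip(coef[1:], row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Synthetic cloud samples: the "true" entrainment rate depends on all
# three predictors (made-up coefficients and units).
n = 200
w = [random.uniform(0.5, 3.0) for _ in range(n)]      # vertical velocity
B = [random.uniform(-0.02, 0.05) for _ in range(n)]   # buoyancy
eps = [random.uniform(1e-4, 1e-2) for _ in range(n)]  # dissipation rate
lam = [0.3 / w[i] + 8.0 * B[i] + 40.0 * eps[i] + random.gauss(0.0, 0.02)
       for i in range(n)]

X3 = [[1.0 / w[i], B[i], eps[i]] for i in range(n)]   # multi-variable
X1 = [[1.0 / w[i]] for i in range(n)]                 # single-variable
r2_multi = r_squared(X3, lam, ols(X3, lam))
r2_single = r_squared(X1, lam, ols(X1, lam))
```

On such data the three-predictor fit recovers nearly all the variance while the single-predictor fit leaves most of it unexplained, mirroring the qualitative conclusion of the abstract.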
Size-resolved parameterization of primary organic carbon in fresh marine aerosols
Long, Michael S; Keene, William C; Erickson III, David J
2009-12-01
Marine aerosols produced by the bursting of artificially generated bubbles in natural seawater are highly enriched (2 to 3 orders of magnitude based on bulk composition) in marine-derived organic carbon (OC). Production of size-resolved particulate OC was parameterized based on a Langmuir kinetics-type association of OC to bubble plumes in seawater and resulting aerosol as constrained by measurements of aerosol produced from highly productive and oligotrophic seawater. This novel approach is the first to account for the influence of adsorption on the size-resolved association between marine aerosols and OC. Production fluxes were simulated globally with an eight-aerosol-size-bin version of the NCAR Community Atmosphere Model (CAM v3.5.07). Simulated number and inorganic sea-salt mass production fell within the range of published estimates based on observationally constrained parameterizations. Because the parameterization does not consider contributions from spume drops, the simulated global mass flux (1.5 x 10^3 Tg y^-1) is near the lower limit of published estimates. The simulated production of aerosol number (2.1 x 10^6 cm^-2 s^-1) and OC (49 Tg C y^-1) fall near the upper limits of published estimates and suggest that primary marine aerosols may have greater influences on the physiochemical evolution of the troposphere, radiative transfer and climate, and associated feedbacks on the surface ocean than suggested by previous model studies.
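The Langmuir kinetics-type association described above can be illustrated with a toy isotherm: the organic mass fraction of the aerosol rises with seawater organic loading and saturates at high loading. The functional form below is the standard Langmuir isotherm; the constants and the chlorophyll-a proxy are illustrative assumptions, not values fitted in the study.

```python
def oc_mass_fraction(chl, theta_max=0.76, k=1.2):
    """Langmuir-type saturation of the organic mass fraction of sea-spray
    aerosol with biological activity, proxied here by chlorophyll-a
    (mg m^-3). theta_max and k are illustrative constants, not the
    fitted values from the study."""
    return theta_max * k * chl / (1.0 + k * chl)

low = oc_mass_fraction(0.05)   # oligotrophic seawater
high = oc_mass_fraction(5.0)   # highly productive seawater
```

The saturation behavior is the key design point: unlike a linear enrichment model, the organic fraction cannot exceed theta_max no matter how productive the seawater is.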
Refinement, Validation and Application of Cloud-Radiation Parameterization in a GCM
Dr. Graeme L. Stephens
2009-04-30
The research performed under this award was conducted along 3 related fronts: (1) Refinement and assessment of parameterizations of sub-grid scale radiative transport in GCMs. (2) Diagnostic studies that use ARM observations of clouds and convection in an effort to understand the effects of moist convection on its environment, including how convection influences clouds and radiation. This aspect focuses on developing and testing methodologies designed to use ARM data more effectively in atmospheric models, both at the cloud-resolving model scale and the global climate model scale. (3) Use of (1) and (2), in combination with both models and observations of varying complexity, to study key radiation feedbacks. Our work toward these objectives thus involved three corresponding efforts. First, novel diagnostic techniques were developed and applied to ARM observations to understand and characterize the effects of moist convection on the dynamical and thermodynamical environment in which it occurs. Second, an in-house GCM radiative transfer algorithm (BUGSrad) was employed along with an optimal estimation cloud retrieval algorithm to evaluate the ability to reproduce cloudy-sky radiative flux observations. Assessments using a range of GCMs with various moist convective parameterizations, to evaluate the fidelity with which the parameterizations reproduce key observable features of the environment, were also started in the final year of this award. The third study area involved cloud-radiation feedbacks, which we examined in both cloud-resolving and global climate models.
Ice Nucleation in Mixed-Phase Clouds: Parameterization Evaluation and Climate Impacts
NASA Astrophysics Data System (ADS)
Liu, X.; Ghan, S. J.; Xie, S.; Boyle, J. S.; Klein, S. A.; Demott, P. J.; Prenni, A. J.
2009-12-01
There are still large uncertainties in ice nucleation mechanisms and ice crystal numbers in mixed-phase clouds, which affect the modeled cloud phase, cloud lifetime, and radiative properties of Arctic clouds in global climate models. In this study we evaluate model simulations with three mixed-phase ice nucleation parameterizations (Phillips et al., 2008; DeMott et al., 2009; Meyers et al., 1992) against the Atmospheric Radiation Measurement (ARM) Indirect and Semi-Direct Aerosol Campaign (ISDAC) observations, using the NCAR Community Atmospheric Model Version 4 (CAM4) running in single-column mode (SCAM) and in CCPP-ARM Parameterization Testbed (CAPT) forecasts. It is found that SCAM and CAPT with the new physically based ice nucleation schemes (Phillips et al., 2008; DeMott et al., 2009) produce a more realistic simulation of the cloud phase structure and the partitioning of condensed water into liquid droplets during ISDAC than CAM with the oversimplified Meyers et al. (1992) scheme. Both the SCAM simulations and the CAPT forecasts suggest that ice number concentration could play an important role in the simulated mixed-phase cloud microphysics, and thereby needs to be realistically represented in global climate models. The global climate implications of the different ice nucleation parameterizations are also studied.
Comparison and validation of physical wave parameterizations in spectral wave models
NASA Astrophysics Data System (ADS)
Stopa, Justin E.; Ardhuin, Fabrice; Babanin, Alexander; Zieger, Stefan
2016-07-01
Recent developments in the physical parameterizations available in spectral wave models have already been validated, but there is little information on their relative performance, especially with regard to the higher-order spectral moments and wave partitions. This study concentrates on documenting their strengths and limitations using satellite measurements, buoy spectra, and a comparison between the different models. It is confirmed that all models perform well in terms of significant wave height; however, higher-order moments have larger errors. The partitioned wave quantities perform well in terms of direction and frequency, but the magnitude and directional spread typically have larger discrepancies. The high-frequency tail is examined through the mean square slope using satellites and buoys. From this analysis it is clear that some models behave better than others, suggesting that their parameterizations match the physical processes reasonably well. However, none of the models is entirely satisfactory, pointing to poorly constrained parameterizations or missing physical processes. The major space-time differences between the models are related to the swell field, which stresses the importance of describing its evolution. An example swell field confirms that the wave heights can be notably different between model configurations while the directional distributions remain similar. It is clear that all models have difficulty describing the directional spread. Therefore, knowledge of the source term directional distributions is paramount to improving the wave model physics in the future.
Microphysics Parameterization in Convection and its Effects on Cloud Simulation in the NCAR CAM5
NASA Astrophysics Data System (ADS)
Zhang, G. J.; Song, X.
2010-12-01
Microphysical processes in convection are important to convection-cloud-climate interactions and the atmospheric hydrological cycle. They are also essential to understanding aerosol-cloud interaction. However, their parameterization in GCMs is crude. As part of an effort to improve the convection parameterization scheme for the NCAR CAM using observations, we incorporate a cloud microphysics parameterization into the Zhang-McFarlane convection scheme. The scheme is then evaluated against observations of cloud ice and water from the TWP-ICE experiment and other sources using the NCAR SCAM. It is found that this physically based treatment of convective microphysics yields more realistic vertical profiles of convective cloud ice and liquid water contents. Cloud water and ice budgets are calculated to estimate the role of cloud water and ice detrainment from convection as water and ice sources for large-scale clouds. The new microphysics treatment is further implemented into CAM5 to test its effect on GCM simulations of clouds. Results will be presented at the meeting, and the implications for the simulation of the hydrological cycle will be discussed.
NASA Astrophysics Data System (ADS)
Xie, Xin; Zhang, Minghua
2015-08-01
Using long-term radar-based ground measurements from the Atmospheric Radiation Measurement Program, we derive the inhomogeneity of cloud liquid water as represented by the shape parameter of a gamma distribution. The relationship between the inhomogeneity and the model grid size as well as atmospheric condition is presented. A larger grid scale and more unstable atmosphere are associated with larger inhomogeneity that is described by a smaller shape parameter. This relationship is implemented as a scale-aware parameterization of the liquid cloud inhomogeneity in the Community Earth System Model (CESM) in which the shape parameter impacts the cloud microphysical processes. When used in the default CESM1 with the finite-volume dynamic core where a constant liquid inhomogeneity parameter was assumed, it reduces the cloud inhomogeneity in high latitudes and increases it in low latitudes. This is due to both the smaller (larger) grid size in high (low) latitudes in the longitude-latitude grid setting of CESM and the more stable (unstable) atmosphere. The single-column model and general circulation model sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it in low latitudes. An advantage of the parameterization is that it can recognize the spatial resolutions of the CESM without special tuning of the cloud water inhomogeneity parameter.
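The link between the gamma shape parameter and cloud water inhomogeneity described above can be checked numerically: for a gamma distribution with unit mean, the relative variance is 1/ν, so a smaller shape parameter ν corresponds to a more inhomogeneous field. A quick Monte Carlo sketch (not part of the study's code, and using arbitrary shape values):

```python
import random

random.seed(0)

def relative_variance(shape, n=100000):
    """Sample a gamma distribution with unit mean and the given shape
    parameter; the relative variance (var / mean^2) should be ~1/shape."""
    scale = 1.0 / shape  # choose scale so the mean is 1
    xs = [random.gammavariate(shape, scale) for _ in range(n)]
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / n
    return v / m ** 2

inhomog = relative_variance(1.0)   # small shape: strongly inhomogeneous
homog = relative_variance(10.0)    # large shape: nearly homogeneous
```

The two estimates come out near 1.0 and 0.1 respectively, which is why a scale-aware scheme that lowers ν for larger grid boxes or more unstable conditions is equivalent to prescribing more sub-grid variability of cloud liquid water there.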
NASA Technical Reports Server (NTRS)
Natarajan, Murali; Fairlie, T. Duncan; Dwyer Cianciolo, Alicia; Smith, Michael D.
2015-01-01
We use the mesoscale modeling capability of the Mars Weather Research and Forecasting (MarsWRF) model to study the sensitivity of the simulated Martian lower atmosphere to differences in the parameterization of the planetary boundary layer (PBL). Characterization of the Martian atmosphere and realistic representation of processes such as the mixing of tracers like dust depend on how well the model reproduces the evolution of the PBL structure. MarsWRF is based on the NCAR WRF model and retains some of the PBL schemes available in the Earth version. Published studies have examined the performance of different PBL schemes in NCAR WRF with the help of observations. Currently such assessments are not feasible for Martian atmospheric models due to the lack of observations. It is nevertheless of interest to study the sensitivity of the model to the PBL parameterization. Typically, for standard Martian atmospheric simulations, we have used the Medium Range Forecast (MRF) PBL scheme, which applies a correction term to the vertical gradients to incorporate nonlocal effects. For this study, we have also used two other parameterizations: a nonlocal closure scheme called the Yonsei University (YSU) PBL scheme and a turbulent kinetic energy closure scheme called the Mellor-Yamada-Janjic (MYJ) PBL scheme. We will present intercomparisons of the near-surface temperature profiles, boundary layer heights, and winds obtained from the different simulations. We plan to use available temperature observations from the Mini-TES instrument onboard the rovers Spirit and Opportunity in evaluating the model results.
Parameterization of incoming longwave radiation at glacier sites in the Canadian Rocky Mountains
NASA Astrophysics Data System (ADS)
Ebrahimi, Samaneh; Marshall, Shawn J.
2015-12-01
We examine longwave radiation fluxes in the Canadian Rocky Mountains based on multiyear observations at glaciers in the southern and northern Rockies. Our main objective is to develop improved parameterizations of incoming longwave radiation for surface energy balance and melt modeling in glaciological studies, in situations where minimal meteorological data are available. We concentrate on the summer melt season, June through August. We test several common parameterizations of mean daily incoming longwave radiation and also explore simple regression-based models of atmospheric emissivity as a function of near-surface vapor pressure, relative humidity, and a sky clearness index (i.e., a proxy for cloud cover). Multivariate regressions based on these three variables have the strongest performance at our two sites, with RMS errors of 9-13 W m-2 and biases of 1-2 W m-2 when transferred to different time periods or between sites in our study region. We also find good results for all-sky atmospheric emissivity with a bivariate relation based on vapor pressure and relative humidity. This parameterization requires only screen-level temperature and humidity as input data, which has value for modeling of incoming longwave radiation and surface energy balance when observational radiation and cloud data are not available.
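A minimal example of the kind of emissivity-based parameterization discussed above, using the classic Brunt-type form ε = a + b√e with screen-level inputs and L↓ = ε σ T⁴. The coefficients are the textbook Brunt values, not the regression fits derived in this study, and cloud effects are ignored:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def incoming_longwave(t_screen_k, vapor_pressure_hpa):
    """Clear-sky downwelling longwave L = eps * sigma * T^4 with a
    Brunt-type emissivity eps = a + b * sqrt(e). The a, b values are
    the classic Brunt coefficients, not this study's fitted ones."""
    eps = 0.52 + 0.065 * math.sqrt(vapor_pressure_hpa)
    return eps * SIGMA * t_screen_k ** 4

lw = incoming_longwave(283.15, 8.0)  # ~10 deg C, vapor pressure 8 hPa
```

The appeal of this family of schemes, as the abstract notes, is that temperature and humidity are the only inputs, so melt models can be driven at sites with no radiation or cloud observations.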
Cotton, W.R.
1997-08-12
This research has focused on the development of a parameterization scheme for mesoscale convective systems (MCSs), to be used in numerical weather prediction models with grid spacing too coarse to explicitly simulate such systems. This is an extension of cumulus parameterization schemes, which have long been used to account for the unresolved effects of convection in numerical models. Although MCSs generally require an extended sequence of numerous deep convective cells in order to develop into their characteristic sizes and to persist for their typical durations, their effects on the large-scale environment are significantly different from those due to the collective effects of numerous ordinary deep convective cells. These differences are largely due to the large stratiform cloud that develops fairly early in the MCS life cycle, where mesoscale circulations and dynamics interact with the environment in ways that call for a distinct MCS parameterization. Comparing an MCS with a collection of ordinary deep convective cells that ingest the same amount of boundary layer air and moisture over an extended several-hour period, the MCS will generally generate more stratiform rainfall, produce longer-lasting and optically thicker cirrus, and result in different vertical distributions of large-scale tendencies due to latent heating and moistening, momentum transfers, and radiational heating.
On parameterization of the inverse problem for estimating aquifer properties using tracer data
NASA Astrophysics Data System (ADS)
Kowalsky, M. B.; Finsterle, S.; Williams, K. H.; Murray, C.; Commer, M.; Newcomer, D.; Englert, A.; Steefel, C. I.; Hubbard, S. S.
2012-06-01
In developing a reliable approach for inferring hydrological properties through inverse modeling of tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance, as errors in the model structure are partly compensated for by estimating biased property values during the inversion. These biased estimates, while potentially providing an improved fit to the calibration data, may lead to wrong interpretations and conclusions and reduce the ability of the model to make reliable predictions. We consider the estimation of spatial variations in permeability and several other parameters through inverse modeling of tracer data, specifically synthetic and actual field data associated with the 2007 Winchester experiment from the Department of Energy Rifle site. Characterization is challenging due to the real-world complexities associated with field experiments in such a dynamic groundwater system. Our aim is to highlight and quantify the impact on inversion results of various decisions related to parameterization, such as the positioning of pilot points in a geostatistical parameterization; the handling of up-gradient regions; the inclusion of zonal information derived from geophysical data or core logs; extension from 2-D to 3-D; assumptions regarding the gradient direction, porosity, and the semivariogram function; and deteriorating experimental conditions. This work adds to the relatively limited number of studies that offer guidance on the use of pilot points in complex real-world experiments involving tracer data (as opposed to hydraulic head data).
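To illustrate the pilot-point idea discussed above, the sketch below reconstructs a log-permeability field on a grid from three pilot values by inverse-distance weighting. A real geostatistical parameterization would interpolate with kriging and a calibrated semivariogram; IDW is a simplified stand-in to show how a few estimable parameters stand for a full heterogeneous field, and all coordinates and values are invented:

```python
import math

def idw_field(pilots, grid, power=2.0):
    """Interpolate pilot-point values (x, y, value) onto grid cells by
    inverse-distance weighting; each cell value is a convex combination
    of the pilot values, so the result is bounded by them."""
    field = {}
    for gx, gy in grid:
        num = den = 0.0
        for px, py, val in pilots:
            d = math.hypot(gx - px, gy - py)
            if d < 1e-12:          # cell coincides with a pilot point
                num, den = val, 1.0
                break
            wgt = d ** -power
            num += wgt * val
            den += wgt
        field[(gx, gy)] = num / den
    return field

# Three invented pilot points carrying log10-permeability values
pilots = [(0.0, 0.0, -12.0), (10.0, 0.0, -10.0), (5.0, 8.0, -11.0)]
grid = [(x, y) for x in range(11) for y in range(9)]
logk = idw_field(pilots, grid)
```

In an inversion, only the three pilot values (and possibly their positions) would be estimated; the abstract's point is that choices like how many pilot points to use and where to place them strongly shape the recovered field.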
Device for removing blackheads
Berkovich, Tamara
1995-03-07
A device for removing blackheads from pores in the skin having an elongated handle with a spoon-shaped portion mounted on one end thereof, the spoon having multiple small holes piercing therethrough. Also covered is a method for using the device to remove blackheads.
NASA Astrophysics Data System (ADS)
Breil, Marcus; Schädler, Gerd
2016-04-01
The aim of the German research program MiKlip II is the development of an operational climate prediction system that can provide reliable forecasts on a decadal time scale. One goal of MiKlip II is to investigate the feasibility of regional climate predictions. Results of recent studies indicate that the regional climate is significantly affected by the interactions between the soil, the vegetation and the atmosphere. Thus, within the framework of MiKlip II a work package was established to assess the impact of these interactions on regional decadal climate predictability. In a Regional Climate Model (RCM) the soil-vegetation-atmosphere interactions are represented by a Land Surface Model (LSM). The LSM describes the current state of the land surface by calculating the soil temperature, the soil water content and the turbulent heat fluxes, serving the RCM as the lower boundary condition. To solve the corresponding equations, soil and vegetation processes are parameterized within the LSM. Such parameterizations are mainly derived from observations, but observations are usually limited in time and space and consequently cannot represent the diversity of nature completely. Thus, soil and vegetation parameterizations always exhibit a certain degree of uncertainty. In the presented study, the uncertainties within an LSM are assessed by stochastic variations of the relevant parameterizations in VEG3D, an LSM developed at the Karlsruhe Institute of Technology (KIT). In a first step, stand-alone simulations of VEG3D are run with varying soil and vegetation parameters to identify sensitive model parameters. In a second step, VEG3D is coupled to the RCM COSMO-CLM. With this new model system, regional decadal hindcast simulations, driven by global simulations of the Max Planck Institute for Meteorology Earth System Model (MPI-ESM), are performed for the CORDEX-EU domain at a resolution of 0.22°. The identified sensitive model
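The stochastic variation of parameterizations described above can be sketched as an ensemble of perturbed parameter sets, with multiplicative lognormal factors keeping positive-definite parameters positive. The parameter names and nominal values below are made up for illustration and are not taken from VEG3D:

```python
import math
import random

random.seed(1)

# Nominal land-surface parameters (names and values invented)
NOMINAL = {"leaf_area_index": 3.0, "root_depth_m": 1.2, "soil_albedo": 0.2}

def perturbed_member(nominal, rel_spread=0.2):
    """One ensemble member: multiply each nominal value by a lognormal
    factor, so positive-definite parameters stay positive."""
    return {k: v * math.exp(random.gauss(0.0, rel_spread))
            for k, v in nominal.items()}

ensemble = [perturbed_member(NOMINAL) for _ in range(50)]
mean_lai = sum(m["leaf_area_index"] for m in ensemble) / len(ensemble)
```

Running the model once per member and comparing the spread of the outputs is the usual way to identify which parameters the simulation is sensitive to.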
NASA Astrophysics Data System (ADS)
Vorobyov, E. I.
2010-01-01
We study numerically the applicability of the effective-viscosity approach for simulating the effect of gravitational instability (GI) in disks of young stellar objects with different disk-to-star mass ratios ξ. We adopt two α-parameterizations for the effective viscosity based on Lin and Pringle [Lin, D.N.C., Pringle, J.E., 1990. ApJ 358, 515] and Kratter et al. [Kratter, K.M., Matzner, Ch.D., Krumholz, M.R., 2008. ApJ 681, 375] and compare the resultant disk structure, disk and stellar masses, and mass accretion rates with those obtained directly from numerical simulations of self-gravitating disks around low-mass (M∗ ∼ 1.0M⊙) protostars. We find that the effective viscosity can, in principle, simulate the effect of GI in stellar systems with ξ ≲ 0.2-0.3, thus corroborating a similar conclusion by Lodato and Rice [Lodato, G., Rice, W.K.M., 2004. MNRAS 351, 630] that was based on a different α-parameterization. In particular, the Kratter et al. α-parameterization has proven superior to that of Lin and Pringle, because the success of the latter depends crucially on the proper choice of the α-parameter. However, the α-parameterization generally fails in stellar systems with ξ ≳ 0.3, particularly in the Class 0 and Class I phases of stellar evolution, yielding too small stellar masses and too large disk-to-star mass ratios. In addition, the time-averaged mass accretion rates onto the star are underestimated in the early disk evolution and greatly overestimated in the late evolution. The failure of the α-parameterization in the case of large ξ is caused by the growing strength of low-order spiral modes in massive disks. Only in the late Class II phase, when the magnitude of the spiral modes diminishes and mode-to-mode interaction ensues, may the effective viscosity be used to simulate the effect of GI in stellar systems with ξ ≳ 0.3. A simple modification of the effective viscosity that takes into account disk fragmentation can somewhat improve
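The α-parameterizations being compared rest on an effective viscosity of the form ν = α c_s H, with α tied to the disk's gravitational stability. The sketch below computes a Toomre Q and applies a linear ramp for α below a critical Q; the ramp is a hypothetical stand-in for illustration, not the actual Lin-Pringle or Kratter et al. prescription, and the disk numbers are invented:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def toomre_q(cs, kappa, sigma):
    """Toomre stability parameter Q = cs * kappa / (pi * G * Sigma)."""
    return cs * kappa / (math.pi * G * sigma)

def effective_alpha(q, alpha_max=0.1, q_crit=2.0):
    """Hypothetical GI-driven alpha: zero for stable disks (Q >= Q_crit),
    ramping up linearly as Q drops below Q_crit. This ramp is a
    stand-in, not the Lin-Pringle or Kratter et al. form."""
    if q >= q_crit:
        return 0.0
    return alpha_max * (1.0 - q / q_crit)

# Illustrative outer-disk values: cs = 300 m/s, kappa = 2e-10 s^-1,
# Sigma = 100 kg m^-2
q = toomre_q(300.0, 2e-10, 100.0)
alpha = effective_alpha(q)  # the effective viscosity is nu = alpha*cs*H
```

Any scheme of this type switches GI-driven transport on only where the disk is marginally stable, which is why it breaks down once ξ is large and a few global spiral modes, rather than local instability, dominate the torques.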
Further Evaluation of an Urban Canopy Parameterization using VTMX and Urban 2000 Data
Chin, H S; Leach, M J
2004-06-04
Almost two-thirds of the U.S. population live in urbanized areas occupying less than 2% of the landmass. Similar statistics of urbanization exist in other parts of the world. With the rapid growth of the world population, urbanization has become an important environmental and health issue. As a result, the interaction between the urban region and atmospheric processes becomes a very complex problem. Further understanding of this interaction via the surface and/or atmosphere is important for improving weather forecasts and for minimizing the losses caused by weather-related events, or even by chemical-biological threats. To this end, Brown and Williams (1998) first developed an urban canopy scheme to parameterize the urban infrastructure effect. This parameterization accounts for the effects of drag, turbulent production, radiation balance, and anthropogenic and rooftop heating. Further modification was made and tested in our recent sensitivity study for an idealized case using a mesoscale model. Results indicated that the addition of the rooftop surface energy equation enables this parameterization to simulate the urban infrastructure impact more realistically (Chin et al., 2000). To further improve the representation of the urban effect in the mesoscale model, USGS land-use data at two resolutions (200 and 30 meters) are adopted to derive the urban parameters via a look-up table approach (Leone et al., 2002; Chin et al., 2004). This approach provides the key parameters for urban infrastructure and urban surface characteristics needed to drive the urban canopy parameterization with geographic and temporal dependence. These urban characteristics include urban fraction, roof fraction, building height, anthropogenic heating, surface albedo, surface wetness, and surface roughness. The objective of this study is to evaluate the modified urban canopy parameterization (UCP) against observed measurements. Another objective is to
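The look-up-table approach described above can be sketched as a mapping from land-use classes to urban canopy parameters, area-weighted over each grid cell. The class names and values below are illustrative placeholders, not the actual tables of Leone et al. (2002) or Chin et al. (2004):

```python
# Hypothetical look-up table from land-use class to canopy parameters
URBAN_LUT = {
    "commercial":  {"urban_fraction": 0.90, "roof_fraction": 0.50,
                    "building_height_m": 25.0},
    "residential": {"urban_fraction": 0.60, "roof_fraction": 0.30,
                    "building_height_m": 8.0},
    "industrial":  {"urban_fraction": 0.80, "roof_fraction": 0.45,
                    "building_height_m": 12.0},
}

def grid_cell_params(class_fractions):
    """Area-weighted urban parameters for a grid cell, given the fraction
    of the cell covered by each land-use class."""
    total = sum(class_fractions.values())
    keys = next(iter(URBAN_LUT.values())).keys()
    return {k: sum(frac * URBAN_LUT[cls][k]
                   for cls, frac in class_fractions.items()) / total
            for k in keys}

cell = grid_cell_params({"commercial": 0.2, "residential": 0.8})
```

Aggregating high-resolution land-use pixels into per-cell fractions and then weighting the table entries is what gives the canopy parameterization its geographic dependence.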
NASA Astrophysics Data System (ADS)
Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo
2016-04-01
The effects of the propagation and breaking of atmospheric gravity waves have long been considered crucial for their impact on the circulation, especially in the stratosphere and mesosphere, between heights of 10 and 110 km. These waves, which in the Earth's atmosphere originate from surface orography (OGWs) or from transient (nonorographic) phenomena such as fronts and convective processes (NOGWs), have horizontal wavelengths between 10 and 1000 km, vertical wavelengths of several km, and frequencies spanning from minutes to hours. Orographic and nonorographic GWs must be accounted for in climate models to obtain a realistic simulation of the stratosphere in both hemispheres, since they can have a substantial impact on circulation and temperature, and hence an important role in ozone chemistry for chemistry-climate models. Several types of parameterization are currently employed in models, differing in formulation and in the values assigned to parameters, but the common aim is to quantify the effect of wave breaking on large-scale wind and temperature patterns. In the last decade, both global observations from satellite-borne instruments and the outputs of very high resolution climate models have provided insight into the variability and properties of the gravity wave field, and these results can be used to constrain some of the empirical parameters present in most parameterization schemes. A feature of the NOGW forcing that clearly emerges is its intermittency, linked with the nature of the sources: this property is absent in the majority of the models, in which NOGW parameterizations are uncoupled from other atmospheric phenomena, leading to results which display lower variability compared to observations. In this work, we analyze the climate simulated in AMIP runs of the MAECHAM5 model, which uses the Hines NOGW parameterization and has a fine vertical resolution suitable for capturing the effects of wave-mean flow interaction. We compare the results obtained with two
NASA Astrophysics Data System (ADS)
Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2009-04-01
An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 Reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points spaced 10 km apart. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely in the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields; hence the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (at daily and monthly scales) were analyzed for a 5-year period from 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from the observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially conflictive subregions where precipitation is scarcely captured, such as the Southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very
Barone, Vincenzo; Cacelli, Ivo; De Mitri, Nicola; Licari, Daniele; Monti, Susanna; Prampolini, Giacomo
2013-03-21
The Joyce program is augmented with several new features, including the user-friendly Ulysses GUI, the possibility of complete excited-state parameterization, and a more flexible treatment of the force field electrostatic terms. A first validation is achieved by successfully comparing results obtained with Joyce2.0 to those from the literature, obtained for the same set of benchmark molecules. The parameterization protocol is also applied to two other, larger molecules, namely nicotine and a coumarin-based dye. In the former case, the parameterized force field is employed in molecular dynamics simulations of solvated nicotine, and the solute conformational distribution at room temperature is discussed. Force fields parameterized with Joyce2.0, for both the dye's ground and first excited electronic states, are validated through the calculation of absorption and emission vertical energies with molecular mechanics optimized structures. Finally, the newly implemented procedure to handle polarizable force fields is discussed and applied to the pyrimidine molecule as a test case. PMID:23389748
NASA Astrophysics Data System (ADS)
McLandress, C.
1998-09-01
This tutorial paper discusses the problem of parameterizing unresolved gravity waves in general circulation models (GCMs) of the middle atmosphere. For readers who are unfamiliar with middle atmosphere dynamics, a review of the basic dynamics of both the large-scale circulation and internal gravity waves is presented. A fairly detailed and physically based description is given of several gravity wave drag (GWD) schemes that are currently employed in middle atmosphere GCMs. These include the parameterizations of McFarlane (1987), Medvedev and Klaassen (1995), and Hines (1997a, b), which are used in the Canadian Middle Atmosphere Model, as well as the parameterization of Fritts and Lu (1993), which is used in the TIME-GCM. Results from a mechanistic model and the two above-mentioned GCMs are presented and discussed. This paper is not intended as a review of all GWD parameterizations, nor is it meant as a quantitative comparison of the schemes that have been chosen.
An intermediate process-based fire parameterization in Dynamic Global Vegetation Model
NASA Astrophysics Data System (ADS)
Li, F.; Zeng, X.
2011-12-01
An intermediate process-based fire parameterization has been developed for global fire simulation. It fits the framework of the Dynamic Global Vegetation Model (DGVM), which has become a pivotal component of Earth System Models (ESMs). The fire parameterization comprises three parts: fire occurrence, fire spread, and fire impact. In the first part, the number of fires is determined by ignition counts due to anthropogenic and natural causes, subject to three constraints: fuel load, fuel moisture, and human suppression. Human-caused ignition and suppression are explicitly treated as nonlinear functions of population density. Fire counts, rather than fire occurrence probability, are estimated to avoid underestimating the observed high burned-area fraction in tropical savannas, where fire occurs frequently. In the second part, the post-fire region is assumed to be elliptical in shape, with the wind direction along the major axis and the point of ignition at one of the foci. Burned area is determined by the fire spread rate, fire duration, and fire counts. Mathematical characteristics of the ellipse and some mathematical derivations are used to avoid redundant and unreasonable equations and assumptions in CTEM-FIRE and to make the parameterization equations self-consistent. In the third part, the impacts of fire on vegetation composition and structure, the carbon cycle, and trace gas and aerosol emissions are taken into account. The new estimates of trace gas and aerosol emissions due to biomass burning offer an interface with aerosol and atmospheric chemistry models in ESMs. Furthermore, in the new fire parameterization, the fire occurrence and fire spread parts can be updated hourly or daily, and the fire impact part can be updated daily, monthly, or annually. This flexibility in the choice of time-step length makes it easy to apply to various DGVMs. The improved Community Land Model 3.0 Dynamic Global Vegetation Model (CLM-DGVM) is used as the model platform to assess the global performance of the new
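The elliptical fire-shape assumption in the second part can be sketched in code. This is a minimal illustration, not the paper's exact formulation: the assumption that the major-axis length equals forward spread rate times duration, and the fixed length-to-breadth ratio, are simplifications introduced here.

```python
import math

def burned_area(spread_rate_mps, duration_s, length_breadth_ratio, fire_count):
    """Total burned area assuming each fire grows as an ellipse with the
    wind direction along the major axis and ignition at one focus.

    Illustrative simplification: the major-axis length is taken as the
    forward spread rate times the fire duration, and the minor axis is
    the major axis divided by a prescribed length-to-breadth ratio.
    """
    length = spread_rate_mps * duration_s        # major-axis length (m)
    breadth = length / length_breadth_ratio      # minor-axis length (m)
    area_one_fire = math.pi / 4.0 * length * breadth  # ellipse area (m^2)
    return fire_count * area_one_fire
```

Burned area then scales linearly with fire counts and quadratically with spread rate times duration, which is why the scheme estimates counts explicitly rather than an occurrence probability.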
Fiber optic probe for light scattering measurements
Nave, Stanley E.; Livingston, Ronald R.; Prather, William S.
1995-01-01
A fiber optic probe and a method for using the probe for light scattering analyses of a sample. The probe includes a probe body with an inlet for admitting a sample into an interior sample chamber, a first optical fiber for transmitting light from a source into the chamber, and a second optical fiber for transmitting light to a detector such as a spectrophotometer. The interior surface of the probe carries a coating that substantially prevents non-scattered light from reaching the second fiber. The probe is placed in a region where the presence and concentration of an analyte of interest are to be detected, and a sample is admitted into the chamber. Exciting light is transmitted into the sample chamber by the first fiber, where the light interacts with the sample to produce Raman-scattered light. At least some of the Raman-scattered light is received by the second fiber and transmitted to the detector for analysis. Two Raman spectra are measured, at different pressures. The first spectrum is subtracted from the second to remove background effects, and the resulting sample Raman spectrum is compared to a set of stored library spectra to determine the presence and concentration of the analyte.
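The two-pressure background subtraction and library matching described above can be sketched as follows. This is a hedged illustration only: the use of a normalized-correlation score, and all function and variable names, are assumptions introduced here, not the patented procedure.

```python
import numpy as np

def identify_analyte(spectrum_low_p, spectrum_high_p, library):
    """Subtract the low-pressure Raman spectrum from the high-pressure one
    to remove background effects, then score the result against each
    stored library spectrum by normalized correlation (an assumed metric).

    `library` maps analyte name -> reference spectrum (same wavelength grid).
    Returns the best-matching analyte name and the full score dictionary.
    """
    sample = np.asarray(spectrum_high_p, float) - np.asarray(spectrum_low_p, float)
    norm = np.linalg.norm(sample)
    sample = sample / (norm if norm > 0 else 1.0)
    scores = {name: float(np.dot(sample, ref / np.linalg.norm(ref)))
              for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return best, scores
```

A score near 1.0 indicates a close spectral match; concentration could then be estimated from the subtracted spectrum's intensity, which this sketch does not attempt.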
Fiber optic probe for light scattering measurements
Nave, S.E.; Livingston, R.R.; Prather, W.S.
1993-01-01
This invention comprises a fiber optic probe and a method for using the probe for light scattering analyses of a sample. The probe includes a probe body with an inlet for admitting a sample into an interior sample chamber, a first optical fiber for transmitting light from a source into the chamber, and a second optical fiber for transmitting light to a detector such as a spectrophotometer. The interior surface of the probe carries a coating that substantially prevents non-scattered light from reaching the second fiber. The probe is placed in a region where the presence and concentration of an analyte of interest are to be detected, and a sample is admitted into the chamber. Exciting light is transmitted into the sample chamber by the first fiber, where the light interacts with the sample to produce Raman-scattered light. At least some of the Raman-scattered light is received by the second fiber and transmitted to the detector for analysis. Two Raman spectra are measured, at different pressures. The first spectrum is subtracted from the second to remove background effects, and the resulting sample Raman spectrum is compared to a set of stored library spectra to determine the presence and concentration of the analyte.
Hak, David J; McElvany, Matthew
2008-02-01
Despite advances in metallurgy, fatigue failure of hardware is common when a fracture fails to heal. Revision procedures can be difficult, usually requiring removal of intact or broken hardware. Several different methods may need to be attempted to successfully remove intact or broken hardware. Broken intramedullary nail cross-locking screws may be advanced out by impacting with a Steinmann pin. Broken open-section (Küntscher type) intramedullary nails may be removed using a hook. Closed-section cannulated intramedullary nails require additional techniques, such as the use of guidewires or commercially available extraction tools. Removal of broken solid nails requires use of a commercial ratchet grip extractor or a bone window to directly impact the broken segment. Screw extractors, trephines, and extraction bolts are useful for removing stripped or broken screws. Cold-welded screws and plates can complicate removal of locked implants and require the use of carbide drills or high-speed metal cutting tools. Hardware removal can be a time-consuming process, and no single technique is uniformly successful. PMID:18252842
NASA Technical Reports Server (NTRS)
Bretherton, Christopher S.
1998-01-01
The goal of this project was to compare observations of marine and arctic boundary layers with (i) parameterization systems used in climate and weather forecast models, and (ii) two and three dimensional eddy resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type and thickness as functions of large scale conditions that are predicted by global climate models.
Random number generation from spontaneous Raman scattering
NASA Astrophysics Data System (ADS)
Collins, M. J.; Clark, A. S.; Xiong, C.; Mägi, E.; Steel, M. J.; Eggleton, B. J.
2015-10-01
We investigate the generation of random numbers via the quantum process of spontaneous Raman scattering. Spontaneous Raman photons are produced by illuminating a highly nonlinear chalcogenide glass ( As 2 S 3 ) fiber with a CW laser at a power well below the stimulated Raman threshold. Single Raman photons are collected and separated into two discrete wavelength detuning bins of equal scattering probability. The sequence of photon detection clicks is converted into a random bit stream. Postprocessing is applied to remove detector bias, resulting in a final bit rate of ˜650 kb/s. The collected random bit-sequences pass the NIST statistical test suite for one hundred 1 Mb samples, with the significance level set to α = 0.01 . The fiber is stable, robust and the high nonlinearity (compared to silica) allows for a short fiber length and low pump power favourable for real world application.
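The abstract does not specify which postprocessing is applied to remove detector bias. A common choice for raw physical bit streams of this kind is von Neumann debiasing, sketched here as an assumed example rather than the authors' actual method:

```python
def von_neumann_debias(bits):
    """Remove bias from a raw bit stream by examining non-overlapping
    pairs: emit the first bit when the pair differs (0,1 -> 0; 1,0 -> 1)
    and discard identical pairs (0,0 and 1,1).

    Output bits are unbiased provided the input bits are independent,
    at the cost of discarding at least half of the raw stream.
    """
    out = []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out
```

The reduction from the raw detection-click rate to the final ~650 kb/s is consistent with this kind of rate penalty, though the exact procedure used is not stated in the abstract.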
Hamilton, H W; Hamilton, K R; Lone, F J
1977-05-01
This study compares the efficiency, safety and cost of hair removal before surgery, with a safety razor, an electric clipper and a depilatory. It was found that both the razor and the clipper damaged the surface of the skin, while the depilatory caused a mild lymphocytic reaction in the upper dermis. The depilatory was expensive and may cause sensitivity reactions in a few individuals, but was found to be the easiest and most efficient method of removing hair. It was concluded that if hair has to be removed a depilatory is the agent of choice. PMID:870157
NASA Technical Reports Server (NTRS)
Schaetzel, Klaus
1989-01-01
Since the development of laser light sources and fast digital electronics for signal processing, the classical discipline of light scattering on liquid systems experienced a strong revival plus an enormous expansion, mainly due to new dynamic light scattering techniques. While a large number of liquid systems can be investigated, ranging from pure liquids to multicomponent microemulsions, this review is largely restricted to applications on Brownian particles, typically in the submicron range. Static light scattering, the careful recording of the angular dependence of scattered light, is a valuable tool for the analysis of particle size and shape, or of their spatial ordering due to mutual interactions. Dynamic techniques, most notably photon correlation spectroscopy, give direct access to particle motion. This may be Brownian motion, which allows the determination of particle size, or some collective motion, e.g., electrophoresis, which yields particle mobility data. Suitable optical systems as well as the necessary data processing schemes are presented in some detail. Special attention is devoted to topics of current interest, like correlation over very large lag time ranges or multiple scattering.
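The particle sizing via Brownian motion mentioned above rests on the Stokes-Einstein relation, which converts the diffusion coefficient extracted from photon correlation spectroscopy into a hydrodynamic radius. A minimal sketch (function and parameter names are ours):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(diffusion_m2_s, temperature_K, viscosity_Pa_s):
    """Stokes-Einstein relation: radius (m) of a Brownian sphere from its
    translational diffusion coefficient D (m^2/s) in a solvent of given
    temperature (K) and dynamic viscosity (Pa*s):  r = k_B T / (6 pi eta D).
    """
    return K_B * temperature_K / (6.0 * math.pi * viscosity_Pa_s * diffusion_m2_s)
```

For submicron particles in water near room temperature, D is of order 1e-12 m^2/s, comfortably within the lag-time ranges accessible to photon correlation spectroscopy.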
Erin A. Miller; Joseph A. Caggiano; Robert C. Runkle; Timothy A. White; Aaron M. Bevill
2011-03-01
As a complement to passive detection systems, radiographic inspection of cargo is an increasingly important tool for homeland security because it has the potential to detect highly attenuating objects associated with special nuclear material or surrounding shielding, in addition to screening for items such as drugs or contraband. Radiographic detection of such threat objects relies on high image contrast between regions of different density and atomic number (Z). Threat detection is affected by scatter of the interrogating beam in the cargo, the radiographic system itself, and the surrounding environment, which degrades image contrast. Here, we estimate the extent to which scatter plays a role in radiographic imaging of cargo containers. Stochastic transport simulations were performed to determine which details of the radiography equipment and surrounding environment are important in reproducing measured data, and to investigate scatter magnitudes for typical cargo. We find that scatter plays a stronger role in cargo radiography than in typical medical imaging scenarios, even for low-density cargo, with scatter-to-primary ratios ranging from 0.14 for very low density cargo, to between 0.20 and 0.40 for typical cargo, and higher yet for dense cargo.
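The contrast degradation caused by scatter is often summarized by C' = C / (1 + SPR) under the simplifying assumption of a uniform scatter field added to both regions being compared; that assumption, and the function below, are ours, not the paper's:

```python
def degraded_contrast(subject_contrast, scatter_to_primary):
    """Image contrast remaining after a uniform scatter fluence is added
    to both the object and background regions:  C' = C / (1 + SPR).

    subject_contrast: scatter-free contrast, 0..1
    scatter_to_primary: scatter-to-primary ratio (SPR), >= 0
    """
    return subject_contrast / (1.0 + scatter_to_primary)
```

At the quoted SPR range of 0.20 to 0.40 for typical cargo, this simple model predicts that roughly 17 to 29 percent of the scatter-free contrast is lost, illustrating why scatter matters more here than in typical medical imaging.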
NASA Astrophysics Data System (ADS)
Schaetzel, Klaus
1989-08-01
Since the development of laser light sources and fast digital electronics for signal processing, the classical discipline of light scattering on liquid systems experienced a strong revival plus an enormous expansion, mainly due to new dynamic light scattering techniques. While a large number of liquid systems can be investigated, ranging from pure liquids to multicomponent microemulsions, this review is largely restricted to applications on Brownian particles, typically in the submicron range. Static light scattering, the careful recording of the angular dependence of scattered light, is a valuable tool for the analysis of particle size and shape, or of their spatial ordering due to mutual interactions. Dynamic techniques, most notably photon correlation spectroscopy, give direct access to particle motion. This may be Brownian motion, which allows the determination of particle size, or some collective motion, e.g., electrophoresis, which yields particle mobility data. Suitable optical systems as well as the necessary data processing schemes are presented in some detail. Special attention is devoted to topics of current interest, like correlation over very large lag time ranges or multiple scattering.