Sample records for parameterized scatter removal

  1. Parameterization of single-scattering properties of snow

    NASA Astrophysics Data System (ADS)

    Räisänen, P.; Kokhanovsky, A.; Guyot, G.; Jourdan, O.; Nousiainen, T.

    2015-02-01

    Snow consists of non-spherical grains of various shapes and sizes. Still, in many radiative transfer applications, single-scattering properties of snow have been based on the assumption of spherical grains. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat phase function typical of deformed non-spherical particles, this is still a rather ad-hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ = 0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function P11 as functions of the size parameter and the real and imaginary parts of the refractive index. The parameterizations are analytic and simple to use in radiative transfer models. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons to spheres and distorted Koch fractals.
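The parameterization takes the size parameter and the complex refractive index as inputs. As a minimal sketch (the fitted equations and coefficients of the paper are not reproduced here), the size parameter for a volume-to-projected-area equivalent radius is:

```python
import math

def size_parameter(r_vp_um, wavelength_um):
    """Size parameter x = 2*pi*r_vp / lambda, with the volume-to-projected-area
    equivalent radius r_vp and the wavelength both in micrometres."""
    return 2.0 * math.pi * r_vp_um / wavelength_um

# A 100 um snow grain at a visible wavelength of 0.5 um: deep in the
# geometric-optics regime.
x_vis = size_parameter(100.0, 0.5)
# The same grain at 2.7 um, the long-wavelength edge of the fitted range.
x_nir = size_parameter(100.0, 2.7)
```

Over the parameterized range (rvp = 10-2000 μm, λ = 0.199-2.7 μm) the size parameter stays well above unity, which is why geometric-optics-based habit models such as droxtals and Koch fractals are applicable.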

  2. Parameterization of single-scattering properties of snow

    NASA Astrophysics Data System (ADS)

    Räisänen, P.; Kokhanovsky, A.; Guyot, G.; Jourdan, O.; Nousiainen, T.

    2015-06-01

    Snow consists of non-spherical grains of various shapes and sizes. Still, in many radiative transfer applications, single-scattering properties of snow have been based on the assumption of spherical grains. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat phase function typical of deformed non-spherical particles, this is still a rather ad hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ = 0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function P11 as functions of the size parameter and the real and imaginary parts of the refractive index. The parameterizations are analytic and simple to use in radiative transfer models. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons to spheres and distorted Koch fractals.

  3. Parameterization of single-scattering properties of snow

    NASA Astrophysics Data System (ADS)

    Räisänen, Petri; Kokhanovsky, Alexander; Guyot, Gwennole; Jourdan, Olivier; Nousiainen, Timo

    2015-04-01

    Snow consists of non-spherical ice grains of various shapes and sizes, which are surrounded by air and sometimes covered by films of liquid water. Still, in many studies, homogeneous spherical snow grains have been assumed in radiative transfer calculations, due to the convenience of using Mie theory. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat scattering phase function typical of deformed non-spherical particles, this is still a rather ad-hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ=0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function as functions of the size parameter and the real and imaginary parts of the refractive index. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons with spheres and distorted Koch fractals. Further evaluation and validation of the proposed approach against (e.g.) bidirectional reflectance and polarization measurements for snow is planned. At any rate, it seems safe to assume that the OHC selected here

  4. Cross section parameterizations for cosmic ray nuclei. 1: Single nucleon removal

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Townsend, Lawrence W.

    1992-01-01

Parameterizations of single nucleon removal from electromagnetic and strong interactions of cosmic rays with nuclei are presented. These parameterizations are based upon the most accurate theoretical calculations available to date. They should be well suited for use in studies of cosmic ray propagation through interstellar space, the Earth's atmosphere, lunar samples, meteorites, spacecraft walls, and lunar and Martian habitats.

  5. The influence of Cloud Longwave Scattering together with a state-of-the-art Ice Longwave Optical Parameterization in Climate Model Simulations

    NASA Astrophysics Data System (ADS)

    Chen, Y. H.; Kuo, C. P.; Huang, X.; Yang, P.

    2017-12-01

Clouds play an important role in the Earth's radiation budget, and thus realistic and comprehensive treatments of cloud optical properties and cloudy-sky radiative transfer are crucial for simulating weather and climate. However, most GCMs neglect LW scattering effects by clouds and tend to use inconsistent cloud SW and LW optical parameterizations. Recently, co-authors of this study developed a new LW optical-property parameterization for ice clouds, based on ice cloud particle statistics from MODIS measurements and state-of-the-art scattering calculations. A two-stream multiple-scattering scheme has also been implemented in RRTMG_LW, a longwave radiation scheme widely used by climate modeling centers. This study integrates both the new LW cloud-radiation scheme for ice clouds and the scattering-enabled RRTMG_LW into the NCAR CESM to improve the treatment of cloud longwave radiation. A number of single-column model (SCM) simulations using observations from the ARM SGP site from July 18 to August 4, 1995, are carried out to assess the impact of the new LW cloud optical properties and the scattering-enabled radiation scheme on the simulated radiation budget and cloud radiative effect (CRE). The SCM simulations allow the cloud and radiation schemes to interact with the other parameterizations, while the large-scale forcing is prescribed or nudged. Compared to the results from the SCM of the standard CESM, the new ice cloud optical properties alone lead to an increase of the LW CRE by 26.85 W m-2 on average, as well as an increase of the downward LW flux at the surface by 6.48 W m-2. Enabling LW cloud scattering further increases the LW CRE by another 3.57 W m-2 and the downward LW flux at the surface by 0.2 W m-2. The change in LW CRE is mainly due to an increase of cloud top height. A long-term CESM simulation will be carried out to further understand the impact of such changes on the simulated climate.
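The sign convention behind the reported LW CRE numbers can be sketched as follows; the flux values used here are illustrative placeholders, not results from the study:

```python
def lw_cre_toa(olr_clear, olr_allsky):
    """Longwave cloud radiative effect at the top of the atmosphere: clouds
    reduce outgoing longwave radiation, so CRE_LW = OLR_clear - OLR_all-sky
    (W m-2, positive for a warming effect)."""
    return olr_clear - olr_allsky

# Illustrative values only: a clear-sky OLR of 290 W m-2 and an all-sky OLR
# of 250 W m-2 give a +40 W m-2 LW CRE; the abstract's reported changes
# (+26.85 and +3.57 W m-2) are increments to a number of this kind.
cre = lw_cre_toa(290.0, 250.0)
```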

  6. The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures

    NASA Technical Reports Server (NTRS)

    Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

The microphysical parameterization of clouds and rain cells plays a central role in the atmospheric forward radiative transfer models used to calculate passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm, using a five-phase hydrometeor model in a planar-stratified, scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the rain drop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection And Moisture EXperiment (CAMEX-3) brightness temperatures shows that, in general, all but two parameterizations produce calculated T(sub B)'s that fall within the observed clear-air minima and maxima. The exceptions are parameterizations that enhance the scattering characteristics of frozen hydrometeors.

  7. Scattering Removal for Finger-Vein Image Restoration

    PubMed Central

    Yang, Jinfeng; Zhang, Ben; Shi, Yihua

    2012-01-01

    Finger-vein recognition has received increased attention recently. However, the finger-vein images are always captured in poor quality. This certainly makes finger-vein feature representation unreliable, and further impairs the accuracy of finger-vein recognition. In this paper, we first give an analysis of the intrinsic factors causing finger-vein image degradation, and then propose a simple but effective image restoration method based on scattering removal. To give a proper description of finger-vein image degradation, a biological optical model (BOM) specific to finger-vein imaging is proposed according to the principle of light propagation in biological tissues. Based on BOM, the light scattering component is sensibly estimated and properly removed for finger-vein image restoration. Finally, experimental results demonstrate that the proposed method is powerful in enhancing the finger-vein image contrast and in improving the finger-vein image matching accuracy. PMID:22737028
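The paper's BOM-based estimate is not reproduced here; the following is a generic, dehazing-style scattering-subtraction sketch of the same idea. The uniform scatter estimate and the transmission floor are illustrative assumptions, not the paper's method:

```python
import numpy as np

def remove_scatter(image, scatter, transmission_floor=0.1):
    """Generic scattering-removal restoration: subtract an estimated
    scattering component and renormalize by the remaining transmission.
    This stretches the contrast of the direct (vein) signal."""
    image = image.astype(float)
    scatter = scatter.astype(float)
    # Crude transmission estimate from the scatter level; floored to avoid
    # amplifying noise where almost no direct light survives.
    transmission = np.clip(1.0 - scatter / image.max(), transmission_floor, 1.0)
    return np.clip((image - scatter) / transmission, 0.0, image.max())

img = np.array([[120.0, 80.0], [60.0, 200.0]])   # toy captured image
sca = np.full_like(img, 40.0)                    # uniform scatter estimate
restored = remove_scatter(img, sca)
```

After removal, the dynamic range between the darkest (vein) and brightest (tissue) pixels widens, which is the contrast-enhancement effect the paper reports.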

  8. A Thermal Infrared Radiation Parameterization for Atmospheric Studies

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)

    2001-01-01

This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysics Laboratory HITRAN data, the parameterization includes absorption by the major gases (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as by clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, different approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed using either the k-distribution method or a table look-up method. To include the effect of scattering by clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can compute fluxes accurately to within 1% of high-spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
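The scattering adjustment mentioned above can be sketched with a commonly used similarity scaling of the cloud optical thickness; whether this exact form is the one used in the memorandum is an assumption:

```python
def scaled_optical_thickness(tau, omega, g):
    """Scale the optical thickness so an absorption-only LW scheme mimics
    scattering: tau' = (1 - omega*(1 + g)/2) * tau.  A common similarity
    form (assumed here, not quoted from the memorandum); forward-scattered
    radiation is treated as unextinguished, shrinking the effective tau."""
    return (1.0 - 0.5 * omega * (1.0 + g)) * tau

# A cloud layer with tau = 2, omega = 0.6, g = 0.9 in a thermal IR band:
tau_eff = scaled_optical_thickness(2.0, 0.6, 0.9)
```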

  9. The Impact of Microstructure on an Accurate Snow Scattering Parameterization at Microwave Wavelengths

    NASA Astrophysics Data System (ADS)

    Honeyager, Ryan

High frequency microwave instruments are increasingly used to observe ice clouds and snow. These instruments are significantly more sensitive than conventional precipitation radar. This is ideal for analyzing ice-bearing clouds, for ice particles are tenuously distributed and have effective densities that are far less than liquid water. However, at shorter wavelengths, the electromagnetic response of ice particles is no longer solely dependent on particle mass. The shape of the ice particles also plays a significant role. Thus, in order to understand the observations of high frequency microwave radars and radiometers, it is essential to model the scattering properties of snowflakes correctly. Several research groups have proposed detailed models of snow aggregation. These particle models are coupled with computer codes that determine the particles' electromagnetic properties. However, there is a discrepancy between the particle model outputs and the requirements of the electromagnetic models. Snowflakes have countless variations in structure, but we also know that physically similar snowflakes scatter light in much the same manner. Structurally exact electromagnetic models, such as the discrete dipole approximation (DDA), require a high degree of structural resolution. Such methods are slow, spending considerable time processing redundant (i.e. useless) information. Conversely, when using techniques that incorporate too little structural information, the resultant radiative properties are not physically realistic. Then, we ask the question, what features are most important in determining scattering? This dissertation develops a general technique that can quickly parameterize the important structural aspects that determine the scattering of many diverse snowflake morphologies. A Voronoi bounding neighbor algorithm is first employed to decompose aggregates into well-defined interior and surface regions. The sensitivity of scattering to interior randomization is then

  10. Assessing the performance of wave breaking parameterizations in shallow waters in spectral wave models

    NASA Astrophysics Data System (ADS)

    Lin, Shangfei; Sheng, Jinyu

    2017-12-01

Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed for the depth-induced wave-breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations lie in their representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations perform reasonably in parameterizing depth-induced wave breaking in shallow waters, but each has its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) underpredicts SWHs in locally generated wave conditions and overpredicts them in remotely generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 has relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization in which the breaker index depends on the normalized water depth in deep waters, as in SA15, while in shallow waters it has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, the new parameterization has the best performance, with an average scatter index of ∼8.2%, compared with average scatter indices between 9.2% and 13.6% for the three best-performing existing parameterizations.
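The two BJ78-style ingredients discussed above (breaker index and fraction of breaking waves) can be sketched as follows; the fixed-point solver and the default γ = 0.73 are standard textbook choices, not values taken from this study:

```python
import math

def bj78_max_height(depth, gamma=0.73):
    """Depth-limited maximum wave height H_max = gamma * d, with the
    classic BJ78-style constant breaker index gamma = 0.73."""
    return gamma * depth

def fraction_breaking(hrms, hmax):
    """Fraction of breaking waves Q_b from the implicit BJ78 relation
    (1 - Q_b)/ln(Q_b) = -(Hrms/Hmax)^2, solved by fixed-point iteration
    Q_b <- exp((Q_b - 1)/(Hrms/Hmax)^2)."""
    b2 = (hrms / hmax) ** 2
    if b2 >= 1.0:
        return 1.0          # saturated: all waves breaking
    qb = 0.5
    for _ in range(100):
        qb = math.exp((qb - 1.0) / b2)
    return qb

# In 2 m of water: H_max = 1.46 m; moderate waves break only rarely.
hmax = bj78_max_height(2.0)
qb = fraction_breaking(0.5, hmax)
```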

  11. A Solar Radiation Parameterization for Atmospheric Studies. Volume 15

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Suarez, Max J. (Editor)

    1999-01-01

The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes absorption by water vapor, O3, O2, CO2, clouds, and aerosols, and scattering by clouds, aerosols, and gases. Depending upon the nature of the absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.
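The Delta-Eddington approximation mentioned at the end rescales the cloud optical properties to fold the forward-scattering peak into the unscattered beam. A standard sketch of the scaling, with the usual forward-peak fraction f = g²:

```python
def delta_eddington(tau, omega, g):
    """Delta-Eddington similarity scaling: with forward-peak fraction
    f = g**2, return the scaled (tau', omega', g') used in the two-stream
    flux calculation."""
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s

# A typical water cloud in the visible: strongly forward-scattering.
tau_s, omega_s, g_s = delta_eddington(10.0, 0.99, 0.85)
```

The scaled cloud is optically thinner and less forward-scattering, which is what makes the two-stream flux calculation accurate despite the sharp forward peak of real droplet phase functions.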

  12. Evaluating Model Parameterizations of Submicron Aerosol Scattering and Absorption with in situ Data from ARCTAS 2008

    NASA Technical Reports Server (NTRS)

Alvarado, Matthew J.; Lonsdale, Chantelle R.; Macintyre, Helen L.; Bian, Huisheng; Chin, Mian; Ridley, David A.; Heald, Colette L.; Thornhill, Kenneth L.; Anderson, Bruce E.; Cubison, Michael J.; et al.

    2016-01-01

Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosol and Clouds (OPAC v3.1) package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP v2.1) to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10-23 percent, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass fraction
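In an externally mixed, fixed-size-distribution scheme like those evaluated here, each species scatters independently and the ambient scattering coefficient is a mass-weighted sum. The mass scattering efficiencies below are illustrative placeholders, not values from any of the four models:

```python
# Hypothetical mass scattering efficiencies (m2 g-1) at 550 nm for an
# externally mixed aerosol; the species names and values are illustrative.
MSE_550 = {"sulfate": 3.5, "organic": 3.0, "bc": 0.4}

def external_mix_scattering(mass_ug_m3):
    """Scattering coefficient of an external mixture: b_sca = sum_i MSE_i * m_i.
    With MSE in m2 g-1 and mass in ug m-3, the result is in Mm-1
    (1 m2 g-1 * 1 ug m-3 = 1e-6 m-1 = 1 Mm-1)."""
    return sum(MSE_550[s] * m for s, m in mass_ug_m3.items())

b_sca = external_mix_scattering({"sulfate": 2.0, "organic": 1.0, "bc": 0.2})
```

The point of the closure study is that errors in such per-species efficiencies (and in the assumed BC mixing state) show up directly when b_sca is compared against the in situ nephelometer measurements.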

  13. Evaluating model parameterizations of submicron aerosol scattering and absorption with in situ data from ARCTAS 2008

    NASA Astrophysics Data System (ADS)

    Alvarado, Matthew J.; Lonsdale, Chantelle R.; Macintyre, Helen L.; Bian, Huisheng; Chin, Mian; Ridley, David A.; Heald, Colette L.; Thornhill, Kenneth L.; Anderson, Bruce E.; Cubison, Michael J.; Jimenez, Jose L.; Kondo, Yutaka; Sahu, Lokesh K.; Dibb, Jack E.; Wang, Chien

    2016-07-01

    Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosol and Clouds (OPAC v3.1) package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP v2.1) to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10-23 %, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass fraction

  14. Parameterization of Cloud Optical Properties for a Mixture of Ice Particles for use in Atmospheric Models

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

Based on the single-scattering optical properties pre-computed using an improved geometric optics method, the bulk mass absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the mean effective particle size of a mixture of ice habits. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. Compared to the parameterization for a single habit of hexagonal columns, the solar heating of clouds computed with the parameterization for a mixture of habits is smaller due to a smaller single-scattering co-albedo, whereas the net downward fluxes at the TOA and surface are larger due to a larger asymmetry factor. The maximum difference in the cloud heating rate is approximately 0.2 C per day, which occurs in clouds with an optical thickness greater than 3 and a solar zenith angle less than 45 degrees. The flux difference is less than 10 W per square meter for optical thicknesses ranging from 0.6 to 10 and the entire range of solar zenith angles. The maximum flux difference is approximately 3%, which occurs around an optical thickness of 1 and at high solar zenith angles.

  15. Parameterization of Shortwave Cloud Optical Properties for a Mixture of Ice Particle Habits for use in Atmospheric Models

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the effective particle size of a mixture of ice habits, the ice water amount, and the spectral band. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of each individual ice habit.
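The leading dependence such bulk parameterizations capture is that, for a fixed ice water path, optical thickness varies inversely with effective particle size. A generic geometric-optics estimate of that 1/De term (not the paper's fitted coefficients):

```python
RHO_ICE = 0.917e6   # density of ice in g m-3

def ice_optical_thickness(iwp_g_m2, d_eff_um):
    """Generic geometric-optics estimate of visible cloud optical thickness
    for ice: tau ~ 3 * IWP / (rho_ice * D_eff).  This is the leading
    1/D_e behavior that effective-size parameterizations refine with
    habit-dependent fitted coefficients."""
    d_eff_m = d_eff_um * 1.0e-6
    return 3.0 * iwp_g_m2 / (RHO_ICE * d_eff_m)

# A cirrus layer with IWP = 100 g m-2 and D_eff = 60 um:
tau = ice_optical_thickness(100.0, 60.0)
```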

  16. Kaon-nucleus scattering

    NASA Technical Reports Server (NTRS)

    Hong, Byungsik; Buck, Warren W.; Maung, Khin M.

    1989-01-01

Two kinds of number density distributions of the nucleus, the harmonic-well and Woods-Saxon models, are used with a t-matrix taken from scattering experiments to construct a simple optical potential. The parameterized two-body inputs, which are the kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to the imaginary part of the forward elastic scattering amplitude, are shown. The eikonal approximation was chosen as the solution method to estimate the total and absorptive cross sections for kaon-nucleus scattering.
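In the strong-absorption (black-disk) limit, the eikonal approximation reduces to geometric cross sections set by the nuclear radius alone; a hedged sketch (the realistic calculation instead folds in the parameterized two-body inputs listed above):

```python
import math

def eikonal_black_disk(radius_fm):
    """Black-disk limit of the eikonal model: trajectories with impact
    parameter b < R are fully absorbed, so sigma_abs = pi*R^2, and the
    accompanying diffractive shadow doubles the total: sigma_tot = 2*pi*R^2.
    Returned in millibarns (1 fm^2 = 10 mb)."""
    sigma_abs = math.pi * radius_fm ** 2 * 10.0
    return 2.0 * sigma_abs, sigma_abs

# R ~ 1.2 * A^(1/3) fm for a nucleus of mass number A (carbon: A = 12).
radius = 1.2 * 12 ** (1.0 / 3.0)
sigma_tot, sigma_abs = eikonal_black_disk(radius)
```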

  17. A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals

    NASA Technical Reports Server (NTRS)

    VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.

    2014-01-01

A parameterization is presented that provides the extinction cross section σe, single-scattering albedo ω, and asymmetry parameter g of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave. Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion levels. In the standard geometric optics implementation used here, σe is always twice the particle projected area. It is shown that ω is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of g with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and ω are investigated, and factors are determined to scale the parameterized g to provide values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 μm, revealing absolute differences with reference calculations of ω and g that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences with reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.
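Two quantities from this abstract can be made concrete: the geometric-optics extinction cross section (twice the projected area) and an absorption size parameter combining the imaginary refractive index with the volume-to-projected-area ratio. The exact definition used in the paper is not reproduced here, so the second expression is an assumed illustrative form:

```python
import math

def extinction_cross_section(projected_area):
    """Standard geometric-optics result used in the scheme: the extinction
    cross section is twice the projected area (extinction efficiency = 2)."""
    return 2.0 * projected_area

def absorption_size_parameter(volume, projected_area, wavelength, m_imag):
    """Illustrative absorption size parameter: imaginary refractive index
    m_imag times a size parameter built on the volume-to-projected-area
    ratio.  The paper's exact definition may differ; form assumed here."""
    return 4.0 * math.pi * m_imag * (volume / projected_area) / wavelength
```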

  18. Kaon-nucleus scattering

    NASA Technical Reports Server (NTRS)

    Hong, Byungsik; Maung, Khin Maung; Wilson, John W.; Buck, Warren W.

    1989-01-01

The derivations of the Lippmann-Schwinger equation and the Watson multiple scattering series are given. A simple optical potential is found to be the first term of that series. The number density distribution models of the nucleus, harmonic well and Woods-Saxon, are used with the t-matrix taken from scattering experiments. The parameterized two-body inputs, which are the kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to the imaginary part of the forward elastic scattering amplitude, are presented. The eikonal approximation was chosen as our solution method to estimate the total and absorptive cross sections for kaon-nucleus scattering.

  19. Seed removal by scatter-hoarding rodents: the effects of tannin and nutrient concentration.

    PubMed

    Wang, Bo; Yang, Xiaolan

    2015-04-01

The mutualistic interaction between scatter-hoarding rodents and seed plants has a long co-evolutionary history. Plants are believed to have evolved traits that influence the foraging behavior of rodents, thus increasing the probability of seed removal and caching, which benefits the establishment of seedlings. Tannin and nutrient content in seeds are considered among the most essential factors in this plant-animal interaction. However, most previous studies used seeds of different plant species, making it difficult to tease apart the effect of any single nutrient on rodent foraging behavior due to confounding combinations of nutrient contents across seed species. Hence, to further explore how tannin and different nutritional traits of seeds affect scatter-hoarding rodent foraging preferences, we manipulated tannin, fat, protein and starch content levels, as well as seed size, using an artificial seed system. Our results showed that both tannin and the various nutrients significantly affected rodent foraging preferences, with effects that were also strongly modulated by seed size. In general, rodents preferred to remove seeds with less tannin. Fat addition could counteract the negative effect of tannin on seed removal by rodents, while the effect of protein addition was weaker. Starch by itself had no effect, but it interacted with tannin in a complex way. Our findings shed light on the effects of tannin and nutrient content on seed removal by scatter-hoarding rodents. We therefore believe that these and perhaps other seed traits should interactively influence this important plant-rodent interaction. However, how selection operates on seed traits to balance these competing factors merits further study. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Progress on wave-ice interactions: satellite observations and model parameterizations

    NASA Astrophysics Data System (ADS)

    Ardhuin, Fabrice; Boutin, Guillaume; Dumont, Dany; Stopa, Justin; Girard-Ardhuin, Fanny; Accensi, Mickael

    2017-04-01

    In the open ocean, numerical wave models have their largest errors near sea ice, and, until recently, virtually no wave data were available in the sea ice zone. Further, wave-ice interaction processes may play an important role in the Earth system. In particular, waves may break up an ice layer into floes, with significant impact on air-sea fluxes. With thinner Arctic ice, this process may contribute to the growing similarity between Arctic and Antarctic sea ice. In return, the ice has a strong damping effect on the waves that is highly variable and not well understood. Here we report progress on parameterizations of waves interacting with a single ice layer, as implemented in the WAVEWATCH III model (WW3 Development Group, 2016), based on a few in situ observations but extensive data derived from Synthetic Aperture Radars (SARs). Our parameterizations combine three processes. First, a parameterization for the energy-conserving scattering of waves by ice floes (assuming isotropic back-scatter), which has very little effect on dominant waves with periods larger than 7 s, consistent with the observed narrow directional spectra and short travel times. Second, we implemented basal friction below the ice layer (Stopa et al., The Cryosphere, 2016). Third, we use secondary creep associated with ice flexure (Cole et al., 1998) adapted to random waves. These three processes (scattering, friction and creep) depend strongly on the maximum floe size. We have therefore included an estimate of the potential floe size based on an ice flexural-failure criterion adapted from Williams et al. (2013). This combination of dissipation and scattering is tested against measured patterns of wave height and directional spreading, and evidence of ice break-up, all obtained from SAR imagery (Ardhuin et al. 2017), and some in situ data (Collins et al. 2015). The combination of creep and friction is required to reproduce the strong reduction in wave attenuation in broken ice observed by Collins et al. (2015).
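To first order, the three damping processes described above act as additive exponential attenuation rates on wave height. A minimal sketch under that assumption (the rates below are illustrative constants; the actual WAVEWATCH III source terms depend on wave frequency, ice thickness and floe size):

```python
import numpy as np

def attenuated_height(h0, distance, alpha_scatter, alpha_friction, alpha_creep):
    """Wave height after propagating a given distance into the ice,
    assuming three additive attenuation rates (1/m): scattering,
    basal friction, and secondary creep. Illustrative only."""
    alpha = alpha_scatter + alpha_friction + alpha_creep
    return h0 * np.exp(-alpha * distance)
```

For example, a 2 m wave crossing 1 km of ice with each rate at 1e-4 m^-1 decays by a factor exp(-0.3).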

  1. Parameterization of the Van Hove dynamic self-scattering law Ss(Q,omega)

    NASA Astrophysics Data System (ADS)

    Zetterstrom, P.

    In this paper we present, for the first time, a model of the Van Hove dynamic self-scattering law S_ME(Q, omega) based on the maximum entropy principle. The model is intended for use in the calculation of inelastic corrections to neutron diffraction data. It is constrained by the first and second frequency moments and by detailed balance, but can be extended to an arbitrary number of frequency moments. The second moment can be varied through an effective temperature to account for the kinetic energy of the atoms. The results are compared with a diffusion model of the scattering law. Finally, some calculations of the inelastic self-scattering for a time-of-flight diffractometer are presented. From these we show that the inelastic self-scattering is very sensitive to the details of the dynamic scattering law.
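As a toy illustration of the constraints the abstract describes, the sketch below builds a Gaussian scattering law whose width is fixed by the second frequency moment and which satisfies detailed balance by construction. The Gaussian ansatz, the units (hbar = k_B = 1) and the argument names are assumptions for illustration; the paper's maximum-entropy form is more general:

```python
import numpy as np

def s_self(q, omega, temp=1.0, mass=1.0):
    """Toy self-scattering law: a Gaussian whose width is set by the
    second frequency moment <omega^2> = q**2 * T / m, multiplied by the
    detailed-balance factor exp(omega / 2T) (units hbar = k_B = 1)."""
    sigma2 = q ** 2 * temp / mass
    gauss = np.exp(-omega ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
    return np.exp(omega / (2.0 * temp)) * gauss
```

Detailed balance requires S(Q, omega) = exp(omega/T) * S(Q, -omega), which the prefactor enforces for any symmetric underlying shape.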

  2. Metadata-assisted nonuniform atmospheric scattering model of image haze removal for medium-altitude unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun

    2017-09-01

    Haze removal is a nontrivial task for medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance, large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with the aforementioned problems for medium-altitude UAVs. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model based on the aerosol distribution, which directly benefits the correction of image distortion. Second, considering the characteristics of long-distance imaging, we calculate the depth map, an essential clue for the model, from the UAV metadata. An accurate depth map reduces color distortion compared with the depth estimates obtained by existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of blurred details caused by the global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four other existing haze removal methods.
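The classical single-scattering haze model behind this kind of dehazing is I = J*t + A*(1 - t), with transmission t = exp(-beta*d); given a depth map d, the haze-free radiance J is recovered by inverting the model. A minimal sketch with a constant scattering coefficient beta and airlight A (the paper's nonuniform model lets beta vary with the aerosol distribution; the parameter values here are illustrative):

```python
import numpy as np

def dehaze(hazy, depth, airlight=0.9, beta=1.0, t_min=0.1):
    """Invert I = J*t + A*(1 - t) with t = exp(-beta * depth).
    t is clamped at t_min to avoid amplifying noise at large depths."""
    t = np.maximum(np.exp(-beta * depth), t_min)
    return (hazy - airlight) / t + airlight
```

Forward-simulating a hazy pixel and inverting it recovers the original radiance exactly wherever t stays above the clamp.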

  3. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of forecast error, caused in part by considerable uncertainty about the optimal values of the parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice for the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
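The Bayesian machinery described here can be illustrated with a few lines of random-walk Metropolis sampling: given synthetic observations of a single toy process rate r(t) = exp(-k*t), the sampler constrains the rate parameter k under a Gaussian likelihood. The model form, noise level and step size are hypothetical stand-ins for BOSS's far richer process-rate formulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(k, t, obs, sigma=0.1):
    """Log-posterior for the toy rate model r(t) = exp(-k*t):
    flat prior on k > 0, Gaussian likelihood with width sigma."""
    if k <= 0:
        return -np.inf
    return -0.5 * np.sum((obs - np.exp(-k * t)) ** 2) / sigma ** 2

def metropolis(t, obs, n=4000, step=0.1):
    """Random-walk Metropolis sampler over the single parameter k."""
    k, lp = 1.0, log_post(1.0, t, obs)
    chain = np.empty(n)
    for i in range(n):
        k_new = k + step * rng.normal()
        lp_new = log_post(k_new, t, obs)
        if np.log(rng.random()) < lp_new - lp:  # accept/reject
            k, lp = k_new, lp_new
        chain[i] = k
    return chain
```

With noiseless synthetic data generated at k = 1.5, the post-burn-in chain concentrates near 1.5.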

  4. The parameterization of microchannel-plate-based detection systems

    NASA Astrophysics Data System (ADS)

    Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Giles, Barbara L.; Pollock, Craig J.

    2016-10-01

    The most common instrument for measuring low-energy plasmas consists of a top-hat electrostatic analyzer (ESA) coupled with a microchannel-plate-based (MCP-based) detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Here we develop a comprehensive mathematical description of particle detection systems. As a function of instrument azimuthal angle, we parameterize (1) particle scattering within the ESA and at the surface of the MCP, (2) the probability distribution of MCP gain for an incident particle, (3) electron charge cloud spreading between the MCP and anode board, and (4) capacitive coupling between adjacent discrete anodes. Using the Dual Electron Spectrometers on the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission as an example, we demonstrate a method for extracting these fundamental detection system parameters from laboratory calibration. We further show that parameters that will evolve in flight, namely, MCP gain, can be determined through application of this model to specifically tailored in-flight calibration activities. This methodology provides a robust characterization of sensor suite performance throughout mission lifetime. The model developed in this work is not only applicable to existing sensors but also can be used as an analytical design tool for future particle instrumentation.

  5. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    NASA Astrophysics Data System (ADS)

    White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John

    2017-08-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the effect of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management.
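The behavioral screening described above relies on standard goodness-of-fit metrics. A minimal sketch of two of them (Nash-Sutcliffe efficiency and percent bias), assuming simulated and observed daily mean streamflow arrays:

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means no better than the observed mean."""
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(sim, obs):
    """PBIAS = 100 * sum(sim - obs) / sum(obs); 0 is unbiased."""
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)
```

A realization would be flagged "behavioral" when these metrics fall within chosen acceptance thresholds.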

  6. Ionic scattering factors of atoms that compose biological molecules

    PubMed Central

    Matsuoka, Rei; Yamashita, Yoshiki; Yamane, Tsutomu; Kidera, Akinori; Maki-Yonekura, Saori

    2018-01-01

    Ionic scattering factors of atoms that compose biological molecules have been computed by the multi-configuration Dirac–Fock method. These ions are chemically unstable and their scattering factors had not been reported except for O−. Yet these factors are required for the estimation of partial charges in protein molecules and nucleic acids. The electron scattering factors of these ions are particularly important as the electron scattering curves vary considerably between neutral and charged atoms in the spatial-resolution range explored in structural biology. The calculated X-ray and electron scattering factors have then been parameterized for the major scattering curve models used in X-ray and electron protein crystallography and single-particle cryo-EM. The X-ray and electron scattering factors and the fitting parameters are presented for future reference. PMID:29755750
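Scattering-factor parameterizations of the kind referred to here are conventionally sums of Gaussians in s = sin(theta)/lambda. A sketch of evaluating such a fit (the coefficients a_i, b_i would come from tables like those presented in the paper; the two-term values used in testing are made up):

```python
import numpy as np

def scattering_factor(s, a, b):
    """Multi-Gaussian parameterization f(s) = sum_i a_i * exp(-b_i * s^2),
    with s = sin(theta)/lambda. Accepts scalar or array s."""
    s2 = np.asarray(s, dtype=float)[..., None] ** 2
    return np.sum(np.asarray(a) * np.exp(-np.asarray(b) * s2), axis=-1)
```

At s = 0 the factor reduces to sum(a_i), and it decays monotonically for positive b_i.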

  7. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    NASA Astrophysics Data System (ADS)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-01

    Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I_P′), and velocity-impedance-II (α″, β″ and I_S′). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density

  8. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    DOE PAGES

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-06

    We report that seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ'), modulus-density (κ, μ and ρ), Lamé-density (λ, μ' and ρ'''), impedance-density (I_P, I_S and ρ''), velocity-impedance-I (α', β' and I_P'), and velocity-impedance-II (α'', β'' and I_S'). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted

  9. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    We report that seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ'), modulus-density (κ, μ and ρ), Lamé-density (λ, μ' and ρ'''), impedance-density (I_P, I_S and ρ''), velocity-impedance-I (α', β' and I_P'), and velocity-impedance-II (α'', β'' and I_S'). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted

  10. Spectral bidirectional reflectance of Antarctic snow: Measurements and parameterization

    NASA Astrophysics Data System (ADS)

    Hudson, Stephen R.; Warren, Stephen G.; Brandt, Richard E.; Grenfell, Thomas C.; Six, Delphine

    2006-09-01

    The bidirectional reflectance distribution function (BRDF) of snow was measured from a 32-m tower at Dome C, at latitude 75°S on the East Antarctic Plateau. These measurements were made at 96 solar zenith angles between 51° and 87° and cover wavelengths 350-2400 nm, with 3- to 30-nm resolution, over the full range of viewing geometry. The BRDF at 900 nm had previously been measured at the South Pole; the Dome C measurement at that wavelength is similar. At both locations the natural roughness of the snow surface causes the anisotropy of the BRDF to be less than that of flat snow. The inherent BRDF of the snow is nearly constant in the high-albedo part of the spectrum (350-900 nm), but the angular distribution of reflected radiance becomes more isotropic at the shorter wavelengths because of atmospheric Rayleigh scattering. Parameterizations were developed for the anisotropic reflectance factor using a small number of empirical orthogonal functions. Because the reflectance is more anisotropic at wavelengths at which ice is more absorptive, albedo rather than wavelength is used as a predictor in the near infrared. The parameterizations cover nearly all viewing angles and are applicable to the high parts of the Antarctic Plateau that have small surface roughness and, at viewing zenith angles less than 55°, elsewhere on the plateau, where larger surface roughness affects the BRDF at larger viewing angles. The root-mean-squared error of the parameterized reflectances is between 2% and 4% at wavelengths less than 1400 nm and between 5% and 8% at longer wavelengths.
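A parameterization built from "a small number of empirical orthogonal functions" amounts to truncating an SVD of the mean-removed reflectance data. A sketch, assuming a matrix of anisotropic reflectance factors with rows as cases (e.g. solar zenith angles) and columns as viewing angles:

```python
import numpy as np

def eof_fit(R, n_modes=2):
    """Decompose R (cases x angles) into a mean pattern plus the
    leading n_modes empirical orthogonal functions via the SVD."""
    mean = R.mean(axis=0)
    U, s, Vt = np.linalg.svd(R - mean, full_matrices=False)
    coeffs = U[:, :n_modes] * s[:n_modes]  # per-case expansion coefficients
    return mean, Vt[:n_modes], coeffs

def eof_reconstruct(mean, modes, coeffs):
    """Rebuild the reflectance field from the truncated expansion."""
    return mean + coeffs @ modes
```

When the centered data are effectively low-rank, a two-mode truncation reconstructs them almost exactly, which is the economy the paper's parameterization exploits.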

  11. Survey of background scattering from materials found in small-angle neutron scattering.

    PubMed

    Barker, J G; Mildner, D F R

    2015-08-01

    Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.
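One of the thickness-selection rules alluded to above follows from single scattering plus Beer-Lambert attenuation: the scattered count rate scales as t * exp(-Sigma * t), which peaks at one mean free path, t = 1/Sigma. A small sketch (Sigma here stands for a total macroscopic cross section in arbitrary inverse-length units):

```python
import numpy as np

def scattered_signal(thickness, sigma_total):
    """Relative single-scattering signal from a slab of the given
    thickness: proportional to t * exp(-Sigma_t * t), i.e. scattering
    probability times beam transmission. Maximized at t = 1/Sigma_t."""
    return thickness * np.exp(-sigma_total * thickness)
```

Scanning thickness numerically recovers the one-mean-free-path optimum.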

  12. Survey of background scattering from materials found in small-angle neutron scattering

    PubMed Central

    Barker, J. G.; Mildner, D. F. R.

    2015-01-01

    Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088

  13. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    USGS Publications Warehouse

    White, Jeremy; Stengel, Victoria G.; Rendon, Samuel H.; Banta, John

    2017-01-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the effect of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to Nash–Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management.

  14. Parameterizing by the Number of Numbers

    NASA Astrophysics Data System (ADS)

    Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.

    The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
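The flavor of number-of-numbers parameterization is easy to demonstrate on Partition: representing the input multiset by (value, multiplicity) pairs gives an algorithm whose exponential part depends only on the number of distinct integers, not on the total input length. (The paper's FPT results go through Integer Linear Programming Feasibility; the brute-force enumeration below is only an illustrative sketch of the same parameterization.)

```python
from itertools import product

def partition_by_distinct_values(multiset):
    """Decide the Partition problem by enumerating, for each distinct
    value, how many copies go into the first part. The search space is
    the product of (multiplicity + 1) choices per distinct value, so the
    running time is exponential only in the number of distinct integers."""
    counts = {}
    for x in multiset:
        counts[x] = counts.get(x, 0) + 1
    total = sum(multiset)
    if total % 2:  # an odd total can never split evenly
        return False
    values = list(counts)
    for pick in product(*(range(counts[v] + 1) for v in values)):
        if sum(v * n for v, n in zip(values, pick)) == total // 2:
            return True
    return False
```

For instance, {1, 1, 2, 2, 3, 3} splits as {1, 2, 3} + {1, 2, 3}, while {1, 1, 1} has odd total and cannot split.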

  15. Inference of cirrus cloud properties using satellite-observed visible and infrared radiances. I - Parameterization of radiance fields

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Liou, Kuo-Nan; Takano, Yoshihide

    1993-01-01

    The impact of using phase functions for spherical droplets and hexagonal ice crystals to analyze radiances from cirrus is examined. Adding-doubling radiative transfer calculations are employed to compute radiances for different cloud thicknesses and heights over various backgrounds. These radiances are used to develop parameterizations of top-of-the-atmosphere visible reflectance and IR emittance using tables of reflectances as a function of cloud optical depth, viewing and illumination angles, and microphysics. This parameterization, which includes Rayleigh scattering, ozone absorption, variable cloud height, and an anisotropic surface reflectance, reproduces the computed top-of-the-atmosphere reflectances with an accuracy of +/- 6 percent for four microphysical models: 10-micron water droplet, small symmetric crystal, cirrostratus, and cirrus uncinus. The accuracy is twice that of previous models.

  16. Parameterization Interactions in Global Aquaplanet Simulations

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João.

    2018-02-01

    Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.

  17. Parameterization of photon beam dosimetry for a linear accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lebron, Sharon; Barraclough, Brendan; Lu, Bo

    2016-02-15

    Purpose: In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS modeled data, (3) the quality assurance process of a linear accelerator’s (Linac) beam characteristics, (4) the establishment of a standard data set for comparison with other data, etc. Parameterization of the photon beam dosimetry creates a data set that is portable and easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon beam dosimetric quantities, including percentage depth doses (PDDs), profiles, and total scatter output factors (Scp). Methods: Scp, PDDs, and profiles for different field sizes, depths, and energies were measured for a Linac using a cylindrical 3D water scanning system. All data were smoothed for the analysis, and profile data were also centered, symmetrized, and geometrically scaled. The Scp data were analyzed using an exponential function. The inverse square factor was removed from the PDD data before modeling and the data were subsequently analyzed using exponential functions. For profile modeling, one half-side of the profile was divided into three regions described by exponential, sigmoid, and Gaussian equations. All of the analytical functions are field size, energy, depth, and, in the case of profiles, scan direction specific. The model’s parameters were determined using the minimal amount of measured data necessary. The model’s accuracy was evaluated via the calculation of absolute differences between the measured (processed) and calculated data in low gradient regions and distance-to-agreement analysis in high gradient regions. Finally, the results of dosimetric quantities obtained by the fitted models for a different machine were also assessed. Results: All of the differences in the PDDs’ buildup
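The PDD part of such a parameterization can be sketched in a few lines: strip the inverse-square factor, then fit the falloff region beyond the depth of maximum dose with a single exponential by linear least squares on the logarithm. The SSD, reference depth, and attenuation coefficient below are illustrative numbers, not the paper's fitted values:

```python
import numpy as np

def remove_inverse_square(pdd, depths, ssd=100.0, d_ref=10.0):
    """Strip the inverse-square factor ((SSD + d_ref)/(SSD + d))^2 from
    a measured PDD before fitting (SSD and depths in cm, illustrative)."""
    return pdd / ((ssd + d_ref) / (ssd + depths)) ** 2

def fit_exp_falloff(depths, pdd):
    """Fit PDD(d) ~ A * exp(-mu * d) in the falloff region by linear
    least squares on ln(PDD); returns (A, mu)."""
    slope, intercept = np.polyfit(depths, np.log(pdd), 1)
    return np.exp(intercept), -slope
```

On synthetic data generated with mu = 0.05 cm^-1 the fit recovers the parameters essentially exactly.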

  18. Stochastic parameterization for light absorption by internally mixed BC/dust in snow grains for application to climate models

    NASA Astrophysics Data System (ADS)

    Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.

    2014-06-01

    A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), the action of internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusion of BC/dust significantly reduces snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.

  19. Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles

    PubMed Central

    Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.

    2009-01-01

    In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. PMID:22399956
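    The two roughness parameters at issue can be estimated from a (synthetic) height profile as sketched below. The 1/e autocorrelation convention and the AR(1) profile generator are illustrative choices, not the paper's exact procedure:

```python
import math, random

def roughness_parameters(z, dx):
    """RMS height s and correlation length l from a 1-D height profile.
    l is taken where the normalized autocorrelation first drops below 1/e,
    one common convention in SAR surface-roughness studies."""
    n = len(z)
    mean = sum(z) / n
    zd = [v - mean for v in z]
    s = math.sqrt(sum(v * v for v in zd) / n)
    var = sum(v * v for v in zd)
    for lag in range(1, n):
        rho = sum(zd[i] * zd[i + lag] for i in range(n - lag)) / var
        if rho < 1 / math.e:
            return s, lag * dx
    return s, n * dx

# Synthetic exponentially correlated profile from an AR(1) process
rng = random.Random(0)
z, a = [0.0], 0.9
for _ in range(1999):
    z.append(a * z[-1] + rng.gauss(0, 1))
s, l = roughness_parameters(z, dx=0.01)
```

    Shortening the profile or subsampling it biases both s and l, which is exactly the error-propagation mechanism the study quantifies.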

  20. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
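    The discretization idea can be sketched with an Azzalini skew-normal interface profile sliced into constant-density layers. This is a minimal illustration of the effective-density slicing, not the authors' actual parameterization:

```python
import math

def skew_normal_pdf(x, alpha):
    """Azzalini skew-normal density: 2 * phi(x) * Phi(alpha * x)."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(alpha * x / math.sqrt(2)))
    return 2 * phi * Phi

def interface_slices(rho1, rho2, alpha, n=200, zmax=5.0):
    """Discretize an asymmetric interfacial density profile into thin
    constant-density slices (sketch of the effective-density idea).
    The cumulative skew-normal interpolates between the two densities."""
    dz = 2 * zmax / n
    zs = [-zmax + (i + 0.5) * dz for i in range(n)]
    cdf, acc = [], 0.0
    for z in zs:
        acc += skew_normal_pdf(z, alpha) * dz  # midpoint-rule CDF
        cdf.append(min(acc, 1.0))
    return [(z, rho1 + (rho2 - rho1) * c) for z, c in zip(zs, cdf)]

slices = interface_slices(rho1=0.0, rho2=1.0, alpha=4.0)
```

    Each (z, density) slice could then be fed to Parratt's recursion as an independent sharp-interface layer; the skewness parameter alpha controls how far the density penetrates into the adjacent layer.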

  2. Methods of testing parameterizations: Vertical ocean mixing

    NASA Technical Reports Server (NTRS)

    Tziperman, Eli

    1992-01-01

    The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean acts on scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In the oceanic general circulation models typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly resolve the small-scale mixing processes and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and practical to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes; indeed, mixing is one of the least known and least understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the
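    One classic example of the kind of vertical-mixing parameterization under discussion is a Richardson-number-dependent scheme in the style of Pacanowski and Philander (1981); the coefficient values below are illustrative defaults, not a calibrated configuration:

```python
def pp81_mixing(Ri, nu0=0.01, alpha=5.0, n=2, nu_b=1.0e-4, kappa_b=1.0e-5):
    """Richardson-number-dependent vertical viscosity and diffusivity,
    in the style of Pacanowski & Philander (1981). Mixing is strong in
    weakly stratified water and shuts down as Ri grows.
    Returns (viscosity, diffusivity) in m^2/s; coefficients illustrative."""
    Ri = max(Ri, 0.0)  # crude handling of unstable profiles (Ri < 0)
    nu = nu0 / (1.0 + alpha * Ri) ** n + nu_b
    kappa = nu / (1.0 + alpha * Ri) + kappa_b
    return nu, kappa

# Mixing decreases as stratification (Ri) increases
nu_w, k_w = pp81_mixing(0.0)   # weakly stratified
nu_s, k_s = pp81_mixing(1.0)   # strongly stratified
```

    Validating such a closure against oceanographic data, as the text discusses, amounts to asking whether this functional dependence on the resolved-scale Ri reproduces observed mixing.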

  3. Correction of Excessive Precipitation over Steep and High Mountains in a GCM: A Simple Method of Parameterizing the Thermal Effects of Subgrid Topographic Variation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.

    2015-01-01

    The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.

  4. Brain Surface Conformal Parameterization Using Riemann Surface Structure

    PubMed Central

    Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung

    2011-01-01

    In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336

  5. Born approximation, multiple scattering, and butterfly algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Alex; Qiao, Zhijun

    2014-06-01

    Many imaging algorithms have been designed assuming the absence of multiple scattering. In a 2013 SPIE proceedings paper, we discussed an algorithm for removing high-order scattering components from collected data. In this paper, our goal is to continue this work. First, we survey the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in our target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.

  6. Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.

    NASA Astrophysics Data System (ADS)

    Gubler, S.; Gruber, S.; Purves, R. S.

    2012-06-01

    As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best-performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as fixed values, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between -2% and 5%, and between 7% and 13%, across the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse, SDR. Calibration of LDR parameterizations to local conditions
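    A representative member of the clear-sky LDR parameterization family being calibrated is the Brutsaert (1975) emissivity formula; whether it is among the twelve schemes used here is not stated in the abstract, so treat this as a generic example:

```python
def brutsaert_ldr(t_air_k, e_hpa):
    """Clear-sky downward longwave radiation from the Brutsaert (1975)
    emissivity formula, one classic member of the LDR parameterization
    family: eps = 1.24 * (e/T)^(1/7), flux = eps * sigma * T^4.
    t_air_k: 2-m air temperature [K]; e_hpa: vapor pressure [hPa].
    Returns flux in W m^-2."""
    sigma = 5.670e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]
    eps = 1.24 * (e_hpa / t_air_k) ** (1.0 / 7.0)
    return eps * sigma * t_air_k ** 4

ldr = brutsaert_ldr(288.0, 10.0)  # a mild clear-sky mid-latitude case
```

    Calibration "to local conditions", as in the abstract, would mean refitting the 1.24 and 1/7 coefficients against station measurements such as those from ASRB.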

  7. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness of the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with this non-uniqueness, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint-state method for the sensitivity analysis of the weighting coefficients in the GP method. The adjoint-state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
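    The BMA combination of per-parameterization estimates, with its within- and between-parameterization variance split, can be written in a few lines. The two-model numbers below are hypothetical:

```python
def bma_combine(means, variances, weights):
    """Bayesian model averaging over parameterization methods.
    Posterior mean is the weight-averaged mean; total variance is the
    within-parameterization variance plus the between-parameterization
    variance (spread of the individual means about the BMA mean)."""
    s = sum(weights)
    w = [wi / s for wi in weights]  # normalize posterior model weights
    mean = sum(wi * mi for wi, mi in zip(w, means))
    within = sum(wi * vi for wi, vi in zip(w, variances))
    between = sum(wi * (mi - mean) ** 2 for wi, mi in zip(w, means))
    return mean, within + between

# Two hypothetical GP estimates of log-conductivity at one location
m, v = bma_combine(means=[2.0, 4.0], variances=[0.5, 0.5], weights=[0.5, 0.5])
```

    Note how the total variance (1.5) exceeds either individual variance (0.5): disagreement between parameterizations adds uncertainty that a single method would hide, which is exactly the over-confidence BMA guards against.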

  8. Shortwave radiation parameterization scheme for subgrid topography

    NASA Astrophysics Data System (ADS)

    Helbig, N.; Löwe, H.

    2012-02-01

    Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered. The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.
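    The flux decomposition described above (shading for the direct beam, sky view factor for diffuse radiation, a single isotropic terrain reflection) can be caricatured as follows; the function and its argument names are schematic, not the paper's parameterization:

```python
def terrain_shortwave(direct_flat, diffuse_flat, sky_view, shaded_frac, albedo):
    """Toy subgrid-topography shortwave balance: shading reduces the
    direct beam, the sky view factor scales diffuse sky radiation, and
    the terrain-facing fraction of the hemisphere contributes one
    isotropic terrain reflection (sketch of the decomposition above)."""
    direct = (1.0 - shaded_frac) * direct_flat
    diffuse = sky_view * diffuse_flat
    reflected = (1.0 - sky_view) * albedo * (direct_flat + diffuse_flat)
    return direct + diffuse + reflected

# Flat, unshaded terrain recovers the plain sum of the input fluxes
flat = terrain_shortwave(800.0, 100.0, sky_view=1.0, shaded_frac=0.0, albedo=0.2)
rugged = terrain_shortwave(800.0, 100.0, sky_view=0.8, shaded_frac=0.25, albedo=0.2)
```

    In the actual scheme each of these terms is parameterized as a function of sky view factor, slope, sun elevation, and albedo rather than passed in directly.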

  9. Active Subspaces of Airfoil Shape Parameterizations

    NASA Astrophysics Data System (ADS)

    Grey, Zachary J.; Constantine, Paul G.

    2018-05-01

    Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
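    The standard active-subspace computation is an eigendecomposition of the average outer product of gradients, C = E[∇f ∇fᵀ]. A minimal two-dimensional sketch with a toy objective (not an airfoil quantity) is:

```python
import math, random

def active_direction_2d(grads):
    """Leading eigenvector of C = mean(g g^T) from 2-D gradient samples;
    when the quantity of interest has a one-dimensional active subspace,
    this recovers that important direction."""
    m = len(grads)
    c11 = sum(g[0] * g[0] for g in grads) / m
    c12 = sum(g[0] * g[1] for g in grads) / m
    c22 = sum(g[1] * g[1] for g in grads) / m
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))  # largest eigenvalue
    v = (c12, lam - c11) if abs(c12) > 1e-12 else (1.0, 0.0)
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

# Toy objective f(x) = sin(a . x): every gradient is parallel to a,
# so the active subspace is exactly span{a}
rng = random.Random(1)
a = (3.0, 1.0)
grads = []
for _ in range(100):
    x = (rng.uniform(-1, 1), rng.uniform(-1, 1))
    s = math.cos(a[0] * x[0] + a[1] * x[1])
    grads.append((a[0] * s, a[1] * s))
w = active_direction_2d(grads)
```

    For PARSEC or CST design spaces the same construction runs in the full design dimension with gradients of lift or drag, typically via an SVD of the stacked gradient samples.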

  10. Spectral cumulus parameterization based on cloud-resolving model

    NASA Astrophysics Data System (ADS)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate, which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation; i.e., it reduced the positive precipitation bias in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements derive from the modified entrainment-rate parameterization, which suppresses an excessive increase of entrainment and thus an excessive increase of low-level clouds.
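    The role of the entrainment rate in such schemes is easiest to see in the plume skeleton they share: dM/dz = (ε − δ)M for updraft mass flux M. Constant rates are used below purely for illustration; the paper's contribution is precisely to make ε cloud-dependent:

```python
import math

def plume_mass_flux(z_levels, eps, delta, m0=1.0):
    """Entraining/detraining plume skeleton, dM/dz = (eps - delta) * M,
    integrated upward with exact exponential steps between levels.
    eps, delta: entrainment/detrainment rates [1/m], here constant
    for illustration only."""
    m, out = m0, [m0]
    for z0, z1 in zip(z_levels, z_levels[1:]):
        m *= math.exp((eps - delta) * (z1 - z0))
        out.append(m)
    return out

# Net entrainment (eps > delta) grows the updraft mass flux with height
mf = plume_mass_flux([0.0, 500.0, 1000.0], eps=2.0e-3, delta=1.0e-3)
```

    An ε that is too large dilutes updrafts and favors shallow, low-level cloud, which is the excessive behavior the new parameterization suppresses.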

  11. Constraints to Dark Energy Using PADE Parameterizations

    NASA Astrophysics Data System (ADS)

    Rezaei, M.; Malekjani, M.; Basilakos, S.; Mehrabi, A.; Mota, D. F.

    2017-07-01

    We put constraints on dark energy (DE) properties using Padé parameterizations, and compare them to the same constraints using Chevallier-Polarski-Linder (CPL) and ΛCDM, at both the background and the perturbation levels. The DE equation-of-state parameter of the models is derived following the mathematical treatment of the Padé expansion. Unlike the CPL parameterization, the Padé approximation provides different forms of the equation-of-state parameter that avoid a divergence in the far future. Initially we perform a likelihood analysis in order to put constraints on the model parameters using solely background expansion data, and we find that all parameterizations are consistent with each other. Then, combining the expansion and the growth-rate data, we test the viability of the Padé parameterizations and compare them with the CPL and ΛCDM models, respectively. Specifically, we find that the growth rate of the current Padé parameterizations is lower than in the ΛCDM model at low redshifts, while the differences among the models are negligible at high redshifts. In this context, we provide for the first time a growth index of linear matter perturbations in Padé cosmologies. Considering that DE is homogeneous, we recover the well-known asymptotic value of the growth index, γ∞ = 3(w∞ - 1)/(6w∞ - 5), while in the case of clustered DE we obtain γ∞ ≃ 3w∞(3w∞ - 5)/[(6w∞ - 5)(3w∞ - 1)]. Finally, we generalize the growth index analysis to the case where γ is allowed to vary with redshift, and we find that the form of γ(z) in the Padé parameterization extends that of the CPL and ΛCDM cosmologies, respectively.
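    The contrast between CPL and a Padé form, and the homogeneous-DE growth-index asymptote quoted in the abstract, can be sketched numerically. The particular order-(1,1) Padé form in z below is illustrative, not necessarily the paper's exact expansion:

```python
def w_cpl(z, w0=-1.0, wa=0.3):
    """Chevallier-Polarski-Linder: w(z) = w0 + wa * z / (1 + z)."""
    return w0 + wa * z / (1.0 + z)

def w_pade(z, w0=-1.0, w1=0.2, w2=0.1):
    """An order-(1,1) Pade form in z (illustrative shape only):
    w(z) = (w0 + w1*z) / (1 + w2*z). It stays finite for all z >= 0
    and tends to w1/w2, avoiding CPL-style pathologies elsewhere."""
    return (w0 + w1 * z) / (1.0 + w2 * z)

def gamma_inf(w_inf):
    """Asymptotic growth index for homogeneous DE:
    gamma = 3*(w - 1) / (6*w - 5), as quoted in the abstract."""
    return 3.0 * (w_inf - 1.0) / (6.0 * w_inf - 5.0)
```

    For w = −1 this reproduces the familiar ΛCDM value γ = 6/11 ≈ 0.545.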

  12. A New WRF-Chem Treatment for Studying Regional Scale Impacts of Cloud-Aerosol Interactions in Parameterized Cumuli

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, Larry K.; Shrivastava, ManishKumar B.; Easter, Richard C.

    A new treatment of cloud-aerosol interactions within parameterized shallow and deep convection has been implemented in WRF-Chem that can be used to better understand the aerosol lifecycle over regional to synoptic scales. The modifications to the model to represent cloud-aerosol interactions include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain-Fritsch cumulus parameterization, which has been modified to better represent shallow convective clouds. Preliminary testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS) as well as a high-resolution simulation that does not include parameterized convection. The simulation results are used to investigate the impact of cloud-aerosol interactions on the regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +35% for sulfate in non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem version 3.2.1 are found to account for changes in the cloud drop number concentration (CDNC) and changes in the chemical composition of cloud-drop residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to WRF-Chem version 3.5, and it is anticipated

  13. Approaches for Subgrid Parameterization: Does Scaling Help?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-04-01

    Scaling behavior is arguably a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, scaling laws have been slow to enter "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows in both the atmosphere and the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of variables at subgrid scale in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as its mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. The mode
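    The POD/EOF basis underlying the third approach can be computed, in miniature, by power iteration on the snapshot second-moment matrix; a minimal stand-in for the mode decompositions discussed above, with synthetic snapshots:

```python
import math, random

def leading_pod_mode(snapshots, iters=200):
    """Leading POD/EOF mode via power iteration on C = mean(u u^T),
    applied without forming the n x n matrix explicitly."""
    n = len(snapshots[0])
    rng = random.Random(1)
    v = [rng.gauss(0, 1) for _ in range(n)]  # deterministic random start
    for _ in range(iters):
        coeffs = [sum(ui * vi for ui, vi in zip(u, v)) for u in snapshots]
        w = [sum(c * u[i] for c, u in zip(coeffs, snapshots)) / len(snapshots)
             for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Synthetic snapshots: one coherent structure with varying amplitude + noise
rng = random.Random(0)
base = [math.sin(2 * math.pi * i / 32) for i in range(32)]
snaps = [[(1 + 0.1 * k) * b + rng.gauss(0, 0.01) for b in base]
         for k in range(10)]
mode = leading_pod_mode(snaps)
```

    Replacing the smooth modes with segmentally constant ones turns the same machinery into the mass-flux special case mentioned in the text.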

  14. Stochastic Parameterization for Light Absorption by Internally Mixed BC/dust in Snow Grains for Application to Climate Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liou, K. N.; Takano, Y.; He, Cenlin

    2014-06-27

    A stochastic approach has been developed to model the positions of BC/dust internally mixed with two snow-grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo is reduced more by multiple inclusions of BC/dust than by an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.

  15. A Novel Shape Parameterization Approach

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1999-01-01

    This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
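    The core idea of parameterizing shape *perturbations* rather than the geometry itself can be sketched as follows; the two deformation fields are hypothetical stand-ins for the paper's soft-object animation algorithms:

```python
def deform(points, fields, amplitudes):
    """Shape parameterization via perturbations: each design variable
    scales a smooth deformation field that is added to the baseline grid
    point-by-point. Because no connectivity is used, CFD and FE grids are
    treated identically. (Schematic of the idea; the paper uses
    soft-object animation algorithms for the deformation fields.)"""
    out = []
    for x, y in points:
        dx = sum(a * f(x, y)[0] for a, f in zip(amplitudes, fields))
        dy = sum(a * f(x, y)[1] for a, f in zip(amplitudes, fields))
        out.append((x + dx, y + dy))
    return out

# Two hypothetical deformation fields: a twist-like and a camber-like mode
twist = lambda x, y: (0.0, 0.1 * x)
camber = lambda x, y: (0.0, x * (1.0 - x))
baseline = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
deformed = deform(baseline, [twist, camber], [1.0, 0.5])
```

    Since the deformed position is linear in the amplitudes here, the sensitivity of each grid point to each design variable is simply the field value itself, which is why analytical derivatives for gradient-based optimization come cheaply in this formulation.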

  16. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; ...

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
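    The assumed-PDF-plus-Monte-Carlo coupling can be illustrated in miniature: sample subgrid variability from an assumed distribution, run a (here, toy) microphysics operator on each sample, and average back to the grid mean. The Gaussian total-water PDF and saturation-adjustment rule below are deliberate simplifications of the scheme described above:

```python
import random, statistics

def subgrid_condensation(qt_mean, qt_std, q_sat, n=20000, seed=0):
    """Assumed-PDF idea in miniature: draw subgrid total-water samples
    [g/kg] from an assumed Gaussian, apply a toy saturation adjustment
    to each sample, and average the condensate back to the grid scale."""
    rng = random.Random(seed)
    cond = [max(rng.gauss(qt_mean, qt_std) - q_sat, 0.0) for _ in range(n)]
    return statistics.fmean(cond)  # grid-mean condensate [g/kg]

# Grid mean is subsaturated (8 < 9 g/kg), yet the assumed subgrid
# variability still produces partial cloudiness and condensate
c = subgrid_condensation(qt_mean=8.0, qt_std=1.0, q_sat=9.0)
```

    An all-or-nothing scheme applied to the grid mean would give zero condensate here; sampling the PDF is what lets a nonlinear microphysics operator see the subgrid variability.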

  19. Anisotropic Shear Dispersion Parameterization for Mesoscale Eddy Transport

    NASA Astrophysics Data System (ADS)

    Reckinger, S. J.; Fox-Kemper, B.

    2016-02-01

    The effects of mesoscale eddies are universally treated isotropically in general circulation models. However, the processes that the parameterization approximates, such as shear dispersion, typically have strongly anisotropic characteristics. The Gent-McWilliams/Redi mesoscale eddy parameterization is extended for anisotropy and tested using 1-degree Community Earth System Model (CESM) simulations. The sensitivity of the model to anisotropy includes a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. The parameterization is further extended to include the effects of unresolved shear dispersion, which sets the strength and direction of the anisotropy. The shear dispersion parameterization agrees with drifter observations in the spatial distribution of diffusivity, and with high-resolution model diagnoses in the distribution of eddy flux orientation.

  20. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loève transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.

  1. Stochastic Convection Parameterizations

    NASA Technical Reports Server (NTRS)

    Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios

    2012-01-01

    Keywords: computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection, stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts

  2. Parameterization of clear-sky surface irradiance and its implications for estimation of aerosol direct radiative effect and aerosol optical depth

    PubMed Central

    Xia, Xiangao

    2015-01-01

    Aerosols impact clear-sky surface irradiance through the effects of scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and surface irradiance have been established to describe the aerosol direct radiative effect on irradiance (ADRE). However, considerable uncertainties remain associated with ADRE due to incorrect estimation of the irradiance that would occur in the absence of aerosols. Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on surface irradiance are thoroughly considered, leading to an effective parameterization of clear-sky surface irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate irradiance with a mean bias error of 0.32 W m−2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from irradiance, or vice versa, show that the root-mean-square errors were 0.08 and 10.0 W m−2, respectively. Therefore, this study establishes a straightforward method to derive clear-sky surface irradiance from τa, or to estimate τa from irradiance measurements if water vapor measurements are available. PMID:26395310

  3. Exploring Alternate Parameterizations for Snowfall with Validation from Satellite and Terrestrial Radars

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.

    2009-01-01

    Increases in computational resources have allowed operational forecast centers to pursue experimental, high resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. Aircraft, surface and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) Model. Widespread snowfall developed across the region on January 22, 2007, forced by the passing of a midlatitude cyclone, and was observed by the dual-polarimetric C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present. Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering.

  4. Alternative Parameterizations for Cluster Editing

    NASA Astrophysics Data System (ADS)

    Komusiewicz, Christian; Uhlmann, Johannes

    Given an undirected graph G and a nonnegative integer k, the NP-hard Cluster Editing problem asks whether G can be transformed into a disjoint union of cliques by applying at most k edge modifications. In the field of parameterized algorithmics, Cluster Editing has almost exclusively been studied parameterized by the solution size k. Contrastingly, in many real-world instances it can be observed that the parameter k is not really small. This observation motivates our investigation of parameterizations of Cluster Editing different from the solution size k. Our results are as follows. Cluster Editing is fixed-parameter tractable with respect to the parameter "size of a minimum cluster vertex deletion set of G", a typically much smaller parameter than k. Cluster Editing remains NP-hard on graphs with maximum degree six. A restricted but practically relevant version of Cluster Editing is fixed-parameter tractable with respect to the combined parameter "number of clusters in the target graph" and "maximum number of modified edges incident to any vertex in G". Many of our results also transfer to the NP-hard Cluster Deletion problem, where only edge deletions are allowed.
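The problem statement can be made concrete with a naive solver parameterized by the solution size k; this brute-force sketch (an illustration, not the authors' fixed-parameter algorithms) also shows why small or alternative parameters matter, since the search is exponential in k:

```python
from itertools import combinations

def is_cluster_graph(n, edges):
    # A cluster graph (disjoint union of cliques) has no induced path on
    # three vertices: whenever u-v and v-w are edges, u-w must be too.
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return all(u in adj[w]
               for v in range(n)
               for u, w in combinations(sorted(adj[v]), 2))

def cluster_editing(n, edges, k):
    # Try every set of at most k edge insertions/deletions: exponential in k,
    # which is exactly why parameters much smaller than k are attractive.
    edges = {tuple(sorted(e)) for e in edges}
    all_pairs = list(combinations(range(n), 2))
    for r in range(k + 1):
        for mods in combinations(all_pairs, r):
            if is_cluster_graph(n, edges.symmetric_difference(mods)):
                return True
    return False
```

For example, a path on three vertices is not a cluster graph, but one edge deletion (or one insertion completing the triangle) fixes it.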

  5. Summary of Cumulus Parameterization Workshop

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Starr, David OC.; Hou, Arthur; Newman, Paul; Sud, Yogesh

    2002-01-01

    A workshop on cumulus parameterization took place at the NASA Goddard Space Flight Center from December 3-5, 2001. The major objectives of this workshop were (1) to review the problem of representation of moist processes in large-scale models (mesoscale models, Numerical Weather Prediction models and Atmospheric General Circulation Models), (2) to review the state-of-the-art in cumulus parameterization schemes, and (3) to discuss the need for future research and applications. There were a total of 31 presentations and about 100 participants from the United States, Japan, the United Kingdom, France and South Korea. The specific presentations and discussions during the workshop are summarized in this paper.

  6. Scatter correction for cone-beam computed tomography using self-adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Xie, Shi-Peng; Luo, Li-Min

    2012-06-01

    The authors propose a combined scatter reduction and correction method to improve image quality in cone-beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies; the present method differs in that a scatter-detecting blocker (SDB) placed between the X-ray source and the tested object is used to model a self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. Image quality can be improved by removing the scatter distribution. The results show that the method effectively reduces scatter artifacts and increases image quality, raising image contrast and reducing the magnitude of cupping. The accuracy of the SKS technique is significantly improved in our method by the use of a self-adaptive scatter kernel. The method is computationally efficient, easy to implement, and provides scatter correction from a single scan acquisition.
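A one-dimensional sketch of the scatter kernel superposition idea may help; the kernel shape and amplitude here are hypothetical (in the paper they would be fitted from the SDB measurements):

```python
def convolve_same(signal, kernel):
    # 'Same'-size 1-D convolution; kernel assumed to have odd length.
    half = len(kernel) // 2
    n = len(signal)
    return [sum(signal[i + j - half] * kernel[j]
                for j in range(len(kernel)) if 0 <= i + j - half < n)
            for i in range(n)]

# Hypothetical broad, low-amplitude scatter kernel; in practice its
# parameters would come from the scatter-detecting blocker (SDB).
kernel = [0.02] * 11

primary = [0.0] * 5 + [1.0] * 10 + [0.0] * 5          # ideal projection
measured = [p + s for p, s in zip(primary, convolve_same(primary, kernel))]

# SKS step: estimate scatter by convolving the measured projection with the
# kernel, then subtract the estimate from the measurement.
scatter_estimate = convolve_same(measured, kernel)
corrected = [m - s for m, s in zip(measured, scatter_estimate)]
```

Convolving the measured (rather than the unknown primary) signal leaves only a small second-order residual, which is why a single scan suffices.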

  7. How certain are the process parameterizations in our models?

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard

    2016-04-01

    Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), process parameterizations and parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise the parameter values are the only elements of a model that are allowed to vary, while the remaining modeling elements are fixed a priori and therefore not subject to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process; the only flexibility comes from the changing parameter values, which enable these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is the extent to which the process parameterization and system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model, given its system architecture and data, when little or no assumption is made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall-runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms than would have been chosen otherwise. This comparison provides, implicitly and explicitly, an assessment of how uncertain our perception of model process parameterization is relative to what the data can support.

  8. Parameterized post-Newtonian cosmology

    NASA Astrophysics Data System (ADS)

    Sanghai, Viraj A. A.; Clifton, Timothy

    2017-03-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).

  9. Empirical parameterization of setup, swash, and runup

    USGS Publications Warehouse

    Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.

    2006-01-01

    Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a −17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
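For reference, the dimensional Iribarren-based form described above is commonly quoted from Stockdon et al. (2006) as follows; treat the coefficients as a sketch of the published fit, to be verified against the paper before use:

```python
import math

def runup_2pct(H0, L0, beta_f):
    # 2% exceedence runup (m); H0: deep-water significant wave height (m),
    # L0: deep-water wavelength (m), beta_f: foreshore beach slope.
    setup = 0.35 * beta_f * math.sqrt(H0 * L0)                   # wave setup
    swash = math.sqrt(H0 * L0 * (0.563 * beta_f**2 + 0.004))     # total swash
    return 1.1 * (setup + swash / 2.0)

def runup_dissipative(H0, L0):
    # Dissipative limit (Iribarren number < 0.3): the beach slope drops out,
    # and runup depends on offshore wave height and wavelength only.
    return 0.043 * math.sqrt(H0 * L0)
```

For H0 = 2 m, L0 = 100 m and a foreshore slope of 0.05, runup_2pct gives roughly 0.84 m; the dissipative form gives roughly 0.61 m for the same waves, slope-independent.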

  10. Extensions and applications of a second-order landsurface parameterization

    NASA Technical Reports Server (NTRS)

    Andreou, S. A.; Eagleson, P. S.

    1983-01-01

    Extensions and applications of a second order land surface parameterization, proposed by Andreou and Eagleson are developed. Procedures for evaluating the near surface storage depth used in one cell land surface parameterizations are suggested and tested by using the model. Sensitivity analysis to the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.

  11. Parameterization of daily solar global ultraviolet irradiation.

    PubMed

    Feister, U; Jäkel, E; Gericke, K

    2002-09-01

    Daily values of solar global ultraviolet (UV) B and UVA irradiation as well as erythemal irradiation have been parameterized to be estimated from pyranometer measurements of daily global and diffuse irradiation as well as from atmospheric column ozone. Data recorded at the Meteorological Observatory Potsdam (52 degrees N, 107 m asl) in Germany over the time period 1997-2000 have been used to derive sets of regression coefficients. The validation of the method against independent data sets of measured UV irradiation shows that the parameterization provides a gain of information for UVB, UVA and erythemal irradiation referring to their averages. A comparison between parameterized daily UV irradiation and independent values of UV irradiation measured at a mountain station in southern Germany (Meteorological Observatory Hohenpeissenberg at 48 degrees N, 977 m asl) indicates that the parameterization also holds even under completely different climatic conditions. On a long-term average (1953-2000), parameterized annual UV irradiation values are 15% and 21% higher for UVA and UVB, respectively, at Hohenpeissenberg than they are at Potsdam. Daily global and diffuse irradiation measured at 28 weather stations of the Deutscher Wetterdienst German Radiation Network and grid values of column ozone from the EPTOMS satellite experiment served as inputs to calculate the estimates of the spatial distribution of daily and annual values of UV irradiation across Germany. Using daily values of global and diffuse irradiation recorded at Potsdam since 1937 as well as atmospheric column ozone measured since 1964 at the same site, estimates of daily and annual UV irradiation have been derived for this site over the period from 1937 through 2000, which include the effects of changes in cloudiness, in aerosols and, at least for the period of ozone measurements from 1964 to 2000, in atmospheric ozone. 
It is shown that the extremely low ozone values observed mainly after the eruption of Mt

  12. Parameterization of air temperature in high temporal and spatial resolution from a combination of the SEVIRI and MODIS instruments

    NASA Astrophysics Data System (ADS)

    Zakšek, Klemen; Schroedter-Homscheidt, Marion

    Some applications, e.g. from traffic or energy management, require air temperature data in high spatial and temporal resolution at two metres height above the ground (T2m), sometimes in near-real-time. Thus, a parameterization based on boundary layer physical principles was developed that determines the air temperature from remote sensing data (SEVIRI data aboard MSG and MODIS data aboard the Terra and Aqua satellites). The method consists of two parts. First, a downscaling procedure from the SEVIRI pixel resolution of several kilometres to a one-kilometre spatial resolution is performed using a regression analysis between the land surface temperature (LST) and the normalized difference vegetation index (NDVI) acquired by the MODIS instrument. Second, the lapse rate between the LST and T2m is removed using an empirical parameterization that requires albedo, down-welling surface short-wave flux, relief characteristics and NDVI data. The method was successfully tested for Slovenia, the French region Franche-Comté and southern Germany for the period from May to December 2005, indicating that the parameterization is valid for Central Europe. It yields a root mean square deviation (RMSD) of 2.0 K during the daytime, with a bias of -0.01 K and a correlation coefficient of 0.95. This is promising, especially considering the high temporal (30 min) and spatial (1000 m) resolution of the results.
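The first (downscaling) step can be sketched as a simple LST-NDVI regression applied at the finer resolution; all values below are hypothetical:

```python
def linfit(x, y):
    # Ordinary least squares for y = a * x + b.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Step 1: regress coarse-pixel LST (SEVIRI scale) against coarse NDVI ...
coarse_ndvi = [0.2, 0.4, 0.6, 0.8]
coarse_lst = [305.0, 301.0, 297.0, 293.0]   # hypothetical values, in K
a, b = linfit(coarse_ndvi, coarse_lst)

# ... then apply the regression at 1-km MODIS NDVI pixels (downscaling).
fine_ndvi = [0.25, 0.55, 0.75]
fine_lst = [a * v + b for v in fine_ndvi]
```

The negative slope encodes the usual vegetation-cooling relationship that the regression exploits; the second (lapse-rate) step of the method is not sketched here.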

  13. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somerville, R.C.J.; Iacobellis, S.F.

    2005-03-18

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models.

  14. Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations

    DOE PAGES

    Liu, Gang; Liu, Yangang; Endo, Satoshi

    2013-02-01

    Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
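Surface flux schemes of the kind evaluated here generally share the bulk-aerodynamic form, differing mainly in how the transfer coefficients depend on stability and roughness length; a minimal sketch with hypothetical input values (not any specific WRF or GCM scheme):

```python
def bulk_fluxes(rho, cp, Lv, C_H, C_E, U, Ts, Ta, qs, qa):
    # Bulk-aerodynamic surface fluxes (W m^-2).
    SH = rho * cp * C_H * U * (Ts - Ta)   # sensible heat
    LH = rho * Lv * C_E * U * (qs - qa)   # latent heat
    bowen = SH / LH if LH != 0 else float("inf")
    return SH, LH, bowen

# Hypothetical daytime values: air density, heat capacity, latent heat of
# vaporization, neutral transfer coefficients, 5 m/s wind, warm moist surface.
SH, LH, bowen = bulk_fluxes(1.2, 1004.0, 2.5e6, 1.2e-3, 1.2e-3,
                            5.0, 302.0, 300.0, 0.012, 0.010)
```

The stability dependence discussed in the abstract enters through C_H and C_E, which real schemes make functions of the Obukhov length and roughness length rather than constants.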

  15. A self-consistent high- and low-frequency scattering model for cirrus

    NASA Astrophysics Data System (ADS)

    Baran, Anthony J.; Cotton, Richard; Havemann, Stephan; C.-Labonnote, Laurent; Marenco, Franco

    2013-05-01

    This paper demonstrates that an ensemble model of cirrus ice crystals that follows observed mass-dimensional power laws can predict the scattering properties of cirrus across the electromagnetic spectrum, without the need for tailor-made scattering models for particular regions of the spectrum. The ensemble model predicts a mass-dimensional power law of the form mass ∝ D² (where D is the maximum dimension of the ice crystal). This same mass-dimensional power law is applied across the spectrum to predict the particle size distribution (PSD) using a moment estimation parameterization of the PSD. The PSD parameterization predicts the original PSD, using in-situ estimates (bulk measurements) of the ice water content (IWC) and measurements of the in-cloud temperature; the measurements were obtained from a number of mid-latitude cirrus cases, which occurred over the U.K. during the winter and spring of 2010. It is demonstrated that the ensemble model predicts lidar backscatter estimates, at 0.355 μm, of the volume extinction coefficient and total solar optical depth to within current experimental uncertainties, hyperspectral brightness temperature measurements of the terrestrial region (800-1200 cm⁻¹) to generally well within ±1 K in the window regions, and the 35 GHz radar reflectivity to within ±2 dBZ. Therefore, for simulation of satellite radiances within general circulation models, and retrieval of cirrus properties, scattering models that are demonstrated to be physically consistent across the electromagnetic spectrum should be preferred.

  16. Prototype Mcs Parameterization for Global Climate Models

    NASA Astrophysics Data System (ADS)

    Moncrieff, M. W.

    2017-12-01

    Excellent progress has been made with observational, numerical and theoretical studies of MCS processes, but the parameterization of those processes remains in a dire state and is missing from GCMs. The perceived complexity of the distribution, type, and intensity of organized precipitation systems has arguably daunted attention and stifled the development of adequate parameterizations. TRMM observations imply links between convective organization and large-scale meteorological features in the tropics and subtropics that are inadequately treated by GCMs. This calls for improved physical-dynamical treatment of organized convection to enable the next generation of GCMs to reliably address a slew of challenges. The multiscale coherent structure parameterization (MCSP) paradigm is based on the fluid-dynamical concept of coherent structures in turbulent environments. The effects of vertical shear on MCS dynamics, implemented as 2nd-baroclinic convective heating and convective momentum transport, are based on Lagrangian conservation principles, nonlinear dynamical models, and self-similarity. The prototype MCS parameterization, a minimalist proof-of-concept, is applied in the NCAR Community Atmosphere Model, Version 5.5 (CAM 5.5). The MCSP generates convectively coupled tropical waves and large-scale precipitation features, notably in the Indo-Pacific warm-pool and Maritime Continent region, a center-of-action for weather and climate variability around the globe.

  17. Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions

    NASA Astrophysics Data System (ADS)

    Nelson, K.; Mechem, D. B.

    2014-12-01

    Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes that include stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement and test this parameterization into a regional forecast model (NRL COAMPS). Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus on both the relative performance of the three parameterizations and also on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
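For context, the KK2000 warm-rain rates discussed above are commonly cited in the following form (coefficients as usually quoted from Khairoutdinov and Kogan, 2000; verify against the original before use):

```python
def kk2000_autoconversion(qc, Nc):
    # Cloud-to-rain conversion rate (kg kg^-1 s^-1);
    # qc: cloud water mixing ratio (kg/kg), Nc: droplet number (cm^-3).
    return 1350.0 * qc**2.47 * Nc**-1.79

def kk2000_accretion(qc, qr):
    # Collection of cloud water by rain (kg kg^-1 s^-1);
    # qr: rain water mixing ratio (kg/kg).
    return 67.0 * (qc * qr) ** 1.15
```

The strong negative Nc exponent means higher droplet concentrations suppress drizzle formation, one reason the scheme behaves differently in stratocumulus than in shallow cumulus.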

  18. Anisotropic shear dispersion parameterization for ocean eddy transport

    NASA Astrophysics Data System (ADS)

    Reckinger, Scott; Fox-Kemper, Baylor

    2015-11-01

    The effects of mesoscale eddies are universally treated isotropically in global ocean general circulation models. However, observations and simulations demonstrate that the mesoscale processes that the parameterization is intended to represent, such as shear dispersion, are typified by strong anisotropy. We extend the Gent-McWilliams/Redi mesoscale eddy parameterization to include anisotropy and test the effects of varying levels of anisotropy in 1-degree Community Earth System Model (CESM) simulations. Anisotropy has many effects on the simulated climate, including a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, impacts on the meridional overturning circulation and ocean energy and tracer uptake, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. A process-based parameterization to approximate the effects of unresolved shear dispersion is also used to set the strength and direction of anisotropy. The shear dispersion parameterization agrees with drifter observations in the spatial distribution of diffusivity and with high-resolution model diagnoses in the distribution of eddy flux orientation.
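    The essence of the anisotropic extension can be sketched by replacing the scalar eddy diffusivity of the Gent-McWilliams/Redi closure with a symmetric 2x2 tensor whose major axis follows a chosen direction (e.g., the mean flow). This is a hypothetical illustration, not the CESM implementation:

```python
import numpy as np

def anisotropic_diffusivity(kappa_major, kappa_minor, theta):
    """Symmetric 2x2 horizontal eddy-diffusivity tensor.

    kappa_major/kappa_minor [m^2/s] are the diffusivities along and across
    the principal axis, which is rotated by angle theta [rad] from the x-axis.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])  # rotation to the principal frame
    return R @ np.diag([kappa_major, kappa_minor]) @ R.T
```

    Varying the ratio kappa_major/kappa_minor corresponds to the "varying levels of anisotropy" tested in the simulations; the isotropic case is recovered when the two diffusivities are equal.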

  19. A Simple Parameterization of 3 x 3 Magic Squares

    ERIC Educational Resources Information Center

    Trenkler, Gotz; Schmidt, Karsten; Trenkler, Dietrich

    2012-01-01

    In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.
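    A classical three-parameter form of this kind (due to Lucas; the article's exact parameterization may differ) expresses every 3 x 3 magic square through its center value c and two offsets a and b:

```python
def magic_square(a, b, c):
    """Lucas-style parameterization of 3x3 magic squares (magic constant 3c)."""
    return [[c + a,     c - a - b, c + b],
            [c - a + b, c,         c + a - b],
            [c - b,     c + a + b, c - a]]
```

    With a = -1, b = -3, c = 5 this reproduces the Luoshu; every row, column and diagonal sums to 3c by construction.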

  20. Comparison of multiplicity distributions to the negative binomial distribution in muon-proton scattering

    NASA Astrophysics Data System (ADS)

    Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badełek, B.; Beaufays, J.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I.; Blum, D.; Böhm, E.; de Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S.; Brück, H.; Calen, H.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Ftáčnik, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Grafström, P.; Grard, F.; Haas, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Jaffré, M.; Jachołkowska, A.; Janata, F.; Jancsó, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Korbel, V.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Malecki, P.; Manz, A.; Maselli, S.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pawlik, B.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pettinghale, J.; Pietrzyk, B.; Pietrzyk, U.; Pönsgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Sandacz, A.; Scheer, M.; Schlagböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Schneider, A.; Scholz, M.; Schröder, T.; Schultze, K.; Sloan, T.; Stier, H. E.; Studt, M.; Taylor, G. N.; Thénard, J. M.; Thompson, J. C.; de La Torre, A.; Toth, J.; Urban, L.; Urban, L.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.

    1987-09-01

    The multiplicity distributions of charged hadrons produced in deep inelastic muon-proton scattering at 280 GeV are analysed in various rapidity intervals, as a function of the total hadronic centre-of-mass energy W ranging from 4 to 20 GeV. Multiplicity distributions for the backward and forward hemispheres are also analysed separately. The data can be well parameterized by negative binomial distributions, extending their range of applicability to the case of lepton-proton scattering. The energy and rapidity dependence of the parameters is presented, and a smooth transition from the negative binomial distribution via Poissonian to the ordinary binomial is observed.
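    For context, the negative binomial multiplicity distribution referred to here has two parameters, the mean n̄ and a shape parameter k; the Poissonian arises in the limit k → ∞. A minimal sketch of the probability mass function:

```python
from math import lgamma, exp, log

def nb_multiplicity(n, nbar, k):
    """Negative binomial P(n) with mean nbar and shape k.

    Computed in log space via lgamma for numerical stability;
    approaches a Poisson distribution as k -> infinity.
    """
    return exp(lgamma(n + k) - lgamma(k) - lgamma(n + 1)
               + n * log(nbar / (nbar + k)) + k * log(k / (nbar + k)))
```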

  1. Exploring Alternative Parameterizations for Snowfall with Validation from Satellite and Terrestrial Radars

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.

    2009-01-01

    Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, demonstrating an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.

  2. Parameterized Cross Sections for Pion Production in Proton-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.

    2000-01-01

    An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and the output particle spectrum must be determined from a given input spectrum. In such cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data is reviewed and compared with a wide variety of cross section parameterizations. Parameterizations of neutral and charged pion cross sections are then provided that give a very accurate description of the experimental data. Lorentz-invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.

  3. Parameterizing Coefficients of a POD-Based Dynamical System

    NASA Technical Reports Server (NTRS)

    Kalb, Virginia L.

    2010-01-01

    A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation, followed by POD, followed by Galerkin projection to a dynamical system, has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven inadequate for successful prediction of flows. A key part of constructing a dynamical system that accurately represents the temporal evolution of the flow dynamics over a range of Reynolds numbers is understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not satisfactory even when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations utilize the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation.
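    The dynamical system produced by Galerkin projection of the flow equations onto POD modes is generically quadratic in the modal amplitudes. A schematic right-hand side, with placeholder coefficient arrays C, L, Q that the projection (and, in the present method, the Reynolds-number parameterization) would supply:

```python
import numpy as np

def rom_rhs(a, C, L, Q):
    """Quadratic Galerkin ROM right-hand side.

    da_i/dt = C_i + L_ij a_j + Q_ijk a_j a_k, where a is the vector of
    modal amplitudes and C, L, Q are projected coefficient arrays.
    """
    return C + L @ a + np.einsum('ijk,j,k->i', Q, a, a)
```

    In the parameterization described above, the entries of C, L and Q would become functions of the Reynolds number rather than fixed constants.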

  4. Implementation of a Parameterization Framework for Cybersecurity Laboratories

    DTIC Science & Technology

    2017-03-01

    The goal is to provide the designer of laboratory exercises with tools to parameterize labs for each student and to automate some aspects of the grading of laboratory exercises. Such support might assist the designer of laboratory exercises to, for example, verify that students performed the lab exercises.

  5. Parameterized examination in econometrics

    NASA Astrophysics Data System (ADS)

    Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi

    2018-01-01

    The paper presents a parameterization of basic types of exam questions in Econometrics. This algorithm is used to automate and facilitate the process of examination, assessment and self-preparation of a large number of students. The proposed parameterization of testing questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and product surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL - integration of third-party mathematical libraries, developing our own procedures from scratch, and wrapping our legacy math codes in order to modernize and reuse them.

  6. Data error and highly parameterized groundwater models

    USGS Publications Warehouse

    Hill, M.C.

    2008-01-01

    Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.

  7. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  8. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    NASA Astrophysics Data System (ADS)

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

    A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process, resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with the contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  9. Parameterizing time in electronic health record studies.

    PubMed

    Hripcsak, George; Albers, David J; Perotte, Adler

    2015-07-01

    Fields like nonlinear physics offer methods for analyzing time series, but many methods require that the time series be stationary, i.e., show no change in properties over time. Medicine is far from stationary, but the challenge may be ameliorated by reparameterizing time, because clinicians tend to measure patients more frequently when they are ill and their values are more likely to vary. We compared time parameterizations, measuring variability of rate of change and magnitude of change, and looking for homogeneity of bins of temporal separation between pairs of time points. We studied four common laboratory tests drawn from 25 years of electronic health records on 4 million patients. We found that sequence time (that is, simply counting the number of measurements from some start) produced more stationary time series, better explained the variation in values, and had more homogeneous bins than either traditional clock time or a recently proposed intermediate parameterization. Sequence time produced more accurate predictions in a single Gaussian process model experiment. Of the three parameterizations, sequence time appeared to produce the most stationary series, possibly because clinicians adjust their sampling to the acuity of the patient. Parameterizing by sequence time may be applicable to association and clustering experiments on electronic health record data. A limitation of this study is that laboratory data were derived from only one institution.
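    Sequence time, as described, simply replaces each timestamp by its rank in the patient's measurement sequence. A minimal sketch:

```python
def to_sequence_time(timestamps):
    """Map clock-time stamps to sequence time (0, 1, 2, ... by measurement order)."""
    order = sorted(range(len(timestamps)), key=lambda i: timestamps[i])
    seq = [0] * len(timestamps)
    for rank, i in enumerate(order):
        seq[i] = rank  # the i-th measurement gets its rank in time order
    return seq
```

    Irregularly sampled values become evenly spaced in this coordinate, which is what makes the resulting series more nearly stationary.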

  10. Radiative flux and forcing parameterization error in aerosol-free clear skies

    DOE PAGES

    Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...

    2015-07-03

    This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m2, while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.

  11. Climate and the equilibrium state of land surface hydrology parameterizations

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation for such climatic conditions serves as a measure of the land surface parameterization state under a given forcing. The equilibrium value of this variable for alternate parameterizations of land surface hydrology are determined as a function of climate and the sensitivity of the surface to shifts and changes in climatic forcing are estimated.

  12. Particle Identification in Nuclear Emulsion by Measuring Multiple Coulomb Scattering

    NASA Astrophysics Data System (ADS)

    Than Tint, Khin; Nakazawa, Kazuma; Yoshida, Junya; Kyaw Soe, Myint; Mishina, Akihiro; Kinbara, Shinji; Itoh, Hiroki; Endo, Yoko; Kobayashi, Hidetaka; E07 Collaboration

    2014-09-01

    We are developing particle identification techniques for singly charged particles such as Xi, proton, K and π by measuring multiple Coulomb scattering in nuclear emulsion. Nuclear emulsion is the best three-dimensional detector for the double strangeness (S = -2) nuclear system. We expect to accumulate about 10000 Xi-minus stop events which produce double lambda hypernuclei in the J-PARC E07 emulsion counter hybrid experiment. The purpose of this particle identification (PID) in nuclear emulsion is to purify Xi-minus stop events, which give information about the production probability of double hypernuclei and the branching ratio of decay modes. The amount of scattering, parameterized as the angular distribution and the second difference, is inversely proportional to the momentum of the particle. We produced several thousand charged-particle tracks in nuclear emulsion stacks via Geant4 simulation. In this talk, PID with several measuring methods for multiple scattering will be discussed by comparing simulation data with real Xi-minus stop events from the KEK-E373 experiment.
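    The inverse relation between scattering and momentum invoked above is commonly expressed through the Highland formula for the RMS projected multiple-scattering angle. A sketch of that standard relation (not the authors' emulsion-specific estimator):

```python
from math import sqrt, log

def highland_theta0(p_mev, beta, x_over_X0, z=1):
    """Highland RMS projected multiple-scattering angle [rad].

    p_mev: momentum [MeV/c]; beta: velocity v/c; x_over_X0: traversed
    thickness in radiation lengths; z: particle charge number.
    """
    return (13.6 / (beta * p_mev)) * abs(z) * sqrt(x_over_X0) \
        * (1.0 + 0.038 * log(x_over_X0))
```

    Because theta0 scales as 1/(p*beta), measuring the scattering angle distribution of a track constrains its momentum, which is the basis of the PID discussed.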

  13. Testing a common ice-ocean parameterization with laboratory experiments

    NASA Astrophysics Data System (ADS)

    McConnochie, C. D.; Kerr, R. C.

    2017-07-01

    Numerical models of ice-ocean interactions typically rely upon a parameterization for the transport of heat and salt to the ice face that has not been satisfactorily validated by observational or experimental data. We compare laboratory experiments of ice-saltwater interactions to a common numerical parameterization and find a significant disagreement in the dependence of the melt rate on the fluid velocity. We suggest a resolution to this disagreement based on a theoretical analysis of the boundary layer next to a vertical heated plate, which results in a threshold fluid velocity of approximately 4 cm/s at driving temperatures between 0.5 and 4°C, above which the form of the parameterization should be valid.

  14. A second-order Budyko-type parameterization of land surface hydrology

    NASA Technical Reports Server (NTRS)

    Andreou, S. A.; Eagleson, P. S.

    1982-01-01

    A simple, second-order parameterization of the water fluxes at a land surface, for use as the appropriate boundary condition in general circulation models of the global atmosphere, was developed. The derived parameterization incorporates the strong nonlinearities in the relationship between near-surface soil moisture and the evaporation, runoff and percolation fluxes. Based on the one-dimensional statistical-dynamical derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. The suggested parameterization is compared with other existing techniques and available measurements. A thermodynamic coupling is applied in order to obtain estimates of the surface ground temperature.

  15. Distributed parameterization of complex terrain

    NASA Astrophysics Data System (ADS)

    Band, Lawrence E.

    1991-03-01

    This paper addresses the incorporation of high-resolution topography, soils and vegetation information into the simulation of land surface processes in atmospheric circulation models (ACM). Recent work has concentrated on detailed representation of one-dimensional exchange processes, implicitly assuming surface homogeneity over the atmospheric grid cell. Two approaches that could be taken to incorporate heterogeneity are the integration of a surface model over distributed, discrete portions of the landscape, or over a distribution function of the model parameters. However, the computational burden and parameter-intensive nature of current land surface models in ACM limits the number of independent model runs and parameterizations that are feasible to accomplish for operational purposes. Therefore, simplifications in the representation of the vertical exchange processes may be necessary to incorporate the effects of landscape variability and horizontal divergence of energy and water. The strategy is then to trade off the detail and rigor of point exchange calculations for the ability to repeat those calculations over extensive, complex terrain. It is clear that the parameterization process for this approach must be automated, such that large spatial databases collected from remotely sensed images, digital terrain models and digital maps can be efficiently summarized and transformed into the appropriate parameter sets. Ideally, the landscape should be partitioned into surface units that maximize between-unit variance while minimizing within-unit variance, although it is recognized that some level of surface heterogeneity will be retained at all scales. Therefore, the geographic data processing necessary to automate the distributed parameterization should be able to estimate or predict parameter distributional information within each surface unit.

  16. Parameterized reduced-order models using hyper-dual numbers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
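    The hyper-dual idea mentioned above is that a single evaluation of f at a number with two nilpotent parts yields exact first and second derivatives, with no truncation or subtractive-cancellation error. A minimal sketch supporting only addition and multiplication (enough for polynomial test functions):

```python
class HyperDual:
    """Minimal hyper-dual number a + b*e1 + c*e2 + d*e1e2 with e1^2 = e2^2 = 0."""

    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)
    __rmul__ = __mul__


def derivatives(f, x):
    """Exact f(x), f'(x), f''(x) from one evaluation at x + e1 + e2."""
    r = f(HyperDual(x, 1.0, 1.0, 0.0))
    return r.a, r.b, r.d
```

    For f(x) = x^3 at x = 2, derivatives(f, 2.0) returns (8.0, 12.0, 12.0), matching f, f' = 3x^2 and f'' = 6x exactly.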

  17. Diffuse scattering in relaxor ferroelectrics: true three-dimensional mapping, experimental artefacts and modelling.

    PubMed

    Bosak, A; Chernyshov, D; Vakhrushev, Sergey; Krisch, M

    2012-01-01

    The available body of experimental data in terms of the relaxor-specific component of diffuse scattering is critically analysed and a collection of related models is reviewed; the sources of experimental artefacts and consequent failures of modelling efforts are enumerated. Furthermore, it is shown that the widely used concept of polar nanoregions as individual static entities is incompatible with the experimental diffuse scattering results. Based on the synchrotron diffuse scattering three-dimensional data set taken for the prototypical ferroelectric relaxor lead magnesium niobate-lead titanate (PMN-PT), a new parameterization of diffuse scattering in relaxors is presented and a simple phenomenological picture is proposed to explain the unusual properties of the relaxor behaviour. The model assumes a specific slowly changing displacement pattern, which is indirectly controlled by the low-energy acoustic phonons of the system. The model provides a qualitative but rather detailed explanation of temperature, pressure and electric-field dependence of diffuse neutron and X-ray scattering, as well as of the existence of a hierarchy in the relaxation times of these materials.

  18. Parameterization of planetary wave breaking in the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Garcia, Rolando R.

    1991-01-01

    A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate. The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.

  19. Parameterizations for ensemble Kalman inversion

    NASA Astrophysics Data System (ADS)

    Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.

    2018-05-01

    The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
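    The basic iterative form referred to above updates each ensemble member using empirical cross-covariances between parameters and model outputs. A schematic single step in the standard perturbed-observation form (our sketch, not the paper's exact variant):

```python
import numpy as np

def eki_update(U, G, y, Gamma, rng):
    """One ensemble Kalman inversion step.

    U: (J, d) ensemble of parameter vectors; G: forward map R^d -> R^m;
    y: observed data (m,); Gamma: (m, m) observation-noise covariance.
    """
    J = U.shape[0]
    GU = np.array([G(u) for u in U])                 # forward evaluations (J, m)
    Ubar, Gbar = U.mean(axis=0), GU.mean(axis=0)
    Cug = (U - Ubar).T @ (GU - Gbar) / J             # (d, m) cross-covariance
    Cgg = (GU - Gbar).T @ (GU - Gbar) / J            # (m, m) output covariance
    K = Cug @ np.linalg.inv(Cgg + Gamma)             # Kalman-type gain (d, m)
    # perturbed observations keep the ensemble from collapsing immediately
    Y = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    return U + (Y - GU) @ K.T
```

    Because the update is a linear combination of ensemble members, all iterates stay in the span of the initial ensemble, which is why the choice of parameterization of the unknown field matters so much.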

  20. Comprehensive assessment of parameterization methods for estimating clear-sky surface downward longwave radiation

    NASA Astrophysics Data System (ADS)

    Guo, Yamin; Cheng, Jie; Liang, Shunlin

    2018-02-01

    Surface downward longwave radiation (SDLR) is a key variable for calculating the earth's surface radiation budget. In this study, we evaluated seven widely used clear-sky parameterization methods using ground measurements collected from 71 globally distributed fluxnet sites. The Bayesian model averaging (BMA) method was also introduced to obtain a multi-model ensemble estimate. As a whole, the parameterization method of Carmona et al. (2014) performs the best, with an average bias, RMSE, and R2 of -0.11 W/m2, 20.35 W/m2, and 0.92, respectively, followed by the parameterization methods of Idso (1981), Prata (Q J R Meteorol Soc 122:1127-1151, 1996), Brunt (Q J R Meteorol Soc 58:389-420, 1932), and Brutsaert (Water Resour Res 11:742-744, 1975). The accuracy of the BMA is close to that of the parameterization method of Carmona et al. (2014) and comparable to that of the parameterization method of Idso (1981). The advantage of the BMA is that it achieves balanced results compared to the integrated single parameterization methods. To fully assess the performance of the parameterization methods, the effects of climate type, land cover, and surface elevation were also investigated. The five parameterization methods and the BMA all failed over land in tropical climates with high water vapor, and had poor results over forest, wetland, and ice. These methods achieved better results over desert, bare land, cropland, and grass, and had acceptable accuracies for sites at different elevations, except for the parameterization method of Carmona et al. (2014) over high-elevation sites. Thus, a method that can be successfully applied everywhere does not exist.
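    As a concrete example of the class of methods evaluated, the Brutsaert (1975) scheme estimates clear-sky atmospheric emissivity from screen-level air temperature and vapor pressure. A minimal sketch:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def sdlr_brutsaert(t_air_k, e_air_hpa):
    """Clear-sky SDLR [W/m^2] via Brutsaert (1975).

    Emissivity = 1.24 (e/T)^(1/7), with vapor pressure e in hPa and
    air temperature T in K; flux is emissivity * sigma * T^4.
    """
    eps = 1.24 * (e_air_hpa / t_air_k) ** (1.0 / 7.0)
    return eps * SIGMA * t_air_k ** 4
```

    The other schemes compared in the study differ mainly in how this effective emissivity is parameterized in temperature and humidity.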

  1. A Comparison of Parameterizations of Secondary Organic Aerosol Production: Global Budget and Spatiotemporal Variability

    NASA Astrophysics Data System (ADS)

    Liu, J.; Chen, Z.; Horowitz, L. W.; Carlton, A. M. G.; Fan, S.; Cheng, Y.; Ervens, B.; Fu, T. M.; He, C.; Tao, S.

    2014-12-01

    Secondary organic aerosols (SOA) have a profound influence on air quality and climate, but large uncertainties exist in modeling SOA on the global scale. In this study, five SOA parameterization schemes, including a two-product model (TPM), a volatility basis set (VBS) and three cloud SOA schemes (Ervens et al. (2008, 2014), Fu et al. (2008), and He et al. (2013)), are implemented in the global chemical transport model (MOZART-4). For each scheme, model simulations are conducted with identical boundary and initial conditions. The VBS scheme produces the highest global annual SOA production (close to 35 Tg·y-1), followed by the three cloud schemes (26-30 Tg·y-1) and the TPM (23 Tg·y-1). Though sharing a similar partitioning theory with the TPM scheme, the VBS approach simulates the chemical aging of multiple generations of VOC oxidation products, resulting in a much larger SOA source, particularly from aromatic species, over Europe, the Middle East and eastern America. The formation of SOA in the VBS, which represents the net partitioning of semi-volatile organic compounds from vapor to condensed phase, is highly sensitive to the aging and wet removal processes of vapor-phase organic compounds. The production of SOA from cloud processes (SOAcld) is constrained by the coincidence of liquid cloud water and water-soluble organic compounds. Therefore, all cloud schemes resolve a fairly similar spatial pattern over the tropical and mid-latitude continents. The spatiotemporal diversity among SOA parameterizations is largely driven by differences in precursor inputs. Therefore, a deeper understanding of the evolution, wet removal, and phase partitioning of semi-volatile organic compounds, particularly above remote land and oceanic areas, is critical to better constrain the global-scale distribution and related climate forcing of secondary organic aerosols.
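    The TPM mentioned above follows the Odum-style absorption-partitioning form, in which the SOA yield Y depends on the absorbing organic mass Mo through two lumped products. A sketch with placeholder stoichiometric yields and partitioning coefficients (illustrative values, not fitted constants from any scheme):

```python
def two_product_yield(M_o, alphas=(0.04, 0.17), Ks=(0.20, 0.004)):
    """Two-product SOA yield: Y = Mo * sum_i alpha_i K_i / (1 + K_i Mo).

    M_o: absorbing organic aerosol mass [ug/m^3]; alphas: mass-based
    stoichiometric yields; Ks: partitioning coefficients [m^3/ug].
    The alpha and K values here are placeholders for illustration.
    """
    return M_o * sum(a * K / (1.0 + K * M_o) for a, K in zip(alphas, Ks))
```

    The VBS generalizes this by binning products into a spectrum of volatilities and letting aging shift mass between bins, which is why it produces a larger SOA source than the TPM under the same precursor inputs.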

  2. Roy-Steiner-equation analysis of pion-nucleon scattering

    NASA Astrophysics Data System (ADS)

    Hoferichter, Martin; Ruiz de Elvira, Jacobo; Kubis, Bastian; Meißner, Ulf-G.

    2016-04-01

    We review the structure of Roy-Steiner equations for pion-nucleon scattering, the solution for the partial waves of the t-channel process ππ → N̄N, as well as the high-accuracy extraction of the pion-nucleon S-wave scattering lengths from data on pionic hydrogen and deuterium. We then proceed to construct solutions for the lowest partial waves of the s-channel process πN → πN and demonstrate that accurate solutions can be found if the scattering lengths are imposed as constraints. Detailed error estimates of all input quantities in the solution procedure are performed and explicit parameterizations for the resulting low-energy phase shifts as well as results for subthreshold parameters and higher threshold parameters are presented. Furthermore, we discuss the extraction of the pion-nucleon σ-term via the Cheng-Dashen low-energy theorem, including the role of isospin-breaking corrections, to obtain a precision determination consistent with all constraints from analyticity, unitarity, crossing symmetry, and pionic-atom data. We perform the matching to chiral perturbation theory in the subthreshold region and detail the consequences for the chiral convergence of the threshold parameters and the nucleon mass.

  3. A theory-based parameterization for heterogeneous ice nucleation and implications for the simulation of ice processes in atmospheric models

    NASA Astrophysics Data System (ADS)

    Savre, J.; Ekman, A. M. L.

    2015-05-01

    A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi-Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
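The integration strategy described in this record can be illustrated with a minimal sketch: a classical-nucleation-theory-style rate weighted by a contact-angle PDF and averaged with a low-discrepancy (Halton) sequence. The Gaussian θ PDF, the barrier constant, and all function names below are illustrative assumptions, not the actual Savre and Ekman parameterization.

```python
import math

def cnt_factor(theta):
    """Geometric compatibility factor f(theta) from classical nucleation
    theory; it reduces the homogeneous energy barrier on a substrate.
    f(0) = 0 (perfect nucleator), f(pi) = 1 (no help from the surface)."""
    c = math.cos(theta)
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def nucleation_rate(theta, dg_hom=20.0, prefactor=1.0):
    """Toy heterogeneous rate J = A * exp(-f(theta) * dG/kT); dg_hom lumps
    the homogeneous barrier over kT (illustrative, dimensionless units)."""
    return prefactor * math.exp(-cnt_factor(theta) * dg_hom)

def halton(i, base=2):
    """i-th element of the Halton low-discrepancy sequence in [0, 1)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_mean_rate(theta_lo, theta_hi, pdf, n=1024):
    """Quasi-Monte Carlo average of J(theta), weighted by the theta PDF."""
    num = den = 0.0
    for i in range(1, n + 1):
        theta = theta_lo + (theta_hi - theta_lo) * halton(i)
        w = pdf(theta)
        num += w * nucleation_rate(theta)
        den += w
    return num / den

# Illustrative Gaussian contact-angle PDF (radians).
pdf = lambda t: math.exp(-0.5 * ((t - 1.0) / 0.3) ** 2)
mean_J = qmc_mean_rate(0.2, math.pi - 0.2, pdf)
```

Removing the most efficient nuclei over time, as the record describes, would amount to progressively truncating the low-θ end of this PDF between calls.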

  4. A basal stress parameterization for modeling landfast ice

    NASA Astrophysics Data System (ADS)

    Lemieux, Jean-François; Tremblay, L. Bruno; Dupont, Frédéric; Plante, Mathieu; Smith, Gregory C.; Dumont, Dany

    2015-04-01

    Current large-scale sea ice models either represent the formation, maintenance and decay of coastal landfast ice very crudely or are unable to simulate them at all. We present a simple landfast ice parameterization representing the effect of grounded ice keels. This parameterization is based on bathymetry data and the mean ice thickness in a grid cell. It is easy to implement and can be used for two-thickness and multithickness category models. Two free parameters are used to determine the critical thickness required for large ice keels to reach the bottom and to calculate the basal stress associated with the weight of the ridge above hydrostatic balance. A sensitivity study was conducted and demonstrates that the parameter associated with the critical thickness has the largest influence on the simulated landfast ice area. A 6 year (2001-2007) simulation with a 20 km resolution sea ice model was performed. The simulated landfast ice areas for regions off the coast of Siberia and for the Beaufort Sea were calculated and compared with data from the National Ice Center. With optimal parameters, the basal stress parameterization leads to a slightly shorter landfast ice season but overall provides a realistic seasonal cycle of the landfast ice area in the East Siberian, Laptev and Beaufort Seas. However, in the Kara Sea, where ice arches between islands are key to the stability of the landfast ice, the parameterization consistently leads to an underestimation of the landfast area.
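The two-parameter structure this record describes (a critical grounding thickness, plus a basal stress proportional to the ridge weight above hydrostatic balance that vanishes in loose ice) can be sketched as follows. The functional forms and the values of `k1`, `k2` and `alpha_b` are illustrative assumptions, not the exact formulation of Lemieux et al.

```python
import math

def critical_thickness(water_depth, concentration, k1=8.0):
    """Mean grid-cell thickness above which the largest keels in the
    thickness distribution are assumed to reach the seabed. Illustrative
    form h_c = A * h_w / k1, with k1 one of the two free parameters."""
    return concentration * water_depth / k1

def basal_stress(mean_thickness, water_depth, concentration,
                 k1=8.0, k2=15.0, alpha_b=20.0):
    """Basal stress magnitude from grounded keels: zero unless the mean
    thickness exceeds the critical grounding thickness; k2 (the second
    free parameter) scales the weight of the ridge above hydrostatic
    balance, and the exponential damps the stress in low-concentration
    (mobile) ice. Units are illustrative."""
    h_c = critical_thickness(water_depth, concentration, k1)
    if mean_thickness <= h_c:
        return 0.0
    return k2 * (mean_thickness - h_c) * math.exp(-alpha_b * (1.0 - concentration))
```

In a sea ice model this term would be added to the momentum balance, opposing the ice velocity wherever keels are grounded.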

  5. Parameterization of Photon Tunneling with Application to Ice Cloud Optical Properties at Terrestrial Wavelengths

    NASA Astrophysics Data System (ADS)

    Mitchell, D. L.

    2006-12-01

    Sometimes deep physical insights can be gained through the comparison of two theories of light scattering. Comparing van de Hulst's anomalous diffraction approximation (ADA) with Mie theory yielded insights on the behavior of the photon tunneling process that resulted in the modified anomalous diffraction approximation (MADA). (Tunneling is the process by which radiation just beyond a particle's physical cross-section may undergo large-angle diffraction or absorption, contributing up to 40% of the absorption when wavelength and particle size are comparable.) Although this provided a means of parameterizing the tunneling process in terms of the real index of refraction and size parameter, it did not predict the efficiency of the tunneling process, for which an efficiency of 100% is predicted for spheres by Mie theory. This tunneling efficiency, Tf, depends on particle shape and ranges from 0 to 1.0, with 1.0 corresponding to spheres. Similarly, by comparing absorption efficiencies predicted by the Finite Difference Time Domain method (FDTD) with efficiencies predicted by MADA, Tf was determined for nine different ice particle shapes, including aggregates. This comparison confirmed that Tf is a strong function of ice crystal shape, including the aspect ratio when applicable. Tf was lowest (< 0.36) for aggregates and plates, and largest (> 0.9) for quasi-spherical shapes. A parameterization of Tf was developed in terms of (1) ice particle shape and (2) mean particle size for the large mode (D > 70 μm) of the ice particle size distribution. For the small mode, Tf is only a function of ice particle shape. When this Tf parameterization is used in MADA, absorption and extinction efficiency differences between MADA and FDTD are within 14% over the terrestrial wavelength range 3-100 μm for all size distributions and most crystal shapes likely to be found in cirrus clouds. Using hyperspectral radiances, it is demonstrated that Tf can be retrieved from ice clouds.

  6. Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach

    NASA Astrophysics Data System (ADS)

    Berloff, Pavel

    2018-07-01

    This work continues development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models and focuses on the classical double-gyre problem, in which the main dynamic eddy effects maintain eastward jet extension of the western boundary currents and its adjacent recirculation zones via eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood, and in this paper we, first, study it and, then, propose and test its novel parameterization. We start by decomposing the reference eddy-resolving flow solution into the large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies, and in the transient rectified eddy component, which consists of highly anisotropic ribbons of the opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and being responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to formulating the key eddy parameterization hypothesis: in an eddy-permitting model at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution. Such amplification is a simple and novel eddy parameterization framework implemented here in terms of local, deterministic flow roughening controlled by single

  7. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.

    PubMed

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed conduction velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
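The iterative conductivity-tuning idea can be sketched under the standard assumption that conduction velocity scales roughly with the square root of the bulk conductivity, so each iteration rescales by the squared velocity ratio. The `toy_cv` surrogate below stands in for a full monodomain/bidomain tissue simulation; it and the update rule are an illustrative sketch, not the paper's exact algorithm.

```python
def tune_conductivity(simulate_cv, v_target, g_init=0.1, tol=1e-3, max_iter=20):
    """Iteratively adjust a bulk conductivity g until the simulated
    conduction velocity matches v_target. Because, to leading order,
    CV ~ sqrt(g), the update g_new = g * (v_target / v_sim)**2 converges
    in very few iterations for a well-behaved model."""
    g = g_init
    for _ in range(max_iter):
        v = simulate_cv(g)
        if abs(v - v_target) / v_target < tol:
            break
        g *= (v_target / v) ** 2
    return g

# Toy surrogate obeying CV = c * sqrt(g); a real study would run a
# tissue-strand simulation here (units illustrative: cm/s vs. S/m).
toy_cv = lambda g: 60.0 * g ** 0.5
g_fit = tune_conductivity(toy_cv, v_target=70.0)
```

Splitting the resulting bulk conductivity into intra- and extracellular bidomain values then only requires the prescribed anisotropy ratios mentioned in the abstract.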

  8. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Donahue, Aaron S.; Caldwell, Peter M.

    2018-02-01

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role in the model solution.
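Sequential (Lie) splitting and its noncommutativity can be demonstrated with two toy processes: each process sees the state left by its predecessor, so swapping the call order changes the answer. The "condensation" and "radiation" rules below are deliberately simplified placeholders, not E3SM physics.

```python
def condensation(state, dt):
    """Toy 'microphysics': move any vapor above saturation into cloud."""
    excess = max(state["qv"] - state["qsat"], 0.0)
    state["qv"] -= excess
    state["qc"] += excess
    return state

def radiation(state, dt):
    """Toy 'radiation': cooling lowers the saturation threshold."""
    state["qsat"] *= (1.0 - 0.1 * dt)
    return state

def step(init, processes, dt=1.0):
    """Sequential splitting: call each process on the state updated by
    the previous one. The order of `processes` changes the result."""
    state = dict(init)
    for proc in processes:
        state = proc(state, dt)
    return state

init = {"qv": 1.2, "qc": 0.0, "qsat": 1.0}
a = step(init, [condensation, radiation])  # condense, then cool
b = step(init, [radiation, condensation])  # cool, then condense
# In ordering `a` the radiative cooling leaves new supersaturation for
# the next time step; in ordering `b` it is condensed immediately, so
# the two orderings produce different cloud water.
```

This is exactly the sense in which the abstract calls the coupling "noncommutative": neither ordering is wrong, but they integrate different effective equations.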

  9. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    NASA Astrophysics Data System (ADS)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

    A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out during the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, a STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation there. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to carry out a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to identify the parameterization schemes that optimally categorize STC structure, a verification using Cyclone Phase Space was performed. Again, the combination of parameterizations including the Tiedtke cumulus schemes was the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

  10. Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles

    PubMed Central

    Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.; Wall, Michael E.; Jackson, Colin J.; Sauter, Nicholas K.; Adams, Paul D.; Urzhumtsev, Alexandre; Fraser, James S.

    2015-01-01

    Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier’s equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls_as_xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophos­phodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. These methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis. PMID:26249347
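The ensemble calculation behind phenix.diffuse rests on Guinier's equation, which identifies the diffuse intensity at a scattering vector q with the variance of the structure factor over the ensemble of models: I_d(q) = ⟨|F(q)|²⟩ - |⟨F(q)⟩|². A minimal sketch with unit point scatterers (atomic form factors and crystal symmetry omitted; names are illustrative, not the phenix API):

```python
import cmath

def structure_factor(coords, q):
    """F(q) = sum_j exp(i q . r_j) for unit scatterers; a real
    calculation would weight each term by an atomic form factor."""
    return sum(cmath.exp(1j * sum(qc * rc for qc, rc in zip(q, r)))
               for r in coords)

def diffuse_intensity(ensemble, q):
    """Guinier's equation: diffuse intensity is the ensemble variance of
    the structure factor, I_d = <|F|^2> - |<F>|^2. Identical models
    (a perfectly rigid ensemble) therefore give zero diffuse signal."""
    Fs = [structure_factor(model, q) for model in ensemble]
    mean_F = sum(Fs) / len(Fs)
    mean_F2 = sum(abs(F) ** 2 for F in Fs) / len(Fs)
    return mean_F2 - abs(mean_F) ** 2

# Two-model 'ensemble' of a single atom displaced along x, probed at
# a q vector along the displacement direction.
ensemble = [[(0.0, 0.0, 0.0)], [(0.5, 0.0, 0.0)]]
I_d = diffuse_intensity(ensemble, (1.0, 0.0, 0.0))
```

The multi-model PDB files produced by phenix.tls_as_xyz play the role of `ensemble` here: different TLS partitionings yield different variances, hence the markedly dissimilar diffuse maps reported.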

  11. Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles

    DOE PAGES

    Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.; ...

    2015-07-28

    Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier's equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls_as_xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. In addition, these methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis.

  12. Cloud-radiation interactions and their parameterization in climate models

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models, held on 18-20 October 1993 in Camp Springs, Maryland, USA. The workshop was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth's surface, and to refine the process models which are used to develop advanced cloud parameterizations.

  13. Sensitivity of Pacific Cold Tongue and Double-ITCZ Bias to Convective Parameterization

    NASA Astrophysics Data System (ADS)

    Woelfle, M.; Bretherton, C. S.; Pritchard, M. S.; Yu, S.

    2016-12-01

    Many global climate models struggle to accurately simulate annual mean precipitation and sea surface temperature (SST) fields in the tropical Pacific basin. Precipitation biases are dominated by the double intertropical convergence zone (ITCZ) bias, in which models exhibit precipitation maxima straddling the equator while only a single Northern Hemispheric maximum exists in observations. The major SST bias is the enhancement of the equatorial cold tongue. A series of coupled model simulations are used to investigate the sensitivity of the bias development to convective parameterization. Model components are initialized independently prior to coupling to allow analysis of the transient response of the system directly following coupling. These experiments show precipitation and SST patterns to be highly sensitive to convective parameterization. Simulations in which the deep convective parameterization was disabled, forcing all convection to be resolved by the shallow convection parameterization, showed a degradation in both the cold tongue and double-ITCZ biases as precipitation became focused into off-equatorial regions of local SST maxima. Simulations using superparameterization in place of traditional cloud parameterizations showed a reduced cold tongue bias at the expense of additional precipitation biases. The equatorial SST responses to changes in convective parameterization are driven by changes in near-equatorial zonal wind stress. The sensitivity of convection to SST is important in determining the precipitation and wind stress fields. However, differences in convective momentum transport also play a role. While no significant improvement in the double-ITCZ is seen in these simulations, the system's sensitivity to these changes reaffirms that improved convective parameterizations may provide an avenue for improving simulations of tropical Pacific precipitation and SST.

  14. Accuracy of parameterized proton range models; A comparison

    NASA Astrophysics Data System (ADS)

    Pettersen, H. E. S.; Chaar, M.; Meric, I.; Odland, O. H.; Sølie, J. R.; Röhrich, D.

    2018-03-01

    An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four different parameterizations models for proton range in water: Two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables. In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy loss-curve is best reproduced with the differentiated Bragg-Kleeman equation.
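One of the analytical range-energy models derived from the Bethe equation is the Bragg-Kleeman rule, R = αE^p; for protons in water, α ≈ 0.0022 cm·MeV^-p and p ≈ 1.77 are commonly quoted values. A minimal sketch, including the differentiated (stopping-power) form mentioned in the conclusion; the exact coefficients used in the paper's comparison may differ.

```python
def range_water(E, alpha=0.0022, p=1.77):
    """Bragg-Kleeman range-energy rule R = alpha * E**p, approximating
    the CSDA range of protons in water (R in cm, E in MeV)."""
    return alpha * E ** p

def energy_from_range(R, alpha=0.0022, p=1.77):
    """Inverse relation: the initial energy needed to reach depth R."""
    return (R / alpha) ** (1.0 / p)

def stopping_power(E, alpha=0.0022, p=1.77):
    """Differentiated Bragg-Kleeman form: since dR/dE = p*alpha*E**(p-1),
    the stopping power is S = -dE/dx = E**(1-p) / (p * alpha)."""
    return E ** (1.0 - p) / (p * alpha)

R160 = range_water(160.0)  # range at a typical therapeutic energy
```

For 160 MeV this evaluates to roughly 17.5 cm, consistent with tabulated CSDA ranges in water; spline interpolation of range-energy tables, as the abstract notes, reproduces such tables more accurately still.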

  15. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations

    PubMed Central

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2014-01-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed conduction velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios. PMID:24729986

  16. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    DOE PAGES

    Donahue, Aaron S.; Caldwell, Peter M.

    2018-02-02

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role in the model solution.

  17. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donahue, Aaron S.; Caldwell, Peter M.

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role in the model solution.

  18. [Discussion of scattering in THz time domain spectrum tests].

    PubMed

    Yan, Fang; Zhang, Zhao-hui; Zhao, Xiao-yan; Su, Hai-xia; Li, Zhi; Zhang, Han

    2014-06-01

    Extracting the absorption spectrum of a sample with THz-TDS is an important branch of THz applications. THz radiation scatters from sample particles, producing a pronounced baseline that increases with frequency in the absorption spectrum. This baseline degrades measurement accuracy by obscuring the height and shape of spectral features, so it should be removed in order to eliminate the effects of scattering. In the present paper, we investigate the causes of such baselines, review several scatter-mitigation methods, and summarize directions for future research. To validate these methods, we designed a series of experiments comparing the computational accuracy of molar concentration. The results indicate that the accuracy of molar concentration estimates can be improved, which can serve as the basis of quantitative analysis in further research. Finally, drawing on the full set of experimental results, we present further research directions for removing scattering effects from THz absorption spectra.
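Baseline removal of the kind discussed in this record can be illustrated with a deliberately simple scheme: fit a line to a spectrum whose scattering background rises with frequency, then subtract it. Real workflows fit only off-peak regions or use iterative (e.g. asymmetric least squares) methods to avoid the peak biasing the fit; this sketch with synthetic data is illustrative only.

```python
import math

def linear_baseline(x, y):
    """Closed-form least-squares line y = a + b*x, used here as a crude
    model of the frequency-increasing scattering baseline."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def remove_baseline(x, y):
    """Subtract the fitted line so absorption features sit on a flat
    background. Note: fitting through the peak biases the line slightly;
    production methods exclude peak regions from the fit."""
    a, b = linear_baseline(x, y)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Synthetic spectrum: rising scattering baseline plus one Gaussian peak.
x = [0.1 * i for i in range(100)]
peak = lambda t: math.exp(-((t - 5.0) / 0.3) ** 2)
y = [0.2 * t + 0.05 + peak(t) for t in x]
flat = remove_baseline(x, y)
```

After subtraction the residual spectrum is slope-free by construction, which is the property that makes peak heights comparable across samples.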

  19. Analytic Scattering and Refraction Models for Exoplanet Transit Spectra

    NASA Astrophysics Data System (ADS)

    Robinson, Tyler D.; Fortney, Jonathan J.; Hubbard, William B.

    2017-12-01

    Observations of exoplanet transit spectra are essential to understanding the physics and chemistry of distant worlds. The effects of opacity sources and many physical processes combine to set the shape of a transit spectrum. Two such key processes—refraction and cloud and/or haze forward-scattering—have seen substantial recent study. However, models of these processes are typically complex, which prevents their incorporation into observational analyses and standard transit spectrum tools. In this work, we develop analytic expressions that allow for the efficient parameterization of forward-scattering and refraction effects in transit spectra. We derive an effective slant optical depth that includes a correction for forward-scattered light, and present an analytic form of this correction. We validate our correction against a full-physics transit spectrum model that includes scattering, and we explore the extent to which the omission of forward-scattering effects may bias models. Also, we verify a common analytic expression for the location of a refractive boundary, which we express in terms of the maximum pressure probed in a transit spectrum. This expression is designed to be easily incorporated into existing tools, and we discuss how the detection of a refractive boundary could help indicate the background atmospheric composition by constraining the bulk refractivity of the atmosphere. Finally, we show that opacity from Rayleigh scattering and collision-induced absorption will outweigh the effects of refraction for Jupiter-like atmospheres whose equilibrium temperatures are above 400-500 K.

  20. Scattering volume in the collective Thomson scattering measurement using high power gyrotron in the LHD

    NASA Astrophysics Data System (ADS)

    Kubo, S.; Nishiura, M.; Tanaka, K.; Moseev, D.; Ogasawara, S.; Shimozuma, T.; Yoshimura, Y.; Igami, H.; Takahashi, H.; Tsujimura, T. I.; Makino, R.

    2016-06-01

    A high-power gyrotron prepared for electron cyclotron heating at 77 GHz has been used for a collective Thomson scattering (CTS) study in the LHD. Because fundamental and/or second-harmonic resonances could not be removed from the viewing line of sight, the background ECE was subtracted from the measured signal by modulating the probe beam power from the gyrotron. The scattering component was successfully separated from the background, taking into account the difference in response time between the high-energy and bulk components. A further separation was attempted by rapidly scanning the viewing beam across the probing beam. The intensity of the scattered spectrum corresponding to the bulk and high-energy components was found to be almost proportional to the calculated scattering volume in the relatively low-density region, while an appreciable background scattered component remained even outside the scattering volume in some high-density cases. The ray-tracing code TRAVIS is used to estimate the change in the scattering volume due to the deflection of the probing and receiving beams.

  1. Topside Electron Density Representations for Middle and High Latitudes: A Topside Parameterization for E-CHAIM Based On the NeQuick

    NASA Astrophysics Data System (ADS)

    Themens, David R.; Jayachandran, P. T.; Bilitza, Dieter; Erickson, Philip J.; Häggström, Ingemar; Lyashenko, Mykhaylo V.; Reid, Benjamin; Varney, Roger H.; Pustovalova, Ljubov

    2018-02-01

In this study, we present a topside model representation to be used by the Empirical Canadian High Arctic Ionospheric Model (E-CHAIM). In the process, we also present a comprehensive evaluation of the NeQuick's, and by extension the International Reference Ionosphere's, topside electron density model for middle and high latitudes in the Northern Hemisphere. Using data gathered from all available incoherent scatter radars, topside sounders, and Global Navigation Satellite System radio occultation satellites, we show that the current NeQuick parameterization suboptimally represents the shape of the topside electron density profile at these latitudes and performs poorly in representing seasonal and solar cycle variations of the topside scale thickness. Despite this, the simple, one-variable NeQuick model is a powerful tool for modeling the topside ionosphere. By refitting the parameters that define the maximum topside scale thickness and the rate of increase of the scale height within the NeQuick topside model function, r and g, respectively, and refitting the model's parameterization of the scale height at the F-region peak, H0, we find considerable improvement in the NeQuick's ability to represent the topside shape and behavior. Building on these results, we present a new topside model extension of E-CHAIM based on the revised NeQuick function. Overall, root-mean-square errors in topside electron density are improved over the traditional International Reference Ionosphere/NeQuick topside by 31% for the new NeQuick parameterization and by 36% for the newly proposed E-CHAIM topside.
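For readers unfamiliar with the NeQuick topside function being refit here, its commonly published semi-Epstein form can be sketched as follows. The default g ≈ 0.125 and r = 100 are the classic NeQuick values; treat this as a reference sketch to check against the NeQuick literature, not the refitted parameterization of the paper:

```python
import math

def nequick_topside(h, NmF2, hmF2, H0, g=0.125, r=100.0):
    """Semi-Epstein NeQuick topside profile (commonly published form):
        Ne(h) = 4*NmF2*exp(z) / (1 + exp(z))^2,
    where z = (h - hmF2)/H and the scale height H grows with altitude:
        H = H0 * [1 + r*g*(h - hmF2) / (r*H0 + g*(h - hmF2))].
    g and r control the rate of increase and the maximum scale thickness,
    the two parameters the abstract refits. Values are illustrative."""
    dh = h - hmF2
    H = H0 * (1.0 + r * g * dh / (r * H0 + g * dh))
    z = dh / H
    ez = math.exp(z)
    return 4.0 * NmF2 * ez / (1.0 + ez) ** 2
```

At the F2 peak (h = hmF2) the expression reduces to exactly NmF2, and the density decays monotonically above it, which is the single-variable behavior the abstract refers to.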

  2. Calcium ions in aqueous solutions: Accurate force field description aided by ab initio molecular dynamics and neutron scattering

    NASA Astrophysics Data System (ADS)

    Martinek, Tomas; Duboué-Dijon, Elise; Timr, Štěpán; Mason, Philip E.; Baxová, Katarina; Fischer, Henry E.; Schmidt, Burkhard; Pluhařová, Eva; Jungwirth, Pavel

    2018-06-01

    We present a combination of force field and ab initio molecular dynamics simulations together with neutron scattering experiments with isotopic substitution that aim at characterizing ion hydration and pairing in aqueous calcium chloride and formate/acetate solutions. Benchmarking against neutron scattering data on concentrated solutions together with ion pairing free energy profiles from ab initio molecular dynamics allows us to develop an accurate calcium force field which accounts in a mean-field way for electronic polarization effects via charge rescaling. This refined calcium parameterization is directly usable for standard molecular dynamics simulations of processes involving this key biological signaling ion.

  3. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

Radiation is a key process in numerical models of the atmosphere. The problem is well understood and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine-tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  4. Advancing X-ray scattering metrology using inverse genetic algorithms.

    PubMed

    Hannon, Adam F; Sunday, Daniel F; Windover, Donald; Kline, R Joseph

    2016-01-01

    We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real space structure in periodic gratings measured using critical dimension small angle X-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real space structure of our nanogratings. The study shows that for X-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.
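As a rough illustration of the approach, the sketch below fits a toy "diffraction" model with SciPy's `differential_evolution` using a mean-absolute-error-of-logs objective, echoing the study's pairing of a global optimizer with a log-space goodness-of-fit criterion. The intensity model (a squared sinc form factor with hypothetical `width`/`height` parameters) is invented for this example and is not the paper's electron-density simulator:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
q = np.linspace(0.1, 2.0, 80)  # scattering-vector grid (arbitrary units)

def intensity(q, width, height):
    # Toy stand-in for a CD-SAXS simulator: squared form factor of a
    # rectangular line of the given width and height (both hypothetical).
    return (height * np.sinc(q * width / (2.0 * np.pi))) ** 2

true_params = (5.0, 3.0)
data = intensity(q, *true_params) * (1.0 + 0.01 * rng.standard_normal(q.size))

def objective(p):
    # Mean-absolute error on log intensities; the study found a log-based
    # objective most efficient (paired there with CMA-ES, here with DE).
    return np.mean(np.abs(np.log(intensity(q, *p) + 1e-12)
                          - np.log(np.abs(data) + 1e-12)))

res = differential_evolution(objective, bounds=[(1.0, 10.0), (1.0, 10.0)], seed=1)
```

Working in log space weights the deep minima between diffraction orders heavily, which helps pin down structural parameters such as the line width.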

  5. Parameterized spectral distributions for meson production in proton-proton collisions

    NASA Technical Reports Server (NTRS)

    Schneider, John P.; Norbury, John W.; Cucinotta, Francis A.

    1995-01-01

Accurate semiempirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations, which depend on both the outgoing meson parallel momentum and the incident proton kinetic energy, can be reduced to very simple analytical formulas suitable for cosmic-ray transport through spacecraft walls, interstellar space, the atmosphere, and meteorites.

  6. Strange resonance poles from Kπ scattering below 1.8 GeV

    NASA Astrophysics Data System (ADS)

    Pelaez, J. R.; Rodas, A.; Ruiz de Elvira, J.

    2017-02-01

In this work we present a determination of the mass, width, and coupling of the resonances that appear in kaon-pion scattering below 1.8 GeV. These are: the much debated scalar κ-meson, nowadays known as K_0^*(800), the scalar K_0^*(1430), the K^*(892) and K^*(1410) vectors, the spin-two K_2^*(1430), as well as the spin-three K_3^*(1780). The parameters are determined from the pole associated with each resonance by means of an analytic continuation of the Kπ scattering amplitudes obtained in a recent and precise data analysis constrained with dispersion relations, which were not well satisfied in previous analyses. This analytic continuation is performed by means of Padé approximants, thus avoiding a particular model for the pole parameterization. We also pay particular attention to the evaluation of uncertainties.
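The model-independence of the Padé method can be illustrated with a minimal [N/1] example: the pole of the approximant built from Taylor coefficients a_0 … a_{N+1} sits at z = a_N / a_{N+1}. The test function and pole position below are invented for illustration; the paper's amplitudes come from a dispersive data analysis, not a closed form:

```python
import numpy as np

def pade_n1_pole(coeffs):
    """Pole of an [N/1] Pade approximant built from Taylor coefficients
    a_0..a_{N+1}: the denominator 1 - (a_{N+1}/a_N) z vanishes at
    z = a_N / a_{N+1}. A one-pole sketch of the continuation method."""
    return coeffs[-2] / coeffs[-1]

# Stand-in amplitude with a known complex pole at z0 (illustrative value):
# f(z) = 1/(z0 - z) has Taylor coefficients a_n = z0^-(n+1).
z0 = 0.895 - 0.024j
a = np.array([z0 ** (-(n + 1)) for n in range(6)])
pole = pade_n1_pole(a)
```

For a function with a single dominant pole, the [N/1] pole estimate converges to the true pole position without ever assuming a Breit-Wigner or any other resonance shape, which is the point of the approach.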

  7. Closed-loop multiple-scattering imaging with sparse seismic measurements

    NASA Astrophysics Data System (ADS)

    Berkhout, A. J. Guus

    2018-03-01

    In the theoretical situation of noise-free, complete data volumes (`perfect data'), seismic data matrices are fully filled and multiple-scattering operators have the minimum-phase property. Perfect data allow direct inversion methods to be successful in removing surface and internal multiple scattering. Moreover, under these perfect data conditions direct source wavefields realize complete illumination (no irrecoverable shadow zones) and, therefore, primary reflections (first-order response) can provide us with the complete seismic image. However, in practice seismic measurements always contain noise and we never have complete data volumes at our disposal. We actually deal with sparse data matrices that cannot be directly inverted. The message of this paper is that in practice multiple scattering (including source ghosting) must not be removed but must be utilized. It is explained that in the real world we badly need multiple scattering to fill the illumination gaps in the subsurface. It is also explained that the proposed multiple-scattering imaging algorithm gives us the opportunity to decompose both the image and the wavefields into order-based constituents, making the multiple scattering extension easy to apply. Last but not least, the algorithm allows us to use the minimum-phase property to validate and improve images in an objective way.

  8. Subgrid-scale parameterization and low-frequency variability: a response theory approach

    NASA Astrophysics Data System (ADS)

    Demaeyer, Jonathan; Vannitsem, Stéphane

    2016-04-01

Weather and climate models are limited in the range of spatial and temporal scales they can resolve. However, due to the huge space- and time-scale ranges involved in Earth System dynamics, the effects of many sub-grid processes must be parameterized. These parameterizations have an impact on forecasts and projections, and they can also affect the low-frequency variability present in the system (such as that associated with ENSO or the NAO). An important question is therefore what the impact of stochastic parameterizations is on the low-frequency variability generated by the system and its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on Ruelle's response theory, proposed in Wouters and Lucarini (2012). We test this approach in the context of a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), for which a part of the atmospheric modes is considered as unresolved. A natural separation of the phase space into a slow invariant set and its fast complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, fluctuation and long-memory terms. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach to scale separation opens new avenues for subgrid-scale parameterizations in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03003.

  9. Collaborative Project. 3D Radiative Transfer Parameterization Over Mountains/Snow for High-Resolution Climate Models. Fast physics and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liou, Kuo-Nan

    2016-02-09

Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) development of an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions; and (2) a novel stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains, providing understanding of and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and coupled mountain-mountain flux. “Exact” 3D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer program readily available in climate models. Subsequently, parameterizations of the deviations of 3D from PP results for the five flux components are carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for the flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme included in the WRF physics package. Incorporating this 3D parameterization program, we conducted WRF and CCSM4 simulations to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transition, and the interannual variability of snowmelt, cloud cover, and precipitation over the Western United States, presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS
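The regression step for the 3D-minus-PP flux deviations might look like the following sketch. The predictor names follow the abstract (solar incident angle, sky view factor, terrain configuration factor), but the synthetic data and coefficients are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Hypothetical topographic predictors per grid cell:
mu = rng.uniform(0.1, 1.0, n)    # cosine of the solar incident angle
svf = rng.uniform(0.5, 1.0, n)   # sky view factor
tcf = rng.uniform(0.0, 0.3, n)   # terrain configuration factor

# Synthetic "3D minus PP" flux deviation with illustrative coefficients
# plus a small noise term standing in for unexplained variability.
dev = 0.8 * mu - 0.5 * svf + 1.2 * tcf + 0.02 * rng.standard_normal(n)

# Ordinary least squares: dev ~ b0 + b1*mu + b2*svf + b3*tcf
X = np.column_stack([np.ones(n), mu, svf, tcf])
beta, *_ = np.linalg.lstsq(X, dev, rcond=None)
```

One such regression per flux component (five in the study) yields a cheap correction that can be evaluated inside a climate model without rerunning the Monte Carlo.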

  10. Scattering of In-Plane Waves by Elastic Wedges

    NASA Astrophysics Data System (ADS)

    Mohammadi, K.; Asimaki, D.; Fradkin, L.

    2014-12-01

The scattering of seismic waves by elastic wedges has been a topic of interest in seismology and geophysics for many decades. Analytical, semi-analytical, experimental and numerical studies on idealized wedges have provided insight into the seismic behavior of continental margins, mountain roots and crustal discontinuities. Published results, however, have almost exclusively focused on incident Rayleigh waves and out-of-plane body (SH) waves. Complementing the existing body of work, we here present results from our study on the response of elastic wedges to incident P or SV waves, an idealized problem that can provide valuable insight to the understanding and parameterization of topographic amplification of seismic ground motion. We first show our earlier work on explicit finite difference simulations of SV-wave scattering by elastic wedges over a wide range of internal angles. We next present a semi-analytical solution that we developed using the approach proposed by Gautesen, to describe the scattered wavefield in the immediate vicinity of the wedge's tip (near-field). We use the semi-analytical solution to validate the numerical analyses, and improve resolution of the amplification factor at the wedge vertex that spikes when the internal wedge angle approaches the critical angle of incidence.

  11. Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2015-12-01

Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this, turbulence and convection in the atmosphere have to be parameterized - i.e. equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently a variety of different models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem, however, is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, which not only is itself a multi-scale parameterization but is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.
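The ED and MF parts described above combine in a single standard flux decomposition, sketched here with purely illustrative inputs (K, M, and the plume and mean values are placeholders, not taken from any specific model):

```python
def edmf_flux(dphi_dz, K, M, phi_updraft, phi_mean):
    """Eddy-Diffusivity/Mass-Flux decomposition of a turbulent flux:
        w'phi' = -K * dphi/dz            (local, small-scale ED part)
                 + M * (phi_u - phi_bar)  (non-local MF plume part)
    with K an eddy diffusivity, M the plume mass flux, phi_u the updraft
    value and phi_bar the grid-mean value. Standard EDMF form; inputs
    here are illustrative."""
    return -K * dphi_dz + M * (phi_updraft - phi_mean)
```

The MF term lets the scheme transport heat or moisture up a mean gradient (counter-gradient transport by organized plumes), which a pure eddy-diffusivity closure cannot represent.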

  12. Impact of Apex Model parameterization strategy on estimated benefit of conservation practices

    USDA-ARS?s Scientific Manuscript database

    Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on clay pan soils were developed with the objectives, 1. Evaluate model performance of three parameterization strategies on a validation watershed; and 2. Compare predictions of water quality benefi...

  13. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    NASA Astrophysics Data System (ADS)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of
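The normalized mean bias factor used above as the accuracy indicator is commonly defined in a symmetric, two-branch form; a sketch under that assumption (verify against the paper's exact definition before reuse):

```python
import numpy as np

def nmbf(modeled, observed):
    """Normalized mean bias factor in the commonly used symmetric form:
        NMBF = Mbar/Obar - 1   if Mbar >= Obar  (overprediction)
        NMBF = 1 - Obar/Mbar   otherwise        (underprediction)
    so that a factor-of-two over- and under-prediction give +1 and -1.
    Assumed definition; check the study's formulation."""
    m, o = float(np.mean(modeled)), float(np.mean(observed))
    return m / o - 1.0 if m >= o else 1.0 - o / m
```

Unlike a simple mean bias, this metric treats over- and underprediction symmetrically, which is why it is useful for ranking deposition-velocity parameterizations against observations.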

  14. Probabilistic Determination of Green Infrastructure Pollutant Removal Rates from the International Stormwater BMP Database

    NASA Astrophysics Data System (ADS)

    Gilliom, R.; Hogue, T. S.; McCray, J. E.

    2017-12-01

There is a need for improved parameterization of stormwater best management practice (BMP) performance estimates to improve modeling of urban hydrology, planning and design of green infrastructure projects, and water quality crediting for stormwater management. Percent removal is commonly used to estimate BMP pollutant removal efficiency, but there is general agreement that this approach has significant uncertainties and is easily affected by site-specific factors. Additionally, some fraction of monitored BMPs have negative percent removal, so it is important to understand the probability that a BMP will provide the desired water quality function versus exacerbating water quality problems. The widely used k-C* equation has been shown to provide a more adaptable and accurate method to model BMP contaminant attenuation, and previous work has begun to evaluate the strengths and weaknesses of the k-C* method. However, no systematic method exists for obtaining the first-order removal rate constants needed to use the k-C* equation for stormwater BMPs; thus there is minimal application of the method. The current research analyzes existing water quality data in the International Stormwater BMP Database to provide screening-level parameterization of the k-C* equation for selected BMP types and analysis of factors that skew the distribution of efficiency estimates from the database. Results illustrate that while certain BMPs are more likely to provide desired contaminant removal than others, site- and design-specific factors strongly influence performance. For example, bioretention systems show both the highest and lowest removal rates of dissolved copper, total phosphorus, and total nitrogen. Exploration and discussion of this and other findings will inform the application of the probabilistic pollutant removal rate constants. Though data limitations exist, this research will facilitate improved accuracy of BMP modeling and ultimately aid decision-making for stormwater quality
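The k-C* equation referenced above is usually written as a first-order areal removal model; a minimal sketch follows, with all numerical values illustrative rather than taken from the BMP Database analysis:

```python
import math

def kcstar_outflow(c_in, c_star, k, q):
    """First-order k-C* model for BMP/wetland pollutant attenuation:
        C_out = C* + (C_in - C*) * exp(-k / q)
    with C* a background (irreducible) concentration, k a first-order
    areal removal rate constant (m/yr), and q the hydraulic loading rate
    Q/A (m/yr). All values below are illustrative only."""
    return c_star + (c_in - c_star) * math.exp(-k / q)
```

Unlike a fixed percent removal, this form predicts outflow concentrations that asymptote to the background C* rather than to zero, which is why it handles low influent concentrations (and apparent "negative removal") more gracefully.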

  15. Investigating the Sensitivity of Nucleation Parameterization on Ice Growth

    NASA Astrophysics Data System (ADS)

    Gaudet, L.; Sulia, K. J.

    2017-12-01

The accurate prediction of precipitation from lake-effect snow events associated with the Great Lakes region depends on the parameterization of thermodynamic and microphysical processes, including the formation and subsequent growth of frozen hydrometeors. More specifically, the formation of ice hydrometeors has been represented through varying forms of ice nucleation parameterizations considering the different nucleation modes (e.g., deposition, condensation-freezing, homogeneous). These parameterizations have been developed from in-situ measurements and laboratory observations. A suite of nucleation parameterizations consisting of those published in Meyers et al. (1992) and DeMott et al. (2010), as well as varying ice nuclei data sources, are coupled with the Adaptive Habit Model (AHM, Harrington et al. 2013), a microphysics module where ice crystal aspect ratio and density are predicted and evolve in time. Simulations are run with the AHM, which is implemented in the Weather Research and Forecasting (WRF) model, to investigate the effect of ice nucleation parameterization on the non-spherical growth and evolution of ice crystals and the subsequent effects on liquid-ice cloud-phase partitioning. Specific lake-effect storms that were observed during the Ontario Winter Lake-Effect Systems (OWLeS) field campaign (Kristovich et al. 2017) are examined to elucidate this potential microphysical effect. Analysis of these modeled events is aided by dual-polarization radar data from the WSR-88D in Montague, New York (KTYX). This enables a comparison of the modeled and observed polarimetric and microphysical profiles of the lake-effect clouds, which involves investigating signatures of reflectivity, specific differential phase, correlation coefficient, and differential reflectivity.
Microphysical features of lake-effect bands, such as ice, snow, and liquid mixing ratios, ice crystal aspect ratio, and ice density are analyzed to understand signatures in the aforementioned modeled
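Of the nucleation schemes named above, the Meyers et al. (1992) deposition/condensation-freezing formula is simple enough to sketch directly; the constants are the commonly quoted ones and should be checked against the original paper before use:

```python
import math

def meyers_1992_in(si_percent):
    """Meyers et al. (1992) deposition/condensation-freezing nucleation:
        N_IN = exp(-0.639 + 0.1296 * S_i)   [ice nuclei per liter]
    with S_i the supersaturation with respect to ice, in percent.
    Constants as commonly quoted in the literature."""
    return math.exp(-0.639 + 0.1296 * si_percent)
```

Because this form depends only on ice supersaturation while DeMott et al. (2010) ties nuclei counts to temperature and aerosol concentration, the two schemes can produce very different ice number concentrations in the same simulated cloud, which is the sensitivity the study probes.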

  16. Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.

    2015-07-28

A method of simulating X-ray diffuse scattering from multi-model PDB files is presented. Despite similar agreement with Bragg data, different translation–libration–screw refinement strategies produce unique diffuse intensity patterns. Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier’s equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls-as-xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. These methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis.

  17. Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.

    2002-01-01

Ranft has provided parameterizations of Lorentz-invariant differential cross sections for pion and nucleon production in pion-proton collisions, which are compared to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formulas for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed are for charged pions in the initial state and both charged and neutral pions in the final state.

  18. A stochastic parameterization for deep convection using cellular automata

    NASA Astrophysics Data System (ADS)

    Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.

    2012-12-01

Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept, which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid-box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements into the parameterizations (for instance: Plant and Craig, 2008; Khouider et al. 2010; Frenkel et al. 2011; Bengtsson et al. 2011; but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid boxes, and for temporal memory. Thus the CA scheme used in this study contains three interesting components for the representation of cumulus convection which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high-resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall lines.
Probabilistic evaluation demonstrates an enhanced spread in
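The cellular-automaton component can be illustrated with a toy Game-of-Life-style automaton on a periodic grid; this is a generic stand-in chosen for this sketch, not the ALARO scheme's actual rule set:

```python
import numpy as np

rng = np.random.default_rng(3)

def ca_step(grid, birth=3, survive=(2, 3)):
    """One update of a Game-of-Life-style cellular automaton: a cell
    switches on with `birth` active neighbors and stays on with a count
    in `survive`. Periodic boundaries (np.roll) provide the lateral
    communication between grid boxes that the abstract describes."""
    n = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    born = (grid == 0) & (n == birth)
    keep = (grid == 1) & np.isin(n, survive)
    return (born | keep).astype(int)

# Random initial "convective activity" field, evolved for a few steps.
grid = (rng.random((32, 32)) < 0.2).astype(int)
for _ in range(10):
    grid = ca_step(grid)
```

Because each cell's next state depends on its neighbors and its own current state, the automaton naturally carries the horizontal communication and temporal memory the abstract highlights, with stochasticity entering through the initialization (or, in the real scheme, through the rules themselves).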

  19. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    DOE PAGES

    Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...

    2017-09-14

Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub

  1. On testing two major cumulus parameterization schemes using the CSU Regional Atmospheric Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.

    1993-10-01

One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection in an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes in reproducing the growth and life cycle of a convective system can then be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations which have the model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).

  2. Parameterization of Small-Scale Processes

    DTIC Science & Technology

    1989-09-01

...detailed sensitivity studies to assess the dependence of results on the eddy viscosities and diffusivities by a direct comparison with certain observations... better sub-grid scale parameterization is to mount a concerted search for model fits to observations. These would require exhaustive sensitivity studies

  3. Parameterization guidelines and considerations for hydrologic models

    Treesearch

R.W. Malone; G. Yagow; C. Baffaut; M.W. Gitau; Z. Qi; Devendra Amatya; P.B. Parajuli; J.V. Bonta; T.R. Green

    2015-01-01

     Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) are important and difficult tasks. An exponential...

  4. Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations

    PubMed Central

    Nigh, Gordon

    2015-01-01

    Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike’s Information Criteria and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use as the best fitting methods have the most barriers in their application in terms of data and software requirements. PMID:25853472

  5. Quantification of Absorption Due to Black and Brown Carbon from Biomass Burning and Parameterizations for Comparison to Climate Models Result

    NASA Astrophysics Data System (ADS)

    Pokhrel, Rudra Prasad

This dissertation examines the optical properties of fresh and aged biomass burning aerosols, parameterization of these properties, and development of new instrumentation and calibration techniques to measure aerosol optical properties. Data sets were collected from the fourth Fire Lab at Missoula Experiment (FLAME-4) that took place from October 15 to November 16, 2012. Biomass collected from various parts of the world was burned under controlled laboratory conditions and fresh emissions from different stages of burning were measured and analyzed. Optical properties of aged aerosol under different conditions were also explored. A photoacoustic absorption spectrometer (PAS) was built and integrated with a newly designed thermal denuder to improve upon observations made during FLAME-4. A novel calibration technique for the PAS was developed. Single scattering albedo (SSA) and absorption Angstrom exponent (AAE) from 12 different fuels with 41 individual burns were estimated and parameterized with modified combustion efficiency (MCE) and the ratio of elemental carbon (EC) to organic carbon (OC) mass. The EC / OC ratio has better capability to parameterize SSA and AAE than MCE. The simple linear regression model proposed in this study accurately predicts SSA during the first few hours of plume aging with the ambient data from a biomass burning event. In addition, absorption due to brown carbon (BrC) can significantly lower the SSA at 405 nm, resulting in a wavelength dependence of SSA. Furthermore, smoldering-dominated burns have larger AAE values while flaming-dominated burns have smaller AAE values, indicating that a large fraction of BrC is emitted during the smoldering stage of the burn. Enhancement in BC absorption (EAbs) due to coating by absorbing and non-absorbing substances is estimated at 405 nm and 660 nm. Relatively smaller values of EAbs at 660 nm compared to 405 nm suggest lensing is a less important contributor to biomass burning aerosol absorption at
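The regression idea described above can be sketched in a few lines. This is an illustrative stand-in only: the intercept and slope below are invented placeholders, not the dissertation's fitted values.

```python
# Hedged sketch: SSA parameterized as a simple linear function of the
# EC/OC mass ratio, in the spirit of the regression approach above.
# The coefficients are illustrative placeholders, not fitted values.

def predict_ssa(ec_oc_ratio, intercept=0.95, slope=-0.5):
    """Predict single scattering albedo from the EC/OC mass ratio.

    Higher EC/OC (more black carbon relative to organics) lowers SSA.
    The result is clamped to the physically meaningful range [0, 1].
    """
    ssa = intercept + slope * ec_oc_ratio
    return min(max(ssa, 0.0), 1.0)

print(predict_ssa(0.1))  # mostly organic emissions -> high SSA
print(predict_ssa(1.0))  # EC-rich emissions -> lower SSA
```

A single-predictor linear form like this is attractive precisely because both SSA and EC/OC are routinely reported, so the fit can be applied to ambient plume data directly.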

  6. A new WRF-Chem treatment for studying regional-scale impacts of cloud processes on aerosol and trace gases in parameterized cumuli

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, L. K.; Shrivastava, M.; Easter, R. C.

A new treatment of cloud effects on aerosol and trace gases within parameterized shallow and deep convection, and aerosol effects on cloud droplet number, has been implemented in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) version 3.2.1 that can be used to better understand the aerosol life cycle over regional to synoptic scales. The modifications to the model include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain–Fritsch (KF) cumulus parameterization that has been modified to better represent shallow convective clouds. Testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS). The simulation results are used to investigate the impact of cloud–aerosol interactions on regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as –50% when cloud–aerosol interactions are considered (due largely to wet removal), or as large as +40% for sulfate under non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem are found to account for changes in the cloud droplet number concentration (CDNC) and changes in the chemical composition of cloud droplet residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to the latest version of WRF-Chem, and it

  7. A new WRF-Chem treatment for studying regional-scale impacts of cloud processes on aerosol and trace gases in parameterized cumuli

    DOE PAGES

    Berg, L. K.; Shrivastava, M.; Easter, R. C.; ...

    2015-02-24

A new treatment of cloud effects on aerosol and trace gases within parameterized shallow and deep convection, and aerosol effects on cloud droplet number, has been implemented in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) version 3.2.1 that can be used to better understand the aerosol life cycle over regional to synoptic scales. The modifications to the model include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain–Fritsch (KF) cumulus parameterization that has been modified to better represent shallow convective clouds. Testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS). The simulation results are used to investigate the impact of cloud–aerosol interactions on regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as –50% when cloud–aerosol interactions are considered (due largely to wet removal), or as large as +40% for sulfate under non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem are found to account for changes in the cloud droplet number concentration (CDNC) and changes in the chemical composition of cloud droplet residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to the latest version of WRF-Chem, and it

  8. A Physical Parameterization of Snow Albedo for Use in Climate Models.

    NASA Astrophysics Data System (ADS)

    Marshall, Susan Elaine

The albedo of a natural snowcover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically-based parameterization which is accurate (within ±3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal cycle, fixed hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine. However, this parameterization offers a greater predictability for climate change experiments outside the range of current snow conditions because it is physically-based and
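The interface described above (one new input, the grain size; separate visible and near-infrared bands) can be sketched as follows. The functional form and coefficients here are generic stand-ins in the spirit of a physically-based scheme, not the actual SNOALB expressions.

```python
import math

# Hedged sketch of a grain-size-dependent band albedo, illustrating the
# SNOALB-style interface. The square-root dependence and all numeric
# coefficients are illustrative assumptions, not the thesis' formulas.

def snow_albedo(grain_radius_um, band="visible"):
    """Band albedo of pure snow as a function of grain radius (microns).

    Physically, larger grains lengthen photon paths inside ice and lower
    the albedo; the near-infrared band is far more sensitive to grain
    size than the visible band.
    """
    r = max(grain_radius_um, 1e-3)
    if band == "visible":
        return max(0.0, 0.98 - 0.002 * math.sqrt(r))
    if band == "nir":
        return max(0.0, 0.85 - 0.012 * math.sqrt(r))
    raise ValueError(f"unknown band: {band}")

# Fresh fine-grained vs. old coarse-grained snow
for r in (50.0, 1000.0):
    print(r, snow_albedo(r, "visible"), snow_albedo(r, "nir"))
```

A broadband solar albedo would then be a weighted average of the two bands, with the split wavelength (0.7 or 0.9 μm) controlling the weights, as the abstract describes.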

  9. The application of depletion curves for parameterization of subgrid variability of snow

    Treesearch

    C. H. Luce; D. G. Tarboton

    2004-01-01

    Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snowcovered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...

  10. Parameterization-based tracking for the P2 experiment

    NASA Astrophysics Data System (ADS)

    Sorokin, Iurii

    2017-08-01

The P2 experiment in Mainz aims to determine the weak mixing angle θW at low momentum transfer by measuring the parity-violating asymmetry of elastic electron-proton scattering. In order to achieve the intended precision of Δ(sin²θW)/sin²θW = 0.13% within the planned 10 000 hours of running, the experiment has to operate at a rate of 10¹¹ detected electrons per second. Although it is not required to measure the kinematic parameters of each individual electron, every attempt is made to achieve the highest possible throughput in the track reconstruction chain. In the present work a parameterization-based track reconstruction method is described. It is a variation of track following, where the results of the computation-heavy steps, namely the propagation of a track to the next detector plane, and the fitting, are pre-calculated and expressed in terms of parametric analytic functions. This makes the algorithm extremely fast, and well-suited for an implementation on an FPGA. The method also implicitly takes into account the actual phase space distribution of the tracks already at the stage of candidate construction. Compared to a simple algorithm that does not use such information, this allows reducing the combinatorial background by many orders of magnitude, down to O(1) background candidates per signal track. The method is developed specifically for the P2 experiment in Mainz, and the presented implementation is tightly coupled to the experimental conditions.
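The core idea, replacing numerical track propagation with a pre-fitted analytic function evaluated at run time, can be sketched schematically. Everything below (the polynomial form, the coefficients, the search window) is an invented toy, not the P2 implementation.

```python
import numpy as np

# Hypothetical sketch of parameterization-based track following: the
# mapping from a hit position and slope at plane i to the predicted
# position at plane i+1 is pre-fitted offline as a low-order polynomial,
# so run-time propagation is a cheap function evaluation (FPGA-friendly).

# Pre-fitted coefficients (illustrative values, not from the P2 setup):
# x_next ≈ c0 + c1*x + c2*slope + c3*x*slope
COEFFS = np.array([0.1, 1.02, 35.0, 0.004])

def propagate(x, slope):
    """Predict the hit position at the next detector plane."""
    features = np.array([1.0, x, slope, x * slope])
    return float(COEFFS @ features)

def match(x, slope, hits_next_plane, window=0.5):
    """Keep only next-plane hits inside the search window around the
    parameterized prediction; everything else is rejected as
    combinatorial background before any expensive fitting."""
    pred = propagate(x, slope)
    return [h for h in hits_next_plane if abs(h - pred) < window]

print(match(1.0, 0.07, [2.0, 3.7, 9.9]))
```

Narrowing the window to the phase space actually populated by signal tracks is what suppresses the candidate combinatorics, as the abstract notes.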

  11. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...

    2015-06-30

Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.

  12. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; ...

    2015-12-01

Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. In conclusion, the new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.

  13. The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor

    DTIC Science & Technology

    2015-06-13

Christopher Celio, David Patterson, and Krste Asanović, University of California, Berkeley, California 94720. BOOM is a synthesizable, parameterized, superscalar out-of-order RISC-V core designed to serve as the prototypical baseline processor

  14. A parameterization method and application in breast tomosynthesis dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinhua; Zhang, Da; Liu, Bob

    2013-09-15

Purpose: To present a parameterization method based on singular value decomposition (SVD), and to provide analytical parameterization of the mean glandular dose (MGD) conversion factors from eight references for evaluating breast tomosynthesis dose in the Mammography Quality Standards Act (MQSA) protocol and in the UK, European, and IAEA dosimetry protocols. Methods: MGD conversion factor is usually listed in lookup tables for factors such as beam quality, breast thickness, breast glandularity, and projection angle. The authors analyzed multiple sets of MGD conversion factors from the Hologic Selenia Dimensions quality control manual and seven previous papers. Each data set was parameterized using a one- to three-dimensional polynomial function of 2–16 terms. Variable substitution was used to improve accuracy. A least-squares fit was conducted using the SVD. Results: The differences between the originally tabulated MGD conversion factors and the results computed using the parameterization algorithms were (a) 0.08%–0.18% on average and 1.31% maximum for the Selenia Dimensions quality control manual, (b) 0.09%–0.66% on average and 2.97% maximum for the published data by Dance et al. [Phys. Med. Biol. 35, 1211–1219 (1990); ibid. 45, 3225–3240 (2000); ibid. 54, 4361–4372 (2009); ibid. 56, 453–471 (2011)], (c) 0.74%–0.99% on average and 3.94% maximum for the published data by Sechopoulos et al. [Med. Phys. 34, 221–232 (2007); J. Appl. Clin. Med. Phys. 9, 161–171 (2008)], and (d) 0.66%–1.33% on average and 2.72% maximum for the published data by Feng and Sechopoulos [Radiology 263, 35–42 (2012)], excluding one sample in (d) that does not follow the trends in the published data table. Conclusions: A flexible parameterization method is presented in this paper, and was applied to breast tomosynthesis dosimetry. The resultant data offer easy and accurate computations of MGD conversion factors for evaluating mean glandular breast dose in the
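The fitting machinery described in the Methods, a multi-term polynomial least-squares fit solved via SVD, can be illustrated with synthetic data. The grid, the "tabulated" factors, and the six-term polynomial below are invented stand-ins, not the published conversion-factor tables.

```python
import numpy as np

# Illustrative sketch of the paper's approach with synthetic data:
# tabulated MGD conversion factors on a (thickness, glandularity) grid
# are fitted by a low-order 2-D polynomial; the least-squares solution
# is obtained via SVD (numpy's lstsq is SVD-based).

thickness = np.linspace(2.0, 8.0, 13)      # cm (illustrative grid)
glandularity = np.linspace(0.0, 1.0, 11)   # glandular fraction
T, G = np.meshgrid(thickness, glandularity)

# Synthetic "lookup table" standing in for published conversion factors.
table = 0.9 - 0.05 * T + 0.004 * T**2 - 0.1 * G + 0.01 * T * G

def design(t, g):
    """Design matrix: the polynomial terms of a 6-term 2-D fit."""
    return np.column_stack([np.ones_like(t), t, t**2, g, g**2, t * g])

A = design(T.ravel(), G.ravel())
coeffs, *_ = np.linalg.lstsq(A, table.ravel(), rcond=None)

# Reproduce the whole table from the handful of fitted coefficients.
fitted = design(T.ravel(), G.ravel()) @ coeffs
max_rel_err = np.max(np.abs(fitted - table.ravel()) / table.ravel())
print(f"max relative error: {max_rel_err:.2e}")
```

The payoff, as in the paper, is that a table of hundreds of entries collapses to a few coefficients that can be evaluated at any point in the covered range.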

  15. A satellite observation test bed for cloud parameterization development

    NASA Astrophysics Data System (ADS)

    Lebsock, M. D.; Suselj, K.

    2015-12-01

We present an observational test-bed of cloud and precipitation properties derived from CloudSat, CALIPSO, and the A-Train. The focus of the test-bed is on marine boundary layer clouds, including stratocumulus and cumulus and the transition between these cloud regimes. Test-bed properties include the cloud cover and three-dimensional cloud fraction along with the cloud water path and precipitation water content, and associated radiative fluxes. We also include the subgrid-scale distribution of cloud, precipitation, and radiative quantities, which must be diagnosed by a model parameterization. The test-bed further includes meteorological variables from the Modern Era Retrospective-analysis for Research and Applications (MERRA). MERRA variables provide the initialization and forcing datasets to run a parameterization in Single Column Model (SCM) mode. We show comparisons of an Eddy-Diffusivity/Mass-Flux (EDMF) parameterization coupled to microphysics and macrophysics packages run in SCM mode with observed clouds. Comparisons are performed regionally in areas of climatological subsidence as well as stratified by dynamical and thermodynamical variables. Comparisons demonstrate the ability of the EDMF model to capture the observed transitions between subtropical stratocumulus and cumulus cloud regimes.

  16. A parameterization scheme for the x-ray linear attenuation coefficient and energy absorption coefficient.

    PubMed

    Midgley, S M

    2004-01-21

A novel parameterization of x-ray interaction cross-sections is developed, and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy dependent coefficients describe the Z-direction curvature of the cross-sections. The composition dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV), with an accuracy of better than 2% using a parameterization containing not more than five coefficients. For the biologically important elements 1 ≤ Z ≤ 20, and the energy range 30-150 keV, the parameterization utilizes four coefficients. At higher energies, the parameterization uses fewer coefficients with only two coefficients needed at megavoltage energies.
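The structural point of the scheme, that a simple Z-dependence per electron lets a mixture's attenuation be computed from its electron density plus statistical moments of its elemental distribution, can be sketched as follows. The polynomial form and all numbers are illustrative assumptions, not Midgley's fitted coefficients.

```python
# Hedged sketch of the scheme's structure: if the per-electron
# cross-section is a simple function of Z (a polynomial here, purely
# for illustration), then a mixture's linear attenuation coefficient
# depends only on its electron density and the moments <Z^k> of its
# elemental distribution. Coefficients below are arbitrary placeholders.

def sigma_per_electron(Z, coeffs):
    """Illustrative per-electron cross-section: a polynomial in Z."""
    return sum(c * Z**k for k, c in enumerate(coeffs))

def mu_mixture(electron_density, z_fractions, coeffs):
    """Linear attenuation of a mixture from electron density and the
    moments of its Z distribution (z_fractions: electron fraction per Z).
    """
    moments = [sum(f * Z**k for Z, f in z_fractions.items())
               for k in range(len(coeffs))]
    return electron_density * sum(c * m for c, m in zip(coeffs, moments))

coeffs = [2.0, 0.1, 0.01]                 # illustrative, energy-dependent
water_like = {1: 0.2, 8: 0.8}             # illustrative electron fractions
print(mu_mixture(3.34e23, water_like, coeffs))
```

The practical appeal is that the same few energy-dependent coefficients serve every material; only the composition-dependent moments change.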

  17. Using Laboratory Experiments to Improve Ice-Ocean Parameterizations

    NASA Astrophysics Data System (ADS)

    McConnochie, C. D.; Kerr, R. C.

    2017-12-01

    Numerical models of ice-ocean interactions are typically unable to resolve the transport of heat and salt to the ice face. Instead, models rely upon parameterizations that have not been sufficiently validated by observations. Recent laboratory experiments of ice-saltwater interactions allow us to test the standard parameterization of heat and salt transport to ice faces - the three-equation model. The three-equation model predicts that the melt rate is proportional to the fluid velocity while the experimental results typically show that the melt rate is independent of the fluid velocity. By considering an analysis of the boundary layer that forms next to a melting ice face, we suggest a resolution to this disagreement. We show that the three-equation model makes the implicit assumption that the thickness of the diffusive sublayer next to the ice is set by a shear instability. However, at low flow velocities, the sublayer is instead set by a convective instability. This distinction leads to a threshold velocity of approximately 4 cm/s at geophysically relevant conditions, above which the form of the parameterization should be valid. In contrast, at flow speeds below 4 cm/s, the three-equation model will underestimate the melt rate. By incorporating such a minimum velocity into the three-equation model, predictions made by numerical simulations could be easily improved.
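The proposed fix, a velocity floor in the transfer law, is easy to state in code. The transfer coefficient below is a placeholder and the melt law is deliberately schematic; only the ~4 cm/s threshold comes from the abstract.

```python
# Minimal sketch of the suggested modification: in the standard
# three-equation closure the melt rate scales with flow speed past the
# ice, so imposing a minimum effective velocity preserves the
# convectively-set sublayer transport at low flow speeds.
# gamma is a placeholder transfer coefficient, not a calibrated value.

U_MIN = 0.04  # m/s: the ~4 cm/s threshold suggested by the experiments

def melt_rate(u, thermal_driving, gamma=1e-4):
    """Melt rate proportional to an effective exchange velocity times
    the thermal driving (far-field temperature above freezing)."""
    u_eff = max(u, U_MIN)   # convective floor below the threshold
    return gamma * u_eff * thermal_driving

for u in (0.01, 0.04, 0.10):
    print(u, melt_rate(u, thermal_driving=2.0))
```

Below the threshold the predicted melt rate becomes independent of flow speed, matching the laboratory observation; above it, the standard velocity scaling is recovered.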

  18. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    NASA Astrophysics Data System (ADS)

    Hall, Carlton Raden

A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection, and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(λ) (m⁻¹), diffuse backscatter b(λ) (m⁻¹), beam attenuation α(λ) (m⁻¹), and beam-to-diffuse conversion c(λ) (m⁻¹) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical, and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle adjusted leaf

  19. Modeling inelastic phonon scattering in atomic- and molecular-wire junctions

    NASA Astrophysics Data System (ADS)

    Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads

    2005-11-01

    Computationally inexpensive approximations describing electron-phonon scattering in molecular-scale conductors are derived from the nonequilibrium Green’s function method. The accuracy is demonstrated with a first-principles calculation on an atomic gold wire. Quantitative agreement between the full nonequilibrium Green’s function calculation and the newly derived expressions is obtained while simplifying the computational burden by several orders of magnitude. In addition, analytical models provide intuitive understanding of the conductance including nonequilibrium heating and provide a convenient way of parameterizing the physics. This is exemplified by fitting the expressions to the experimentally observed conductances through both an atomic gold wire and a hydrogen molecule.

  20. A note on: "A Gaussian-product stochastic Gent-McWilliams parameterization"

    NASA Astrophysics Data System (ADS)

    Jansen, Malte F.

    2017-02-01

This note builds on a recent article by Grooms (2016), which introduces a new stochastic parameterization for eddy buoyancy fluxes. The closure proposed by Grooms accounts for the fact that eddy fluxes arise as the product of two approximately Gaussian variables, which in turn leads to a distinctly non-Gaussian distribution. The directionality of the stochastic eddy fluxes, however, remains somewhat ad hoc and depends on the reference frame of the chosen coordinate system. This note presents a modification of the approach proposed by Grooms which eliminates this shortcoming. Eddy fluxes are computed based on a stochastic mixing length model, which leads to a frame-invariant formulation. As in the original closure proposed by Grooms, eddy fluxes are proportional to the product of two Gaussian variables, and the parameterization reduces to the Gent and McWilliams parameterization for the mean buoyancy fluxes.
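The statistical point, that a product of two Gaussian variables is distinctly non-Gaussian, is easy to demonstrate numerically. The sketch below is purely illustrative of that fact and is not the Grooms (2016) or Jansen closure itself.

```python
import numpy as np

# Illustrative demonstration (not the actual closure): a flux modelled
# as the product of two correlated Gaussian variables, e.g. a stochastic
# mixing length times a stochastic velocity scale, has a heavy-tailed,
# distinctly non-Gaussian distribution even though each factor is Gaussian.

rng = np.random.default_rng(42)
n = 200_000
a = rng.standard_normal(n)
rho = 0.5  # illustrative correlation between the two factors
b = rho * a + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
flux = a * b

mean = flux.mean()  # for standard-normal factors, E[ab] = rho
excess_kurtosis = np.mean((flux - mean)**4) / flux.var()**2 - 3.0
print(f"mean ≈ {mean:.3f}, excess kurtosis ≈ {excess_kurtosis:.2f}")
```

The positive mean shows how a systematic (Gent-McWilliams-like) flux survives the averaging, while the large excess kurtosis is the non-Gaussianity the stochastic closure is designed to capture.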

  1. Polarized scattered light from self-luminous exoplanets. Three-dimensional scattering radiative transfer with ARTES

    NASA Astrophysics Data System (ADS)

    Stolker, T.; Min, M.; Stam, D. M.; Mollière, P.; Dominik, C.; Waters, L. B. F. M.

    2017-11-01

    Context. Direct imaging has paved the way for atmospheric characterization of young and self-luminous gas giants. Scattering in a horizontally-inhomogeneous atmosphere causes the disk-integrated polarization of the thermal radiation to be linearly polarized, possibly detectable with the newest generation of high-contrast imaging instruments. Aims: We aim to investigate the effect of latitudinal and longitudinal cloud variations, circumplanetary disks, atmospheric oblateness, and cloud particle properties on the integrated degree and direction of polarization in the near-infrared. We want to understand how 3D atmospheric asymmetries affect the polarization signal in order to assess the potential of infrared polarimetry for direct imaging observations of planetary-mass companions. Methods: We have developed a three-dimensional Monte Carlo radiative transfer code (ARTES) for scattered light simulations in (exo)planetary atmospheres. The code is applicable to calculations of reflected light and thermal radiation in a spherical grid with a parameterized distribution of gas, clouds, hazes, and circumplanetary material. A gray atmosphere approximation is used for the thermal structure. Results: The disk-integrated degree of polarization of a horizontally-inhomogeneous atmosphere is maximal when the planet is flattened, the optical thickness of the equatorial clouds is large compared to the polar clouds, and the clouds are located at high altitude. For a flattened planet, the integrated polarization can both increase or decrease with respect to a spherical planet which depends on the horizontal distribution and optical thickness of the clouds. The direction of polarization can be either parallel or perpendicular to the projected direction of the rotation axis when clouds are zonally distributed. Rayleigh scattering by submicron-sized cloud particles will maximize the polarimetric signal whereas the integrated degree of polarization is significantly reduced with micron

  2. Stochastic parameterization of shallow cumulus convection estimated from high-resolution model data

    NASA Astrophysics Data System (ADS)

    Dorrestijn, Jesse; Crommelin, Daan T.; Siebesma, A. Pier.; Jonker, Harm J. J.

    2013-02-01

    In this paper, we report on the development of a methodology for the stochastic parameterization of convective transport by shallow cumulus convection in weather and climate models. We construct a parameterization based on Large-Eddy Simulation (LES) data. These simulations resolve the turbulent fluxes of heat and moisture and are based on a typical case of non-precipitating shallow cumulus convection over the sea in the trade-wind region. Using clustering, we determine a finite number of turbulent flux pairs for heat and moisture that are representative of the pairs of flux profiles observed in these simulations. In the stochastic parameterization scheme proposed here, the convection scheme jumps randomly between these pre-computed pairs of turbulent flux profiles. The transition probabilities are estimated from the LES data, and they are conditioned on the resolved-scale state in the model column. Hence, the stochastic parameterization is formulated as a data-inferred conditional Markov chain (CMC), where each state of the Markov chain corresponds to a pair of turbulent heat and moisture fluxes. The CMC parameterization is designed to emulate, in a statistical sense, the convective behaviour observed in the LES data. The CMC is tested in single-column model (SCM) experiments. The SCM is able to reproduce the ensemble spread of the temperature and humidity that was observed in the LES data. Furthermore, there is good agreement between the time series of the fractions of the discretized fluxes produced by the SCM and those observed in the LES.
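    The jump process described above can be sketched in a few lines. Everything below (state count, resolved-state bins, flux profiles, transition matrix) is hypothetical random placeholder data, not the LES-derived quantities of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's values come from clustering LES data.
N_states, N_bins, n_lev = 10, 5, 40

# Pre-computed pairs of (heat, moisture) turbulent flux profiles, one pair
# per Markov-chain state (random data standing in for the LES clusters).
flux_pairs = rng.standard_normal((N_states, 2, n_lev))

# Transition probabilities P[b, i, j]: probability of jumping from flux
# state i to state j, given the resolved-scale column state falls in bin b.
P = rng.random((N_bins, N_states, N_states))
P /= P.sum(axis=2, keepdims=True)          # each conditional row sums to 1

def cmc_step(state, resolved_bin):
    """Advance the conditional Markov chain by one model time step."""
    return rng.choice(N_states, p=P[resolved_bin, state])

state = 0
for t in range(100):
    resolved_bin = t % N_bins              # stand-in for the diagnosed column state
    state = cmc_step(state, resolved_bin)
heat_flux, moisture_flux = flux_pairs[state]   # profiles handed to the SCM
```

In a real scheme, `P` would be estimated by counting transitions between clustered LES flux states within each resolved-state bin.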

  3. Development of Turbulent Biological Closure Parameterizations

    DTIC Science & Technology

    2011-09-30

    LONG-TERM GOAL: The long-term goals of this project are: (1) to develop a theoretical framework to quantify turbulence-induced NPZ interactions, and (2) to apply the theory to develop parameterizations for use in realistic coupled physical-biological environmental numerical models. OBJECTIVES: Connect the Goodman and Robinson (2008) statistically based probability density function (pdf) theory to Advection-Diffusion-Reaction (ADR) modeling of NPZ interactions.

  4. Evaluation of a scattering correction method for high energy tomography

    NASA Astrophysics Data System (ADS)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity and thus an underestimation of absorption, producing artifacts such as cupping, shading, and streaks in the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and mass density measurements with the dual-energy technique). The effect can be significant, and difficult to correct, in the MeV energy range with large objects, due to the higher Scatter to Primary Ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward directed and hence more likely to reach the detector. Moreover, in the MeV energy range, the contribution of photons produced by pair production and bremsstrahlung processes also becomes important. We propose an evaluation of a scattering correction technique based on the Scatter Kernel Superposition (SKS) method. The algorithm uses continuously thickness-adapted kernels. Analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved to be efficient in producing better sampling of the kernels with respect to the object thickness. The technique offers applicability over a wide range of imaging conditions and gives users an additional advantage. Moreover, since no extra hardware is required by this approach, it forms a major advantage especially in those cases where
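    A heavily simplified sketch of the SKS idea follows. It assumes a single thickness bin, a Gaussian stand-in kernel, and an assumed scatter amplitude; the actual method tabulates thickness-adapted kernels and corrects each projection with kernels matched to the local material thickness.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2D Gaussian standing in for a Monte Carlo scatter kernel."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def sks_correct(projection, thickness, kernels, amplitudes, n_iter=3):
    """One-thickness-bin caricature of Scatter Kernel Superposition:
    estimate scatter as the primary image convolved with a thickness-selected
    kernel times a thickness-dependent amplitude, iterating so the kernel
    acts on the primary rather than the scatter-contaminated total."""
    k, a = kernels[thickness], amplitudes[thickness]
    K = np.fft.fft2(k, s=projection.shape)      # zero-padded kernel spectrum
    primary = projection.copy()
    for _ in range(n_iter):
        scatter = a * np.real(np.fft.ifft2(np.fft.fft2(primary) * K))
        primary = projection - scatter          # remove current scatter estimate
    return primary

# Demo: a single hypothetical thickness bin with a 50% scatter amplitude.
kernels = {10: gaussian_kernel(15, 3.0)}
amplitudes = {10: 0.5}
projection = np.full((64, 64), 100.0)
corrected = sks_correct(projection, 10, kernels, amplitudes)
```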

  5. Rapid Parameterization Schemes for Aircraft Shape Optimization

    NASA Technical Reports Server (NTRS)

    Li, Wu

    2012-01-01

    A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
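    Of the eight schemes, linear morphing is the simplest to illustrate. The blending rule below is the standard one; the function name and toy wing data are illustrative, not PROTEUS code.

```python
import numpy as np

def linear_morph(baseline, target, t):
    """Linear morphing between two surfaces with identical grid topology:
    t = 0 returns the baseline shape, t = 1 the target shape."""
    baseline = np.asarray(baseline, dtype=float)
    target = np.asarray(target, dtype=float)
    if baseline.shape != target.shape:
        raise ValueError("morphing requires matching surface grids")
    return (1.0 - t) * baseline + t * target

# Toy wing sections (x, y, z); real PLOT3D blocks would be full surface grids.
wing_a = np.array([[0.0, 0.0, 0.00], [1.0, 0.0, 0.02]])
wing_b = np.array([[0.0, 0.0, 0.00], [1.0, 0.1, 0.05]])
mid = linear_morph(wing_a, wing_b, 0.5)    # halfway shape
```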

  6. Climate Simulations from Super-parameterized and Conventional General Circulation Models with a Third-order Turbulence Closure

    NASA Astrophysics Data System (ADS)

    Xu, Kuan-Man; Cheng, Anning

    2014-05-01

    A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called the Multiscale Modeling Framework (MMF). The MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies, because circulations associated with planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for low-level cloud climatology, seasonal variations and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, which is about 400 times that of the conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between the CRM and CAM5, the IPHOC used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above the PBL. The goal of this study is to compare the simulation of the climatology from these three

  7. Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.

    PubMed

    Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui

    2018-01-13

    Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer-aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems are neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so they are named quasi scattered data in this paper. They can therefore be organized into rows easily, but the number of points in each row is random. To overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield an advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulated and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, by analyzing the fitting results acquired from data with different degrees of scatter, it is demonstrated that the error introduced by resampling is negligible and the method is therefore feasible.

  8. A test harness for accelerating physics parameterization advancements into operations

    NASA Astrophysics Data System (ADS)

    Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.

    2017-12-01

    The process of transitioning advances in the parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-for-like comparisons, the limited availability of HPC resources, and the "tuning problem," whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to the other schemes within a suite. To address this shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity, from single-column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including an SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-for-like comparisons between the operational physics suite and new developments, as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a

  9. Prediction by regression and intrarange data scatter in surface-process studies

    USGS Publications Warehouse

    Toy, T.J.; Osterkamp, W.R.; Renard, K.G.

    1993-01-01

    Modeling is a major component of contemporary earth science, and regression analysis occupies a central position in the parameterization, calibration, and validation of geomorphic and hydrologic models. Although this methodology can be used in many ways, we are primarily concerned with the prediction of values for one variable from another variable. Examination of the literature reveals considerable inconsistency in the presentation of the results of regression analysis and the occurrence of patterns in the scatter of data points about the regression line. Both circumstances confound utilization and evaluation of the models. Statisticians are well aware of various problems associated with the use of regression analysis and offer improved practices; often, however, their guidelines are not followed. After a review of the aforementioned circumstances and until standard criteria for model evaluation become established, we recommend, as a minimum, inclusion of scatter diagrams, the standard error of the estimate, and sample size in reporting the results of regression analyses for most surface-process studies. © 1993 Springer-Verlag.
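    The recommended minimum report (fit, standard error of the estimate, and sample size) is easy to produce; a small illustrative sketch in which the function name and sample data are invented for the example:

```python
import numpy as np

def regression_report(x, y):
    """Least-squares fit y = a + b*x, returning the intercept, slope,
    standard error of the estimate, and sample size."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b, a = np.polyfit(x, y, 1)                 # slope, intercept
    residuals = y - (a + b * x)
    n = x.size
    # Standard error of the estimate: residual spread about the fitted line,
    # with n - 2 degrees of freedom for the two fitted coefficients.
    see = np.sqrt(np.sum(residuals ** 2) / (n - 2))
    return a, b, see, n

a, b, see, n = regression_report([0, 1, 2, 3], [1.1, 2.9, 5.2, 6.8])
```

Reporting `see` and `n` alongside a scatter diagram lets readers judge prediction uncertainty rather than only the fitted line.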

  10. Parameterization guidelines and considerations for hydrologic models

    USDA-ARS?s Scientific Manuscript database

    Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) is an important and difficult task. An exponential increase in literature has been devoted to the use and develo...

  11. SU-C-209-03: Anti-Scatter Grid-Line Artifact Minimization for Removing the Grid Lines for Three Different Grids Used with a High Resolution CMOS Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, R; Bednarek, D; Rudin, S

    Purpose: Demonstrate the effectiveness of an anti-scatter grid artifact minimization method by removing the grid-line artifacts for three different grids when used with a high-resolution CMOS detector. Method: Three different stationary x-ray grids were used with a high-resolution CMOS x-ray detector (Dexela 1207, 75 µm pixels, sensitive area 11.5 cm × 6.5 cm) to image a simulated artery block phantom (Nuclear Associates, Stenosis/Aneurysm Artery Block 76–705) combined with a frontal head phantom used as the scattering source. The x-ray parameters were 98 kVp, 200 mA, and 16 ms for all grids. With each of the three grids, two images were acquired: the first of a scatter-less flat field including the grid, and the second of the object with the grid, which may still have some scatter transmission. Because scatter has a low-spatial-frequency distribution, it was represented by an estimated constant value as an initial approximation and subtracted from the image of the object with the grid before dividing by an average frame of the grid flat field with no scatter. The constant value was iteratively changed to minimize the residual grid-line artifact. This artifact minimization process was used for all three grids. Results: Anti-scatter grid-line artifacts were successfully eliminated in all three final images taken with the three different grids. The image contrast and CNR were compared before and after the correction, and also with those from the image of the object when no grid was used. The corrected images showed an increase in CNR of approximately 28%, 33% and 25% for the three grids, as compared to the images when no grid at all was used. Conclusion: Anti-scatter grid-artifact minimization works effectively irrespective of the specifications of the grid when it is used with a high-spatial-resolution detector. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
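    The iterative constant-scatter search can be sketched on synthetic data as below; the grid pattern, artifact metric, and search grid are simplified stand-ins for the actual processing.

```python
import numpy as np

def grid_line_residual(img):
    """Grid-line metric: variance of the column profile obtained by
    averaging down the rows (grid lines assumed vertical)."""
    return np.var(img.mean(axis=0))

def correct_grid_lines(obj_with_grid, grid_flat, scatter_values):
    """Try each constant scatter estimate s, correct as (object - s) / flat,
    and keep the image with the least residual grid-line structure."""
    best_r, best_img = None, None
    for s in scatter_values:
        candidate = (obj_with_grid - s) / grid_flat
        r = grid_line_residual(candidate)
        if best_r is None or r < best_r:
            best_r, best_img = r, candidate
    return best_img

# Synthetic demo: vertical grid pattern, uniform object, constant scatter of 5.
flat = 1.0 + 0.2 * np.sin(np.linspace(0.0, 8.0 * np.pi, 64))[None, :] * np.ones((64, 1))
primary = np.full((64, 64), 100.0)
obj = primary * flat + 5.0
corrected = correct_grid_lines(obj, flat, np.linspace(0.0, 10.0, 21))
```

Only when the trial constant matches the true scatter does the flat-field division cancel the grid pattern exactly, which is why minimizing the residual grid-line structure recovers the scatter estimate.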

  12. Parameterized and resolved Southern Ocean eddy compensation

    NASA Astrophysics Data System (ADS)

    Poulsen, Mads B.; Jochum, Markus; Nuterman, Roman

    2018-04-01

    The ability to parameterize Southern Ocean eddy effects in a forced coarse-resolution ocean general circulation model is assessed. The transient model response to a suite of different Southern Ocean wind stress forcing perturbations is presented and compared to identical experiments performed with the same model at eddy-resolving 0.1° resolution. With forcing of present-day wind stress magnitude and a thickness diffusivity formulated in terms of the local stratification, it is shown that the Southern Ocean residual meridional overturning circulation in the two models is different in structure and magnitude. It is found that the difference in the upper overturning cell is primarily explained by an overly strong subsurface flow in the parameterized eddy-induced circulation, while the difference in the lower cell is mainly ascribed to the mean-flow overturning. With a zonally constant decrease of the zonal wind stress by 50% we show that the absolute decrease in the overturning circulation is insensitive to model resolution, and that the meridional isopycnal slope is relaxed in both models. The agreement between the models is not reproduced by a 50% wind stress increase, where the high-resolution overturning decreases by 20%, but increases by 100% in the coarse-resolution model. It is demonstrated that this difference is explained by changes in surface buoyancy forcing due to a reduced Antarctic sea ice cover, which strongly modulate the overturning response and ocean stratification. We conclude that the parameterized eddies are able to mimic the transient response to altered wind stress in the high-resolution model, but partly misrepresent the unperturbed Southern Ocean meridional overturning circulation and associated heat transports.

  13. Electronegativity Equalization Method: Parameterization and Validation for Large Sets of Organic, Organohalogen and Organometal Molecules

    PubMed Central

    Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav

    2007-01-01

    The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that were not yet parameterized, specifically Br, I, Fe and Zn. Finally, we performed crossover validation of all obtained parameters using all training sets that included the relevant elements, and confirmed that the calculated parameters provide accurate charges.

  14. Improvement of the GEOS-5 AGCM upon Updating the Air-Sea Roughness Parameterization

    NASA Technical Reports Server (NTRS)

    Garfinkel, C. I.; Molod, A.; Oman, L. D.; Song, I.-S.

    2011-01-01

    The impact of an air-sea roughness parameterization over the ocean that more closely matches recent observations of air-sea exchange is examined in the NASA Goddard Earth Observing System, version 5 (GEOS-5) atmospheric general circulation model. Surface wind biases in the GEOS-5 AGCM are decreased by up to 1.2 m/s. The new parameterization also has implications aloft, as improvements extend into the stratosphere. Many other GCMs (both for operational weather forecasting and climate) use a similar class of parameterization for their air-sea roughness scheme. We therefore expect that results from GEOS-5 are relevant to other models as well.

  15. A Survey of Shape Parameterization Techniques

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1999-01-01

    This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.

  16. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    NASA Astrophysics Data System (ADS)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail

    2011-01-01

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
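    The spherical parameterization step can be sketched directly: the rows of the Cholesky factor are unit vectors built from angles, so their product is a valid correlation matrix by construction. The demo angles below are arbitrary; the paper's "cSigma" scheme for populating them from correlation bounds is not reproduced here.

```python
import numpy as np

def correlation_from_angles(theta):
    """Spherical (Pinheiro-Bates style) parameterization: build a lower-
    triangular L whose rows are unit vectors defined by angles in (0, pi);
    C = L @ L.T is then symmetric, positive semi-definite, and has a unit
    diagonal by construction."""
    n = theta.shape[0]
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    for i in range(1, n):
        s = 1.0                         # running product of sines
        for j in range(i):
            L[i, j] = np.cos(theta[i, j]) * s
            s *= np.sin(theta[i, j])
        L[i, i] = s                     # remainder keeps the row norm at 1
    return L @ L.T

# Arbitrary demo angles for a 3-species correlation matrix.
theta = np.full((3, 3), np.pi / 3)
C = correlation_from_angles(theta)
```

Because validity is guaranteed by construction, any scheme for choosing the angles (such as the paper's "cSigma" approach) yields mutually consistent correlations.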

  17. Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM

    NASA Technical Reports Server (NTRS)

    Yao, Mao-Sung; Cheng, Ye

    2013-01-01

    The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report (AR5). Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or with only the local turbulence parameterization, these two quantities improve their vertical structures, but the vertical transport of water vapor is still weak in the planetary boundary layers (PBLs); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures at all latitudes due to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of the turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in computing nonlocal transports, turbulent length scale, and PBL height, and shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over the oceans. This suggests that a cloud-producing scheme needs to be constructed in a framework that also takes the turbulence into consideration.

  18. Fits of weak annihilation and hard spectator scattering corrections in B_{u,d} → VV decays

    NASA Astrophysics Data System (ADS)

    Chang, Qin; Li, Xiao-Nan; Sun, Jun-Feng; Yang, Yue-Ling

    2016-10-01

    In this paper, the contributions of weak annihilation and hard spectator scattering in B → ρK*, K*K̄*, φK*, ρρ and φφ decays are investigated within the framework of quantum chromodynamics factorization. Using the available experimental data, we perform χ² analyses of the end-point parameters in four cases based on the topology-dependent and polarization-dependent parameterization schemes. The fitted results indicate that: (i) in the topology-dependent scheme, the relation (ρ_A^i, φ_A^i)

  19. Resolution-dependent behavior of subgrid-scale vertical transport in the Zhang-McFarlane convection parameterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Heng; Gustafson, Jr., William I.; Hagos, Samson M.

    2015-04-18

    In this study, to better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution dependence of the subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km².

  20. The integration of improved Monte Carlo compton scattering algorithms into the Integrated TIGER Series.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quirk, Thomas J., IV

    2004-08-01

    The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening that the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
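    For reference, the free-electron baseline that ITS corrects is compact enough to write down: a sketch of the Compton kinematics and the (unnormalized) Klein-Nishina cross section, without the binding or Doppler-broadening corrections discussed above.

```python
import numpy as np

MEC2 = 0.5109989  # electron rest energy in MeV

def compton_scattered_energy(E, cos_theta):
    """Photon energy (MeV) after Compton scattering through angle theta
    off a free electron at rest."""
    return E / (1.0 + (E / MEC2) * (1.0 - cos_theta))

def klein_nishina(E, cos_theta):
    """Unnormalized Klein-Nishina differential cross section per unit solid
    angle, in units of r_e**2 / 2, for a free stationary electron:
    r**2 * (r + 1/r - sin**2(theta)) with r = E'/E."""
    r = compton_scattered_energy(E, cos_theta) / E
    return r ** 2 * (r + 1.0 / r - (1.0 - cos_theta ** 2))
```

The impulse approximation broadens this single-valued E'(θ) relation into a distribution in energy at each angle, which is what the Brusa et al. sampling parameterizations handle efficiently.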

  1. Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil

    USDA-ARS?s Scientific Manuscript database

    The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...

  2. A General Framework for Thermodynamically Consistent Parameterization and Efficient Sampling of Enzymatic Reactions

    PubMed Central

    Saa, Pedro; Nielsen, Lars K.

    2015-01-01

    Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. In particular, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies, provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The framework integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetic space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > -2 kJ/mol), a transition region (-2 > ΔGr > -20 kJ/mol) and a constant elasticity region (ΔGr < -20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only the kinetic

  3. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of a polynomial linear parameter varying (PLPV) system, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved, which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  4. Correction of scatter in megavoltage cone-beam CT

    NASA Astrophysics Data System (ADS)

    Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.

    2001-03-01

    The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
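    The kernel-superposition correction can be sketched in one dimension. Assuming, hypothetically, a single normalized Gaussian kernel and a fixed scatter-to-primary ratio in place of the adapted Monte Carlo kernels, the primary signal is recovered by fixed-point iteration on M = P + SPR·(P ∗ kernel):

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Normalized 1-D Gaussian scatter kernel (toy stand-in for MC kernels)."""
    x = np.arange(n) - n // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def correct_scatter(measured, kernel, spr=0.3, n_iter=10):
    """Estimate the primary P from measured M = P + spr * conv(P, kernel).

    The fixed scatter-to-primary ratio `spr` is an assumption; the iteration
    P <- M - spr * conv(P, kernel) contracts because spr < 1.
    """
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = spr * np.convolve(primary, kernel, mode="same")
        primary = measured - scatter
    return primary

# synthetic primary transmission behind a slab phantom
true_primary = np.ones(101)
true_primary[30:70] = 0.2
kernel = gaussian_kernel(101, sigma=15.0)
measured = true_primary + 0.3 * np.convolve(true_primary, kernel, mode="same")
estimate = correct_scatter(measured, kernel)
```

The broad additive scatter term is what produces the cupping artefact in uncorrected reconstructions; subtracting the converged estimate removes it.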

  5. How to assess the impact of a physical parameterization in simulations of moist convection?

    NASA Astrophysics Data System (ADS)

    Grabowski, Wojciech

    2017-04-01

    A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or large-eddy simulation model) consists of a fluid flow solver combined with the required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, so separating dynamical and microphysical impacts is difficult. This presentation will introduce a novel modeling methodology, piggybacking, that allows the impact of a physical parameterization on cloud dynamics to be studied with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
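    The piggybacking idea can be sketched with a toy driver loop: scheme A feeds back on a (toy, scalar) dynamical state, while scheme B is evaluated diagnostically on the identical states, so the difference between the two heating series isolates the microphysical impact without a change in the flow realization. Both schemes and the "dynamics" below are hypothetical stand-ins:

```python
def step_dynamics(state, heating):
    # toy "dynamics": the state responds only to the driving scheme's heating
    return state + 0.1 * heating

def micro_A(state):
    # driving microphysics (assumption: a simple linear toy scheme)
    return 1.0 - 0.5 * state

def micro_B(state):
    # piggybacking microphysics (a different toy closure, never fed back)
    return 1.0 - 0.4 * state

state = 0.0
heat_A, heat_B = [], []
for _ in range(50):
    hA = micro_A(state)   # drives the flow
    hB = micro_B(state)   # diagnosed on the SAME flow, no feedback
    heat_A.append(hA)
    heat_B.append(hB)
    state = step_dynamics(state, hA)

# heat_B[i] - heat_A[i] is the microphysical difference at identical states
```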

  6. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer from attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139
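    The removal step can be sketched for a toy version of such an imaging model, I = J·t + B, where the non-uniform backscatter B and transmission t are assumed known here (in the paper they follow from the derived physical model of the active illumination, not from ground truth):

```python
import numpy as np

def descatter(image, backscatter, transmission, eps=1e-6):
    """Invert the toy imaging model I = J * t + B for the object radiance J.

    `backscatter` and `transmission` are assumed known in this sketch;
    `eps` guards against division by a vanishing transmission.
    """
    return (image - backscatter) / np.maximum(transmission, eps)

# synthetic scene: a radiance ramp observed through a uniform medium
radiance = np.linspace(0.2, 0.8, 64).reshape(8, 8)
t = np.full((8, 8), 0.6)  # attenuation by the medium

# non-uniform backscatter from an active light source near the camera
x = np.linspace(-1.0, 1.0, 8)
B = 0.3 * np.exp(-(x[None, :] ** 2 + x[:, None] ** 2))

observed = radiance * t + B
recovered = descatter(observed, B, t)
```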

  7. A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone

    NASA Astrophysics Data System (ADS)

    Filipot, J.

    2010-12-01

    A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines the basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. This transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. The parameterization is further tested in the WAVEWATCH III code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep- or shallow-water parameterizations.

  8. An Accurate Analytic Approximation for Light Scattering by Non-absorbing Spherical Aerosol Particles

    NASA Astrophysics Data System (ADS)

    Lewis, E. R.

    2017-12-01

    The scattering of light by particles in the atmosphere is a ubiquitous and important phenomenon, with applications to numerous fields of science and technology. The problem of scattering of electromagnetic radiation by a uniform spherical particle can be solved by the method of Mie and Debye as a series of terms depending on the size parameter, x=2πr/λ, and the complex index of refraction, m. However, this solution does not provide insight into the dependence of the scattering on the radius of the particle, the wavelength, or the index of refraction, or how the scattering varies with relative humidity. Van de Hulst demonstrated that the scattering efficiency (the scattering cross section divided by the geometric cross section) of a non-absorbing sphere, over a wide range of particle sizes of atmospheric importance, depends not on x and m separately, but on the quantity 2x(m-1); this is the basis for the anomalous diffraction approximation. Here an analytic approximation for the scattering efficiency of a non-absorbing spherical particle is presented in terms of this new quantity that is accurate over a wide range of particle sizes of atmospheric importance and which readily displays the dependences of the scattering efficiency on particle radius, index of refraction, and wavelength. For an aerosol for which the particle size distribution is parameterized as a gamma function, this approximation also yields analytical results for the scattering coefficient and for the Ångström exponent, with the dependences of scattering properties on wavelength and index of refraction clearly displayed. This approximation provides insight into the dependence of light scattering properties on factors such as relative humidity, readily enables conversion of scattering from one index of refraction to another, and demonstrates the conditions under which the aerosol index (the product of the aerosol optical depth and the Ångström exponent) is a useful proxy for the number of cloud
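    The anomalous diffraction approximation referred to above can be written down directly. This sketch (non-absorbing sphere only, illustrative inputs) uses van de Hulst's closed form and shows that the scattering efficiency depends on x and m only through ρ = 2x(m - 1):

```python
import math

def q_ext_ada(x, m):
    """van de Hulst anomalous diffraction approximation for a non-absorbing
    sphere: Q = 2 - (4/rho) sin(rho) + (4/rho^2)(1 - cos(rho)),
    with the phase-shift parameter rho = 2 x (m - 1)."""
    rho = 2.0 * x * (m - 1.0)
    if rho == 0.0:
        return 0.0  # Q ~ rho^2 / 2 -> 0 as rho -> 0
    return 2.0 - (4.0 / rho) * math.sin(rho) + (4.0 / rho ** 2) * (1.0 - math.cos(rho))

# two different (x, m) pairs with the same rho = 6.6 give the same efficiency
q1 = q_ext_ada(10.0, 1.33)  # rho = 2 * 10 * 0.33 = 6.6
q2 = q_ext_ada(6.6, 1.5)    # rho = 2 * 6.6 * 0.5 = 6.6
```

In the large-ρ limit the efficiency oscillates about and approaches the geometric-optics value of 2.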

  9. Intercomparison of land-surface parameterizations launched

    NASA Astrophysics Data System (ADS)

    Henderson-Sellers, A.; Dickinson, R. E.

    One of the crucial tasks for climatic and hydrological scientists over the next several years will be validating the land surface process parameterizations used in climate models. There is not, necessarily, a unique set of parameters to be used. Different scientists will want to attempt to capture processes through various methods [for example, Avissar and Verstraete, 1990]. Validation of some aspects of the available (and proposed) schemes' performance is clearly required. It would also be valuable to compare the behavior of the existing schemes [for example, Dickinson et al., 1991; Henderson-Sellers, 1992a]. The WMO-CAS Working Group on Numerical Experimentation (WGNE) and the Science Panel of the GEWEX Continental-Scale International Project (GCIP) [for example, Chahine, 1992] have agreed to launch the joint WGNE/GCIP Project for Intercomparison of Land-Surface Parameterization Schemes (PILPS). The principal goal of this project is to achieve greater understanding of the capabilities and potential applications of existing and new land-surface schemes in atmospheric models. It is not anticipated that a single "best" scheme will emerge. Rather, the aim is to explore alternative models in ways compatible with their authors' or exploiters' goals and to increase understanding of the characteristics of these models in the scientific community.

  10. Parameterization of plume chemistry into large-scale atmospheric models: Application to aircraft NOx emissions

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.

    2009-10-01

    A method is presented to parameterize the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources into large-scale models. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, operating via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact of aircraft NOx emissions on atmospheric ozone are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the North Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization to transport emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during the plume dissipation. Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be

  11. Collaborative Research: Reducing tropical precipitation biases in CESM — Tests of unified parameterizations with ARM observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent; Gettelman, Andrew; Morrison, Hugh

    In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.

  12. Data-driven RBE parameterization for helium ion beams

    NASA Astrophysics Data System (ADS)

    Mairani, A.; Magro, G.; Dokic, I.; Valle, S. M.; Tessonnier, T.; Galm, R.; Ciocca, M.; Parodi, K.; Ferrari, A.; Jäkel, O.; Haberer, T.; Pedroni, P.; Böhlen, T. T.

    2016-01-01

    Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue-specific parameter (α/β)_ph of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the RBE_α = α_He/α_ph and R_β = β_He/β_ph ratios as functions of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival (RBE_10) are compared with the experimental ones. Pearson's correlation coefficients were, respectively, 0.85 and 0.84, confirming the soundness of the introduced approach. Moreover, due to the lack of experimental data at low LET, clonogenic experiments have been performed irradiating the A549 cell line with (α/β)_ph = 5.4 Gy at the entrance of a 56.4 MeV/u helium beam at the Heidelberg Ion Beam Therapy Center. The proposed parameterization reproduces the measured cell survival within the experimental uncertainties. An RBE formula which depends only on dose, LET and (α/β)_ph as input parameters is proposed, allowing a straightforward implementation in a TP system.
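    The structure of such an RBE formula can be sketched within the LQ model: given LET-dependent ratios RBE_α and R_β, the RBE at a helium dose D is the photon dose producing the same LQ effect, divided by D. The LET dependences below are hypothetical stand-ins for the published fits, chosen only to illustrate the mechanics:

```python
import math

def rbe(dose_he, let, alpha_ph=0.15, beta_ph=0.15 / 5.4):
    """RBE of a helium dose within the LQ model ((alpha/beta)_ph = 5.4 Gy here).

    The LET dependences of RBE_alpha and R_beta are HYPOTHETICAL stand-ins
    for the data-driven fits of the paper, for illustration only.
    """
    rbe_alpha = 1.0 + 0.03 * let   # hypothetical linear rise with LET
    r_beta = 1.0                   # hypothetical: beta unchanged
    alpha_he = rbe_alpha * alpha_ph
    beta_he = r_beta * beta_ph

    # LQ effect of the helium dose, then the isoeffective photon dose
    effect = alpha_he * dose_he + beta_he * dose_he ** 2
    d_ph = (-alpha_ph + math.sqrt(alpha_ph ** 2 + 4.0 * beta_ph * effect)) / (2.0 * beta_ph)
    return d_ph / dose_he
```

At vanishing LET the ratios reduce to 1 and the RBE correctly returns to unity.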

  13. Rainbows, supernumerary rainbows and interference effects in the angular scattering of chemical reactions: an investigation using Heisenberg's S matrix programme.

    PubMed

    Shan, Xiao; Xiahou, Chengkui; Connor, J N L

    2018-01-03

    In earlier research, we have demonstrated that broad "hidden" rainbows can occur in the product differential cross sections (DCSs) of state-to-state chemical reactions. Here we ask the question: can pronounced and localized rainbows, rather than broad hidden ones, occur in reactive DCSs? Further motivation comes from recent measurements by H. Pan and K. Liu, J. Phys. Chem. A, 2016, 120, 6712, of a "bulge" in a reactive DCS, which they conjecture is a rainbow. Our theoretical approach uses a "weak" version of Heisenberg's scattering matrix programme (wHSMP) introduced by X. Shan and J. N. L. Connor, Phys. Chem. Chem. Phys., 2011, 13, 8392. This wHSMP uses four general physical principles for chemical reactions to suggest simple parameterized forms for the S matrix; it does not employ a potential energy surface. We use a parameterization in which the modulus of the S matrix is a smooth-step function of the total angular momentum quantum number, J, and (importantly) its phase is a cubic polynomial in J. We demonstrate for a Legendre partial wave series (PWS) the existence of pronounced rainbows, supernumerary rainbows, and other interference effects in reactive DCSs. We find that reactive rainbows can be more complicated in their structure than the familiar rainbows of elastic scattering. We also analyse the angular scattering using Nearside-Farside (NF) PWS theory and NF PWS Local Angular Momentum (LAM) theory, including resummations of the PWS. In addition, we apply full and NF asymptotic (semiclassical) rainbow theories to the PWS, in particular the uniform Airy and transitional Airy approximations for the farside scattering. This lets us prove that structures in the DCSs are indeed rainbows and supernumerary rainbows, as well as other interference effects.
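    A minimal sketch of the wHSMP idea follows: an S matrix with an illustrative (not fitted) smooth-step modulus and cubic phase in J is summed into a Legendre partial wave series to give the DCS. All numerical values (J0, width, cubic coefficients, wavenumber) are assumptions for demonstration:

```python
import math

def legendre(n, x):
    """P_n(x) via the Bonnet recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    return p1

def s_matrix(J, J0=40.0, width=3.0, a=1e-4, b=0.01, c=0.5):
    """Parameterized S matrix element: smooth-step (logistic) modulus in J,
    cubic polynomial phase in J. Coefficients are illustrative only."""
    modulus = 1.0 / (1.0 + math.exp((J - J0) / width))
    phase = a * J ** 3 + b * J ** 2 + c * J
    return modulus * complex(math.cos(phase), math.sin(phase))

def dcs(theta, jmax=80):
    """|f(theta)|^2 for the Legendre PWS f = (1/2ik) sum (2J+1) S_J P_J(cos theta)."""
    k = 1.0  # wavenumber, illustrative units
    f = sum((2 * J + 1) * s_matrix(J) * legendre(J, math.cos(theta))
            for J in range(jmax + 1)) / (2j * k)
    return abs(f) ** 2
```

Scanning `dcs` over theta exhibits the interference oscillations whose semiclassical interpretation (rainbow vs supernumerary) is the subject of the abstract.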

  14. Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2013-12-01

    In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic - even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.
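    The ED and MF contributions combine additively in the parameterized subgrid flux. A one-line sketch for a generic scalar φ, with illustrative values (the stochastic element of the abstract would enter through sampling the plume properties, which is not shown here):

```python
def edmf_flux(dphi_dz, K, M, phi_updraft, phi_mean):
    """EDMF decomposition of a subgrid vertical flux:
    w'phi' = -K * dphi/dz + M * (phi_updraft - phi_mean),
    where K is the eddy diffusivity (local ED mixing) and M is the
    mass flux carried by the organized plumes."""
    return -K * dphi_dz + M * (phi_updraft - phi_mean)

# example: in a well-mixed layer (zero local gradient) the ED term vanishes,
# yet the plumes still transport heat upward through the mass-flux term
flux = edmf_flux(dphi_dz=0.0, K=50.0, M=0.03, phi_updraft=301.0, phi_mean=300.5)
```

This counter-gradient capability is precisely what the pure eddy-diffusivity closure cannot represent and what motivates the unified scheme.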

  15. Calculation of the hadron contribution from light-by-light scattering to the anomalous (g-2)μ muon magnetic moment for a nonlocal quark model

    NASA Astrophysics Data System (ADS)

    Zhevlakov, A. S.; Radzhabov, A. E.; Dorokhov, A. E.

    2010-11-01

    The muon contribution to the anomalous magnetic moment from light-by-light scattering diagrams with pion participation is calculated for a nonlocal chiral quark model. For various nonlocal model parameterizations, the contribution amounts to a_μ(Had,LbL) = 5.1(0.2) × 10^-10. We subsequently plan to calculate the contributions from diagrams with an intermediate scalar meson and from quark box diagrams.

  16. Dam removal increases American eel abundance in distant headwater streams

    USGS Publications Warehouse

    Hitt, Nathaniel P.; Eyler, Sheila; Wofford, John E.B.

    2012-01-01

    American eel Anguilla rostrata abundances have undergone significant declines over the last 50 years, and migration barriers have been recognized as a contributing cause. We evaluated eel abundances in headwater streams of Shenandoah National Park, Virginia, to compare sites before and after the removal of a large downstream dam in 2004 (Embrey Dam, Rappahannock River). Eel abundances in headwater streams increased significantly after the removal of Embrey Dam. Observed eel abundances after dam removal exceeded predictions derived from autoregressive models parameterized with data prior to dam removal. Mann–Kendall analyses also revealed consistent increases in eel abundances from 2004 to 2010 but inconsistent temporal trends before dam removal. Increasing eel numbers could not be attributed to changes in local physical habitat (i.e., mean stream depth or substrate size) or regional population dynamics (i.e., abundances in Maryland streams or Virginia estuaries). Dam removal was associated with decreasing minimum eel lengths in headwater streams, suggesting that the dam previously impeded migration of many small-bodied individuals (<300 mm TL). We hypothesize that restoring connectivity to headwater streams could increase eel population growth rates by increasing female eel numbers and fecundity. This study demonstrated that dams may influence eel abundances in headwater streams up to 150 river kilometers distant, and that dam removal may provide benefits for eel management and conservation at the landscape scale.

  17. Development and Testing of Coupled Land-surface, PBL and Shallow/Deep Convective Parameterizations within the MM5

    NASA Technical Reports Server (NTRS)

    Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.

    2000-01-01

    The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study and the latter two have been improved significantly to extend their capabilities.

  18. Parameterization of eddy sensible heat transports in a zonally averaged dynamic model of the atmosphere

    NASA Technical Reports Server (NTRS)

    Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean

    1990-01-01

    A Charney-Branscome-based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.

  19. FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard C. J. Somerville

    2009-02-27

    Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.

  20. Modeling the clouds on Venus: model development and improvement of a nucleation parameterization

    NASA Astrophysics Data System (ADS)

    Määttänen, Anni; Bekki, Slimane; Vehkamäki, Hanna; Julin, Jan; Montmessin, Franck; Ortega, Ismael K.; Lebonnois, Sébastien

    2014-05-01

    As both the clouds of Venus and aerosols in the Earth's stratosphere are composed of sulfuric acid droplets, we use the 1-D version of a model [1,4] developed for stratospheric aerosols and clouds to study the clouds on Venus. We have removed processes and compounds related to the stratospheric clouds so that the only species remaining are water and sulfuric acid, corresponding to the stratospheric sulfate aerosols, and we have added some key processes. The model describes microphysical processes including condensation/evaporation, and sedimentation. Coagulation, turbulent diffusion, and a parameterization for two-component nucleation [8] of water and sulfuric acid have been added in the model. Since the model describes explicitly the size distribution with a large number of size bins (50-500), it can handle multiple particle modes. The validity ranges of the existing nucleation parameterization [7] have been improved to cover a larger temperature range, and the very low relative humidity (RH) and high sulfuric acid concentrations found in the atmosphere of Venus. We have made several modifications to improve the 2002 nucleation parameterization [7], most notably ensuring that the two-component nucleation model behaves as predicted by the analytical studies at the one-component limit reached at extremely low RH. We have also chosen to use a self-consistent cluster distribution [9], constrained by scaling it to recent quantum chemistry calculations [3]. First tests of the cloud model have been carried out with temperature profiles from VIRA [2] and from the LMD Venus GCM [5], and with a compilation of water vapor and sulfuric acid profiles, as in [6]. The temperature and pressure profiles do not evolve with time, but the vapour profiles naturally change with the cloud. However, no chemistry is included for the moment, so the vapor concentrations are only dependent on the microphysical processes. The model has been run for several hundreds of Earth days to reach a

  1. Effectively parameterizing dissipative particle dynamics using COSMO-SAC: A partition coefficient study

    NASA Astrophysics Data System (ADS)

    Saathoff, Jonathan

    2018-04-01

    Dissipative Particle Dynamics (DPD) provides a tool for studying phase behavior and interfacial phenomena for complex mixtures and macromolecules. Methods to quickly and automatically parameterize DPD greatly increase its effectiveness. One such method is to map predicted activity coefficients derived from COSMO-SAC onto DPD parameter sets. However, there are serious limitations to the accuracy of this mapping, including the inability of single DPD beads to reproduce asymmetric infinite-dilution activity coefficients, the loss of precision when reusing parameters for different molecular fragments, and the error due to bonding beads together. This report describes these effects in quantitative detail and provides methods to mitigate much of their deleterious effects. This includes a novel approach to remove errors caused by bonding DPD beads together. Using these methods, logarithmic hexane/water partition coefficients were calculated for 61 molecules. The root-mean-squared error for these calculations was determined to be 0.14 with respect to the final mapping procedure, a very low value. Cognizance of the above limitations can greatly enhance the predictive power of DPD.

  2. Parameterized reduced order models from a single mesh using hyper-dual numbers

    NASA Astrophysics Data System (ADS)

    Brake, M. R. W.; Fike, J. A.; Topping, S. D.

    2016-06-01

    In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, potentially could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is two-fold: accuracy of the derivatives to machine precision, and the need to only generate a single mesh of the system of interest. The theory is applied to a stepped beam system in order to demonstrate proof of concept. The results demonstrate that the hyper-dual number multivariate parameterizations of geometric variations, which are largely neglected in the literature, are accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the foundation to create a parameterized reduced order model based on a single mesh is expected to reduce dramatically the necessary time to analyze multiple realizations of a component's possible geometry.
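    The derivative machinery referred to above can be sketched with a minimal hyper-dual arithmetic (addition and multiplication only, enough for polynomials): a single evaluation yields the value, first derivative and second derivative exact to machine precision, with no step-size or cancellation error:

```python
class HyperDual:
    """Minimal hyper-dual number x + a*e1 + b*e2 + c*e1e2 with e1^2 = e2^2 = 0.
    Evaluating f(HyperDual(x, 1, 1, 0)) returns f(x) in .real, f'(x) in .e1
    (and .e2), and f''(x) in .e1e2, exactly."""

    def __init__(self, real, e1=0.0, e2=0.0, e1e2=0.0):
        self.real, self.e1, self.e2, self.e1e2 = real, e1, e2, e1e2

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.real + o.real, self.e1 + o.e1,
                         self.e2 + o.e2, self.e1e2 + o.e1e2)

    __radd__ = __add__

    def __mul__(self, o):
        # product rule carried by the nilpotent parts; the e1e2 component
        # accumulates the second-derivative (cross) terms
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(
            self.real * o.real,
            self.real * o.e1 + self.e1 * o.real,
            self.real * o.e2 + self.e2 * o.real,
            self.real * o.e1e2 + self.e1 * o.e2 + self.e2 * o.e1 + self.e1e2 * o.real)

    __rmul__ = __mul__

def derivatives(f, x):
    """Return (f(x), f'(x), f''(x)) from one hyper-dual evaluation."""
    h = f(HyperDual(x, 1.0, 1.0, 0.0))
    return h.real, h.e1, h.e1e2

val, d1, d2 = derivatives(lambda x: x * x * x + 2.0 * x, 3.0)
```

For f(x) = x³ + 2x at x = 3 this gives 33, 29 and 18 exactly, whereas a finite-difference second derivative would trade truncation error against round-off.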

  3. Parameterized Complexity of k-Anonymity: Hardness and Tractability

    NASA Astrophysics Data System (ADS)

    Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri

    The problem of publishing personal data without giving up privacy is becoming increasingly important. A precise formalization that has been recently proposed is the k-anonymity, where the rows of a table are partitioned in clusters of size at least k and all rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is hard even when the stored values are over a binary alphabet or the table consists of a bounded number of columns. In this paper we study how the complexity of the problem is influenced by different parameters. First we show that the problem is W[1]-hard when parameterized by the value of the solution (and k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the number of columns and the number of different values in any column.
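    As a concrete illustration of the problem definition (not of the fixed-parameter algorithm), a small checker verifies k-anonymity after suppression and counts the suppressed entries that form the optimization objective; the toy table is hypothetical:

```python
from collections import Counter

def is_k_anonymous(table, k):
    """Check k-anonymity: every row (with '*' marking suppressed entries)
    must be identical to at least k - 1 other rows."""
    counts = Counter(tuple(row) for row in table)
    return all(c >= k for c in counts.values())

def suppression_cost(original, suppressed):
    """Number of entries replaced by '*' (the quantity to minimize)."""
    return sum(s == '*' and o != '*'
               for orow, srow in zip(original, suppressed)
               for o, s in zip(orow, srow))

original = [['a', '0'], ['a', '1'], ['b', '0'], ['b', '1']]
# one feasible 2-anonymization: suppress the second column everywhere,
# which clusters the rows into two groups of size 2
suppressed = [['a', '*'], ['a', '*'], ['b', '*'], ['b', '*']]
```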

  4. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen

    2011-08-16

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
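    The spherical (angle-based) Cholesky construction of Pinheiro and Bates can be sketched as follows; each row of the factor L has unit norm by construction, so C = L·Lᵀ is always a valid correlation matrix. The angles here are arbitrary illustrative values, not the cosine row-wise formula of the paper:

```python
import math

def cholesky_from_angles(angles):
    """Cholesky factor of a correlation matrix from spherical angles
    (Pinheiro & Bates 1996). angles[i] holds the i + 1 angles of row i + 1;
    row entries are products of sines times one cosine, so each row has
    unit Euclidean norm and the implied matrix is positive semidefinite
    with a unit diagonal."""
    n = len(angles) + 1
    L = [[0.0] * n for _ in range(n)]
    L[0][0] = 1.0
    for i in range(1, n):
        sin_prod = 1.0
        for j in range(i):
            L[i][j] = sin_prod * math.cos(angles[i - 1][j])
            sin_prod *= math.sin(angles[i - 1][j])
        L[i][i] = sin_prod
    return L

def correlation(L):
    """C = L L^T."""
    n = len(L)
    return [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

# three hydrometeor species; the angles are arbitrary for illustration
L = cholesky_from_angles([[0.4], [0.9, 1.2]])
C = correlation(L)
```

Parameterizing the angles (rather than the correlations themselves) is what guarantees the mutual consistency cited as the scheme's second advantage.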

  5. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2017-04-01

    In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference Williams PD, Howe NJ, Gregory JM, Smith RS, and Joshi MM (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781. 
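
    The kind of zero-mean noise described here, with a tunable amplitude and decorrelation time, is commonly generated as a first-order autoregressive (AR(1), "red") process. A minimal sketch; `ar1_noise` is a hypothetical helper, not the model's actual code:

```python
import numpy as np

def ar1_noise(n_steps, dt, tau, sigma, rng=None):
    """Zero-mean AR(1) noise with decorrelation time tau.

    phi = exp(-dt/tau) is the lag-1 autocorrelation; the innovation
    variance is scaled so the stationary standard deviation is sigma.
    Such a series could be added to, e.g., an ocean temperature tendency.
    """
    rng = np.random.default_rng(rng)
    phi = np.exp(-dt / tau)
    eps = rng.standard_normal(n_steps) * sigma * np.sqrt(1.0 - phi**2)
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        x[t] = phi * x[t - 1] + eps[t]
    return x
```

    Varying `sigma` and `tau` corresponds to the noise-amplitude and decorrelation-time sensitivity experiments described in the abstract.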
http://dx.doi.org/10

  6. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2016-04-01

    In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.

  7. Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Hälg, Roger A.; Baiocco, Giorgio; Lomax, Tony

    2016-08-01

    The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate not only the dose but also the energy spectra, in order to obtain quantities that could serve as a measure of the biological effectiveness and to test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation-corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was simulated with the GEANT Monte Carlo code. Based on the simulated neutron spectra for three different proton beam energies, a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization provides a simple quantification of neutron energy in two energy bins, and reproduces the quality factors and RBE with satisfying precision up to 85 cm away from the proton pencil beam when compared to results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between the Monte Carlo-based results and the parameterization is 3.9%. For the quality factor and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy. 
The pencil beam algorithm has

  8. Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE.

    PubMed

    Schneider, Uwe; Hälg, Roger A; Baiocco, Giorgio; Lomax, Tony

    2016-08-21

    The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate not only the dose but also the energy spectra, in order to obtain quantities that could serve as a measure of the biological effectiveness and to test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation-corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was simulated with the GEANT Monte Carlo code. Based on the simulated neutron spectra for three different proton beam energies, a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization provides a simple quantification of neutron energy in two energy bins, and reproduces the quality factors and RBE with satisfying precision up to 85 cm away from the proton pencil beam when compared to results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between the Monte Carlo-based results and the parameterization is 3.9%. For the quality factor and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy. 
The pencil beam algorithm has

  9. Parameterizing the Transport Pathways for Cell Invasion in Complex Scaffold Architectures

    PubMed Central

    Ashworth, Jennifer C.; Mehr, Marco; Buxton, Paul G.; Best, Serena M.

    2016-01-01

    Interconnecting pathways through porous tissue engineering scaffolds play a vital role in determining nutrient supply, cell invasion, and tissue ingrowth. However, the global use of the term “interconnectivity” often fails to describe the transport characteristics of these pathways, giving no clear indication of their potential to support tissue synthesis. This article uses new experimental data to provide a critical analysis of reported methods for the description of scaffold transport pathways, ranging from qualitative image analysis to thorough structural parameterization using X-ray Micro-Computed Tomography. In the collagen scaffolds tested in this study, it was found that the proportion of pore space perceived to be accessible dramatically changed depending on the chosen method of analysis. Measurements of % interconnectivity as defined in this manner varied as a function of direction and connection size, and also showed a dependence on measurement length scale. As an alternative, a method for transport pathway parameterization was investigated, using percolation theory to calculate the diameter of the largest sphere that can travel to infinite distance through a scaffold in a specified direction. As proof of principle, this approach was used to investigate the invasion behavior of primary fibroblasts in response to independent changes in pore wall alignment and pore space accessibility, parameterized using the percolation diameter. The result was that both properties played a distinct role in determining fibroblast invasion efficiency. This example therefore demonstrates the potential of the percolation diameter as a method of transport pathway parameterization, to provide key structural criteria for application-based scaffold design. PMID:26888449
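
    The percolation-diameter idea (the largest sphere that can travel through the pore space in a specified direction) can be illustrated in 2-D with a brute-force sketch. The grid, function names, and disc (rather than sphere) geometry are ours, not the authors' Micro-CT pipeline:

```python
import numpy as np
from collections import deque

def _percolates(open_):
    """True if open cells connect the top row to the bottom row (4-neighbour)."""
    n_rows, n_cols = open_.shape
    queue = deque((0, j) for j in range(n_cols) if open_[0, j])
    seen = set(queue)
    while queue:
        i, j = queue.popleft()
        if i == n_rows - 1:
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < n_rows and 0 <= nj < n_cols and open_[ni, nj] \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return False

def percolation_diameter(pore, step=0.5, r_max=50.0):
    """Diameter of the largest disc that can cross a 2-D pore map.

    pore: boolean array, True = open pore space. A disc of radius r can
    occupy a cell only if the cell lies farther than r from any solid
    cell, so we raise r until top-to-bottom connectivity is lost.
    Returns None if the map never percolates.
    """
    solid = np.argwhere(~pore)
    dist = np.full(pore.shape, np.inf)
    for i, j in np.argwhere(pore):
        if len(solid):
            dist[i, j] = np.sqrt(((solid - (i, j)) ** 2).sum(axis=1)).min()
    best, r = None, 0.0
    while r <= r_max and _percolates(pore & (dist > r)):
        best = r
        r += step
    return None if best is None else 2.0 * best
```

    A real implementation would use a Euclidean distance transform and 3-D connected-component labeling on the tomography volume; the logic is the same.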

  10. A scheme for parameterizing ice cloud water content in general circulation models

    NASA Technical Reports Server (NTRS)

    Heymsfield, Andrew J.; Donner, Leo J.

    1989-01-01

    A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.

  11. Elastic scattering spectroscopy for detection of cancer risk in Barrett's esophagus: experimental and clinical validation of error removal by orthogonal subtraction for increasing accuracy

    NASA Astrophysics Data System (ADS)

    Zhu, Ying; Fearn, Tom; MacKenzie, Gary; Clark, Ben; Dunn, Jason M.; Bigio, Irving J.; Bown, Stephen G.; Lovat, Laurence B.

    2009-07-01

    Elastic scattering spectroscopy (ESS) may be used to detect high-grade dysplasia (HGD) or cancer in Barrett's esophagus (BE). When spectra are measured in vivo by a hand-held optical probe, variability among replicated spectra from the same site can hinder the development of a diagnostic model for cancer risk. An experiment was carried out on excised tissue to investigate how two potential sources of this variability, pressure and angle, influence spectral variability, and the results were compared with the variations observed in spectra collected in vivo from patients with Barrett's esophagus. A statistical method called error removal by orthogonal subtraction (EROS) was applied to model and remove this measurement variability, which accounted for 96.6% of the variation in the spectra, from the in vivo data. Its removal allowed the construction of a diagnostic model with specificity improved from 67% to 82% (with sensitivity fixed at 90%). The improvement was maintained in predictions on an independent in vivo data set. EROS works well as an effective pretreatment for Barrett's in vivo data by identifying measurement variability and ameliorating its effect. The procedure reduces the complexity and increases the accuracy and interpretability of the model for classification and detection of cancer risk in Barrett's esophagus.

  12. SU-F-T-147: An Alternative Parameterization of Scatter Behavior Allows Significant Reduction of Beam Characterization for Pencil Beam Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van den Heuvel, F; Fiorini, F; George, B

    2016-06-15

    Purpose: 1) To describe the characteristics of pencil beam proton dose deposition kernels in a homogeneous medium using a novel parameterization. 2) To propose a method utilizing this novel parameterization to reduce the measurements and pre-computation required in commissioning a pencil beam proton therapy system. Methods: Using beam data from a clinical, pencil beam proton therapy center, Monte Carlo simulations were performed to characterize the dose depositions at a range of energies from 100.32 to 226.08 MeV in 3.6 MeV steps. At each energy, the beam is defined at the surface of the phantom by a two-dimensional Normal distribution. Using FLUKA, the in-medium dose distribution is calculated in a 200×200×350 mm cube with 1 mm³ tally volumes. The calculated dose distribution in each 200×200 slice perpendicular to the beam axis is then characterized using a symmetric alpha-stable distribution centered on the beam axis. This results in two parameters, α and γ, that completely describe the shape of the distribution. In addition, the total dose deposited on each slice is calculated. The alpha-stable parameters are plotted as a function of the depth in-medium, providing a representation of dose deposition along the pencil beam. We observed that these curves are isometric: a scaling of both abscissa and ordinate maps one curve onto another. Results: Using interpolation of the scaling factors of two source curves representative of different beam energies, we predicted the parameters of a third curve at an intermediate energy. The errors are quantified by the maximal difference and provide a fit better than previous methods. The maximal energy difference between the source curves generating identical curves was 21.14 MeV. Conclusion: We have introduced a novel method to parameterize the in-phantom properties of pencil beam proton dose depositions. For the case of the Knoxville IBA system, no more than nine pencil beams have to be fully characterized.
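
    A symmetric alpha-stable distribution has no closed-form PDF in general, but it can be evaluated numerically from its characteristic function. A sketch of the two-parameter (α, γ) shape fitted to each transverse slice, with our own quadrature choices:

```python
import numpy as np

def sas_pdf(x, alpha, gamma, t_max=50.0, n=20001):
    """Symmetric alpha-stable PDF via characteristic-function inversion.

    phi(t) = exp(-(gamma * |t|)**alpha), so
        f(x) = (1/pi) * integral_0^inf exp(-(gamma*t)**alpha) cos(t x) dt.
    alpha controls tail heaviness (alpha=2 Gaussian, alpha=1 Cauchy) and
    gamma is the scale. Evaluated by a trapezoidal rule on [0, t_max].
    """
    t = np.linspace(0.0, t_max, n)
    cf = np.exp(-(gamma * t) ** alpha)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    integrand = cf * np.cos(np.outer(x, t))
    dt = t[1] - t[0]
    # trapezoidal rule along t
    return (integrand.sum(axis=1)
            - 0.5 * (integrand[:, 0] + integrand[:, -1])) * dt / np.pi
```

    For heavy-tailed α the truncation at `t_max` is harmless because the characteristic function decays quickly, while the PDF's slow spatial decay is exactly what makes this family attractive for lateral dose halos.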

  13. Exploring the potential of machine learning to break deadlock in convection parameterization

    NASA Astrophysics Data System (ADS)

    Pritchard, M. S.; Gentine, P.

    2017-12-01

    We explore the potential of modern machine learning tools (via TensorFlow) to replace parameterization of deep convection in climate models. Our strategy begins by generating a large (~1 Tb) training dataset from time-step level (30-min) output harvested from a one-year integration of a zonally symmetric, uniform-SST aquaplanet integration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables to afford 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of desired outputs of SP, i.e. CRM-mean convective heating and moistening profiles. Sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation is discussed, as well as results from pilot tests of the neural network operating inline within SPCAM as a replacement for the (super)parameterization of convection.
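
    The regression task described (thermodynamic profiles in, convective heating/moistening tendencies out) can be sketched framework-free with a tiny NumPy MLP on synthetic data. The real study uses TensorFlow on 143M SPCAM instances, so everything below (sizes, data, hyperparameters, function name) is illustrative only:

```python
import numpy as np

def train_toy_emulator(n_steps=300, lr=0.2, seed=0):
    """Train a one-hidden-layer MLP on a synthetic stand-in for the
    SPCAM emulation task and return the per-step training losses."""
    rng = np.random.default_rng(seed)
    n_samples, n_in, n_out, n_hidden = 2000, 40, 20, 64
    X = rng.standard_normal((n_samples, n_in))       # "profile" inputs
    W_true = rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)
    Y = np.tanh(X @ W_true)                          # synthetic "tendencies"

    W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
    b1 = np.zeros(n_hidden)
    W2 = rng.standard_normal((n_hidden, n_out)) * 0.1
    b2 = np.zeros(n_out)
    losses = []
    for _ in range(n_steps):
        H = np.tanh(X @ W1 + b1)                     # forward pass
        P = H @ W2 + b2
        err = P - Y
        losses.append(float((err ** 2).mean()))
        g = 2.0 * err / n_samples                    # full-batch backprop
        gW2, gb2 = H.T @ g, g.sum(0)
        gH = (g @ W2.T) * (1.0 - H ** 2)
        gW1, gb1 = X.T @ gH, gH.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return losses
```

    The convergence-with-dataset-size question the abstract raises would be probed by repeating such training on growing subsets of the data.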

  14. A new parameterization of the post-fire snow albedo effect

    NASA Astrophysics Data System (ADS)

    Gleason, K. E.; Nolin, A. W.

    2013-12-01

    Mountain snowpack serves as an important natural reservoir of water: recharging aquifers, sustaining streams, and providing important ecosystem services. Reduced snowpacks and earlier snowmelt have been shown to affect fire size, frequency, and severity in the western United States. In turn, wildfire disturbance affects patterns of snow accumulation and ablation by reducing canopy interception, increasing turbulent fluxes, and modifying the surface radiation balance. Recent work shows that after a high-severity forest fire, approximately 60% more solar radiation reaches the snow surface due to the reduction in canopy density. Also, significant amounts of pyrogenic carbon particles and larger burned woody debris (BWD) are shed from standing charred trees, which concentrate on the snowpack, darken its surface, and reduce snow albedo by 50% during ablation. Although the post-fire forest environment drives a substantial increase in net shortwave radiation at the snowpack surface, driving earlier and more rapid melt, hydrologic models do not explicitly incorporate forest fire disturbance effects on snowpack dynamics. The objective of this study was to parameterize the post-fire snow albedo effect due to BWD deposition on snow to better represent forest fire disturbance in modeling of snow-dominated hydrologic regimes. Based on empirical results from winter experiments, in-situ snow monitoring, and remote sensing data from a recent forest fire in the Oregon High Cascades, we characterized the post-fire snow albedo effect, and developed a simple parameterization of snowpack albedo decay in the post-fire forest environment. We modified the recession coefficient r in the algorithm α = α0 + K exp(−nr), where α = snowpack albedo, α0 = minimum snowpack albedo (≈0.4), K = constant (≈0.44), n = number of days since the last major snowfall, and r = recession coefficient [Rohrer and Braun, 1994]. Our parameterization quantified BWD deposition and snow albedo decay rates and
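
    The albedo recession algorithm above translates directly into a short function. The recession coefficient value below is illustrative only, since the fitted post-fire rates are not given in the abstract:

```python
import numpy as np

def snow_albedo(n_days, alpha_min=0.4, K=0.44, r=0.1):
    """Snowpack albedo decay since the last major snowfall,
    in the Rohrer and Braun (1994) form: alpha = alpha_min + K*exp(-n*r).

    r is the recession coefficient that the post-fire parameterization
    modifies; 0.1/day here is an assumed placeholder, not a fitted value.
    """
    return alpha_min + K * np.exp(-n_days * r)
```

    Raising `r` for burned forest makes the albedo collapse toward `alpha_min` faster, which is how the BWD darkening effect would enter a hydrologic model.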

  15. Global Performance of a Fast Parameterization Scheme for Estimating Surface Solar Radiation from MODIS data

    NASA Astrophysics Data System (ADS)

    Tang, W.; Yang, K.; Sun, Z.; Qin, J.; Niu, X.

    2016-12-01

    A fast parameterization scheme named SUNFLUX is used in this study to estimate instantaneous surface solar radiation (SSR) based on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard both Terra and Aqua platforms. The scheme mainly takes into account the absorption and scattering processes due to clouds, aerosols and gas in the atmosphere. The estimated instantaneous SSR is evaluated against surface observations obtained from seven stations of the Surface Radiation Budget Network (SURFRAD), four stations in the North China Plain (NCP) and 40 stations of the Baseline Surface Radiation Network (BSRN). The statistical results for evaluation against these three datasets show that the relative root-mean-square error (RMSE) values of SUNFLUX are less than 15%, 16% and 17%, respectively. Daily SSR is derived through temporal upscaling from the MODIS-based instantaneous SSR estimates, and is validated against surface observations. The relative RMSE values for daily SSR estimates are about 16% at the seven SURFRAD stations, four NCP stations, 40 BSRN stations and 90 China Meteorological Administration (CMA) radiation stations.

  16. A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone

    NASA Astrophysics Data System (ADS)

    Filipot, J.-F.; Ardhuin, F.

    2012-11-01

    A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines wave breaking basic physical quantities, namely, the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale. Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep or shallow water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.

  17. New Parameterizations for Neutral and Ion-Induced Sulfuric Acid-Water Particle Formation in Nucleation and Kinetic Regimes

    NASA Astrophysics Data System (ADS)

    Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna

    2018-01-01

    We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extreme dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used: this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.
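
    The stated validity ranges lend themselves to a simple guard before calling either parameterization. A sketch with a hypothetical helper name, using only the bounds quoted above:

```python
def in_validity_range(T, h2so4, rh, ion_induced=False):
    """Check whether (T [K], [H2SO4] [cm^-3], RH [%]) lies inside the
    stated validity range of the neutral or ion-induced parameterization.
    Bounds are transcribed from the abstract; the parameterization
    formulas themselves are not reproduced here.
    """
    if ion_induced:
        return (195.0 <= T <= 400.0
                and 1e4 <= h2so4 <= 1e16
                and 1e-5 <= rh <= 100.0)
    return (165.0 <= T <= 400.0
            and 1e4 <= h2so4 <= 1e13
            and 0.001 <= rh <= 100.0)
```

    A host model would fall back to a default (e.g. zero nucleation rate) outside these ranges rather than extrapolate.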

  18. Stochastic Parameterization: Toward a New View of Weather and Climate Models

    DOE PAGES

    Berner, Judith; Achatz, Ulrich; Batté, Lauriane; ...

    2017-03-31

    The last decade has seen the success of stochastic parameterizations in short-term, medium-range, and seasonal forecasts: operational weather centers now routinely use stochastic parameterization schemes to represent model inadequacy better and to improve the quantification of forecast uncertainty. Developed initially for numerical weather prediction, the inclusion of stochastic parameterizations not only provides better estimates of uncertainty, but it is also extremely promising for reducing long-standing climate biases and is relevant for determining the climate response to external forcing. This article highlights recent developments from different research groups that show that the stochastic representation of unresolved processes in the atmosphere, oceans, land surface, and cryosphere of comprehensive weather and climate models 1) gives rise to more reliable probabilistic forecasts of weather and climate and 2) reduces systematic model bias. We make a case that the use of mathematically stringent methods for the derivation of stochastic dynamic equations will lead to substantial improvements in our ability to accurately simulate weather and climate at all scales. Recent work in mathematics, statistical mechanics, and turbulence is reviewed; its relevance for the climate problem is demonstrated; and future research directions are outlined.

  19. Stochastic Parameterization: Toward a New View of Weather and Climate Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berner, Judith; Achatz, Ulrich; Batté, Lauriane

    The last decade has seen the success of stochastic parameterizations in short-term, medium-range, and seasonal forecasts: operational weather centers now routinely use stochastic parameterization schemes to represent model inadequacy better and to improve the quantification of forecast uncertainty. Developed initially for numerical weather prediction, the inclusion of stochastic parameterizations not only provides better estimates of uncertainty, but it is also extremely promising for reducing long-standing climate biases and is relevant for determining the climate response to external forcing. This article highlights recent developments from different research groups that show that the stochastic representation of unresolved processes in the atmosphere, oceans, land surface, and cryosphere of comprehensive weather and climate models 1) gives rise to more reliable probabilistic forecasts of weather and climate and 2) reduces systematic model bias. We make a case that the use of mathematically stringent methods for the derivation of stochastic dynamic equations will lead to substantial improvements in our ability to accurately simulate weather and climate at all scales. Recent work in mathematics, statistical mechanics, and turbulence is reviewed; its relevance for the climate problem is demonstrated; and future research directions are outlined.

  20. Transmittance and scattering during wound healing after refractive surgery

    NASA Astrophysics Data System (ADS)

    Mar, Santiago; Martinez-Garcia, C.; Blanco, J. T.; Torres, R. M.; Gonzalez, V. R.; Najera, S.; Rodriguez, G.; Merayo, J. M.

    2004-10-01

    Photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK) are techniques frequently performed to correct ametropia. The two methods have been compared in terms of healing, but not in terms of transmittance and light scattering during this process. Scattering during corneal wound healing depends on three parameters: cellular size, cellular density, and the size of the scar. An increase in the angular width of scattering implies a decrease in contrast sensitivity. During wound healing, keratocyte activation is induced and these cells differentiate into fibroblasts and myofibroblasts. Hens were operated on using the PRK and LASIK techniques. The animals used in this experiment were euthanized, and their corneas were immediately removed and placed carefully into a cornea camera support. All optical measurements were made with a scatterometer constructed in our laboratory. Scattering measurements correlate with transmittance: the smaller the transmittance, the greater the scattering. The aim of this work is to provide experimental data on corneal transparency and scattering, in order to supply data that allow a more complete model of corneal transparency to be generated.

  1. IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION IN MM5

    EPA Science Inventory

    The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (~1-km horizontal grid spacing). The UCP accounts for drag ...

  2. Anatomical parameterization for volumetric meshing of the liver

    NASA Astrophysics Data System (ADS)

    Vera, Sergio; González Ballester, Miguel A.; Gil, Debora

    2014-03-01

    A coordinate system describing the interior of organs is a powerful tool for a systematic localization of injured tissue. If the same coordinate values are assigned to specific anatomical landmarks, the coordinate system allows integration of data across different medical image modalities. Harmonic mappings have been used to produce parametric coordinate systems over the surface of anatomical shapes, given their flexibility to set values at specific locations through boundary conditions. However, most existing implementations in medical imaging are restricted to anatomical surfaces, or prescribe the depth coordinate through boundary conditions given at sites of limited geometric diversity. In this paper we present a method for anatomical volumetric parameterization that extends current harmonic parameterizations to the interior anatomy using information provided by the volume's medial surface. We have applied the methodology to define a common reference system for the liver shape and functional anatomy. This reference system sets a solid base for creating anatomical models of the patient's liver, and allows comparing livers from several patients in a common frame of reference.
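
    The harmonic-mapping idea (a smooth interior coordinate obtained by solving Laplace's equation, with values fixed on the organ surface and, here, on the medial surface) can be sketched with Jacobi relaxation on a toy 2-D grid. This is a generic illustration, not the authors' implementation:

```python
import numpy as np

def solve_laplace(fixed_mask, fixed_vals, n_iter=4000):
    """Jacobi relaxation for a harmonic field with Dirichlet conditions.

    fixed_mask: True where the coordinate value is prescribed (e.g. 0 on
    the organ surface, 1 on the medial surface); fixed_vals holds those
    values. Free cells relax toward the average of their 4 neighbours.
    Note: np.roll wraps at the array edge, so in this sketch the entire
    outer ring of the grid should be prescribed.
    """
    u = np.where(fixed_mask, fixed_vals, 0.0)
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(fixed_mask, fixed_vals, avg)  # re-impose boundaries
    return u
```

    The resulting field is smooth, has no interior extrema, and interpolates monotonically between the prescribed surfaces, which is what makes it usable as a depth-like anatomical coordinate.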

  3. Parameterization of Mixed Layer and Deep-Ocean Mesoscales Including Nonlinearity

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Cheng, Y.; Dubovikov, M. S.; Howard, A. M.; Leboissetier, A.

    2018-01-01

    In 2011, Chelton et al. carried out a comprehensive census of mesoscales using altimetry data and reached the following conclusions: "essentially all of the observed mesoscale features are nonlinear" and "mesoscales do not move with the mean velocity but with their own drift velocity," which is "the most germane of all the nonlinear metrics." Accounting for these results in a mesoscale parameterization presents conceptual and practical challenges since linear analysis is no longer usable and one needs a model of nonlinearity. A mesoscale parameterization is presented that has the following features: 1) it is based on the solutions of the nonlinear mesoscale dynamical equations, 2) it describes arbitrary tracers, 3) it includes adiabatic (A) and diabatic (D) regimes, 4) the eddy-induced velocity is the sum of a Gent and McWilliams (GM) term plus a new term representing the difference between drift and mean velocities, 5) the new term lowers the transfer of mean potential energy to mesoscales, 6) the isopycnal slopes are not as flat as in the GM case, 7) deep-ocean stratification is enhanced compared to previous parameterizations where being more weakly stratified allowed a large heat uptake that is not observed, 8) the strength of the Deacon cell is reduced. The numerical results are from a stand-alone ocean code with Coordinated Ocean-Ice Reference Experiment I (CORE-I) normal-year forcing.

  4. Acoustic radiation force expansions in terms of partial wave phase shifts for scattering: Applications

    NASA Astrophysics Data System (ADS)

    Marston, Philip L.; Zhang, Likun

    2016-11-01

    When evaluating radiation forces on spheres in sound fields (with or without orbital angular momentum), the interpretation of analytical results is greatly simplified by retaining the s-function notation for partial-wave coefficients imported into acoustics from quantum scattering theory in the 1970s. This facilitates easy interpretation of various efficiency factors. For situations in which dissipation is negligible, each partial-wave s-function becomes characterized by a single parameter: a phase shift, allowing for all possible situations. These phase shifts are associated with scattering of plane traveling waves, and the incident wavefield of interest is separately parameterized. (When considering outcomes, the method of fabricating symmetric objects having a desirable set of phase shifts becomes a separate issue.) The existence of negative radiation force "islands" for beams, reported in 2006 by Marston, is made evident. This approach, together with consideration of conservation theorems, illustrates the unphysical nature of various claims made by other researchers. The approach is also directly relevant to objects in standing waves. Supported by ONR.

  5. The Impact of Parameterized Convection on Climatological Precipitation in Atmospheric Global Climate Models

    NASA Astrophysics Data System (ADS)

    Maher, Penelope; Vallis, Geoffrey K.; Sherwood, Steven C.; Webb, Mark J.; Sansom, Philip G.

    2018-04-01

    Convective parameterizations are widely believed to be essential for realistic simulations of the atmosphere. However, their deficiencies also result in model biases. The role of convection schemes in modern atmospheric models is examined using Selected Process On/Off Klima Intercomparison Experiment simulations without parameterized convection and forced with observed sea surface temperatures. Convection schemes are not required for reasonable climatological precipitation. However, they are essential for reasonable daily precipitation and constraining extreme daily precipitation that otherwise develops. Systematic effects on lapse rate and humidity are likewise modest compared with the intermodel spread. Without parameterized convection Kelvin waves are more realistic. An unexpectedly large moist Southern Hemisphere storm track bias is identified. This storm track bias persists without convection schemes, as does the double Intertropical Convergence Zone and excessive ocean precipitation biases. This suggests that model biases originate from processes other than convection or that convection schemes are missing key processes.

  6. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change

    NASA Astrophysics Data System (ADS)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.

    2017-12-01

    Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions is stored in the ocean vs. entering the atmosphere to cause warming, and how the extra heat is distributed, depend on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" vs. mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal-wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies, and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex-system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at

  7. Current state of aerosol nucleation parameterizations for air-quality and climate modeling

    NASA Astrophysics Data System (ADS)

    Semeniuk, Kirill; Dastoor, Ashu

    2018-04-01

    Aerosol nucleation parameterization models commonly used in 3-D air quality and climate models have serious limitations. These include variants based on classical nucleation theory, empirical models and other formulations. Recent work based on detailed and extensive laboratory measurements and improved quantum chemistry computation has substantially advanced the state of nucleation parameterizations. For inorganic nucleation involving binary homogeneous nucleation (BHN) and ternary homogeneous nucleation (THN), including ion effects, these new models should be considered worthwhile replacements for the old models. However, the contribution of organic species to nucleation remains poorly quantified. New particle formation includes a distinct post-nucleation growth regime which is characterized by a strong Kelvin curvature effect and is thus dependent on the availability of very low volatility organic species or sulfuric acid. There have been advances in the understanding of the multiphase chemistry of biogenic and anthropogenic organic compounds which helps overcome the initial aerosol growth barrier. Implementation of processes influencing new particle formation is challenging in 3-D models, and there is a lack of comprehensive parameterizations. This review considers the existing models and recent innovations.

  8. New Layer Thickness Parameterization of Diffusive Convection

    NASA Astrophysics Data System (ADS)

    Zhou, Sheng-Qi; Lu, Yuan-Zheng; Guo, Shuang-Xi; Song, Xue-Long; Qu, Ling; Cen, Xian-Rong; Fer, Ilker

    2017-11-01

    Double-diffusive convection is one of the most important non-mechanically driven mixing processes. Its importance has been particularly recognized in oceanography, material science, geology, and planetary physics. Double diffusion occurs in a fluid in which there are gradients of two (or more) properties with different molecular diffusivities and opposing effects on the vertical density distribution. It has two primary modes: salt fingering and diffusive convection. Recently, the importance of diffusive convection has attracted more interest due to its impact on diapycnal mixing in the interior ocean and on ice melting in the Arctic and Antarctic Oceans. In our recent work, we constructed a length scale of the energy-containing eddies and proposed a new layer thickness parameterization of diffusive convection using laboratory experiments and in situ observations in lakes and oceans. The new parameterization describes well both the laboratory convecting layer thicknesses (0.01-0.1 m) and those observed in oceans and lakes (0.1-1000 m). This work was supported by China NSF Grants (41476167, 41406035 and 41176027), NSF of Guangdong Province, China (2016A030311042) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA11030302).

  9. Exploring Stratocumulus Cloud-Top Entrainment Processes and Parameterizations by Using Doppler Cloud Radar Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albrecht, Bruce; Fang, Ming; Ghate, Virendra

    2016-02-01

    Observations from an upward-pointing Doppler cloud radar are used to examine cloud-top entrainment processes and parameterizations in a non-precipitating continental stratocumulus cloud deck maintained by time-varying surface buoyancy fluxes and cloud-top radiative cooling. Radar and ancillary observations of unbroken, non-precipitating stratocumulus clouds were made at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site near Lamont, Oklahoma, for a 14-hour period starting 0900 Central Standard Time on 25 March 2005. The vertical velocity variance and energy dissipation rate (EDR) terms in a parameterized turbulence kinetic energy (TKE) budget of the entrainment zone are estimated using the radar vertical velocity and spectrum width observations from the upward-pointing millimeter cloud radar (MMCR) operating at the SGP site. Hourly averages of the vertical velocity variance term in the TKE entrainment formulation correlate strongly (r = 0.72) with the dissipation rate term in the entrainment zone. However, the ratio of the variance term to the dissipation decreases at night due to decoupling of the boundary layer. When the night-time decoupling is accounted for, the correlation between the variance and the EDR terms increases (r = 0.92). To obtain bulk coefficients for the entrainment parameterizations derived from the TKE budget, independent estimates of entrainment were obtained from an inversion height budget using ARM SGP observations of the local time derivative and the horizontal advection of the cloud-top height. The large-scale vertical velocity at the inversion needed for this budget was taken from ECMWF reanalysis. This budget gives a mean entrainment rate for the observing period of 0.76±0.15 cm/s. This mean value is applied to the TKE budget parameterizations to obtain the bulk coefficients needed in these parameterizations. These bulk coefficients are compared with those from previous studies and are used to

  10. Modelling heterogeneous ice nucleation on mineral dust and soot with parameterizations based on laboratory experiments

    NASA Astrophysics Data System (ADS)

    Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.

    2016-12-01

    Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice nucleation ability of different aerosol species (e.g. desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.

  11. Electron Impact Ionization: A New Parameterization for 100 eV to 1 MeV Electrons

    NASA Technical Reports Server (NTRS)

    Fang, Xiaohua; Randall, Cora E.; Lummerzheim, Dirk; Solomon, Stanley C.; Mills, Michael J.; Marsh, Daniel; Jackman, Charles H.; Wang, Wenbin; Lu, Gang

    2008-01-01

    Low, medium and high energy electrons can penetrate to the thermosphere (90-400 km; 55-240 miles) and mesosphere (50-90 km; 30-55 miles). These precipitating electrons ionize that region of the atmosphere, creating positively charged atoms and molecules and knocking off other negatively charged electrons. The precipitating electrons also create nitrogen-containing compounds along with other constituents. Since the electron precipitation amounts change within minutes, it is necessary to have a rapid method of computing the ionization and production of nitrogen-containing compounds for inclusion in computationally-demanding global models. A new methodology has been developed, which has parameterized a more detailed model computation of the ionizing impact of precipitating electrons over the very large range of 100 eV up to 1,000,000 eV. This new parameterization method is more accurate than a previous parameterization scheme, when compared with the more detailed model computation. Global models at the National Center for Atmospheric Research will use this new parameterization method in the near future.

  12. Mismatch removal via coherent spatial relations

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Ma, Jiayi; Yang, Changcai; Tian, Jinwen

    2014-07-01

    We propose a method for removing mismatches from the given putative point correspondences in image pairs based on "coherent spatial relations." Under the Bayesian framework, we formulate our approach as a maximum likelihood problem and solve a coherent spatial relation between the putative point correspondences using an expectation-maximization (EM) algorithm. Our approach associates each point correspondence with a latent variable indicating it as being either an inlier or an outlier, and alternatively estimates the inlier set and recovers the coherent spatial relation. It can handle not only the case of image pairs with rigid motions but also the case of image pairs with nonrigid motions. To parameterize the coherent spatial relation, we choose two-view geometry and thin-plate spline as models for rigid and nonrigid cases, respectively. The mismatches could be successfully removed via the coherent spatial relations after the EM algorithm converges. The quantitative results on various experimental data demonstrate that our method outperforms many state-of-the-art methods, it is not affected by low initial correct match percentages, and is robust to most geometric transformations including a large viewing angle, image rotation, and affine transformation.
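    A heavily simplified sketch of the EM formulation described above. The "coherent spatial relation" is modeled here as an affine map in place of the paper's two-view geometry and thin-plate-spline models, with a Gaussian-inlier / uniform-outlier mixture; all parameter choices are illustrative.

    ```python
    import numpy as np

    def em_mismatch_removal(src, dst, n_iter=50, gamma0=0.9, area=None):
        """Classify putative correspondences as inliers/outliers with EM.

        Inlier residuals under the fitted affine map follow an isotropic
        Gaussian; outliers are uniform over a region of the given area.
        Returns a boolean inlier mask and the fitted affine transform.
        """
        n = len(src)
        A = np.hstack([src, np.ones((n, 1))])        # homogeneous source coords
        p = np.full(n, gamma0)                       # posterior inlier probabilities
        area = area or float(np.prod(dst.max(0) - dst.min(0)) + 1e-9)
        for _ in range(n_iter):
            # M-step: weighted least-squares affine fit, then noise/mixing updates
            w = np.sqrt(p)[:, None]
            T, *_ = np.linalg.lstsq(A * w, dst * w, rcond=None)
            r2 = np.sum((dst - A @ T) ** 2, axis=1)  # squared residuals
            sigma2 = max(float(np.sum(p * r2) / (2.0 * np.sum(p))), 1e-12)
            gamma = float(np.clip(np.mean(p), 0.01, 0.99))
            # E-step: posterior that each correspondence is an inlier
            g_in = gamma * np.exp(-r2 / (2.0 * sigma2)) / (2.0 * np.pi * sigma2)
            p = g_in / (g_in + (1.0 - gamma) / area)
        return p > 0.5, T
    ```

    The alternation mirrors the abstract: estimate the inlier set given the spatial relation, then re-estimate the relation from the inliers.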

  13. Removal of atmospheric effects from satellite imagery of the oceans.

    PubMed

    Gordon, H R

    1978-05-15

    In attempting to observe the color of the ocean from satellites, it is necessary to remove the effects of atmospheric and sea-surface scattering from the upward radiance at high altitude in order to observe only those photons which were backscattered out of the ocean and hence contain information about subsurface conditions. The observations that (1) the upward radiance from the unwanted photons can be divided into those resulting from Rayleigh scattering alone and those resulting from aerosol scattering alone, (2) the aerosol scattering phase function should be nearly independent of wavelength, and (3) the Rayleigh component can be computed without a knowledge of the sea surface roughness are combined to yield an algorithm for removing a large portion of this unwanted radiance from satellite imagery of the ocean. It is assumed that the ocean is totally absorbing in a band of wavelengths around 750 nm, and it is shown that application of the proposed algorithm to correct the radiance at a wavelength lambda requires only the ratio of the aerosol optical thickness at lambda to that at about 750 nm. The accuracy to which the correction can be made, as a function of the accuracy to which this ratio can be found, is examined in detail. A possible method of finding this ratio from satellite measurements alone is suggested.
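    The three observations combine into a one-line correction, sketched below under the stated black-ocean assumption at ~750 nm. The function name and radiance inputs are hypothetical placeholders, not a real sensor API.

    ```python
    def water_leaving_radiance(Lt_vis, Lr_vis, Lt_nir, Lr_nir, eps):
        """Gordon-style atmospheric correction sketch (single visible band).

        Because the ocean is assumed black near 750 nm, everything left at
        that band after subtracting the computable Rayleigh part is aerosol
        radiance.  `eps` is the ratio of aerosol optical thickness at the
        visible band to that at ~750 nm, which by observation (2) scales
        the aerosol radiance between the two bands.
        """
        La_nir = Lt_nir - Lr_nir          # aerosol radiance at ~750 nm
        La_vis = eps * La_nir             # extrapolate aerosol radiance to the visible
        return Lt_vis - Lr_vis - La_vis   # residual: photons from below the surface
    ```

    With, say, a total radiance of 10 and a Rayleigh component of 4 in the visible, and 5 and 3 at 750 nm with eps = 1.2, the aerosol term is 1.2 × 2 and the water-leaving radiance is 3.6 (in the same arbitrary units).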

  14. Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals

    NASA Technical Reports Server (NTRS)

    Wang, Meng-Hua; King, Michael D.

    1997-01-01

    We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 μm). The sensor-measured radiance at a visible wavelength (0.66 μm) is usually used to infer remotely the cloud optical thickness from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles (≥60°) because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer both (1) for the case of thin clouds and (2) for large solar zenith angles and all clouds. On the basis of the single-scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to clouds at any pressure, provided that the cloud-top pressure is known to within ±100 hPa. With the Rayleigh correction the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. We apply the Rayleigh correction algorithm to the cloud optical thickness retrievals from experimental data obtained during the Atlantic
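    An iterative correction of this kind can be sketched as a fixed-point loop: subtract the estimated Rayleigh contribution, retrieve the optical thickness, and repeat, since the Rayleigh contribution itself depends on the cloud. The toy cloud reflectance model R(τ) = τ/(τ + 6.7) and the τ-dependent damping of the Rayleigh term are assumptions standing in for the paper's radiative-transfer calculations.

    ```python
    def retrieve_tau(R_meas, rayleigh_refl, n_iter=10):
        """Iterative Rayleigh-correction sketch (toy forward model).

        The Rayleigh contribution seen above a cloud is assumed to shrink
        as the cloud thickens, so the correction and the retrieved optical
        thickness tau are solved together by fixed-point iteration.
        """
        inv = lambda R: 6.7 * R / (1.0 - R)        # invert the toy cloud model
        tau = inv(min(R_meas, 0.99))               # first guess: no correction
        for _ in range(n_iter):
            R_ray = rayleigh_refl / (1.0 + tau)    # assumed tau-dependent Rayleigh term
            tau = inv(min(max(R_meas - R_ray, 1e-6), 0.99))
        return tau
    ```

    Forward-simulating a measurement from a known τ with the same toy model and running the loop recovers τ to well under 1% in a few iterations.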

  15. Limitations of one-dimensional mesoscale PBL parameterizations in reproducing mountain-wave flows

    DOE PAGES

    Munoz-Esparza, Domingo; Sauer, Jeremy A.; Linn, Rodman R.; ...

    2015-12-08

    In this study, mesoscale models are considered to be the state of the art in modeling mountain-wave flows. Herein, we investigate the role and accuracy of planetary boundary layer (PBL) parameterizations in handling the interaction between large-scale mountain waves and the atmospheric boundary layer. To that end, we use recent large-eddy simulation (LES) results of mountain waves over a symmetric two-dimensional bell-shaped hill [Sauer et al., J. Atmos. Sci. (2015)], and compare them to four commonly used PBL schemes. We find that one-dimensional PBL parameterizations produce reasonable agreement with the LES results in terms of vertical wavelength, amplitude of velocity, and turbulent kinetic energy distribution in the downhill shooting-flow region. However, the assumption of horizontal homogeneity in PBL parameterizations does not hold in the context of these complex flow configurations. This inappropriate modeling assumption results in a vertical wavelength shift, producing errors of ≈10 m s⁻¹ at downstream locations due to the presence of a coherent trapped lee wave that does not mix with the atmospheric boundary layer. In contrast, horizontally integrated momentum flux derived from these PBL schemes displays a realistic pattern. Therefore, results from mesoscale models using ensembles of one-dimensional PBL schemes can still potentially be used to parameterize drag effects in general circulation models. Nonetheless, three-dimensional PBL schemes must be developed in order for mesoscale models to accurately represent complex-terrain and other types of flows where one-dimensional PBL assumptions are violated.

  16. [Steam and air co-injection in removing TCE in 2D-sand box].

    PubMed

    Wang, Ning; Peng, Sheng; Chen, Jia-Jun

    2014-07-01

    Steam and air co-injection is a newly developed and promising soil remediation technique for non-aqueous phase liquids (NAPLs) in the vadose zone. In this study, in order to investigate the mechanism of the remediation process, trichloroethylene (TCE) removal using steam and air co-injection was carried out in a 2-dimensional sandbox with different layered sand structures. The results showed that co-injection greatly alleviated the "tailing" effect compared to soil vapor extraction (SVE), and that the remediation process of steam and air co-injection could be divided into an SVE stage, a steam-strengthening stage and a heat penetration stage. The removal ratio was higher and the removal faster in the experiment with a scattered contaminant area. The removal ratios from the two experiments were 93.5% and 88.2%, and the removal periods were 83.9 min and 90.6 min, respectively. Steam strengthened the heat penetration stage. The temperature transition region was wider in the scattered NAPL distribution experiment, which reduced the accumulation of TCE. Slight downward movement of TCE was observed in the experiment with TCE initially distributed in a fine sand zone, and such downward movement reduced the TCE removal ratio.

  17. Computing the scatter component of mammographic images.

    PubMed

    Highnam, R P; Brady, J M; Shepstone, B J

    1994-01-01

    The authors build upon a technical report (Tech. Report OUEL 2009/93, Engng. Sci., Oxford Uni., Oxford, UK, 1993) in which they proposed a model of the mammographic imaging process for which scattered radiation is a key degrading factor. Here, the authors propose a way of estimating the scatter component of the signal at any pixel within a mammographic image, and they use this estimate for model-based image enhancement. The first step is to extend the authors' previous model to divide breast tissue into "interesting" (fibrous/glandular/cancerous) tissue and fat. The scatter model is then based on the idea that the amount of scattered radiation reaching a point is related to the energy imparted to the surrounding neighbourhood. This complex relationship is approximated using published empirical data, and it varies with the size of the breast being imaged. The approximation is further complicated by the need to take account of extra-focal radiation and breast edge effects. The approximation takes the form of a weighting mask which is convolved with the total signal (primary and scatter) to give a value which is input to a "scatter function", approximated using three reference cases, which returns a scatter estimate. Given a scatter estimate, the more important primary component can be calculated and used to create an image recognizable by a radiologist. The images resulting from this process are clearly enhanced, and model verification tests based on an estimate of the thickness of interesting tissue present proved very successful. A good scatter model opens the way for further processing to remove the effects of other degrading factors, such as beam hardening.
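    The convolve-then-map pipeline described above can be sketched as follows. The Gaussian weighting mask and the linear "scatter function" (slope s) are stand-ins for the empirically fitted forms in the paper.

    ```python
    import numpy as np

    def estimate_scatter(total, mask_sigma=25, s=0.4):
        """Scatter-component sketch: convolve the total signal with a
        neighbourhood weighting mask (local energy imparted), map the
        result through a toy linear 'scatter function', and subtract to
        recover the primary component.
        """
        h, w = total.shape
        y, x = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        mask = np.exp(-(x**2 + y**2) / (2.0 * mask_sigma**2))
        mask /= mask.sum()
        # circular convolution via FFT; mask centre moved to the origin first
        energy = np.real(np.fft.ifft2(
            np.fft.fft2(total) * np.fft.fft2(np.fft.ifftshift(mask))))
        scatter = s * energy               # toy linear 'scatter function'
        primary = total - scatter          # the diagnostically useful component
        return scatter, primary
    ```

    On a uniform image the neighbourhood energy equals the signal itself, so the scatter estimate is simply the fraction s everywhere, a quick sanity check on the normalization.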

  18. Whys and Hows of the Parameterized Interval Analyses: A Guide for the Perplexed

    NASA Astrophysics Data System (ADS)

    Elishakoff, I.

    2013-10-01

    Novel elements of the parameterized interval analysis developed in [1, 2] are emphasized in this response to Professor E. D. Popova, or possibly to others who may be perplexed by the parameterized interval analysis. It is also shown that the overwhelming majority of comments by Popova [3] are based on a misreading of our paper [1]. Partial responsibility for this misreading can be attributed to the fact that the explanations provided in [1] were laconic; these could have been more extensive in view of the novelty of our approach [1, 2]. It is our duty, therefore, to reiterate in this response the whys and hows of the parameterization of intervals, introduced in [1] to incorporate possibly available information on dependencies between the various intervals describing the problem at hand. This possibility appears to have been discarded by standard interval analysis, which may, as a result, lead to overdesign and to the possible divorce of engineers from the otherwise beautiful interval analysis.

  19. Evaluation of different parameterizations of the spatial heterogeneity of subsurface storage capacity for hourly runoff simulation in boreal mountainous watershed

    NASA Astrophysics Data System (ADS)

    Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur

    2015-03-01

    Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies aimed specifically at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments, hereafter called elements) and a distributed (1 × 1 km² grid) setup. We evaluated representation of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods up to 0.84 and 0.86 respectively, and similarly for the log-transformed streamflow up to 0.85 and 0.90. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or subgrid heterogeneity provided simulation performance equivalent to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from denser precipitation stations than are required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in the identification of parameterizations based only on calibration to catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data.
    In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the
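    The Nash-Sutcliffe efficiency used to score these simulations is a standard skill metric (1 = perfect, 0 = no better than the mean of the observations) and can be computed directly; the log variant weights low-flow performance more heavily.

    ```python
    import numpy as np

    def nse(obs, sim, log=False):
        """Nash-Sutcliffe efficiency: 1 - SSE / variance-about-the-mean.
        With log=True it is computed on log-transformed flows."""
        obs = np.asarray(obs, float)
        sim = np.asarray(sim, float)
        if log:
            obs, sim = np.log(obs), np.log(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    ```

    A simulation that simply outputs the observed mean scores exactly 0, which is why values above ~0.8, as reported here, indicate a usefully skillful model.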

  20. Scattering of clusters of spherical particles—Modeling and inverse problem solution in the Rayleigh-Gans approximation

    NASA Astrophysics Data System (ADS)

    Eliçabe, Guillermo E.

    2013-09-01

    In this work, an exact scattering model for a system of clusters of spherical particles, based on the Rayleigh-Gans approximation, has been parameterized in such a way that it can be solved in inverse form using Tikhonov regularization to obtain the morphological parameters of the clusters: the average number of particles per cluster, the size of the primary spherical units that form the cluster, and the Discrete Distance Distribution Function from which the z-average square radius of gyration of the system of clusters is obtained. The methodology is validated through a series of simulated and experimental examples of x-ray and light scattering, which show that the proposed methodology works satisfactorily in non-ideal situations such as the presence of error in the measurements, the presence of error in the model, and several types of non-idealities present in the experimental cases.
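    The inversion tool named above, Tikhonov regularization of a linear model, has a closed form worth stating: minimize ||Ax - b||² + λ||x||², solved by the regularized normal equations. The matrix A below is a generic stand-in, not the paper's Rayleigh-Gans kernel.

    ```python
    import numpy as np

    def tikhonov_solve(A, b, lam):
        """Tikhonov-regularized least squares:
        x = (A^T A + lam * I)^{-1} A^T b.
        For an ill-posed scattering inversion, lam damps the noise-amplifying
        small singular values of A at the cost of a small bias."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    ```

    For a well-conditioned A and tiny λ the solution reduces to ordinary least squares; the regularization only matters when A is nearly rank-deficient, as scattering kernels typically are.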

  1. Climate impacts of parameterized Nordic Sea overflows

    NASA Astrophysics Data System (ADS)

    Danabasoglu, Gokhan; Large, William G.; Briegleb, Bruce P.

    2010-11-01

    A new overflow parameterization (OFP) of density-driven flows through ocean ridges via narrow, unresolved channels has been developed and implemented in the ocean component of the Community Climate System Model version 4. It represents exchanges from the Nordic Seas and the Antarctic shelves, associated entrainment, and subsequent injection of overflow product waters into the abyssal basins. We investigate the effects of the parameterized Denmark Strait (DS) and Faroe Bank Channel (FBC) overflows on the ocean circulation, showing their impacts on the Atlantic Meridional Overturning Circulation and the North Atlantic climate. The OFP is based on the Marginal Sea Boundary Condition scheme of Price and Yang (1998), but there are significant differences that are described in detail. Two uncoupled (ocean-only) and two fully coupled simulations are analyzed. Each pair consists of one case with the OFP and a control case without this parameterization. In both uncoupled and coupled experiments, the parameterized DS and FBC source volume transports are within the range of observed estimates. The entrainment volume transports remain lower than observational estimates, leading to lower than observed product volume transports. Due to low entrainment, the product and source water properties are too similar. The DS and FBC overflow temperature and salinity properties are in better agreement with observations in the uncoupled case than in the coupled simulation, likely reflecting surface flux differences. The most significant impact of the OFP is the improved North Atlantic Deep Water penetration depth, leading to a much better comparison with the observational data and significantly reducing the chronic, shallow penetration depth bias in level coordinate models. This improvement is due to the deeper penetration of the southward flowing Deep Western Boundary Current. In comparison with control experiments without the OFP, the abyssal ventilation rates increase in the North

  2. A Robust Parameterization of Human Gait Patterns Across Phase-Shifting Perturbations

    PubMed Central

    Villarreal, Dario J.; Poonawala, Hasan A.; Gregg, Robert D.

    2016-01-01

    The phase of human gait is difficult to quantify accurately in the presence of disturbances. In contrast, recent bipedal robots use time-independent controllers relying on a mechanical phase variable to synchronize joint patterns through the gait cycle. This concept has inspired studies to determine if human joint patterns can also be parameterized by a mechanical variable. Although many phase variable candidates have been proposed, it remains unclear which, if any, provide a robust representation of phase for human gait analysis or control. In this paper we analytically derive an ideal phase variable (the hip phase angle) that is provably monotonic and bounded throughout the gait cycle. To examine the robustness of this phase variable, ten able-bodied human subjects walked over a platform that randomly applied phase-shifting perturbations to the stance leg. A statistical analysis found the correlations between nominal and perturbed joint trajectories to be significantly greater when parameterized by the hip phase angle (0.95+) than by time or a different phase variable. The hip phase angle also best parameterized the transient errors about the nominal periodic orbit. Finally, interlimb phasing was best explained by local (ipsilateral) hip phase angles that are synchronized during the double-support period. PMID:27187967
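    A generic phase-portrait construction of a monotonic phase variable, in the spirit of the hip phase angle: plot the centered joint angle against its (amplitude-matched) velocity and take the unwrapped polar angle. The centering and normalization here are illustrative assumptions, not the paper's exact definition.

    ```python
    import numpy as np

    def dominant_freq(x, dt):
        """Peak of the amplitude spectrum (ignoring DC), in Hz."""
        f = np.fft.rfftfreq(len(x), dt)
        amp = np.abs(np.fft.rfft(x))
        return f[1 + np.argmax(amp[1:])]

    def hip_phase_angle(q, dt):
        """Phase-angle sketch for a roughly periodic joint signal.

        The velocity is scaled by the dominant angular frequency so the
        phase portrait is roughly circular, which keeps the resulting
        phase variable close to uniform in time.
        """
        q = np.asarray(q, float)
        qc = q - q.mean()                         # center the angle
        dq = np.gradient(qc, dt)                  # numerical velocity
        dq_scaled = dq / (2 * np.pi * dominant_freq(qc, dt))
        return np.unwrap(np.arctan2(-dq_scaled, qc))
    ```

    For a sinusoidal joint trajectory this returns a phase that advances monotonically by 2π per cycle, the monotonicity and boundedness property the abstract highlights.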

  3. Evaluating and Improving Wind Forecasts over South China: The Role of Orographic Parameterization in the GRAPES Model

    NASA Astrophysics Data System (ADS)

    Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia

    2018-06-01

    Unresolved small-scale orographic (SSO) drags are parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drags are represented by adding a sink term in the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as the feedbacks to the momentum tendencies on the first model level in planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias has been much alleviated by adopting the SSOP scheme, in addition to reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimation over the complex regions in the southwest of Guangdong.

  4. Parameterized cross sections for Coulomb dissociation in heavy-ion collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Cucinotta, F. A.; Townsend, L. W.; Badavi, F. F.

    1988-01-01

    Simple parameterizations of Coulomb dissociation cross sections for use in heavy-ion transport calculations are presented and compared to available experimental dissociation data. The agreement between calculation and experiment is satisfactory considering the simplicity of the calculations.

  5. Why different gas flux velocity parameterizations result in so similar flux results in the North Atlantic?

    NASA Astrophysics Data System (ADS)

    Piskozub, Jacek; Wróbel, Iwona

    2016-04-01

    The North Atlantic is a crucial region for both ocean circulation and the carbon cycle. Most of the ocean's deep waters are produced in the basin, making it a large CO2 sink. The region, close to the major oceanographic centres, has been well covered with cruises. This is why we have performed a study of net CO2 flux dependence upon the choice of gas transfer velocity (k) parameterization for this very region: the North Atlantic, including the European Arctic seas. The study has been part of an ESA-funded OceanFlux GHG Evolution project and, at the same time, a PhD thesis (of I.W.) funded by the Centre of Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). Early results were presented last year at EGU 2015 as PICO presentation EGU2015-11206-1. We have used FluxEngine, a tool created within an earlier ESA-funded project (OceanFlux Greenhouse Gases), to calculate the North Atlantic and global fluxes with different gas transfer velocity formulas. During the processing of the data, we noticed that the North Atlantic results for different k formulas are more similar (in the sense of relative error) than the global ones. This was true both for parameterizations using the same power of wind speed and when comparing wind-squared and wind-cubed parameterizations. This result was interesting because North Atlantic winds are stronger than the global average. Was the flux similarity caused by the fact that the parameterizations were tuned to the North Atlantic, where many of the early cruises measuring CO2 fugacities were performed? A closer look at the parameterizations and their history showed that not all of them were based on North Atlantic data. Some were tuned to the Southern Ocean, with even stronger winds, while some were based on global budgets of 14C. 
However, we have found two reasons, not previously reported in the literature, why North Atlantic fluxes are more similar than global ones across different gas transfer velocity parameterizations.
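    One way to see why wind-squared and wind-cubed k formulas can give similar North Atlantic fluxes is to compare two widely used forms directly. The coefficients below are the commonly quoted Wanninkhof (1992) quadratic and Wanninkhof & McGillis (1999) cubic values; treat them as illustrative rather than authoritative here.

```python
def k_quadratic(u10, sc=660.0):
    """Quadratic form, k = 0.31 * u10^2 * (Sc/660)^-0.5, in cm/h."""
    return 0.31 * u10**2 * (sc / 660.0) ** -0.5

def k_cubic(u10, sc=660.0):
    """Cubic form, k = 0.0283 * u10^3 * (Sc/660)^-0.5, in cm/h."""
    return 0.0283 * u10**3 * (sc / 660.0) ** -0.5

# The two curves cross near u10 = 0.31 / 0.0283 ~ 11 m/s, i.e. within the
# range of typical North Atlantic wind speeds, so fluxes integrated over
# that wind regime can come out similar for either power law.
```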

  6. Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone

    NASA Astrophysics Data System (ADS)

    Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.

    2017-12-01

    The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high-resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high-resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show that 1) estimated transition probabilities agree with simulated values and 2) using the SMM with the estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
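    The velocity-correlation effect at the heart of the SMM can be illustrated with a toy two-class version. All names, transition matrices, and velocity values below are illustrative, not taken from the paper: the point is only that a diagonal-heavy transition matrix (persistent velocities) spreads arrival times far more than uncorrelated steps.

```python
import random

def smm_step(state, T, velocities, dx, rng):
    """One spatial step of a two-velocity-class spatial Markov model:
    the next velocity class depends on the current class via T."""
    next_state = 0 if rng.random() < T[state][0] else 1
    return next_state, dx / velocities[next_state]

def breakthrough_times(n_particles, n_steps, T, velocities, dx, seed=0):
    """Arrival times of particles after n_steps spatial increments."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_particles):
        state, t = rng.randrange(2), 0.0
        for _ in range(n_steps):
            state, dt = smm_step(state, T, velocities, dx, rng)
            t += dt
        times.append(t)
    return times

T_corr = [[0.9, 0.1], [0.1, 0.9]]      # persistent (correlated) velocities
T_uncorr = [[0.5, 0.5], [0.5, 0.5]]    # memoryless steps
vel = [0.1, 1.0]                       # slow and fast classes (arbitrary units)
t_corr = breakthrough_times(2000, 50, T_corr, vel, dx=1.0)
t_uncorr = breakthrough_times(2000, 50, T_uncorr, vel, dx=1.0)
```

Both cases share the same stationary velocity distribution (and hence similar mean arrival time), so the difference in BTC spread is purely a correlation effect; this is the quantity the paper's inverse method extracts from two measured BTCs.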

  7. Effective Tree Scattering at L-Band

    NASA Technical Reports Server (NTRS)

    Kurum, Mehmet; ONeill, Peggy E.; Lang, Roger H.; Joseph, Alicia T.; Cosh, Michael H.; Jackson, Thomas J.

    2011-01-01

    For routine microwave soil moisture (SM) retrieval through vegetation, the tau-omega model [1] (the zero-order radiative transfer (RT) solution) is attractive due to its simplicity and ease of inversion and implementation. It is the model used in baseline retrieval algorithms for several planned microwave space missions, such as ESA's Soil Moisture Ocean Salinity (SMOS) mission (launched November 2009) and NASA's Soil Moisture Active Passive (SMAP) mission (to be launched 2014/2015) [2, 3]. These approaches are adapted for vegetated landscapes with effective vegetation parameters tau and omega, obtained by fitting experimental data or simulation outputs of a multiple-scattering model [4-7]. The model has been validated over grasslands, agricultural crops, and generally light to moderate vegetation. As the density of vegetation increases, sensitivity to the underlying SM degrades significantly and errors in the retrieved SM increase accordingly. The zero-order model also loses its validity when dense vegetation (e.g., forest, mature corn) includes scatterers, such as branches and trunks (or stalks in the case of corn), that are large with respect to the wavelength. The tau-omega model, when applied over moderately to densely vegetated landscapes, will need modification (in terms of form or effective parameterization) to enable accurate characterization of vegetation parameters with respect to specific tree types, anisotropic canopy structure, and the presence of leaves and/or understory. More scattering terms (at least up to first order at L-band) should be included in the RT solutions for forest canopies [8]. Although not really suitable for forests, a zero-order tau-omega model might still be applied to vegetation canopies with large scatterers, but equivalent or effective parameters would then have to be used [4]. This requires that the effective values (vegetation opacity and single-scattering albedo) be evaluated (compared) with theoretical definitions of

  8. Parameterization of Transport and Period Matrices with X-Y Coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courant, E. D.

    A parameterization of 4x4 matrices describing linear beam transport systems has been obtained by Edwards and Teng. Here we extend their formalism to include dispersive effects, and give prescriptions for incorporating it in the program SYNCH.

  9. The Grell-Freitas Convection Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    NASA Technical Reports Server (NTRS)

    Freitas, Saulo R.; Grell, Georg; Molod, Andrea; Thompson, Matthew A.

    2017-01-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, mid, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over the land. Here, we briefly introduce the recent developments, implementation, and preliminary results of this parameterization in the NASA GEOS modeling system.

  10. Observational Study and Parameterization of Aerosol-fog Interactions

    NASA Astrophysics Data System (ADS)

    Duan, J.; Guo, X.; Liu, Y.; Fang, C.; Su, Z.; Chen, Y.

    2014-12-01

    Studies have shown that human activities such as increased aerosols affect fog occurrence and properties significantly, and accurate numerical fog forecasting depends, to a large extent, on parameterization of fog microphysics and aerosol-fog interactions. Furthermore, fogs can be considered as clouds near the ground, and they enjoy an advantage that clouds do not: they permit comprehensive long-term in-situ measurements. Knowledge learned from studying aerosol-fog interactions will therefore provide useful insights into aerosol-cloud interactions. To serve the twofold objectives of understanding and improving parameterizations of aerosol-fog interactions and aerosol-cloud interactions, this study examines data collected from fogs, with a focus on (but not limited to) the data collected in Beijing, China. Data examined include aerosol particle size distributions measured by a Passive Cavity Aerosol Spectrometer Probe (PCASP-100X), fog droplet size distributions measured by a Fog Monitor (FM-120), cloud condensation nuclei (CCN), liquid water path measured by radiometers and visibility sensors, along with meteorological variables measured by a Tethered Balloon Sounding System (XLS-II) and an Automatic Weather Station (AWS). The results will be compared with low-level clouds for similarities and differences between fogs and clouds.

  11. Sensitivity of liquid clouds to homogenous freezing parameterizations.

    PubMed

    Herbert, Ross J; Murray, Benjamin J; Dobbie, Steven J; Koop, Thomas

    2015-03-16

    Water droplets in some clouds can supercool to temperatures where homogeneous ice nucleation becomes the dominant freezing mechanism. In many cloud-resolving and mesoscale models, it is assumed that homogeneous ice nucleation in water droplets only occurs below some threshold temperature, typically set at -40°C. However, laboratory measurements show that there is a finite rate of nucleation at warmer temperatures. In this study we use a parcel model with detailed microphysics to show that cloud properties can be sensitive to homogeneous ice nucleation as warm as -30°C. Thus, homogeneous ice nucleation may be more important for cloud development, precipitation rates, and key cloud radiative parameters than is often assumed. Furthermore, we show that cloud development is particularly sensitive to the temperature dependence of the nucleation rate. In order to better constrain the parameterization of homogeneous ice nucleation, laboratory measurements are needed at both high (>-35°C) and low (<-38°C) temperatures. Key points: homogeneous freezing may be significant as warm as -30°C; homogeneous freezing should not be represented by a threshold approximation; an improved parameterization of homogeneous ice nucleation is needed.
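    The difference between a rate-based and a threshold treatment can be sketched with Poisson freezing statistics. The rate law `J_illustrative` below is a hypothetical stand-in (a steep exponential in temperature, roughly mimicking laboratory behavior near -35°C), not a measured parameterization; it is here only to show that freezing switches on steeply but continuously rather than at a hard -40°C cutoff.

```python
import math

def frozen_fraction(J_per_m3_s, droplet_volume_m3, t_seconds):
    """Fraction of droplets frozen after time t for homogeneous
    nucleation rate J, from Poisson statistics: P = 1 - exp(-J*V*t)."""
    return 1.0 - math.exp(-J_per_m3_s * droplet_volume_m3 * t_seconds)

def J_illustrative(T_celsius, J35=1e6, decades_per_K=3.0):
    """Hypothetical rate law: J rises ~3 orders of magnitude per kelvin
    of cooling, anchored at an assumed value J35 (m^-3 s^-1) at -35 C."""
    return J35 * 10.0 ** (-decades_per_K * (T_celsius + 35.0))

V = (4.0 / 3.0) * math.pi * (10e-6) ** 3   # 10-micron-radius droplet
f_m30 = frozen_fraction(J_illustrative(-30.0), V, 600.0)  # 10 min at -30 C
f_m38 = frozen_fraction(J_illustrative(-38.0), V, 600.0)  # 10 min at -38 C
```

With these (assumed) numbers the frozen fraction goes from negligible at -30°C to essentially complete at -38°C, yet it is a finite rate everywhere; the paper's point is that for some clouds even the small warm-side rates matter.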

  12. Dissolved Nutrient Removal in River Networks: When and Where

    NASA Astrophysics Data System (ADS)

    Ye, S.; Ran, Q.

    2017-12-01

    Along the river network, water, sediment, and nutrients are transported, cycled, and altered by coupled hydrological and biogeochemical processes. Due to increasing human activities such as urbanization and fertilizer application associated with agricultural land use, nitrogen and phosphorus inputs to aquatic ecosystems have increased dramatically since the beginning of the 20th century. Meanwhile, our current understanding of the rates and processes controlling the cycling and removal of dissolved inorganic nutrients in river networks is still limited by a lack of empirical measurements, especially in large rivers. Here, based on the simulation of a coupled hydrological and biogeochemical process model, we track nutrient uptake at the network scale. The model was parameterized with literature values from headwater streams and empirical measurements made in 15 rivers with varying hydrological, biological, and topographic characteristics. We applied the coupled model to an agricultural catchment in the Midwest to estimate the residence time, reaction time, and travel distance of the nutrient exported from different places across the watershed. In this work, we explore how to use these temporal and spatial characteristics to quantify nutrient removal across the river network. We then further investigate the impact of heterogeneous lateral input on network-scale nutrient removal: does it influence the overall nutrient removal in the watershed, and if so, to what extent?

  13. Sensitivity of CONUS Summer Rainfall to the Selection of Cumulus Parameterization Schemes in NU-WRF Seasonal Simulations

    NASA Technical Reports Server (NTRS)

    Iguchi, Takamichi; Tao, Wei-Kuo; Wu, Di; Peters-Lidard, Christa; Santanello, Joseph A.; Kemp, Eric; Tian, Yudong; Case, Jonathan; Wang, Weile; Ferraro, Robert; hide

    2017-01-01

    This study investigates the sensitivity of daily rainfall rates in regional seasonal simulations over the contiguous United States (CONUS) to different cumulus parameterization schemes. Daily rainfall fields were simulated at 24-km resolution using the NASA-Unified Weather Research and Forecasting (NU-WRF) Model for June-August 2000. Four cumulus parameterization schemes and two options for shallow cumulus components in a specific scheme were tested. The spread in the domain-mean rainfall rates across the parameterization schemes was generally consistent between the entire CONUS and most subregions. The selection of the shallow cumulus component in a specific scheme had more impact than that of the four cumulus parameterization schemes. Regional variability in the performance of each scheme was assessed by calculating optimally weighted ensembles that minimize full root-mean-square errors against reference datasets. The spatial pattern of the seasonally averaged rainfall was insensitive to the selection of cumulus parameterization over mountainous regions because of the topographical pattern constraint, so that the simulation errors were mostly attributed to the overall bias there. In contrast, the spatial patterns over the Great Plains regions as well as the temporal variation over most parts of the CONUS were relatively sensitive to cumulus parameterization selection. Overall, adopting a single simulation result was preferable to generating a better ensemble for the seasonally averaged daily rainfall simulation, as long as their overall biases had the same positive or negative sign. However, an ensemble of multiple simulation results was more effective in reducing errors in the case of also considering temporal variation.

  14. Study of different deposition parameterizations on an atmospheric mesoscale Eulerian air quality model: Madrid case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    San Jose, R.; Cortes, J.; Moreno, J.

    1996-12-31

    The importance of an adequate parameterization of the deposition process for the simulation of three-dimensional pollution fields in a mesoscale context is beyond doubt. An accurate parameterization of the deposition flux is essential for a precise determination of the flux removal and for allowing longer simulation periods of the atmospheric processes. In addition, an accurate deposition pattern will allow a much more precise diagnosis of the impact of different pollutants on the different types of terrain present in complex environments such as urban ones and their environs. In this contribution, we have implemented a complex resistance deposition model into an Air Quality System (ANA) applied over a large city, Madrid (Spain). The model domain is 80 x 100 km, which is much larger than the actual urban domain. The ANA model is composed of four modules: a meteorological module, which solves the Navier-Stokes equations numerically and predicts the three-dimensional wind, temperature and humidity fields every time step; an emission module, which produces the emissions every hour with high spatial resolution (250 x 250 m) and with land-use information (for biogenic emissions) from a Landsat-5 satellite image; a photochemical module, which is based on the CBM-IV mechanism and solved numerically with the SMVGEAR method; and finally, a deposition module based on the resistance approach. The resistance module takes into account the land-use classification, the global solar radiation, the humidity of the terrain, the pH of the terrain, the characteristics of the pollutant, the leaf area index and the reactivity of the pollutant.

  15. Healing X-ray scattering images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jiliang; Lhermitte, Julien; Tian, Ye

    X-ray scattering images contain numerous gaps and defects arising from detector limitations and experimental configuration. Here, we present a method to heal X-ray scattering images, filling gaps in the data and removing defects in a physically meaningful manner. Unlike generic inpainting methods, this method is closely tuned to the expected structure of reciprocal-space data. In particular, we exploit statistical tests and symmetry analysis to identify the structure of an image; we then copy, average and interpolate measured data into gaps in a way that respects the identified structure and symmetry. Importantly, the underlying analysis methods provide useful characterization of structures present in the image, including the identification of diffuse versus sharp features, anisotropy and symmetry. The presented method leverages known characteristics of reciprocal space, enabling physically reasonable reconstruction even with large image gaps. The method will correspondingly fail for images that violate these underlying assumptions. The method assumes point symmetry and is thus applicable to small-angle X-ray scattering (SAXS) data, but only to a subset of wide-angle data. Our method succeeds in filling gaps and healing defects in experimental images, including extending data beyond the original detector borders.

  16. Healing X-ray scattering images

    DOE PAGES

    Liu, Jiliang; Lhermitte, Julien; Tian, Ye; ...

    2017-05-24

    X-ray scattering images contain numerous gaps and defects arising from detector limitations and experimental configuration. Here, we present a method to heal X-ray scattering images, filling gaps in the data and removing defects in a physically meaningful manner. Unlike generic inpainting methods, this method is closely tuned to the expected structure of reciprocal-space data. In particular, we exploit statistical tests and symmetry analysis to identify the structure of an image; we then copy, average and interpolate measured data into gaps in a way that respects the identified structure and symmetry. Importantly, the underlying analysis methods provide useful characterization of structures present in the image, including the identification of diffuse versus sharp features, anisotropy and symmetry. The presented method leverages known characteristics of reciprocal space, enabling physically reasonable reconstruction even with large image gaps. The method will correspondingly fail for images that violate these underlying assumptions. The method assumes point symmetry and is thus applicable to small-angle X-ray scattering (SAXS) data, but only to a subset of wide-angle data. Our method succeeds in filling gaps and healing defects in experimental images, including extending data beyond the original detector borders.
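    The point-symmetry assumption can be sketched very simply: if the pattern satisfies I(q) = I(-q) about the beam center, a gap pixel can be copied from its point-symmetric partner when that partner was measured. This toy sketch (plain nested lists, a hypothetical 3x3 "detector") illustrates only that one symmetry rule, not the paper's full copy/average/interpolate pipeline.

```python
def heal_point_symmetry(img, mask, center):
    """Fill masked (gap) pixels using point symmetry about the beam
    center: pixel (i, j) is copied from (2*ci - i, 2*cj - j) when the
    mirrored pixel lies on the detector and was actually measured."""
    ci, cj = center
    n, m = len(img), len(img[0])
    healed = [row[:] for row in img]
    for i in range(n):
        for j in range(m):
            if mask[i][j]:                          # pixel is a gap
                mi, mj = 2 * ci - i, 2 * cj - j     # symmetric partner
                if 0 <= mi < n and 0 <= mj < m and not mask[mi][mj]:
                    healed[i][j] = img[mi][mj]
    return healed

# A centro-symmetric 3x3 pattern with one dead pixel at (2, 0);
# its point-symmetric partner about the center (1, 1) is (0, 2).
img  = [[1, 2, 3],
        [4, 9, 4],
        [0, 2, 1]]
mask = [[False] * 3, [False] * 3, [True, False, False]]
healed = heal_point_symmetry(img, mask, center=(1, 1))
```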

  17. Towards a new parameterization of ice particles growth

    NASA Astrophysics Data System (ADS)

    Krakovska, Svitlana; Khotyayintsev, Volodymyr; Bardakov, Roman; Shpyg, Vitaliy

    2017-04-01

    Ice particles are the main component of polar clouds, unlike in warmer regions. That is why correct representation of ice particle formation and growth in NWP and other numerical atmospheric models is crucial for understanding the whole chain of water transformation, including precipitation formation and its further deposition as snow in polar glaciers. Currently, parameterization of ice in atmospheric models is among the most difficult challenges. In the presented research, we give a renewed theoretical analysis of the evolution of a mixed cloud or cold fog from the moment of ice nuclei activation until complete crystallization. A simplified model is proposed that includes supercooled cloud droplets, initially uniform particles of ice, and water vapor. We obtain independent dimensionless input parameters of a cloud, and find the main scenarios and stages of evolution of the microphysical state of the cloud. The characteristic times and particle sizes have been found, as well as the peculiarities of the microphysical processes at each stage of evolution. In the future, the proposed original and physically grounded approximations may serve as a basis for new, scientifically substantiated and numerically efficient parameterizations of microphysical processes in mixed clouds for modern atmospheric models. The relevance of the theoretical analysis is confirmed by numerical modeling for a wide range of combinations of possible conditions in the atmosphere, including cold polar regions. The main conclusion of the research is that until the complete disappearance of cloud droplets, the growth of ice particles occurs at practically constant humidity corresponding to saturation over water, regardless of all other parameters of the cloud. This process can be described by one first-order differential equation. Moreover, a dimensionless parameter has been proposed as a quantitative criterion of a transition from dominant depositional to intense
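    The "one first-order differential equation" regime described above (growth at fixed water-saturation humidity) has the classic closed form of diffusional growth: r dr/dt = G gives r(t) = sqrt(r0^2 + 2Gt). The lumped growth coefficient `G` below is a hypothetical constant bundling vapor diffusivity and the fixed supersaturation; it is not a value from the paper.

```python
def ice_radius(t, r0, G):
    """Diffusional (depositional) growth at constant humidity excess:
    r * dr/dt = G  integrates to  r(t) = sqrt(r0**2 + 2*G*t).
    G is an assumed lumped growth coefficient (m^2/s)."""
    return (r0**2 + 2.0 * G * t) ** 0.5
```

The square-root form captures the key qualitative behavior: growth decelerates as particles enlarge, which is why the crystallization stage has a well-defined characteristic time.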

  18. New particle-dependent parameterizations of heterogeneous freezing processes.

    NASA Astrophysics Data System (ADS)

    Diehl, Karoline; Mitra, Subir K.

    2014-05-01

    For detailed investigations of cloud microphysical processes, an adiabatic air parcel model with entrainment is used. It is a spectral bin model which explicitly solves the microphysical equations. The initiation of the ice phase is parameterized and describes the effects of different types of ice nuclei (mineral dust, soot, biological particles) in the immersion, contact, and deposition modes. As part of the research group INUIT (Ice Nuclei research UnIT), existing parameterizations have been modified for the present studies and new parameterizations have been developed, mainly on the basis of the outcome of INUIT experiments. Deposition freezing in the model is dependent on the presence of dry particles and on ice supersaturation. The description of contact freezing combines the collision kernel of dry particles with the fraction of frozen drops as a function of temperature and particle size. A new parameterization of immersion freezing has been coupled to the mass of insoluble particles contained in the drops, using measured numbers of ice-active sites per unit mass. Sensitivity studies have been performed with a convective temperature and dew point profile and with two dry aerosol particle number size distributions. Single and coupled freezing processes are studied with different types of ice nuclei (e.g., bacteria, illite, kaolinite, feldspar). The strength of convection is varied so that the simulated cloud reaches different temperature levels. As a parameter to evaluate the results, the ice water fraction is selected, defined as the ratio of the ice water content to the total water content. Ice water fractions between 0.1 and 0.9 represent mixed-phase clouds, and fractions larger than 0.9 represent ice clouds. The results indicate the sensitive parameters for the formation of mixed-phase and ice clouds are: 1. broad particle number size distribution with a high number of small particles, 2. temperatures below -25°C, 3. specific mineral dust particles as ice nuclei such

  19. IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION FOR FINE-SCALE SIMULATIONS

    EPA Science Inventory

    The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (1-km horizontal grid spacing). The UCP accounts for dr...

  20. The effects of surface evaporation parameterizations on climate sensitivity to solar constant variations

    NASA Technical Reports Server (NTRS)

    Chou, S.-H.; Curran, R. J.; Ohring, G.

    1981-01-01

    The effects of two different evaporation parameterizations on the sensitivity of simulated climate to solar constant variations are investigated by using a zonally averaged climate model. One parameterization is a nonlinear formulation in which the evaporation is nonlinearly proportional to the sensible heat flux, with the Bowen ratio determined by the predicted vertical temperature and humidity gradients near the earth's surface (model A). The other is the formulation of Saltzman (1968) with the evaporation linearly proportional to the sensible heat flux (model B). The computed climates of models A and B are in good agreement except for the energy partition between sensible and latent heat at the earth's surface. The difference in evaporation parameterizations causes a difference in the response of temperature lapse rate to solar constant variations and a difference in the sensitivity of longwave radiation to surface temperature which leads to a smaller sensitivity of surface temperature to solar constant variations in model A than in model B. The results of model A are qualitatively in agreement with those of the general circulation model calculations of Wetherald and Manabe (1975).
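    The model-A partition described above can be sketched from the standard Bowen-ratio definition. The functions and input values are illustrative (a minimal sketch, not the paper's model): B = (cp * dT) / (L * dq) from near-surface gradients, and evaporation (latent heat flux) follows as the sensible heat flux divided by B.

```python
def bowen_ratio(dT, dq, cp=1004.0, L=2.5e6):
    """Bowen ratio from near-surface temperature (K) and specific
    humidity (kg/kg) differences: B = (cp * dT) / (L * dq)."""
    return (cp * dT) / (L * dq)

def latent_heat_flux(H_sensible, dT, dq):
    """Sketch of the model-A-style partition: latent heat flux is the
    sensible heat flux divided by the predicted Bowen ratio."""
    return H_sensible / bowen_ratio(dT, dq)

# Hypothetical surface-layer values: 2 K and 2 g/kg differences, H = 50 W/m^2.
LE = latent_heat_flux(H_sensible=50.0, dT=2.0, dq=2e-3)
```

In this nonlinear formulation the partition responds to the predicted gradients, which is exactly why the surface-temperature sensitivity differs from the linear (model B) case.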

  1. New Approaches to Parameterizing Convection

    NASA Technical Reports Server (NTRS)

    Randall, David A.; Lappen, Cara-Lyn

    1999-01-01

    Many general circulation models (GCMs) currently use separate schemes for planetary boundary layer (PBL) processes, shallow and deep cumulus (Cu) convection, and stratiform clouds. The conventional distinctions among these processes are somewhat arbitrary. For example, in the stratocumulus-to-cumulus transition region, stratocumulus clouds break up into a combination of shallow cumulus and broken stratocumulus. Shallow cumulus clouds may be considered to reside completely within the PBL, or they may be regarded as starting in the PBL but terminating above it. Deeper cumulus clouds often originate within the PBL but can also originate aloft. To the extent that our models separately parameterize physical processes which interact strongly on small space and time scales, the currently fashionable practice of modularization may be doing more harm than good.

  2. An Evaluation of Lightning Flash Rate Parameterizations Based on Observations of Colorado Storms during DC3

    NASA Astrophysics Data System (ADS)

    Basarab, B.; Fuchs, B.; Rutledge, S. A.

    2013-12-01

    Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. 
We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare
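    Empirical flash-rate parameterizations of the kind evaluated above are typically power laws in a storm parameter such as maximum updraft speed. The coefficients below are the commonly quoted Price-and-Rind-style continental values (flashes per minute from w_max in m/s); treat them as illustrative, since the abstract's point is precisely that such fits need testing against storm observations.

```python
def flash_rate_wmax(w_max, a=5e-6, b=4.54):
    """Power-law flash-rate parameterization: flashes/min as a
    function of maximum updraft speed (m/s). Coefficients a and b
    are the commonly quoted continental values, used illustratively."""
    return a * w_max ** b

# Steep sensitivity: doubling w_max multiplies the flash rate
# by 2**4.54 ~ 23, so small updraft errors produce large LNOx errors.
f20 = flash_rate_wmax(20.0)
f40 = flash_rate_wmax(40.0)
```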

  3. Systematic Parameterization of Lignin for the CHARMM Force Field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vermaas, Joshua; Petridis, Loukas; Beckham, Gregg

    Plant cell walls have three primary components: cellulose, hemicellulose, and lignin, the latter of which is a recalcitrant, aromatic heteropolymer that provides structure to plants, water and nutrient transport through plant tissues, and a highly effective defense against pathogens. Overcoming the recalcitrance of lignin is key to effective biomass deconstruction, which would in turn enable the use of biomass as a feedstock for industrial processes. Our understanding of lignin structure in the plant cell wall is hampered by the limitations of the available lignin force fields, which currently account for only a single linkage between lignins and lack explicit parameterization for emerging lignin structures from both natural variants and engineered lignins. Since polymerization of lignin occurs via radical intermediates, multiple C-O and C-C linkages have been isolated, and the current force field represents only a small subset of the diverse lignin structures found in plants. In order to take into account the wide range of lignin polymerization chemistries, monomers and dimer combinations of C-, H-, G-, and S-lignins, as well as hydroxycinnamic acid linkages, were subjected to extensive quantum mechanical calculations to establish target data from which to build a complete molecular mechanics force field tuned specifically for diverse lignins. This was carried out in a GPU-accelerated global optimization process, whereby all molecules were parameterized simultaneously using the same internal parameter set. By parameterizing lignin specifically, we are able to more accurately represent the interactions and conformations of lignin monomers and dimers relative to a general force field. This new force field will enable computational researchers to study the effects of different linkages on the structure of lignin, as well as construct more accurate plant cell wall models based on observed statistical distributions of lignin that differ

  4. Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone

    NASA Astrophysics Data System (ADS)

    Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo

    2017-12-01

    The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.

  5. Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-07-01

    Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed, taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing, as it deals with a distribution of variables in the subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among these, the wavelet basis is a particularly attractive alternative. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.

  6. Measurement of Scattering and Absorption Cross Sections of Dyed Microspheres

    PubMed Central

    Gaigalas, Adolfas K; Choquette, Steven; Zhang, Yu-Zhong

    2013-01-01

    Measurements of absorbance and fluorescence emission were carried out on aqueous suspensions of polystyrene (PS) microspheres with a diameter of 2.5 µm using a spectrophotometer with an integrating sphere detector. The apparatus and the principles of the measurements were described in our earlier publications. Microspheres with and without green BODIPY® dye were measured. Placing the suspension inside an integrating sphere (IS) detector of the spectrophotometer yielded (after a correction for fluorescence emission) the absorbance (called A in the text) due to absorption by the BODIPY® dye inside the microsphere. An estimate of the absorbance due to scattering alone was obtained by subtracting the corrected BODIPY® dye absorbance (A) from the measured absorbance of a suspension placed outside the IS detector (called A1 in the text). The absorption of the BODIPY® dye inside the microsphere was analyzed using an imaginary index of refraction parameterized with three Gaussian-Lorentz functions. The Kramers-Kronig relation was used to estimate the contribution of the BODIPY® dye to the real part of the microsphere index of refraction. The complex index of refraction, obtained from the analysis of A, was used to analyze the absorbance due to scattering (A1 - A in the text). In practice, the analysis of the scattering absorbance, A1 - A, and the absorbance, A, was carried out in an iterative manner. It was assumed that A depended primarily on the imaginary part of the microsphere index of refraction, with the other parameters playing a secondary role. Therefore A was first analyzed using values of the other parameters obtained from a fit to the absorbance due to scattering, A1 - A, with the imaginary part neglected. The imaginary part obtained from the analysis of A was then used to reanalyze A1 - A and obtain better estimates of the other parameters. After a few iterations, consistent estimates were obtained of the scattering and absorption cross sections in the wavelength region 300
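
The Kramers-Kronig step described above can be sketched numerically. The absorption band shape, amplitude, and grid below are made up for illustration (the paper fits three Gaussian-Lorentz functions); Maclaurin's alternating-point rule is one standard way to handle the principal-value singularity.

```python
import numpy as np

# Hypothetical absorption band: imaginary index k(w) as a single Gaussian,
# standing in for the Gaussian-Lorentz parameterization described above.
w = np.linspace(1.0, 5.0, 2001)          # photon energy grid (arb. units)
h = w[1] - w[0]
k = 0.02 * np.exp(-((w - 3.0) / 0.3) ** 2)

def kk_real_part(w, k, h):
    """Contribution of k(w) to the real index, n(w) - 1, via the
    Kramers-Kronig transform, evaluated with Maclaurin's method:
    the principal-value integral is summed over grid points of
    opposite parity, so the singular point is never sampled."""
    dn = np.zeros_like(w)
    for i in range(len(w)):
        j = np.arange((i + 1) % 2, len(w), 2)   # opposite-parity points
        dn[i] = (2 / np.pi) * 2 * h * np.sum(w[j] * k[j] / (w[j]**2 - w[i]**2))
    return dn

dn = kk_real_part(w, k, h)
print(dn[0], dn[-1])
```

The result shows the expected anomalous-dispersion shape: the band raises the real index below the resonance and lowers it above.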

  7. Multimodel Uncertainty Changes in Simulated River Flows Induced by Human Impact Parameterizations

    NASA Technical Reports Server (NTRS)

    Liu, Xingcai; Tang, Qiuhong; Cui, Huijuan; Mu, Mengfei; Gerten, Dieter; Gosling, Simon; Masaki, Yoshimitsu; Satoh, Yusuke; Wada, Yoshihide

    2017-01-01

    Human impacts increasingly affect the global hydrological cycle and indeed dominate hydrological changes in some regions. Hydrologists have sought to identify the human-impact-induced hydrological variations by parameterizing anthropogenic water uses in global hydrological models (GHMs). The consequent increase in model complexity is likely to introduce additional uncertainty among GHMs. Here, using four GHMs, between-model uncertainties are quantified in terms of the signal-to-noise ratio (SNR) for average river flow during 1971-2000 simulated in two experiments, with representation of human impacts (VARSOC) and without (NOSOC). This is the first quantitative investigation of between-model uncertainty resulting from the inclusion of human impact parameterizations. Results show that the between-model uncertainties in terms of SNRs in the VARSOC annual flow are larger (about 2 for the globe, with varied magnitude for different basins) than those in the NOSOC, which is particularly significant in most areas of Asia and areas north of the Mediterranean Sea. The SNR differences are mostly negative (-20 to 5, indicating higher uncertainty) for basin-averaged annual flow. The VARSOC high flow shows slightly lower uncertainties than the NOSOC simulations, with SNR differences mostly ranging from -20 to 20. The uncertainty differences between the two experiments are significantly related to the fraction of irrigated area in each basin. The large additional uncertainties in VARSOC simulations introduced by the inclusion of parameterizations of human impacts underscore the urgent need for GHM development based on a better understanding of human impacts. Differences in the parameterizations of irrigation, reservoir regulation and water withdrawals are discussed as potential directions of improvement for future GHM development. We also discuss the advantages of statistical approaches to reduce the between-model uncertainties, and the importance of calibration of GHMs for not only

  8. A review of recent research on improvement of physical parameterizations in the GLA GCM

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Walker, G. K.

    1990-01-01

    A systematic assessment of the effects of a series of improvements in physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with those of the earlier slab soil hydrology GCM (SSH-GCM). In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using the Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. The perpetual July simulations with and without interactive soil moisture show that 30- to 40-day oscillations may be a natural mode of the simulated earth-atmosphere system.

  9. Migration of scattered teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Rondenay, S.

    1999-06-01

    The retrieval of near-receiver mantle structure from scattered waves associated with teleseismic P and S and recorded on three-component, linear seismic arrays is considered in the context of inverse scattering theory. A Ray + Born formulation is proposed which admits linearization of the forward problem and economy in the computation of the elastic wave Green's function. The high-frequency approximation further simplifies the problem by enabling (1) the use of an earth-flattened, 1-D reference model, (2) a reduction in computations to 2-D through the assumption of 2.5-D experimental geometry, and (3) band-diagonalization of the Hessian matrix in the inverse formulation. The final expressions are in a form reminiscent of the classical diffraction stack of seismic migration. Implementation of this procedure demands an accurate estimate of the scattered wave contribution to the impulse response, and thus requires the removal of both the reference wavefield and the source time signature from the raw record sections. An approximate separation of direct and scattered waves is achieved through application of the inverse free-surface transfer operator to individual station records and a Karhunen-Loeve transform to the resulting record sections. This procedure takes the full displacement field to a wave vector space wherein the first principal component of the incident wave-type section is identified with the direct wave and is used as an estimate of the source time function. The scattered displacement field is reconstituted from the remaining principal components using the forward free-surface transfer operator, and may be reduced to a scattering impulse response upon deconvolution of the source estimate. An example employing pseudo-spectral synthetic seismograms demonstrates an application of the methodology.

  10. Comparison of Gravity Wave Temperature Variances from Ray-Based Spectral Parameterization of Convective Gravity Wave Drag with AIRS Observations

    NASA Technical Reports Server (NTRS)

    Choi, Hyun-Joo; Chun, Hye-Yeong; Gong, Jie; Wu, Dong L.

    2012-01-01

    The realism of a ray-based spectral parameterization of convective gravity wave drag, which considers the updated moving speed of the convective source and multiple wave propagation directions, is tested against observations from the Atmospheric Infrared Sounder (AIRS) onboard the Aqua satellite. Offline parameterization calculations are performed using global reanalysis data for January and July 2005, and gravity wave temperature variances (GWTVs) are calculated at z = 2.5 hPa (unfiltered GWTV). The AIRS-filtered GWTV, which is directly compared with AIRS, is calculated by applying the AIRS visibility function to the unfiltered GWTV. A comparison between the parameterization calculations and AIRS observations shows that the spatial distribution of the AIRS-filtered GWTV agrees well with that of the AIRS GWTV. However, the magnitude of the AIRS-filtered GWTV is smaller than that of the AIRS GWTV. When an additional cloud-top gravity wave momentum flux spectrum with longer horizontal wavelength components, obtained from mesoscale simulations, is included in the parameterization, both the magnitude and spatial distribution of the AIRS-filtered GWTVs from the parameterization are in good agreement with those of the AIRS GWTVs. The AIRS GWTV can be reproduced reasonably well by the parameterization not only with multiple wave propagation directions but also with two wave propagation directions of 45 degrees (northeast-southwest) and 135 degrees (northwest-southeast), which are optimally chosen for computational efficiency.

  11. Intraocular light scatter, reflections, fluorescence and absorption: what we see in the slit lamp.

    PubMed

    van den Berg, Thomas J T P

    2018-01-01

    Much knowledge has been collected over the past 20 years about light scattering in the eye, in particular in the eye lens, and its visual effect, called straylight. It is the purpose of this review to discuss how these insights can be applied to understanding the slit lamp image. The slit lamp image mainly results from back scattering, whereas the effects on vision result mainly from forward scatter. Forward scatter originates from particles of about wavelength size distributed throughout the lens. Most of the slit lamp image originates from small-particle scatter (Rayleigh scatter). For a population of middle-aged lenses it will be shown that both these scatter components remove around 10% of the light from the direct beam. For slit lamp observation close to the reflection angles, zones of discontinuity (Wasserspalten) at anterior and posterior parts of the lens show up as rough surface reflections. All these light scatter effects increase with age, but the correlations with age, and also between the different components, are weak. For retro-illumination imaging it will be argued that the density or opacity seen in areas of cortical or posterior subcapsular cataract shows up because of light scattering, not because of light loss. NOTES: (1) Light scatter must not be confused with aberrations. Light penetrating the eye is divided into two parts: a relatively small part is scattered and removed from the direct beam. Most of the light is not scattered, but continues as the direct beam. This non-scattered part is the basis for functional imaging, but its quality is under the control of aberrations. Aberrations deflect light mainly over small angles (<1°), whereas light scatter is important because of the straylight effects over large angles (>1°), causing problems like glare and hazy vision. (2) The slit lamp image in older lenses and nuclear cataract is strongly influenced by absorption. However, this effect is greatly exaggerated by the light path lengths

  12. Radiative flux and forcing parameterization error in aerosol-free clear skies.

    PubMed

    Pincus, Robert; Mlawer, Eli J; Oreopoulos, Lazaros; Ackerman, Andrew S; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A; Cady-Pereira, Karen E; Cole, Jason N S; Dufresne, Jean-Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M

    2015-07-16

    Radiation parameterizations in GCMs are more accurate than their predecessors. Errors in estimates of 4×CO2 forcing are large, especially for solar radiation. Errors depend on atmospheric state, so the global mean error is unknown.

  13. Universal Parameterization of Absorption Cross Sections

    NASA Technical Reports Server (NTRS)

    Tripathi, R. K.; Cucinotta, Francis A.; Wilson, John W.

    1997-01-01

    This paper presents a simple universal parameterization of total reaction cross sections for any system of colliding nuclei that is valid for the entire energy range from a few AMeV to a few AGeV. The universal picture presented here treats proton-nucleus collision as a special case of nucleus-nucleus collision, where the projectile has charge and mass number of one. The parameters are associated with the physics of the collision system. In general terms, Coulomb interaction modifies cross sections at lower energies, and the effects of Pauli blocking are important at higher energies. The agreement between the calculated and experimental data is better than all earlier published results.
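
As a hedged illustration of the ingredients named above (a geometric cross-section term times a low-energy Coulomb suppression factor), and explicitly not the paper's fitted parameterization: the radius constant, crude barrier formula, and the zeroed-out energy-dependent term `delta_E` below are all our own placeholders.

```python
import numpy as np

R0 = 1.1  # fm, illustrative radius constant (not the paper's fitted value)

def reaction_cross_section(Ap, At, Zp, Zt, Ecm_MeV, delta_E=0.0):
    """Schematic Tripathi-style form: geometric term times a Coulomb
    suppression factor. delta_E (the energy-dependent term carrying
    Pauli-blocking and transparency physics) is set to 0 here."""
    radius_sum = Ap ** (1 / 3) + At ** (1 / 3)
    barrier = 1.44 * Zp * Zt / (R0 * radius_sum)   # MeV, crude Coulomb barrier
    factor = max(0.0, 1.0 - barrier / Ecm_MeV)
    return np.pi * (R0 ** 2) * (radius_sum + delta_E) ** 2 * factor  # fm^2

# Coulomb suppression matters at low energy and fades at high energy:
low = reaction_cross_section(12, 27, 6, 13, Ecm_MeV=20.0)
high = reaction_cross_section(12, 27, 6, 13, Ecm_MeV=500.0)
print(low, high)
```

The proton-nucleus special case mentioned in the abstract corresponds to calling this with `Ap=1, Zp=1`.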

  14. Parameterization of ALMANAC crop simulation model for non-irrigated dry bean in semi-arid temperate areas in Mexico

    USDA-ARS?s Scientific Manuscript database

    Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...

  15. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  16. Inclusion of Solar Elevation Angle in Land Surface Albedo Parameterization Over Bare Soil Surface.

    PubMed

    Zheng, Zhiyuan; Wei, Zhigang; Wen, Zhiping; Dong, Wenjie; Li, Zhenchao; Wen, Xiaohang; Zhu, Xian; Ji, Dong; Chen, Chen; Yan, Dongdong

    2017-12-01

    Land surface albedo is a significant parameter for maintaining the surface energy balance. Parameterizing bare soil surface albedo is also important for developing land surface process models that accurately reflect the diurnal variation characteristics and mechanism of solar spectral radiation albedo on bare soil surfaces, and for understanding the relationships between climate factors and spectral radiation albedo. Using a data set of field observations, we conducted experiments to analyze the variation characteristics of land surface solar spectral radiation and the corresponding albedo over a typical Gobi bare soil underlying surface, and to investigate the relationships between the land surface solar spectral radiation albedo, solar elevation angle, and soil moisture. Based on simultaneous measurements of solar elevation angle and soil moisture, we propose a new two-factor parameterization scheme for spectral radiation albedo over bare soil underlying surfaces. The results of numerical simulation experiments show that the new parameterization scheme depicts the diurnal variation characteristics of bare soil surface albedo more accurately than previous schemes. Solar elevation angle is one of the most important factors for parameterizing bare soil surface albedo and must be considered in the parameterization scheme, especially in arid and semiarid areas with low soil moisture content. This study reveals the characteristics and mechanism of the diurnal variation of bare soil surface solar spectral radiation albedo and is helpful for developing land surface process models, weather models, and climate models.
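
A two-factor scheme of the kind described can be sketched as a function of solar elevation angle and soil moisture. The functional form and every coefficient below are hypothetical placeholders chosen only to reproduce the qualitative behavior reported (albedo falls as the sun rises and as the soil wets); they are not the paper's fitted scheme.

```python
import numpy as np

def bare_soil_albedo(h_deg, w, a=0.25, b=0.35, c=1.2):
    """Hypothetical two-factor bare soil albedo.
    h_deg: solar elevation angle (degrees); w: volumetric soil moisture.
    a: albedo scale, b: moisture damping, c: low-sun enhancement --
    all illustrative values, not fitted to the Gobi observations."""
    h = np.radians(h_deg)
    return a * np.exp(-b * w) * (1.0 + c * np.exp(-3.0 * np.sin(h)))
```

A scheme like this captures the observed diurnal asymmetry: for fixed moisture, morning and evening (low `h_deg`) albedos exceed the midday value.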

  17. Optimisation of an idealised primitive equation ocean model using stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Cooper, Fenwick C.

    2017-05-01

    Using a simple parameterization, an idealised low resolution (biharmonic viscosity coefficient of 5 × 10^12 m^4 s^-1, 128 × 128 grid) primitive equation baroclinic ocean gyre model is optimised to have a much more accurate climatological mean, variance and response to forcing, in all model variables, with respect to a high resolution (biharmonic viscosity coefficient of 8 × 10^10 m^4 s^-1, 512 × 512 grid) equivalent. For example, the change in the climatological mean due to a small change in the boundary conditions is more accurate in the model with parameterization. Both the low resolution and high resolution models are strongly chaotic. We also find that long timescales in the model temperature auto-correlation at depth are controlled by the vertical temperature diffusion parameter and time mean vertical advection and are caused by short timescale random forcing near the surface. This paper extends earlier work that considered a shallow water barotropic gyre. Here the analysis is extended to a more turbulent multi-layer primitive equation model that includes temperature as a prognostic variable. The parameterization consists of a constant forcing, applied to the velocity and temperature equations at each grid point, which is optimised to obtain a model with an accurate climatological mean, and a linear stochastic forcing, that is optimised to also obtain an accurate climatological variance and 5 day lag auto-covariance. A linear relaxation (nudging) is not used. Conservation of energy and momentum is discussed in an appendix.
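
The "constant forcing for the mean, stochastic forcing for the variance" idea can be illustrated on a toy scalar analogue (our construction, not the paper's gyre model): for a scalar AR(1) process standing in for one grid-point variable, the constant forcing and noise amplitude can be chosen in closed form to hit a target climatological mean and variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "low-resolution model": x_{t+1} = a*x_t + f + s*eta_t.
# Choose the constant forcing f to match a target climatological mean
# and the stochastic amplitude s to match a target variance.
a = 0.9                                   # memory of the toy model
target_mean, target_var = 2.0, 0.5        # stand-ins for high-res statistics

f = target_mean * (1 - a)                 # stationary mean of AR(1) is f/(1-a)
s = np.sqrt(target_var * (1 - a ** 2))    # stationary variance is s^2/(1-a^2)

x = np.zeros(200000)
for t in range(1, len(x)):
    x[t] = a * x[t - 1] + f + s * rng.standard_normal()
print(x.mean(), x.var())
```

In the paper the analogous tuning is a global optimization over all grid points and also targets the 5 day lag auto-covariance, which here is fixed by the choice of `a`.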

  18. Electronegativity equalization method: parameterization and validation for organic molecules using the Merz-Kollman-Singh charge distribution scheme.

    PubMed

    Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav

    2009-05-01

    The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which the EEM parameters A(i) and B(i) and the adjusting factor kappa are obtained, this approach can be used to calculate the average electronegativity and the charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used the methodology which was recently successfully applied to EEM parameterization for HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for the already parameterized elements C, H, N, O, and F. Moreover, we have also developed EEM parameters for S, Br, Cl, and Zn, which had not previously been parameterized for this level of theory and basis set. In the case of HF/6-31G* MK charges, we have developed EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, none of which had been parameterized for this level of theory and basis set before. The obtained EEM parameters were verified by a previously developed validation procedure and used for charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges. 2008 Wiley Periodicals, Inc.
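
Once the parameters A(i), B(i), and kappa are fitted, applying EEM reduces to one linear solve per molecule. A minimal sketch of that solve, with made-up diatomic parameters (not fitted EEM values):

```python
import numpy as np

def eem_charges(A, B, R, kappa, total_charge=0.0):
    """Solve the EEM linear system:
       A_i + B_i*q_i + kappa * sum_{j!=i} q_j / R_ij = chi_bar   (for all i)
       sum_i q_i = total_charge
    Unknowns: the atomic charges q_1..q_N and the equalized
    electronegativity chi_bar."""
    n = len(A)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if i != j:
                M[i, j] = kappa / R[i, j]
        M[i, n] = -1.0              # coefficient of -chi_bar
        rhs[i] = -A[i]
    M[n, :n] = 1.0                  # total-charge constraint
    rhs[n] = total_charge
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]

# Illustrative neutral diatomic; atom 0 is the more electronegative one.
A = np.array([8.0, 5.0])
B = np.array([9.0, 7.0])
R = np.array([[0.0, 1.1], [1.1, 0.0]])   # interatomic distance matrix
q, chi = eem_charges(A, B, R, kappa=0.5)
print(q, chi)
```

As expected, the atom with the larger A parameter acquires the negative partial charge, and the charges sum to the molecular charge by construction.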

  19. Sensitivity of Tropical Cyclones to Parameterized Convection in the NASA GEOS5 Model

    NASA Technical Reports Server (NTRS)

    Lim, Young-Kwon; Schubert, Siegfried D.; Reale, Oreste; Lee, Myong-In; Molod, Andrea M.; Suarez, Max J.

    2014-01-01

    The sensitivity of tropical cyclones (TCs) to changes in parameterized convection is investigated to improve the simulation of TCs in the North Atlantic. Specifically, the impact of reducing the influence of the Relaxed Arakawa-Schubert (RAS) scheme-based parameterized convection is explored using the Goddard Earth Observing System version 5 (GEOS5) model at 0.25° horizontal resolution. The years 2005 and 2006, characterized by very active and inactive hurricane seasons, respectively, are selected for simulation. A reduction in parameterized deep convection results in an increase in TC activity (e.g., TC number and longer life cycle) to more realistic levels compared to the baseline control configuration. The vertical and horizontal structure of the strongest simulated hurricane shows a maximum lower-level (850-950 hPa) wind speed greater than 60 m/s and a minimum sea level pressure reaching 940 mb, corresponding to a category 4 hurricane - a category never achieved by the control configuration. The radius of maximum wind of 50 km, the location of the warm core exceeding 10 °C, and the horizontal compactness of the hurricane center are all quite realistic, without negatively affecting the atmospheric mean state. This study reveals that an increase in the threshold of minimum entrainment suppresses parameterized deep convection by entraining more dry air into the typical plume. This leads to cooling and drying in the mid- to upper-troposphere, along with positive latent heat flux and moistening in the lower troposphere. The resulting increase in conditional instability provides an environment that is more conducive to TC vortex development and upward moisture flux convergence by dynamically resolved moist convection, thereby increasing TC activity.

  20. Multiple Scattering Principal Component-based Radiative Transfer Model (PCRTM) from Far IR to UV-Vis

    NASA Astrophysics Data System (ADS)

    Liu, X.; Wu, W.; Yang, Q.

    2017-12-01

    Modern hyperspectral satellite remote sensors such as AIRS, CrIS, IASI, and CLARREO all require accurate and fast radiative transfer models that can deal with multiple scattering by clouds and aerosols in order to explore their information content. However, performing full radiative transfer calculations using multiple-stream methods such as discrete ordinates (DISORT), adding-doubling (AD), or successive order of scattering (SOS) is very time consuming. We have developed a principal component-based radiative transfer model (PCRTM) to reduce the computational burden by orders of magnitude while maintaining high accuracy. By exploiting spectral correlations, the PCRTM reduces the number of radiative transfer calculations in the frequency domain. It further uses a hybrid stream method to decrease the number of calls to the computationally expensive multiple scattering calculations with high stream numbers. Other fast parameterizations have been used in the infrared spectral region to reduce the computational time to milliseconds for an AIRS forward simulation (2378 spectral channels). The PCRTM has been developed to cover the spectral range from the far IR to the UV-Vis. The PCRTM has been used for satellite data inversions, proxy data generation, inter-satellite calibrations, spectral fingerprinting, and climate OSSEs. We will show examples of applying the PCRTM to single field-of-view cloudy retrievals of atmospheric temperature, moisture, trace gases, clouds, and surface parameters. We will also show how the PCRTM is used for the NASA CLARREO project.
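
The core PCRTM idea, exploiting spectral correlation so that only a few monochromatic calculations are needed per spectrum, can be illustrated on synthetic data (this toy uses exactly low-rank spectra and a plain least-squares map; the real model trains on radiative transfer output and adds many refinements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "hyperspectral" training set: radiances across many channels
# that are strongly correlated (here, exactly rank 3).
n_train, n_channels = 500, 300
basis = rng.standard_normal((3, n_channels))      # 3 underlying spectral modes
spectra = rng.standard_normal((n_train, 3)) @ basis

mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
pcs = Vt[:3]                                      # leading spectral EOFs

mono = [0, 100, 200]                              # a few "mono" channels
scores = (spectra - mean) @ pcs.T                 # training PC scores
# least-squares map from the mono-channel radiances to the PC scores
W, *_ = np.linalg.lstsq(spectra[:, mono] - mean[mono], scores, rcond=None)

# reconstruct a brand-new spectrum from its three mono channels alone
new = rng.standard_normal(3) @ basis
rec = mean + ((new[mono] - mean[mono]) @ W) @ pcs
err = np.abs(rec - new).max()
print(err)
```

Because the synthetic spectra are exactly low rank, three channels recover all 300; real spectra are only approximately low rank, which is where the accuracy/speed trade-off of the method lives.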

  1. Coordinated Parameterization Development and Large-Eddy Simulation for Marine and Arctic Cloud-Topped Boundary Layers

    NASA Technical Reports Server (NTRS)

    Bretherton, Christopher S.

    2002-01-01

    The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two- and three-dimensional eddy-resolving large-eddy simulation (LES) models of turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize boundary layer structure and cloud amount, type, and thickness as functions of the large-scale conditions predicted by global climate models. The principal achievements of the project were as follows: (1) development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall, the forecast model predicted most of the major precipitation events and synoptic variability observed over the year of observation at the SHEBA ice camp.

  2. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    PubMed

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Inelastic black hole scattering from charged scalar amplitudes

    NASA Astrophysics Data System (ADS)

    Luna, Andrés; Nicholson, Isobel; O'Connell, Donal; White, Chris D.

    2018-03-01

    We explain how the lowest-order classical gravitational radiation produced during the inelastic scattering of two Schwarzschild black holes in General Relativity can be obtained from a tree scattering amplitude in gauge theory coupled to scalar fields. The gauge calculation is related to gravity through the double copy. We remove unwanted scalar forces which can occur in the double copy by introducing a massless scalar in the gauge theory, which is treated as a ghost in the link to gravity. We hope these methods are a step towards a direct application of the double copy at higher orders in classical perturbation theory, with the potential to greatly streamline gravity calculations for phenomenological applications.

  4. The parameterization of the planetary boundary layer in the UCLA general circulation model - Formulation and results

    NASA Technical Reports Server (NTRS)

    Suarez, M. J.; Arakawa, A.; Randall, D. A.

    1983-01-01

    A planetary boundary layer (PBL) parameterization for general circulation models (GCMs) is presented. It uses a mixed-layer approach in which the PBL is assumed to be capped by discontinuities in the mean vertical profiles. Both clear and cloud-topped boundary layers are parameterized. Particular emphasis is placed on the formulation of the coupling between the PBL and both the free atmosphere and cumulus convection. For this purpose a modified sigma-coordinate is introduced in which the PBL top and the lower boundary are both coordinate surfaces. The use of a bulk PBL formulation with this coordinate is extensively discussed. Results are presented from a July simulation produced by the UCLA GCM. PBL-related variables are shown, to illustrate the various regimes the parameterization is capable of simulating.

  5. Development of a two-dimensional zonally averaged statistical-dynamical model. III - The parameterization of the eddy fluxes of heat and moisture

    NASA Technical Reports Server (NTRS)

    Stone, Peter H.; Yao, Mao-Sung

    1990-01-01

    A number of perpetual January simulations are carried out with a two-dimensional zonally averaged model employing various parameterizations of the eddy fluxes of heat (potential temperature) and moisture. The parameterizations are evaluated by comparing these results with the eddy fluxes calculated in a parallel simulation using a three-dimensional general circulation model with zonally symmetric forcing. The three-dimensional model's performance in turn is evaluated by comparing its results using realistic (nonsymmetric) boundary conditions with observations. Branscome's parameterization of the meridional eddy flux of heat and Leovy's parameterization of the meridional eddy flux of moisture simulate the seasonal and latitudinal variations of these fluxes reasonably well, while somewhat underestimating their magnitudes. New parameterizations of the vertical eddy fluxes are developed that take into account the enhancement of the eddy mixing slope in a growing baroclinic wave due to condensation, and also the effect of eddy fluctuations in relative humidity. The new parameterizations, when tested in the two-dimensional model, simulate the seasonal, latitudinal, and vertical variations of the vertical eddy fluxes quite well, when compared with the three-dimensional model, and only underestimate the magnitude of the fluxes by 10 to 20 percent.

  6. Multiscale Modeling of Grain-Boundary Fracture: Cohesive Zone Models Parameterized From Atomistic Simulations

    NASA Technical Reports Server (NTRS)

    Glaessgen, Edward H.; Saether, Erik; Phillips, Dawn R.; Yamakov, Vesselin

    2006-01-01

    A multiscale modeling strategy is developed to study grain boundary fracture in polycrystalline aluminum. Atomistic simulation is used to model fundamental nanoscale deformation and fracture mechanisms and to develop a constitutive relationship for separation along a grain boundary interface. The nanoscale constitutive relationship is then parameterized within a cohesive zone model to represent variations in grain boundary properties. These variations arise from the presence of vacancies, interstitials, and other defects, in addition to deviations in grain boundary angle from the baseline configuration considered in the molecular dynamics simulation. The parameterized cohesive zone models are then used to model grain boundaries within finite element analyses of aluminum polycrystals.
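The abstract does not give the functional form of the cohesive zone model; a common choice for atomistically parameterized interfaces is a bilinear traction-separation law. The sketch below uses hypothetical parameter values (peak traction, characteristic separations), not values fitted from the molecular dynamics simulations in the paper.

```python
import numpy as np

def traction(delta, delta0=1.0e-9, delta_f=1.0e-8, sigma_max=2.0e9):
    """Bilinear traction-separation law (illustrative values: Pa, m).

    Traction rises linearly to sigma_max at separation delta0, then
    degrades linearly to zero at the critical separation delta_f,
    beyond which the interface is fully separated.
    """
    delta = np.asarray(delta, dtype=float)
    t = np.where(
        delta <= delta0,
        sigma_max * delta / delta0,                          # elastic loading
        sigma_max * (delta_f - delta) / (delta_f - delta0),  # damage/softening
    )
    return np.clip(t, 0.0, None)  # fully separated: zero traction
```

Varying `delta0`, `delta_f`, and `sigma_max` per grain boundary is one way such a law can encode the defect- and misorientation-dependent variations the abstract describes.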

  7. Fully parameterized model of a voltage-driven capacitive coupled micromachined ohmic contact switch for RF applications

    NASA Astrophysics Data System (ADS)

    Heeb, Peter; Tschanun, Wolfgang; Buser, Rudolf

    2012-03-01

    A comprehensive and completely parameterized model is proposed to determine the related electrical and mechanical dynamic system response of a voltage-driven capacitive coupled micromechanical switch. As an advantage over existing parameterized models, the model presented in this paper returns within a few seconds all relevant system quantities necessary to design the desired switching cycle. Moreover, a sophisticated and detailed guideline is given on how to engineer a MEMS switch. An analytical approach is used throughout the modelling, providing representative coefficients in a set of two coupled time-dependent differential equations. This paper uses an equivalent mass moving along the axis of acceleration and a momentum absorption coefficient. The model describes all the energies transferred: the energy dissipated in the series resistor that models the signal attenuation of the bias line, the energy dissipated in the squeezed film, the stored energy in the series capacitor that represents a fixed separation in the bias line and stops the dc power in the event of a short circuit between the RF and dc path, the energy stored in the spring mechanism, and the energy absorbed by mechanical interaction at the switch contacts. Further, the model determines the electrical power fed back to the bias line. The calculated switching dynamics are confirmed by the electrical characterization of the developed RF switch. The fabricated RF switch performs well, in good agreement with the modelled data, showing a transition time of 7 µs followed by a sequence of bounces. Moreover, the scattering parameters exhibit an isolation in the off-state of >8 dB and an insertion loss in the on-state of <0.6 dB up to frequencies of 50 GHz.
The presented model is intended to be integrated into standard circuit simulation software, allowing circuit engineers to design the switch bias line, to minimize induced currents and cross actuation, as well as to find the mechanical structure dimensions.
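The coupled electromechanical dynamics described above can be illustrated with a minimal one-degree-of-freedom sketch: an equivalent mass driven by a parallel-plate electrostatic force against a spring and a damper, integrated explicitly until contact. All parameter values are hypothetical, and this sketch omits the series capacitor, squeeze-film detail, and bounce dynamics of the paper's full model.

```python
# Illustrative 1-DOF sketch of a voltage-driven MEMS switch (not the paper's model).
EPS0 = 8.854e-12                   # vacuum permittivity, F/m
m, k, c = 1.0e-9, 10.0, 1.0e-6     # equivalent mass (kg), spring (N/m), damping (N s/m)
A, g0, V = 1.0e-8, 2.0e-6, 30.0    # electrode area (m^2), initial gap (m), bias (V)

def simulate(dt=1.0e-9, t_max=1.0e-4):
    """Integrate m*x'' = F_el - k*x - c*x' until contact (x = g0) or t_max.

    Returns the switching (transition) time, or None if the bias voltage
    is below pull-in and the switch never closes within t_max.
    """
    x, v, t = 0.0, 0.0, 0.0
    while t < t_max:
        gap = g0 - x
        f_el = EPS0 * A * V**2 / (2.0 * gap**2)  # parallel-plate attraction
        a = (f_el - k * x - c * v) / m
        v += a * dt                               # explicit (Euler) update
        x += v * dt
        t += dt
        if x >= g0:                               # contact: switch has closed
            return t
    return None
```

With the chosen (hypothetical) values the bias exceeds the pull-in voltage, so the electrode snaps down and `simulate()` returns a closing time on the order of tens of microseconds.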

  8. Parameterization of Movement Execution in Children with Developmental Coordination Disorder

    ERIC Educational Resources Information Center

    Van Waelvelde, Hilde; De Weerdt, Willy; De Cock, Paul; Janssens, Luc; Feys, Hilde; Engelsman, Bouwien C. M. Smits

    2006-01-01

    The Rhythmic Movement Test (RMT) evaluates temporal and amplitude parameterization and fluency of movement execution in a series of rhythmic arm movements under different sensory conditions. The RMT was used in combination with a jumping and a drawing task, to evaluate 36 children with Developmental Coordination Disorder (DCD) and a matched…

  9. Time-reversed ultrasonically encoded optical focusing through highly scattering ex vivo human cataractous lenses

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Shen, Yuecheng; Ruan, Haowen; Brodie, Frank L.; Wong, Terence T. W.; Yang, Changhuei; Wang, Lihong V.

    2018-01-01

    Normal development of the visual system in infants relies on clear images being projected onto the retina, which can be disrupted by lens opacity caused by congenital cataract. This disruption, if uncorrected in early life, results in amblyopia (permanently decreased vision even after removal of the cataract). Doctors are able to prevent amblyopia by removing the cataract during the first several weeks of life, but this surgery risks a host of complications, which can be equally visually disabling. Here, we investigated the feasibility of focusing light noninvasively through highly scattering cataractous lenses to stimulate the retina, thereby preventing amblyopia. This approach would allow the cataractous lens removal surgery to be delayed and hence greatly reduce the risk of complications from early surgery. Employing a wavefront shaping technique named time-reversed ultrasonically encoded optical focusing in reflection mode, we focused 532-nm light through a highly scattering ex vivo adult human cataractous lens. This work demonstrates a potential clinical application of wavefront shaping techniques.

  10. Improving microphysics in a convective parameterization: possibilities and limitations

    NASA Astrophysics Data System (ADS)

    Labbouz, Laurent; Heikenfeld, Max; Stier, Philip; Morrison, Hugh; Milbrandt, Jason; Protat, Alain; Kipling, Zak

    2017-04-01

    The convective cloud field model (CCFM) is a convective parameterization implemented in the climate model ECHAM6.1-HAM2.2. It represents a population of clouds within each ECHAM-HAM model column, simulating up to 10 different convective cloud types with individual radius, vertical velocities and microphysical properties. Comparisons between CCFM and radar data at Darwin, Australia, show that in order to reproduce both the convective cloud top height distribution and the vertical velocity profile, the effect of aerodynamic drag on the rising parcel has to be considered, along with a reduced entrainment parameter. A new double-moment microphysics (the Predicted Particle Properties scheme, P3) has been implemented in the latest version of CCFM and is compared to the standard single-moment microphysics and the radar retrievals at Darwin. The microphysical process rates (autoconversion, accretion, deposition, freezing, …) and their response to changes in CDNC are investigated and compared to high resolution CRM WRF simulations over the Amazon region. The results shed light on the possibilities and limitations of microphysics improvements in the framework of CCFM and in convective parameterizations in general.

  11. Cloud Microphysics Parameterization in a Shallow Cumulus Cloud Simulated by a Lagrangian Cloud Model

    NASA Astrophysics Data System (ADS)

    Oh, D.; Noh, Y.; Hoffmann, F.; Raasch, S.

    2017-12-01

    The Lagrangian cloud model (LCM) is a fundamentally new approach to cloud simulation, in which the flow field is simulated by large eddy simulation and droplets are treated as Lagrangian particles undergoing cloud microphysics. The LCM enables us to investigate raindrop formation and examine the parameterization of cloud microphysics directly by tracking the history of individual Lagrangian droplets. Analysis of the magnitude of raindrop formation and the background physical conditions at the moment at which each Lagrangian droplet grows from a cloud droplet to a raindrop in a shallow cumulus cloud reveals how and under which conditions raindrops are formed. It also provides information on how autoconversion and accretion appear and evolve within a cloud, and how they are affected by various factors such as cloud water mixing ratio, rain water mixing ratio, aerosol concentration, drop size distribution, and dissipation rate. Based on these results, the parameterizations of autoconversion and accretion, such as those of Kessler (1969), Tripoli and Cotton (1980), Beheng (1994), and Khairoutdinov and Kogan (2000), are examined, and modifications to improve the parameterizations are proposed.
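Two of the autoconversion/accretion schemes cited above have simple, widely quoted functional forms, sketched below. The Kessler threshold and rate constant shown are conventional textbook values, not necessarily those used in the study.

```python
def kessler_autoconversion(qc, k=1.0e-3, qc0=0.5e-3):
    """Kessler (1969): rate proportional to cloud water above a threshold.
    qc, qc0 in kg/kg; k in 1/s. Returns rate in kg/kg/s."""
    return k * max(qc - qc0, 0.0)

def kk2000_autoconversion(qc, nc):
    """Khairoutdinov and Kogan (2000) autoconversion rate (kg/kg/s).
    qc: cloud water mixing ratio (kg/kg); nc: droplet number (cm^-3)."""
    return 1350.0 * qc**2.47 * nc**-1.79

def kk2000_accretion(qc, qr):
    """Khairoutdinov and Kogan (2000) accretion rate (kg/kg/s).
    qr: rain water mixing ratio (kg/kg)."""
    return 67.0 * (qc * qr)**1.15
```

The contrast is the point of the comparison in the abstract: Kessler is a hard-threshold bulk rate, while the Khairoutdinov-Kogan rates vary smoothly with cloud water and droplet number, so they respond to aerosol (CDNC) changes.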

  12. The relationship between a deformation-based eddy parameterization and the LANS-α turbulence model

    NASA Astrophysics Data System (ADS)

    Bachman, Scott D.; Anstey, James A.; Zanna, Laure

    2018-06-01

    A recent class of ocean eddy parameterizations proposed by Porta Mana and Zanna (2014) and Anstey and Zanna (2017) modeled the large-scale flow as a non-Newtonian fluid whose subgridscale eddy stress is a nonlinear function of the deformation. This idea, while largely new to ocean modeling, has a history in turbulence modeling dating at least back to Rivlin (1957). The new class of parameterizations results in equations that resemble the Lagrangian-averaged Navier-Stokes-α model (LANS-α, e.g., Holm et al., 1998a). In this note we employ basic tensor mathematics to highlight the similarities between these turbulence models using component-free notation. We extend the Anstey and Zanna (2017) parameterization, which was originally presented in 2D, to 3D, and derive variants of this closure that arise when the full non-Newtonian stress tensor is used. Despite the mathematical similarities between the non-Newtonian and LANS-α models, which might provide insight into numerical implementation, the input and dissipation of kinetic energy in these two turbulence models differ.

  13. Amplification of intrinsic emittance due to rough metal cathodes: Formulation of a parameterization model

    NASA Astrophysics Data System (ADS)

    Charles, T. K.; Paganin, D. M.; Dowd, R. T.

    2016-08-01

    Intrinsic emittance is often the limiting factor for brightness in fourth generation light sources and as such, a good understanding of the factors affecting intrinsic emittance is essential in order to be able to decrease it. Here we present a parameterization model describing the proportional increase in emittance induced by cathode surface roughness. One major benefit of the parameterization approach presented here is that it takes the complexity of a Monte Carlo model and reduces the results to a straightforward empirical model. The resulting models describe the proportional increase in transverse momentum introduced by surface roughness, and are applicable to various metal types, photon wavelengths, applied electric fields, and cathode surface terrains. The analysis includes the increase in emittance due to changes in the electric field induced by roughness as well as the increase in transverse momentum resulting from the spatially varying surface normal. We also compare the results of the Parameterization Model to an Analytical Model, which employs various approximations to produce a more compact expression at the cost of a reduction in accuracy.

  14. Fiber optic probe for light scattering measurements

    DOEpatents

    Nave, Stanley E.; Livingston, Ronald R.; Prather, William S.

    1995-01-01

    A fiber optic probe and a method for using the probe for light scattering analyses of a sample. The probe includes a probe body with an inlet for admitting a sample into an interior sample chamber, a first optical fiber for transmitting light from a source into the chamber, and a second optical fiber for transmitting light to a detector such as a spectrophotometer. The interior surface of the probe carries a coating that substantially prevents non-scattered light from reaching the second fiber. The probe is placed in a region where the presence and concentration of an analyte of interest are to be detected, and a sample is admitted into the chamber. Exciting light is transmitted into the sample chamber by the first fiber, where the light interacts with the sample to produce Raman-scattered light. At least some of the Raman-scattered light is received by the second fiber and transmitted to the detector for analysis. Two Raman spectra are measured, at different pressures. The first spectrum is subtracted from the second to remove background effects, and the resulting sample Raman spectrum is compared to a set of stored library spectra to determine the presence and concentration of the analyte.
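The two-pressure measurement and library-matching procedure described above can be sketched as follows. The normalized-correlation scoring is an illustrative choice; the patent does not specify the comparison algorithm, and all spectra here are hypothetical arrays on a common wavenumber grid.

```python
import numpy as np

def identify(spec_p1, spec_p2, library):
    """Two-pressure Raman analysis sketch: subtract the first (background)
    spectrum from the second, then score the residual sample spectrum
    against stored library spectra by normalized correlation.

    library: dict mapping analyte name -> reference spectrum (same grid).
    Returns (best-matching analyte name, its correlation score).
    """
    sample = np.asarray(spec_p2, float) - np.asarray(spec_p1, float)
    def score(ref):
        ref = np.asarray(ref, float)
        return float(np.dot(sample, ref) /
                     (np.linalg.norm(sample) * np.linalg.norm(ref) + 1e-12))
    best = max(library, key=lambda name: score(library[name]))
    return best, score(library[best])
```

In the patent's scheme the score (or band intensity) at the matched analyte would then be related to concentration; that calibration step is omitted here.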

  15. Fiber optic probe for light scattering measurements

    DOEpatents

    Nave, S.E.; Livingston, R.R.; Prather, W.S.

    1993-01-01

    This invention is comprised of a fiber optic probe and a method for using the probe for light scattering analyses of a sample. The probe includes a probe body with an inlet for admitting a sample into an interior sample chamber, a first optical fiber for transmitting light from a source into the chamber, and a second optical fiber for transmitting light to a detector such as a spectrophotometer. The interior surface of the probe carries a coating that substantially prevents non-scattered light from reaching the second fiber. The probe is placed in a region where the presence and concentration of an analyte of interest are to be detected, and a sample is admitted into the chamber. Exciting light is transmitted into the sample chamber by the first fiber, where the light interacts with the sample to produce Raman-scattered light. At least some of the Raman-scattered light is received by the second fiber and transmitted to the detector for analysis. Two Raman spectra are measured, at different pressures. The first spectrum is subtracted from the second to remove background effects, and the resulting sample Raman spectrum is compared to a set of stored library spectra to determine the presence and concentration of the analyte.

  16. Effects of Planetary Boundary Layer Parameterizations on CWRF Regional Climate Simulation

    NASA Astrophysics Data System (ADS)

    Liu, S.; Liang, X.

    2011-12-01

    Planetary Boundary Layer (PBL) parameterizations incorporated in CWRF (Climate extension of the Weather Research and Forecasting model) are first evaluated by comparing simulated PBL heights with observations. Among the 10 evaluated PBL schemes, 2 (CAM, UW) are new in CWRF while the other 8 are original WRF schemes. MYJ, QNSE and UW determine the PBL heights based on turbulent kinetic energy (TKE) profiles, while the others (YSU, ACM, GFS, CAM, TEMF) derive them from bulk Richardson number criteria. All TKE-based schemes (MYJ, MYNN, QNSE, UW, Boulac) substantially underestimate convective or residual PBL heights from noon toward evening, while the others (ACM, CAM, YSU) capture the observed diurnal cycle well, except for the GFS, which systematically overestimates. These differences among the schemes are representative over most areas of the simulation domain, suggesting systematic behaviors of the parameterizations. Lower PBL heights simulated by the QNSE and MYJ are consistent with their smaller Bowen ratios and heavier rainfalls, while higher PBL tops by the GFS correspond to warmer surface temperatures. Effects of PBL parameterizations on CWRF regional climate simulation are then compared. The QNSE PBL scheme yields systematically heavier rainfall almost everywhere and throughout the year; this is identified with a much greater surface Bowen ratio (smaller sensible versus larger latent heating) and wetter soil moisture than other PBL schemes. Its predecessor, the MYJ scheme, shares the same deficiency to a lesser degree. For temperature, the performance of the QNSE and MYJ schemes remains poor, having substantially larger rms errors in all seasons. The GFS PBL scheme also produces large warm biases. Pronounced sensitivities to the PBL schemes are also found in winter and spring over most areas except the southern U.S. (Southeast, Gulf States, NAM); excluding the outliers (QNSE, MYJ, GFS) that cause extreme biases of -6 to +3°C, the differences among the schemes are still visible (±2°C), where the
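The bulk Richardson number criterion mentioned above can be sketched as a simple column diagnostic: the PBL top is the first level at which the bulk Richardson number, computed from the surface upward, exceeds a critical value. The critical value and surface-layer details differ between schemes; 0.25 is an illustrative choice.

```python
def pbl_height_bulk_ri(z, theta_v, u, v, ri_crit=0.25):
    """Diagnose PBL height (m) as the first level where the bulk Richardson
    number exceeds ri_crit (illustrative sketch of the criterion class).

    z: level heights (m); theta_v: virtual potential temperature (K);
    u, v: wind components (m/s). Index 0 is the lowest (surface) level.
    """
    g = 9.81
    for i in range(1, len(z)):
        du2 = (u[i] - u[0])**2 + (v[i] - v[0])**2
        rib = (g * (theta_v[i] - theta_v[0]) * (z[i] - z[0])
               / (theta_v[0] * max(du2, 1e-6)))     # avoid division by zero
        if rib > ri_crit:
            return z[i]
    return z[-1]   # criterion never met within the column
```

TKE-based schemes instead place the PBL top where the prognostic TKE profile falls below a threshold, which is one source of the systematic height differences the abstract reports.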

  17. Application of new parameterizations of gas transfer velocity and their impact on regional and global marine CO 2 budgets

    NASA Astrophysics Data System (ADS)

    Fangohr, Susanne; Woolf, David K.

    2007-06-01

    One of the dominant sources of uncertainty in the calculation of the air-sea flux of carbon dioxide on a global scale originates from the various parameterizations of the gas transfer velocity, k, that are in use. Whilst it is undisputed that most of these parameterizations have shortcomings, neglecting processes that influence air-sea gas exchange but do not scale with wind speed alone, there is no general agreement about their relative accuracy. The most widely used parameterizations are based on non-linear functions of wind speed and, to a lesser extent, on sea surface temperature and salinity. Processes such as surface film damping and whitecapping are known to have an effect on air-sea exchange. More recently published parameterizations use friction velocity, sea surface roughness, and significant wave height. These new parameters can account to some extent for processes such as film damping and whitecapping and could potentially explain the spread of wind-speed-based transfer velocities published in the literature. We combine some of the principles of two recently published k parameterizations [Glover, D.M., Frew, N.M., McCue, S.J. and Bock, E.J., 2002. A multiyear time series of global gas transfer velocity from the TOPEX dual frequency, normalized radar backscatter algorithm. In: Donelan, M.A., Drennan, W.M., Saltzman, E.S., and Wanninkhof, R. (Eds.), Gas Transfer at Water Surfaces, Geophys. Monograph 127. AGU, Washington, DC, 325-331; Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. Tellus, 57B: 87-94] to calculate k as the sum of a linear function of the total mean square slope of the sea surface and a wave breaking parameter. This separates contributions from direct and bubble-mediated gas transfer as suggested by Woolf (2005) and allows us to quantify contributions from these two processes
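The hybrid form described above (a linear function of mean square slope plus a wave-breaking term) can be sketched as below. The coefficients are hypothetical placeholders, not the fitted values from the paper; the point is the additive separation of direct and bubble-mediated transfer.

```python
def transfer_velocity(mss, whitecap_frac, a=500.0, b=2450.0, k0=2.0):
    """Sketch of a hybrid gas transfer velocity k (cm/h).

    mss: total mean square slope of the sea surface (dimensionless);
    whitecap_frac: whitecap (wave-breaking) coverage fraction.
    k0, a, b are illustrative coefficients, not fitted values.
    """
    k_direct = k0 + a * mss          # direct (interfacial) transfer ~ slope
    k_bubble = b * whitecap_frac     # bubble-mediated transfer ~ breaking
    return k_direct + k_bubble
```

Because the two terms respond to different sea-state variables, such a formulation can diverge from purely wind-speed-based parameterizations whenever slope and breaking are not uniquely determined by wind speed (e.g., for young versus mature seas).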

  18. Parameterization of Rocket Dust Storms on Mars in the LMD Martian GCM: Modeling Details and Validation

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Forget, François; Bertrand, Tanguy; Spiga, Aymeric; Millour, Ehouarn; Navarro, Thomas

    2018-04-01

    The origin of the detached dust layers observed by the Mars Climate Sounder aboard the Mars Reconnaissance Orbiter is still debated. Spiga et al. (2013, https://doi.org/10.1002/jgre.20046) revealed that deep mesoscale convective "rocket dust storms" are likely to play an important role in forming these dust layers. To investigate how the detached dust layers are generated by this mesoscale phenomenon and subsequently evolve at larger scales, a parameterization of rocket dust storms to represent the mesoscale dust convection is designed and included into the Laboratoire de Météorologie Dynamique (LMD) Martian Global Climate Model (GCM). The new parameterization allows dust particles in the GCM to be transported to higher altitudes than in traditional GCMs. Combined with the horizontal transport by large-scale winds, the dust particles spread out and form detached dust layers. During the Martian dusty seasons, the LMD GCM with the new parameterization is able to form detached dust layers. The formation, evolution, and decay of the simulated dust layers are largely in agreement with the Mars Climate Sounder observations. This suggests that mesoscale rocket dust storms are among the key factors to explain the observed detached dust layers on Mars. However, the detached dust layers remain absent in the GCM during the clear seasons, even with the new parameterization. This implies that other relevant atmospheric processes, operating when no dust storms are occurring, are needed to explain the Martian detached dust layers. More observations of local dust storms could improve the ad hoc aspects of this parameterization, such as the trigger and timing of dust injection.

  19. Sensitivity of Glacier Mass Balance Estimates to the Selection of WRF Cloud Microphysics Parameterization in the Indus River Watershed

    NASA Astrophysics Data System (ADS)

    Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.

    2017-12-01

    Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a-1. 
The different schemes also impact the

  20. The Stochastic Parcel Model: A deterministic parameterization of stochastically entraining convection

    DOE PAGES

    Romps, David M.

    2016-03-01

    Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.
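The purity-bookkeeping idea can be illustrated with a toy version of the matrix-multiplication step: if parcels at one height are binned by purity, Poisson-process entrainment over one model layer moves a fixed fraction of each bin to the next-lower purity bin, which is a single linear (matrix) operation. This is a schematic of the bookkeeping only, not the SPM's actual flux calculation.

```python
import numpy as np

def lift_parcels(n_dist, entrain_prob):
    """Advance a purity-binned parcel distribution one model layer upward.

    n_dist[j]: number density of parcels in purity bin j (bin 0 = undiluted).
    Each parcel entrains with probability entrain_prob over the layer
    (Poisson process), dropping it one purity bin; the lowest bin absorbs.
    """
    nbins = len(n_dist)
    T = np.zeros((nbins, nbins))
    for j in range(nbins):
        T[j, j] = 1.0 - entrain_prob          # no entrainment event this layer
        if j + 1 < nbins:
            T[j + 1, j] = entrain_prob        # entrain: move to lower purity
        else:
            T[j, j] += entrain_prob           # lowest purity bin absorbs
    return T @ n_dist                          # one matrix multiplication
```

Because each layer's update is linear in the binned distribution, a whole column reduces to a sequence of such multiplications, which is the source of the scheme's computational efficiency.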

  1. Application of a planetary wave breaking parameterization to stratospheric circulation statistics

    NASA Technical Reports Server (NTRS)

    Randel, William J.; Garcia, Rolando R.

    1994-01-01

    The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data a planetary wave breaking criterion (based on the ratio of the eddy to zonal mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.
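The breaking criterion described above compares the eddy and zonal-mean meridional PV gradients; a minimal diagnostic sketch is below. The unit threshold is illustrative of the criterion class, not necessarily the exact value used in the scheme.

```python
import numpy as np

def wave_breaking_mask(dqdy_eddy, dqdy_mean, threshold=1.0):
    """Flag wave-breaking regions where the eddy meridional PV gradient is
    comparable to or larger than the zonal-mean gradient (illustrative
    threshold), so that PV contours can overturn irreversibly."""
    return np.abs(dqdy_eddy) >= threshold * np.abs(dqdy_mean)
```

Applied on a latitude-height grid of gradients, the resulting mask traces out regions such as the subtropical "surf zone" identified in the abstract.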

  2. Examining Chaotic Convection with Super-Parameterization Ensembles

    NASA Astrophysics Data System (ADS)

    Jones, Todd R.

    This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant, SP-CAM 2.0. The new configuration (multiple-parameterization-CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large-scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems, such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, lending to examination of those weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large-scale.

  3. Parameterization of MARVELS Spectra Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Gilda, Sankalp; Ge, Jian; MARVELS

    2018-01-01

    Like many large-scale surveys, the Multi-Object APO Radial Velocity Exoplanet Large-area Survey (MARVELS) was designed to operate at a moderate spectral resolution (~12,000) for efficiency in observing large samples, which makes stellar parameterization difficult due to the high degree of blending of spectral features. Two extant solutions to this issue are to utilize spectral synthesis and to utilize spectral indices (Ghezzi et al. 2014). While the former is a powerful and tested technique, it can often yield strongly coupled atmospheric parameters, and often requires high spectral resolution (Valenti & Piskunov 1996). The latter, though a promising technique utilizing measurements of equivalent widths of spectral indices, has only been employed for FGK dwarfs and sub-giants and not red-giant-branch stars, which constitute ~30% of MARVELS targets. In this work, we tackle this problem using a convolutional neural network (CNN). In particular, we train a one-dimensional CNN on appropriately processed PHOENIX synthetic spectra using supervised training to automatically distinguish the features relevant to the determination of each of the three atmospheric parameters – T_eff, log(g), [Fe/H] – and use the knowledge thus gained by the network to parameterize 849 MARVELS giants. When tested on the synthetic spectra themselves, our estimates of the parameters were consistent to within 11 K, 0.02 dex, and 0.02 dex (in terms of mean absolute errors), respectively. For MARVELS dwarfs, the accuracies are 80 K, 0.16 dex, and 0.10 dex, respectively.
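The building block of such a one-dimensional CNN is a 1-D convolutional layer sliding learned kernels along the spectrum. The pure-NumPy sketch below (valid padding, ReLU activation) shows the operation only; the actual network uses a deep-learning framework, learned weights, and multiple layers.

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Minimal 1-D convolutional layer: valid padding, ReLU activation.

    x: 1-D spectrum (flux values on a fixed wavelength grid);
    kernels: array of shape (n_channels, kernel_width) of filter weights.
    Returns feature maps of shape (n_channels, output_length).
    """
    kw = kernels.shape[1]
    out_len = (len(x) - kw) // stride + 1
    out = np.empty((kernels.shape[0], out_len))
    for c, k in enumerate(kernels):            # one output channel per kernel
        for i in range(out_len):
            out[c, i] = np.dot(x[i * stride:i * stride + kw], k)
    return np.maximum(out, 0.0)                # ReLU activation
```

Stacking such layers with pooling and a final dense regression head maps a spectrum to the three outputs (T_eff, log g, [Fe/H]); training on labeled synthetic spectra is what lets the network learn which blended features carry each parameter.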

  4. Intercomparison Project on Parameterizations of Large-Scale Dynamics for Simulations of Tropical Convection

    NASA Astrophysics Data System (ADS)

    Sobel, A. H.; Wang, S.; Bellon, G.; Sessions, S. L.; Woolnough, S.

    2013-12-01

    Parameterizations of large-scale dynamics have been developed in the past decade for studying the interaction between tropical convection and large-scale dynamics, based on our physical understanding of the tropical atmosphere. A principal advantage of these methods is that they offer a pathway to attack the key question of what controls large-scale variations of tropical deep convection. These methods have been used with both single column models (SCMs) and cloud-resolving models (CRMs) to study the interaction of deep convection with several kinds of environmental forcings. While much has been learned from these efforts, different groups' efforts are somewhat hard to compare. Different models, different versions of the large-scale parameterization methods, and experimental designs that differ in other ways are used. It is not obvious which choices are consequential to the scientific conclusions drawn and which are not. The methods have matured to the point that there is value in an intercomparison project. In this context, the Global Atmospheric Systems Study - Weak Temperature Gradient (GASS-WTG) project was proposed at the Pan-GASS meeting in September 2012. The weak temperature gradient approximation is one method to parameterize large-scale dynamics, and is used in the project name for historical reasons and simplicity, but another method, the damped gravity wave (DGW) method, will also be used in the project. The goal of the GASS-WTG project is to develop community understanding of the parameterization methods currently in use. Their strengths, weaknesses, and functionality in models with different physics and numerics will be explored in detail, and their utility to improve our understanding of tropical weather and climate phenomena will be further evaluated. This presentation will introduce the intercomparison project, including background, goals, and overview of the proposed experimental design. Interested groups will be invited to join (it will not be

  5. Assessment of fine-scale parameterizations of turbulent dissipation rates in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Takahashi, A.; Hibiya, T.

    2016-12-01

    To sustain the global overturning circulation, more mixing is required in the ocean than has been observed. The most likely candidates for this missing mixing are breaking of wind-induced near-inertial waves and bottom-generated internal lee waves in the sparsely observed Southern Ocean. Nevertheless, there is a paucity of direct microstructure measurements in the Southern Ocean where energy dissipation rates have been estimated mostly using fine-scale parameterizations. In this study, we assess the validity of the existing fine-scale parameterizations in the Antarctic Circumpolar Current (ACC) region using the data obtained from simultaneous full-depth measurements of micro-scale turbulence and fine-scale shear/strain carried out south of Australia during January 17 to February 2, 2016. Although the fine-scale shear/strain ratio (Rω) is close to the Garrett-Munk (GM) value at the station north of Subtropical Front, the values of Rω at the stations south of Subantarctic Front well exceed the GM value, suggesting that the local internal wave spectra are significantly biased to lower frequencies. We find that not all of the observed energy dissipation rates at these locations are well predicted using Gregg-Henyey-Polzin (GHP; Gregg et al., 2003) and Ijichi-Hibiya (IH; Ijichi and Hibiya, 2015) parameterizations, both of which take into account the spectral distortion in terms of Rω; energy dissipation rates at some locations are obviously overestimated by GHP and IH, although only the strain-based Wijesekera (Wijesekera et al., 1993) parameterization yields fairly good predictions. One possible explanation for this result is that a significant portion of the observed shear variance at these locations might be attributed to kinetic-energy-dominant small-scale eddies associated with the ACC, so that fine-scale strain rather than Rω becomes a more appropriate parameter to characterize the actual internal wave field.
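The shear-to-strain variance ratio central to the discussion above has a simple definition, sketched below; it is the spectral-shape parameter through which the GHP and IH parameterizations account for the internal wave frequency content. The numerical example is illustrative, not taken from the cruise data.

```python
def shear_strain_ratio(shear_var, strain_var, n2):
    """Fine-scale shear-to-strain variance ratio
    R_omega = <V_z^2> / (N^2 <xi_z^2>).

    shear_var: fine-scale vertical shear variance (s^-2);
    strain_var: fine-scale strain variance (dimensionless);
    n2: mean squared buoyancy frequency N^2 (s^-2).

    For the Garrett-Munk model spectrum R_omega = 3; larger values
    indicate a wave field biased toward lower (near-inertial) frequencies.
    """
    return shear_var / (n2 * strain_var)
```

The abstract's caveat follows directly from this definition: if small-scale eddies contribute shear variance that is not internal-wave shear, R_omega is inflated for reasons unrelated to the wave frequency content, degrading R_omega-based predictions.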

  6. Impacts of parameterized orographic drag on the Northern Hemisphere winter circulation

    PubMed Central

Sandu, Irina; Bechtold, Peter; Beljaars, Anton; Bozzo, Alessio; Pithan, Felix; Shepherd, Theodore G.; Zadra, Ayrton

    2016-01-01

A recent intercomparison exercise proposed by the Working Group for Numerical Experimentation (WGNE) revealed that the parameterized, or unresolved, surface stress in weather forecast models is highly model‐dependent, especially over orography. Models of comparable resolution differ over land by as much as 20% in zonal mean total subgrid surface stress (τtot). The way τtot is partitioned between the different parameterizations is also model‐dependent. In this study, we simulated in a particular model an increase in τtot comparable with the spread found in the WGNE intercomparison. This increase was simulated in two ways, namely by increasing independently the contributions to τtot of the turbulent orographic form drag scheme (TOFD) and of the orographic low‐level blocking scheme (BLOCK). Increasing the parameterized orographic drag leads to significant changes in surface pressure, zonal wind and temperature in the Northern Hemisphere during winter, both in 10-day weather forecasts and in seasonal integrations. However, the magnitude of these changes in circulation strongly depends on which scheme is modified. In 10-day forecasts, stronger changes are found when the TOFD stress is increased, while on seasonal time scales the effects are of comparable magnitude, although different in detail. At these time scales, the BLOCK scheme affects the lower stratosphere winds through changes in the resolved planetary waves which are associated with surface impacts, while the TOFD effects are mostly limited to the lower troposphere. The partitioning of τtot between the two schemes appears to play an important role at all time scales. PMID:27668040

  7. Impacts of parameterized orographic drag on the Northern Hemisphere winter circulation

    NASA Astrophysics Data System (ADS)

    Sandu, Irina; Bechtold, Peter; Beljaars, Anton; Bozzo, Alessio; Pithan, Felix; Shepherd, Theodore G.; Zadra, Ayrton

    2016-03-01

A recent intercomparison exercise proposed by the Working Group for Numerical Experimentation (WGNE) revealed that the parameterized, or unresolved, surface stress in weather forecast models is highly model-dependent, especially over orography. Models of comparable resolution differ over land by as much as 20% in zonal mean total subgrid surface stress (τtot). The way τtot is partitioned between the different parameterizations is also model-dependent. In this study, we simulated in a particular model an increase in τtot comparable with the spread found in the WGNE intercomparison. This increase was simulated in two ways, namely by increasing independently the contributions to τtot of the turbulent orographic form drag scheme (TOFD) and of the orographic low-level blocking scheme (BLOCK). Increasing the parameterized orographic drag leads to significant changes in surface pressure, zonal wind and temperature in the Northern Hemisphere during winter, both in 10-day weather forecasts and in seasonal integrations. However, the magnitude of these changes in circulation strongly depends on which scheme is modified. In 10-day forecasts, stronger changes are found when the TOFD stress is increased, while on seasonal time scales the effects are of comparable magnitude, although different in detail. At these time scales, the BLOCK scheme affects the lower stratosphere winds through changes in the resolved planetary waves which are associated with surface impacts, while the TOFD effects are mostly limited to the lower troposphere. The partitioning of τtot between the two schemes appears to play an important role at all time scales.

  8. Ocean color modeling: Parameterization and interpretation

    NASA Astrophysics Data System (ADS)

    Feng, Hui

The ocean color as observed near the water surface is determined mainly by dissolved and particulate substances, known as "optically-active constituents," in the upper water column. The goal of ocean color modeling is to interpret an ocean color spectrum quantitatively to estimate the suite of optically-active constituents near the surface. In recent years, ocean color modeling efforts have centered on three major optically-active constituents: chlorophyll concentration, colored dissolved organic matter, and scattering particulates. Many challenges remain in this arena. This thesis addresses several critical issues in ocean color modeling. In chapter one, an extensive literature survey on ocean color modeling is given. A general ocean color model is presented to identify critical candidate uncertainty sources in modeling the ocean color. The goal of this thesis study is then defined, along with specific objectives. Finally, a general overview of the dissertation is presented, mapping each of the follow-up chapters to the relevant objectives. In chapter two, a general approach is presented to quantify constituent concentration retrieval errors induced by uncertainties in inherent optical property (IOP) submodels of a semi-analytical forward model. Chlorophyll concentrations are retrieved by inverting a forward model with nonlinear IOPs. The study demonstrates how uncertainties in individual IOP submodels influence the accuracy of the chlorophyll concentration retrieval at different chlorophyll concentration levels. The key finding is that precise knowledge of the spectral shapes of IOP submodels is critical for accurate chlorophyll retrieval, suggesting that an improvement in retrieval accuracy requires precise spectral IOP measurements.
In chapter three, three distinct inversion techniques, namely, nonlinear optimization (NLO), principal component analysis (PCA) and artificial neural network

  9. Aerosol hygroscopic growth parameterization based on a solute specific coefficient

    NASA Astrophysics Data System (ADS)

    Metzger, S.; Steil, B.; Xu, L.; Penner, J. E.; Lelieveld, J.

    2011-09-01

Water is a main component of atmospheric aerosols and its amount depends on the particle chemical composition. We introduce a new parameterization for the aerosol hygroscopic growth factor (HGF), based on an empirical relation between water activity (aw) and solute molality (μs) through a single solute-specific coefficient νi. Its three main advantages are: (1) wide applicability, (2) simplicity and (3) analytical nature. (1) Our approach considers the Kelvin effect and covers ideal solutions at large relative humidity (RH), including CCN activation, as well as concentrated solutions with high ionic strength at low RH, such as the relative humidity of deliquescence (RHD). (2) A single νi coefficient suffices to parameterize the HGF for a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. (3) In contrast to previous methods, our analytical aw parameterization does not depend solely on a linear correction factor for the solute molality; νi also appears in the exponent, in the form x · a^x. According to our findings, νi can be assumed constant over the entire aw range (0-1). Thus, the νi-based method is computationally efficient. In this work we focus on single-solute solutions, where νi is pre-determined with the bisection method from our analytical equations using RHD measurements and the saturation molality μs^sat. The computed aerosol HGF and supersaturation (Köhler theory) compare well with the results of the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4, which are relevant for CCN modeling and calibration studies. The equations introduced here provide the basis of our revised gas-liquid-solid partitioning model, i.e. version 4 of the EQuilibrium Simplified Aerosol Model (EQSAM4), described in a companion paper.
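
The Köhler-theory comparison mentioned above can be illustrated with the classical textbook Köhler equation. The sketch below is a minimal stand-in, not the paper's EQSAM4 scheme: a van 't Hoff-like factor `nu` plays the role loosely analogous to the solute coefficient νi, and all property values are illustrative.

```python
import math

def kohler_saturation_ratio(D, ns, nu=2.0, T=293.15):
    """Classical Koehler equation: equilibrium saturation ratio over a
    solution droplet of wet diameter D [m] containing ns moles of solute
    with dissociation (van 't Hoff) factor nu.  Kelvin term A/D raises S,
    Raoult term B/D^3 lowers it."""
    sigma = 0.072    # surface tension of water [N/m]
    Mw = 0.018       # molar mass of water [kg/mol]
    rho_w = 1000.0   # density of water [kg/m^3]
    R = 8.314        # gas constant [J/(mol K)]
    A = 4.0 * sigma * Mw / (R * T * rho_w)       # curvature (Kelvin) term
    B = 6.0 * nu * ns * Mw / (math.pi * rho_w)   # solute (Raoult) term
    return math.exp(A / D - B / D**3)
```

Scanning this function over D for a fixed dry particle (e.g. 50 nm NaCl) reproduces the familiar Köhler curve: S < 1 near the dry size and a critical supersaturation maximum of a few tenths of a percent near the activation diameter.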

  10. Multisite Evaluation of APEX for Water Quality: II. Regional Parameterization.

    PubMed

    Nelson, Nathan O; Baffaut, Claire; Lory, John A; Anomaa Senaviratne, G M M M; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

    Phosphorus (P) Index assessment requires independent estimates of long-term average annual P loss from fields, representing multiple climatic scenarios, management practices, and landscape positions. Because currently available measured data are insufficient to evaluate P Index performance, calibrated and validated process-based models have been proposed as tools to generate the required data. The objectives of this research were to develop a regional parameterization for the Agricultural Policy Environmental eXtender (APEX) model to estimate edge-of-field runoff, sediment, and P losses in restricted-layer soils of Missouri and Kansas and to assess the performance of this parameterization using monitoring data from multiple sites in this region. Five site-specific calibrated models (SSCM) from within the region were used to develop a regionally calibrated model (RCM), which was further calibrated and validated with measured data. Performance of the RCM was similar to that of the SSCMs for runoff simulation and had Nash-Sutcliffe efficiency (NSE) > 0.72 and absolute percent bias (|PBIAS|) < 18% for both calibration and validation. The RCM could not simulate sediment loss (NSE < 0, |PBIAS| > 90%) and was particularly ineffective at simulating sediment loss from locations with small sediment loads. The RCM had acceptable performance for simulation of total P loss (NSE > 0.74, |PBIAS| < 30%) but underperformed the SSCMs. Total P-loss estimates should be used with caution due to poor simulation of sediment loss. Although we did not attain our goal of a robust regional parameterization of APEX for estimating sediment and total P losses, runoff estimates with the RCM were acceptable for P Index evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
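
The NSE and |PBIAS| statistics quoted above have standard definitions (Nash-Sutcliffe efficiency and percent bias). A minimal sketch, assuming the common sign convention in which positive PBIAS indicates model underestimation:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
    no better than the observed mean; negative is worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias: 100 * sum(obs - sim) / sum(obs)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
```

For example, a simulation equal to the observed mean everywhere yields NSE = 0, which is why the abstract treats NSE < 0 for sediment loss as a failed simulation.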

  11. Mixed Layer Sub-Mesoscale Parameterization - Part 1: Derivation and Assessment

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Dubovikov, M. S.

    2010-01-01

Several studies have shown that sub-mesoscales (SM, ~1 km horizontal scale) play an important role in mixed layer dynamics. In particular, high resolution simulations have shown that in the case of strong down-front wind, the re-stratification induced by the SM is of the same order as the de-stratification induced by small-scale turbulence, as well as that induced by the Ekman velocity. These studies have further concluded that it has become necessary to include SM in ocean global circulation models (OGCMs), especially those used in climate studies. The goal of our work is to derive and assess an analytic parameterization of the vertical tracer flux under baroclinic instabilities and winds of arbitrary direction and strength. To achieve this goal, we have divided the problem into two parts: first, in this work we derive and assess a parameterization of the SM vertical flux of an arbitrary tracer for ocean codes that resolve mesoscales, M, but not sub-mesoscales, SM. In Part 2, presented elsewhere, we have used the results of this work to derive a parameterization of SM fluxes for ocean codes that resolve neither M nor SM. To carry out the first part of our work, we solve the SM dynamic equations, including the non-linear terms, for which we employ a closure developed and assessed in previous work. We present a detailed analysis for down-front and up-front winds with the following results: (a) down-front wind (blowing in the direction of the surface geostrophic velocity) is the most favorable condition for generating vigorous SM eddies; the de-stratifying effect of the mean flow and re-stratifying effect of SM almost cancel each other out,

  12. Impacts of subgrid-scale orography parameterization on simulated atmospheric fields over Korea using a high-resolution atmospheric forecast model

    NASA Astrophysics Data System (ADS)

    Lim, Kyo-Sun Sunny; Lim, Jong-Myoung; Shin, Hyeyum Hailey; Hong, Jinkyu; Ji, Young-Yong; Lee, Wanno

    2018-06-01

A substantial over-prediction bias at low-to-moderate wind speeds in the Weather Research and Forecasting (WRF) model has been reported in previous studies. Low-level wind fields play an important role in the dispersion of air pollutants, including radionuclides, in a high-resolution WRF framework. By implementing two subgrid-scale orography parameterizations (Jimenez and Dudhia in J Appl Meteorol Climatol 51:300-316, 2012; Mass and Ovens in WRF model physics: problems, solutions and a new paradigm for progress. Preprints, 2010 WRF Users' Workshop, NCAR, Boulder, Colo. http://www.mmm.ucar.edu/wrf/users/workshops/WS2010/presentations/session%204/4-1_WRFworkshop2010Final.pdf, 2010), we compared the performance of the parameterizations and sought to enhance the forecast skill for low-level wind fields over the central western part of South Korea. Even though both subgrid-scale orography parameterizations significantly alleviated the positive bias in 10-m wind speed, the parameterization of Jimenez and Dudhia showed better forecast skill for wind speed under our modeling configuration. Implementation of the subgrid-scale orography parameterizations in the model did not affect the forecast skill for other meteorological fields, including 10-m wind direction. Our study also highlights the discrepancy in the definition of "10-m" wind between model physics parameterizations and observations, which can cause overestimated winds in model simulations. The overestimation was larger in stable conditions than in unstable conditions, indicating that the weak diurnal cycle in the model could be attributed to this representation error.

  13. New Parameterization of Neutron Absorption Cross Sections

    NASA Technical Reports Server (NTRS)

    Tripathi, Ram K.; Wilson, John W.; Cucinotta, Francis A.

    1997-01-01

A recent parameterization of absorption cross sections for any system of charged-ion collisions, including proton-nucleus collisions, is extended to neutron-nucleus collisions, valid from approximately 1 MeV to a few GeV, thus providing a comprehensive picture of absorption cross sections for any system of collision pairs (charged or uncharged). The parameters are associated with the physics of the problem. At lower energies, the optical potential at the surface is important, and the Pauli operator plays an increasingly important role at intermediate energies. The agreement between the calculated and experimental data is better than in earlier published results.

  14. The beam stop array method to measure object scatter in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook

    2014-03-01

Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One approach to measuring scatter intensities is to measure the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by a BSA includes not only the radiation scattered within the object (object scatter) but also scatter from external sources, including the X-ray tube, detector, collimator, X-ray filter, and the BSA itself. Excluding this background scattered radiation allows the method to be applied to different scanner geometries through simple parameter adjustments, without prior knowledge of the scanned object. In this study, a BSA-based method was used to differentiate scatter generated in the phantom (object scatter) from the external background. Furthermore, this method was applied to the BSA algorithm to correct the object scatter. To confirm the presence of background scattered radiation, we obtained the scatter profiles and scatter fraction (SF) profiles in the direction perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall, indicating that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter can be used to remove background scatter and can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.
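
A minimal sketch of the quantities involved: the scatter fraction SF = S/(S + P) (scatter over total detected signal) and the background subtraction implied by the method. The function names and the simple additive background model are illustrative assumptions, not the authors' exact algorithm.

```python
def scatter_fraction(total_signal, scatter_signal):
    """SF = S / (S + P): ratio of scatter to the total (scatter + primary)
    signal at a detector location."""
    return scatter_signal / total_signal

def object_scatter(scatter_with_object, background_scatter):
    """Hypothetical correction step: subtract externally generated
    (background) scatter, measured without the phantom, from the total
    scatter measured under the beam stops with the phantom in place."""
    return scatter_with_object - background_scatter
```

With the background removed, only the scatter generated inside the object remains, which is the quantity the BSA correction algorithm needs.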

  15. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    NASA Astrophysics Data System (ADS)

    Huang, Dong; Liu, Yangang

    2014-12-01

Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
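
As a concrete illustration of the key ingredient, the normalized spatial autocorrelation function of a 1D field can be estimated as below. This is a generic estimator, not the authors' parameterization itself.

```python
def autocorrelation(field, lag):
    """Normalized spatial autocorrelation of a 1D field at a given lag:
    rho(r) = sum_i (x_i - m)(x_{i+r} - m) / sum_i (x_i - m)^2.
    rho(0) = 1 by construction; how quickly rho decays with lag encodes
    the spatial structure (e.g. cloud clumping) of the field."""
    n = len(field)
    m = sum(field) / n
    var = sum((x - m) ** 2 for x in field)
    cov = sum((field[i] - m) * (field[i + lag] - m) for i in range(n - lag))
    return cov / var
```

A field that alternates between cloudy and clear at every grid point has strongly negative lag-1 autocorrelation, while a field with large coherent cloud patches stays positive over many lags; it is this distinction that a PDF-only subgrid scheme cannot see.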

  16. Effective Atomic Number, Mass Attenuation Coefficient Parameterization, and Implications for High-Energy X-Ray Cargo Inspection Systems

    NASA Astrophysics Data System (ADS)

    Langeveld, Willem G. J.

    The most widely used technology for the non-intrusive active inspection of cargo containers and trucks is x-ray radiography at high energies (4-9 MeV). Technologies such as dual-energy imaging, spectroscopy, and statistical waveform analysis can be used to estimate the effective atomic number (Zeff) of the cargo from the x-ray transmission data, because the mass attenuation coefficient depends on energy as well as atomic number Z. The estimated effective atomic number, Zeff, of the cargo then leads to improved detection capability of contraband and threats, including special nuclear materials (SNM) and shielding. In this context, the exact meaning of effective atomic number (for mixtures and compounds) is generally not well-defined. Physics-based parameterizations of the mass attenuation coefficient have been given in the past, but usually for a limited low-energy range. Definitions of Zeff have been based, in part, on such parameterizations. Here, we give an improved parameterization at low energies (20-1000 keV) which leads to a well-defined Zeff. We then extend this parameterization up to energies relevant for cargo inspection (10 MeV), and examine what happens to the Zeff definition at these higher energies.
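
For context, a commonly used textbook definition of effective atomic number is a power-law mean over fractional electron contributions. The sketch below uses the frequently quoted low-energy exponent m = 2.94; the paper develops its own, better-defined and energy-dependent, definition.

```python
def z_eff(electron_fractions, m=2.94):
    """Textbook power-law effective atomic number:
    Zeff = (sum_i f_i * Z_i**m) ** (1/m),
    where f_i is element i's fractional contribution to the total electron
    count (the f_i sum to 1) and Z_i its atomic number.  The exponent
    m = 2.94 is the commonly quoted low-energy (photoelectric-dominated)
    value, not the definition derived in the paper."""
    return sum(f * z ** m for f, z in electron_fractions) ** (1.0 / m)
```

For water (2 of 10 electrons from hydrogen, 8 from oxygen) this gives the familiar Zeff of roughly 7.4, illustrating why a mixture can behave like an element that is not one of its constituents.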

  17. Evaluation of surface layer flux parameterizations using in-situ observations

    NASA Astrophysics Data System (ADS)

    Katz, Jeremy; Zhu, Ping

    2017-09-01

Appropriate calculation of surface turbulent fluxes between the atmosphere and the underlying ocean/land surface is one of the major challenges in geosciences. In practice, the surface turbulent fluxes are estimated from the mean surface meteorological variables using the bulk transfer model combined with Monin-Obukhov Similarity (MOS) theory. Few studies have examined the extent to which such a flux parameterization can be applied to different weather and surface conditions. A novel validation method is developed in this study to evaluate the surface flux parameterization using in-situ observations collected at a station off the coast of the Gulf of Mexico. The main findings are: (a) the theoretical predictions based on MOS theory do not match well the fluxes computed directly from the observations. (b) The largest spread in exchange coefficients occurs in strongly stable conditions with calm winds. (c) Large turbulent eddies, which depend strongly on the mean flow pattern and surface conditions, tend to break the constant-flux assumption in the surface layer.
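
The bulk transfer model referred to above estimates surface fluxes from mean variables, e.g. τ = ρ C_D U² for momentum and H = ρ c_p C_H U (θ_s − θ_a) for sensible heat. A minimal sketch with illustrative neutral exchange coefficients; in MOS-based schemes these coefficients vary with stability and roughness, which is exactly what the evaluation above probes.

```python
def momentum_flux(U, Cd=1.2e-3, rho=1.225):
    """Bulk aerodynamic surface stress tau = rho * Cd * U**2 [N/m^2].
    Cd is an illustrative neutral 10-m drag coefficient."""
    return rho * Cd * U ** 2

def sensible_heat_flux(U, theta_s, theta_a, Ch=1.0e-3, rho=1.225, cp=1004.0):
    """Bulk sensible heat flux H = rho * cp * Ch * U * (theta_s - theta_a)
    [W/m^2]; positive when the surface is warmer than the air."""
    return rho * cp * Ch * U * (theta_s - theta_a)
```

For example, a 10 m/s wind with the values above gives a stress of about 0.15 N/m²; the sign of the heat flux flips with the surface-air temperature difference.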

  18. An improved ice cloud formation parameterization in the EMAC model

    NASA Astrophysics Data System (ADS)

    Bacer, Sara; Pozzer, Andrea; Karydis, Vlassis; Tsimpidi, Alexandra; Tost, Holger; Sullivan, Sylvia; Nenes, Athanasios; Barahona, Donifan; Lelieveld, Jos

    2017-04-01

Cirrus clouds cover about 30% of the Earth's surface and are an important modulator of the radiative energy budget of the atmosphere. Despite their importance in the global climate system, large uncertainties remain in the understanding of their microphysical properties and their interactions with aerosols. Ice crystal formation is quite complex, and a variety of mechanisms exist for ice nucleation, depending on aerosol characteristics and environmental conditions. Ice crystals can be formed via homogeneous nucleation or via heterogeneous nucleation on ice-nucleating particles in different ways (contact, immersion, condensation, deposition). We have implemented the computationally efficient cirrus cloud formation parameterization of Barahona and Nenes (2009) into the EMAC (ECHAM5/MESSy Atmospheric Chemistry) model in order to improve the representation of ice clouds and aerosol-cloud interactions. The parameterization computes the ice crystal number concentration from precursor aerosols and ice-nucleating particles, accounting for the competition between homogeneous and heterogeneous nucleation and among the different freezing modes. Our work shows the differences and improvements obtained after the implementation with respect to the previous version of EMAC.

  19. Computer program for parameterization of nucleus-nucleus electromagnetic dissociation cross sections

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Townsend, Lawrence W.; Badavi, Forooz F.

    1988-01-01

    A computer subroutine parameterization of electromagnetic dissociation cross sections for nucleus-nucleus collisions is presented that is suitable for implementation in a heavy ion transport code. The only inputs required are the projectile kinetic energy and the projectile and target charge and mass numbers.

  20. The applicability of the viscous α-parameterization of gravitational instability in circumstellar disks

    NASA Astrophysics Data System (ADS)

    Vorobyov, E. I.

    2010-01-01

We study numerically the applicability of the effective-viscosity approach for simulating the effect of gravitational instability (GI) in disks of young stellar objects with different disk-to-star mass ratios ξ. We adopt two α-parameterizations for the effective viscosity based on Lin and Pringle [Lin, D.N.C., Pringle, J.E., 1990. ApJ 358, 515] and Kratter et al. [Kratter, K.M., Matzner, Ch.D., Krumholz, M.R., 2008. ApJ 681, 375] and compare the resultant disk structure, disk and stellar masses, and mass accretion rates with those obtained directly from numerical simulations of self-gravitating disks around low-mass (M∗ ∼ 1.0M⊙) protostars. We find that the effective viscosity can, in principle, simulate the effect of GI in stellar systems with ξ ≲ 0.2-0.3, thus corroborating a similar conclusion by Lodato and Rice [Lodato, G., Rice, W.K.M., 2004. MNRAS 351, 630] that was based on a different α-parameterization. In particular, the Kratter et al. α-parameterization has proven superior to that of Lin and Pringle, because the success of the latter depends crucially on the proper choice of the α-parameter. However, the α-parameterization generally fails in stellar systems with ξ ≳ 0.3, particularly in the Class 0 and I phases of stellar evolution, yielding too small stellar masses and too large disk-to-star mass ratios. In addition, the time-averaged mass accretion rates onto the star are underestimated in the early disk evolution and greatly overestimated in the late evolution. The failure of the α-parameterization in the case of large ξ is caused by the growing strength of low-order spiral modes in massive disks. Only in the late Class II phase, when the magnitude of spiral modes diminishes and mode-to-mode interaction ensues, may the effective viscosity be used to simulate the effect of GI in stellar systems with ξ ≳ 0.3. A simple modification of the effective viscosity that takes into account disk fragmentation can somewhat improve

  1. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
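
Of the regularization schemes mentioned, zeroth-order Tikhonov is the simplest to illustrate: it augments the least-squares objective with a penalty pulling parameters toward preferred values, min ||Jp − d||² + λ²||p − p₀||². The sketch below is a generic normal-equations solve for small problems, not PEST's implementation.

```python
def tikhonov_solve(J, d, lam, p0):
    """Minimise ||J p - d||^2 + lam^2 * ||p - p0||^2 (zeroth-order
    Tikhonov).  Solves the normal equations
    (J^T J + lam^2 I) p = J^T d + lam^2 p0
    by Gaussian elimination with partial pivoting; J is a list of rows."""
    n = len(J[0])
    A = [[sum(J[k][i] * J[k][j] for k in range(len(J)))
          + (lam ** 2 if i == j else 0.0) for j in range(n)] for i in range(n)]
    b = [sum(J[k][i] * d[k] for k in range(len(J))) + lam ** 2 * p0[i]
         for i in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    p = [0.0] * n                              # back substitution
    for i in range(n - 1, -1, -1):
        p[i] = (b[i] - sum(A[i][j] * p[j] for j in range(i + 1, n))) / A[i][i]
    return p
```

With λ = 0 this reduces to ordinary least squares; as λ grows, the solution is drawn toward p₀, which is how preferred-value regularization stabilizes highly parameterized inversions.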

  2. Global direct radiative forcing by process-parameterized aerosol optical properties

    NASA Astrophysics Data System (ADS)

Kirkevåg, Alf; Iversen, Trond

    2002-10-01

A parameterization of aerosol optical parameters is developed and implemented in an extended version of the community climate model version 3.2 (CCM3) of the U.S. National Center for Atmospheric Research. Direct radiative forcing (DRF) by monthly averaged calculated concentrations of non-sea-salt sulfate and black carbon (BC) is estimated. Inputs are production-specific BC and sulfate from [2002] and background aerosol size distribution and composition. The scheme interpolates between tabulated values to obtain the aerosol single scattering albedo, asymmetry factor, extinction coefficient, and specific extinction coefficient. The tables are constructed by full calculations of optical properties for an array of aerosol input values, for which size-distributed aerosol properties are estimated from theory for condensation and Brownian coagulation, assumed distribution of cloud-droplet residuals from aqueous phase oxidation, and prescribed properties of the background aerosols. Humidity swelling is estimated from the Köhler equation, and Mie calculations finally yield spectrally resolved aerosol optical parameters for 13 solar bands. The scheme is shown to give excellent agreement with nonparameterized DRF calculations for a wide range of situations. Using IPCC emission scenarios for the years 2000 and 2100, calculations with an atmospheric global climate model (AGCM) yield a global net anthropogenic DRF of -0.11 and 0.11 W m-2, respectively, when 90% of BC from biomass burning is assumed anthropogenic. In the 2000 scenario, the individual DRF due to sulfate and BC has separately been estimated to -0.29 and 0.19 W m-2, respectively. Our estimates of DRF by BC per BC mass burden are lower than earlier published estimates. Some sensitivity tests are included to investigate to what extent uncertain assumptions may influence these results.

  3. Parameterization of light absorption by components of seawater in optically complex coastal waters of the Crimea Peninsula (Black Sea).

    PubMed

    Dmitriev, Egor V; Khomenko, Georges; Chami, Malik; Sokolov, Anton A; Churilova, Tatyana Y; Korotaev, Gennady K

    2009-03-01

The absorption of sunlight by oceanic constituents significantly contributes to the spectral distribution of the water-leaving radiance. Here it is shown that current parameterizations of absorption coefficients do not apply to the optically complex waters of the Crimea Peninsula. Based on in situ measurements, parameterizations of phytoplankton, nonalgal, and total particulate absorption coefficients are proposed. Their performance is evaluated using a log-log regression combined with a low-pass filter and the nonlinear least-square method. Statistical significance of the estimated parameters is verified using the bootstrap method. The parameterizations are relevant for chlorophyll a concentrations ranging from 0.45 to 2 mg m-3.
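
Parameterizations of phytoplankton absorption versus chlorophyll are typically power laws fitted in log-log space, a_ph = A · Chl^E. A minimal sketch of such a fit using ordinary least squares only; the paper additionally applies a low-pass filter and nonlinear least squares.

```python
import math

def fit_power_law(chl, aph):
    """Least-squares fit of a_ph = A * Chl**E in log-log space:
    ln(a_ph) = ln(A) + E * ln(Chl) is a straight line, so E is the slope
    and ln(A) the intercept of an ordinary linear regression."""
    n = len(chl)
    x = [math.log(c) for c in chl]
    y = [math.log(a) for a in aph]
    mx, my = sum(x) / n, sum(y) / n
    E = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    A = math.exp(my - E * mx)
    return A, E
```

Because the regression is done on logarithms, relative (not absolute) errors are weighted equally across the chlorophyll range, which suits data spanning an order of magnitude in concentration.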

  4. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Liu, Yangang

    2014-12-18

Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.

  5. Harmonic motion detection in a vibrating scattering medium.

    PubMed

    Urban, Matthew W; Chen, Shigao; Greenleaf, James

    2008-09-01

    Elasticity imaging is an emerging medical imaging modality that seeks to map the spatial distribution of tissue stiffness. Ultrasound radiation force excitation and motion tracking using pulse-echo ultrasound have been used in numerous methods. Dynamic radiation force is used in vibrometry to cause an object or tissue to vibrate, and the vibration amplitude and phase can be measured with exceptional accuracy. This paper presents a model that simulates harmonic motion detection in a vibrating scattering medium incorporating 3-D beam shapes for radiation force excitation and motion tracking. A parameterized analysis using this model provides a platform to optimize motion detection for vibrometry applications in tissue. An experimental method that produces a multifrequency radiation force is also presented. Experimental harmonic motion detection of simultaneous multifrequency vibration is demonstrated using a single transducer. This method can accurately detect motion with displacement amplitude as low as 100 to 200 nm in bovine muscle. Vibration phase can be measured within 10 degrees or less. The experimental results validate the conclusions observed from the model and show multifrequency vibration induction and measurements can be performed simultaneously.
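
Amplitude and phase of vibration at a known frequency are commonly extracted by quadrature (lock-in) demodulation: correlating the sampled displacement with in-phase and quadrature references at the drive frequency. The sketch below is a generic version of this idea, not the authors' pulse-echo estimator.

```python
import math

def harmonic_amplitude_phase(samples, fs, f0):
    """Lock-in estimate of amplitude and phase of a harmonic component at
    frequency f0 [Hz] in a displacement signal sampled at fs [Hz].
    Works best when the record spans an integer number of cycles of f0."""
    n = len(samples)
    i_sum = sum(s * math.cos(2 * math.pi * f0 * k / fs)
                for k, s in enumerate(samples))
    q_sum = sum(s * math.sin(2 * math.pi * f0 * k / fs)
                for k, s in enumerate(samples))
    amp = 2.0 * math.hypot(i_sum, q_sum) / n      # peak displacement
    phase = math.atan2(q_sum, i_sum)              # phase of the component
    return amp, phase
```

Because the correlation averages over many samples, sub-wavelength displacements (the abstract's 100-200 nm regime) can be resolved even in the presence of broadband noise; multifrequency vibration is handled by demodulating at each drive frequency in turn.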

  6. A Generalized Simple Formulation of Convective Adjustment Timescale for Cumulus Convection Parameterizations

    EPA Science Inventory

    Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a pres...

  7. Improving Parameterization of Entrainment Rate for Shallow Convection with Aircraft Measurements and Large-Eddy Simulation

    DOE PAGES

    Lu, Chunsong; Liu, Yangang; Zhang, Guang J.; ...

    2016-02-01

    This work examines the relationships of entrainment rate to vertical velocity, buoyancy, and turbulent dissipation rate by applying stepwise principal component regression to observational data from shallow cumulus clouds collected during the Routine AAF [Atmospheric Radiation Measurement (ARM) Aerial Facility] Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) field campaign over the ARM Southern Great Plains (SGP) site near Lamont, Oklahoma. The cumulus clouds during the RACORO campaign simulated using a large eddy simulation (LES) model are also examined with the same approach. The analysis shows that a combination of multiple variables can better represent entrainment rate in both the observations and LES than any single-variable fitting. Three commonly used parameterizations are also tested on the individual cloud scale. A new parameterization is therefore presented that relates entrainment rate to vertical velocity, buoyancy and dissipation rate; the effects of treating clouds as ensembles and humid shells surrounding cumulus clouds on the new parameterization are discussed. Physical mechanisms underlying the relationships of entrainment rate to vertical velocity, buoyancy and dissipation rate are also explored.

  8. Regularized wave equation migration for imaging and data reconstruction

    NASA Astrophysics Data System (ADS)

    Kaplan, Sam T.

    The reflection seismic experiment results in a measurement (reflection seismic data) of the seismic wavefield. The linear Born approximation to the seismic wavefield leads to a forward modelling operator that we use to approximate reflection seismic data in terms of a scattering potential. We consider approximations to the scattering potential using two methods: the adjoint of the forward modelling operator (migration), and regularized numerical inversion using the forward and adjoint operators. We implement two parameterizations of the forward modelling and migration operators: source-receiver and shot-profile. For both parameterizations, we find the requisite Green's functions using the split-step approximation. We first develop the forward modelling operator, and then find the adjoint (migration) operator by recognizing a Fredholm integral equation of the first kind. The resulting numerical system is generally under-determined, requiring prior information to find a solution. In source-receiver migration, the parameterization of the scattering potential is understood using the migration imaging condition, and this encourages us to apply sparse prior models to the scattering potential. To that end, we use both a Cauchy prior and a mixed Cauchy-Gaussian prior, finding better resolved estimates of the scattering potential than are given by the adjoint. In shot-profile migration, the parameterization of the scattering potential has its redundancy in multiple active energy sources (i.e. shots). We find that a smallest-model regularized inverse representation of the scattering potential gives a more resolved picture of the earth, as compared to the simpler adjoint representation. The shot-profile parameterization allows us to introduce a joint inversion to further improve the estimate of the scattering potential. Moreover, it allows us to introduce a novel data reconstruction algorithm so that limited data can be interpolated/extrapolated. The linearized operators are
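    The regularized inversion the abstract describes can be sketched generically (this is a toy illustration of my own, not the thesis's split-step operators): a smallest-model solution is obtained from the regularized normal equations with conjugate gradients, where a random matrix L stands in for the Born forward-modelling operator and its transpose for migration (the adjoint).

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(60, 40))           # toy stand-in for the Born operator
m_true = np.zeros(40)
m_true[[5, 20, 33]] = [1.0, -0.5, 2.0]  # sparse "scattering potential"
d = L @ m_true                          # noise-free "observed" data

mu = 1e-3                               # smallest-model regularization weight
A = L.T @ L + mu * np.eye(40)           # regularized normal-equations operator
b = L.T @ d                             # adjoint (migrated) right-hand side

# Conjugate gradients on the normal equations.
m = np.zeros(40)
r = b - A @ m
p = r.copy()
for _ in range(200):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    m = m + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
```

    The regularized solution is close to the true sparse model, whereas the plain adjoint image `b` is only a blurred version of it; this is the sense in which inversion "better resolves" the scattering potential.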

  9. CCPP-ARM Parameterization Testbed Model Forecast Data

    DOE Data Explorer

    Klein, Stephen

    2008-01-15

    The dataset contains the NCAR CAM3 (Collins et al., 2004) and GFDL AM2 (GFDL GAMDT, 2004) forecast data at locations close to the ARM research sites. These data are generated from a series of multi-day forecasts in which both CAM3 and AM2 are initialized at 00Z every day with the ECMWF reanalysis data (ERA-40) for the years 1997 and 2000, and initialized with both the NASA DAO reanalyses and the NCEP GDAS data for the year 2004. The DOE CCPP-ARM Parameterization Testbed (CAPT) project assesses climate models using numerical weather prediction techniques in conjunction with high quality field measurements (e.g. ARM data).

  10. The response of the SSM/I to the marine environment. Part 2: A parameterization of the effect of the sea surface slope distribution on emission and reflection

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.; Katsaros, Kristina B.

    1994-01-01

    Based on a geometric optics model and the assumption of an isotropic Gaussian surface slope distribution, the component of ocean surface microwave emissivity variation due to large-scale surface roughness is parameterized for the frequencies and approximate viewing angle of the Special Sensor Microwave/Imager. Independent geophysical variables in the parameterization are the effective (microwave frequency dependent) slope variance and the sea surface temperature. Using the same physical model, the change in the effective zenith angle of reflected sky radiation arising from large-scale roughness is also parameterized. Independent geophysical variables in this parameterization are the effective slope variance and the atmospheric optical depth at the frequency in question. Both of the above model-based parameterizations are intended for use in conjunction with empirical parameterizations relating effective slope variance and foam coverage to near-surface wind speed. These empirical parameterizations are the subject of a separate paper.

  11. Internal wave scattering in continental slope canyons, part 1: Theory and development of a ray tracing algorithm

    NASA Astrophysics Data System (ADS)

    Nazarian, Robert H.; Legg, Sonya

    2017-10-01

    When internal waves interact with topography, such as continental slopes, they can transfer wave energy to local dissipation and diapycnal mixing. Submarine canyons comprise approximately ten percent of global continental slopes, and can enhance the local dissipation of internal wave energy, yet parameterizations of canyon mixing processes are currently missing from large-scale ocean models. As a first step in the development of such parameterizations, we conduct a parameter space study of M2 tidal-frequency, low-mode internal waves interacting with idealized V-shaped canyon topographies. Specifically, we examine the effects of varying the canyon mouth width, shape and slope of the thalweg (line of lowest elevation). This effort is divided into two parts. In the first part, presented here, we extend the theory of 3-dimensional internal wave reflection to a rotated coordinate system aligned with our idealized V-shaped canyons. Based on the updated linear internal wave reflection solution that we derive, we construct a ray tracing algorithm which traces a large number of rays (the discrete analog of a continuous wave) into the canyon region where they can scatter off topography. Although a ray tracing approach has been employed in other studies, we have, for the first time, used ray tracing to calculate changes in wavenumber and ray density which, in turn, can be used to calculate the Froude number (a measure of the likelihood of instability). We show that for canyons of intermediate aspect ratio, large spatial envelopes of instability can form in the presence of supercritical sidewalls. Additionally, the canyon height and length can modulate the Froude number. The second part of this study, a diagnosis of internal wave scattering in continental slope canyons using both numerical simulations and this ray tracing algorithm, as well as a test of robustness of the ray tracing, is presented in the companion article.
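    The ray tracing described above rests on the linear internal-wave dispersion relation, by which rays travel at a fixed slope set by the wave, Coriolis, and buoyancy frequencies; sidewall criticality compares this slope to the topographic slope. A minimal sketch (my construction, with illustrative mid-latitude values, not the paper's parameter space):

```python
import math

def ray_slope(omega, f, N):
    # Internal-wave ray slope: tan(theta) = sqrt((omega^2 - f^2) / (N^2 - omega^2))
    return math.sqrt((omega ** 2 - f ** 2) / (N ** 2 - omega ** 2))

# Illustrative mid-latitude values (assumptions, not the paper's cases):
OMEGA_M2 = 1.405e-4   # M2 tidal frequency, rad/s
F_COR = 1.0e-4        # Coriolis parameter, rad/s
N_BUOY = 2.0e-3       # buoyancy frequency, rad/s

s_wave = ray_slope(OMEGA_M2, F_COR, N_BUOY)

def is_supercritical(topo_slope):
    """A boundary is supercritical when its slope exceeds the ray slope,
    so incident rays are reflected back toward deeper water."""
    return topo_slope > s_wave
```

    For these values the M2 ray slope is about 0.05, so a 10% topographic grade is supercritical while a 1% grade is subcritical.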

  12. Corrections on energy spectrum and scatterings for fast neutron radiography at NECTAR facility

    NASA Astrophysics Data System (ADS)

    Liu, Shu-Quan; Bücherl, Thomas; Li, Hang; Zou, Yu-Bin; Lu, Yuan-Rong; Guo, Zhi-Yu

    2013-11-01

    Distortions caused by the neutron spectrum and by scattered neutrons are major problems in fast neutron radiography and must be addressed to improve image quality. This paper puts emphasis on the removal of these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM II at Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established in order to correct for the influence of the neutron spectrum, and the Point Scattered Function (PScF), simulated with the Monte-Carlo program MCNPX, is used to evaluate scattering effects from the object and improve image quality. The analysis results confirm the effectiveness of these two corrections.

  13. Scaling Dissolved Nutrient Removal in River Networks: A Comparative Modeling Investigation

    NASA Astrophysics Data System (ADS)

    Ye, Sheng; Reisinger, Alexander J.; Tank, Jennifer L.; Baker, Michelle A.; Hall, Robert O.; Rosi, Emma J.; Sivapalan, Murugesu

    2017-11-01

    Along the river network, water, sediment, and nutrients are transported, cycled, and altered by coupled hydrological and biogeochemical processes. Our current understanding of the rates and processes controlling the cycling and removal of dissolved inorganic nutrients in river networks is limited by a lack of empirical measurements in large (nonwadeable) rivers. The goal of this paper was to develop a coupled hydrological and biogeochemical process model to simulate nutrient uptake at the network scale during summer base flow conditions. The model was parameterized with literature values from headwater streams and with empirical measurements made in 15 rivers with varying hydrological, biological, and topographic characteristics. We applied the coupled model to 15 catchments, describing patterns in uptake for three different solutes to determine the role of rivers in network-scale nutrient cycling. Model simulation results, constrained by empirical data, suggested that rivers contributed proportionally more to nutrient removal than headwater streams, given the fraction of network length they represent. In addition, the variability of nutrient removal patterns among catchments differed among solutes and, as expected, was influenced by nutrient concentration and discharge. Net ammonium uptake was not significantly correlated with any environmental descriptor. In contrast, net daily nitrate removal was linked to suspended chlorophyll a (an indicator of primary producers) and land use characteristics. Finally, suspended sediment characteristics and agricultural land use were correlated with net daily removal of soluble reactive phosphorus, likely reflecting abiotic sorption dynamics. Rivers are understudied relative to streams, and our model suggests that rivers can contribute more to network-scale nutrient removal than would be expected based upon their representative fraction of network channel length.
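    The kind of first-order uptake that underlies such network-scale models can be sketched with the classic nutrient-spiraling formulation (a simplification of my own, with illustrative numbers, not the authors' calibrated model): the fraction of load removed over a reach follows from the uptake length Sw = uH/vf.

```python
import math

def reach_removal_fraction(vf, depth, velocity, length):
    """Fraction of dissolved nutrient load removed over a reach under
    first-order uptake: 1 - exp(-L / Sw), with uptake length
    Sw = u * H / vf  (vf: uptake velocity m/s, H: depth m,
    u: water velocity m/s, L: reach length m)."""
    sw = velocity * depth / vf
    return 1.0 - math.exp(-length / sw)

# Illustrative numbers (assumptions, not the paper's values): a short
# headwater reach vs a long river reach with the same uptake velocity.
vf = 1e-5  # m/s
stream = reach_removal_fraction(vf, depth=0.3, velocity=0.2, length=1000)
river = reach_removal_fraction(vf, depth=2.0, velocity=0.5, length=20000)
```

    Despite its greater depth and velocity, the long river reach removes a comparable or larger fraction of the load, illustrating how rivers can contribute more to network-scale removal than their channel-length fraction suggests.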

  14. Impact of different parameterization schemes on simulation of mesoscale convective system over south-east India

    NASA Astrophysics Data System (ADS)

    Madhulatha, A.; Rajeevan, M.

    2018-02-01

    The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted with various planetary boundary layer, microphysics, and cumulus parameterization schemes. The performance of the different schemes is evaluated by examining the boundary layer, reflectivity, and precipitation features of the MCS using ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is able to simulate the reflectivity through a reasonable distribution of different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through a proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent-kinetic-energy boundary layer scheme that accounts for strong vertical mixing; THM, a six-class hybrid moment microphysics scheme that considers number concentration along with the mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme that adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. A numerical simulation carried out using the above combination of schemes is able to capture the storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.

  15. Measured and parameterized energy fluxes estimated for Atlantic transects of RV Polarstern

    NASA Astrophysics Data System (ADS)

    Bumke, Karl; Macke, Andreas; Kalisch, John; Kleta, Henry

    2013-04-01

    Even today, energy fluxes over the oceans are difficult to assess. For example, the relative paucity of evaporation observations and the uncertainties of currently employed empirical approaches lead to large uncertainties in evaporation products over the ocean (e.g. Large and Yeager, 2009). Within the framework of OCEANET (Macke et al., 2010), we performed such measurements on Atlantic transects between Bremerhaven (Germany) and Cape Town (South Africa) or Punta Arenas (Chile) onboard RV Polarstern in recent years. The basic measurements of the sensible and latent heat fluxes are inertial-dissipation flux estimates (e.g. Dupuis et al., 1997) and measurements of the bulk variables. Turbulence measurements included a sonic anemometer and an infrared hygrometer, both mounted on the crow's nest. Mean meteorological sensors were those of the ship's operational measurement system. The global radiation and the downwelling terrestrial radiation were measured on the OCEANET container placed on the monkey island. About 1000 time series of 1 h length were analyzed to derive bulk transfer coefficients for the fluxes of sensible and latent heat. The bulk transfer coefficients were applied to the ship's meteorological data to derive the heat fluxes at the sea surface. The reflected solar radiation was estimated from the measured global radiation. The upwelling terrestrial radiation was derived from the skin temperature according to the Stefan-Boltzmann law. The parameterized heat fluxes were compared to the widely used COARE parameterization (Fairall et al., 2003); the agreement is excellent. Measured and parameterized heat and radiation fluxes give the total energy budget at the air-sea interface. As expected, the mean total flux is positive, but there are also areas where it is negative, indicating an energy loss of the ocean. It could be shown that the variations in the energy budget are mainly due to insolation and evaporation. A comparison between the mean values of measured and
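    The bulk approach described above can be sketched as follows (the transfer coefficients and meteorological values here are illustrative textbook magnitudes, not the ones derived from the Polarstern measurements):

```python
# Bulk-aerodynamic surface heat fluxes (illustrative constants).
RHO = 1.2     # air density, kg/m^3
CP = 1004.0   # specific heat of air at constant pressure, J/(kg K)
LV = 2.5e6    # latent heat of vaporization, J/kg

def sensible_heat(ch, wind, t_sea, t_air):
    """Q_H = rho * cp * C_H * U * (T_s - T_a), W/m^2, positive upward."""
    return RHO * CP * ch * wind * (t_sea - t_air)

def latent_heat(ce, wind, q_sea, q_air):
    """Q_E = rho * Lv * C_E * U * (q_s - q_a), W/m^2, positive upward."""
    return RHO * LV * ce * wind * (q_sea - q_air)

# 8 m/s wind, sea 1.5 K warmer than air, 3 g/kg humidity contrast:
qh = sensible_heat(1.1e-3, 8.0, 18.0, 16.5)
qe = latent_heat(1.2e-3, 8.0, 0.0125, 0.0095)
```

    With these inputs the latent heat flux exceeds the sensible heat flux several-fold, consistent with the abstract's finding that evaporation dominates variations in the open-ocean energy budget.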

  16. STOCHASTIC OPTICS: A SCATTERING MITIGATION FRAMEWORK FOR RADIO INTERFEROMETRIC IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Michael D., E-mail: mjohnson@cfa.harvard.edu

    2016-12-10

    Just as turbulence in the Earth's atmosphere can severely limit the angular resolution of optical telescopes, turbulence in the ionized interstellar medium fundamentally limits the resolution of radio telescopes. We present a scattering mitigation framework for radio imaging with very long baseline interferometry (VLBI) that partially overcomes this limitation. Our framework, "stochastic optics," derives from a simplification of strong interstellar scattering to separate small-scale ("diffractive") effects from large-scale ("refractive") effects, thereby separating deterministic and random contributions to the scattering. Stochastic optics extends traditional synthesis imaging by simultaneously reconstructing an unscattered image and its refractive perturbations. Its advantages over direct imaging come from utilizing the many deterministic properties of the scattering, such as the time-averaged "blurring," polarization independence, and the deterministic evolution in frequency and time, while still accounting for the stochastic image distortions on large scales. These distortions are identified in the image reconstructions through regularization by their time-averaged power spectrum. Using synthetic data, we show that this framework effectively removes the blurring from diffractive scattering while reducing the spurious image features from refractive scattering. Stochastic optics can provide significant improvements over existing scattering mitigation strategies and is especially promising for imaging the Galactic Center supermassive black hole, Sagittarius A*, with the Global mm-VLBI Array and with the Event Horizon Telescope.

  17. Infrared radiation parameterizations for the minor CO2 bands and for several CFC bands in the window region

    NASA Technical Reports Server (NTRS)

    Kratz, David P.; Chou, Ming-Dah; Yan, Michael M.-H.

    1993-01-01

    Fast and accurate parameterizations have been developed for the transmission functions of the CO2 9.4- and 10.4-micron bands, as well as the CFC-11, CFC-12, and CFC-22 bands located in the 8-12-micron region. The parameterizations are based on line-by-line calculations of transmission functions for the CO2 bands and on high spectral resolution laboratory measurements of the absorption coefficients for the CFC bands. Also developed are the parameterizations for the H2O transmission functions for the corresponding spectral bands. Compared to the high-resolution calculations, fluxes at the tropopause computed with the parameterizations are accurate to within 10 percent when overlapping of gas absorptions within a band is taken into account. For individual gas absorption, the accuracy is of order 0-2 percent. The climatic effects of these trace gases have been studied using a zonally averaged multilayer energy balance model, which includes seasonal cycles and a simplified deep ocean. With the trace gas abundances taken to follow the Intergovernmental Panel on Climate Change Low Emissions 'B' scenario, the transient response of the surface temperature is simulated for the period 1900-2060.

  18. Implementation of a generalized actuator line model for wind turbine parameterization in the Weather Research and Forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marjanovic, Nikola; Mirocha, Jeffrey D.; Kosović, Branko

    A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near-wake physics, including vorticity shedding and wake expansion.

  19. A simple parameterization of aerosol emissions in RAMS

    NASA Astrophysics Data System (ADS)

    Letcher, Theodore

    Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimate of the sensitivity of orographic snowfall to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was its computational expense. In fact, the computational expense was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it is fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA) and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. While SA makes up a substantial portion of the total aerosol burden (much of which is made up of organic material), the representation of this process is highly complex and highly expensive within a numerical

  20. The rational parameterization theorem for multisite post-translational modification systems.

    PubMed

    Thomson, Matthew; Gunawardena, Jeremy

    2009-12-21

    Post-translational modification of proteins plays a central role in cellular regulation, but its study has been hampered by the exponential increase in substrate modification forms ("modforms") with increasing numbers of sites. We consider here biochemical networks arising from post-translational modification under mass-action kinetics, allowing for multiple substrates, with different types of modification (phosphorylation, methylation, acetylation, etc.) on multiple sites, acted upon by multiple forward and reverse enzymes (L in total), using general enzymatic mechanisms. These assumptions are substantially more general than in previous studies. We show that the steady-state modform concentrations constitute an algebraic variety that can be parameterized by rational functions of the L free enzyme concentrations, with coefficients that are rational functions of the rate constants. The parameterization allows steady states to be calculated by solving L algebraic equations, a dramatic reduction compared to simulating an exponentially large number of differential equations. This complexity collapse enables analysis in contexts that were previously intractable and leads to biological predictions that we review. Our results lay a foundation for the systems biology of post-translational modification and suggest deeper connections between biochemical networks and algebraic geometry.
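    The flavor of the result can be seen in the simplest case of one substrate with one modification site (a sketch I constructed; the paper's theorem covers far more general networks): with Michaelis-Menten kinase and phosphatase mechanisms, every steady-state modform concentration is a rational function of the two free enzyme concentrations.

```python
from fractions import Fraction

def modforms(E, F, k_cat_E=2, K_E=5, k_cat_F=1, K_F=4, S_total=10):
    """Steady-state modforms for a one-site cycle (integer inputs so
    exact rational arithmetic applies; all rate values hypothetical).

    Flux balance  k_cat_E * (E*S0/K_E) = k_cat_F * (F*S1/K_F)  gives
    S1/S0 as a rational function of the free enzyme levels E and F;
    conservation over S0, S1 and the enzyme-substrate complexes
    (C = E*S0/K_E, D = F*S1/K_F) then fixes the absolute scale."""
    r = Fraction(k_cat_E * K_F * E, k_cat_F * K_E * F)  # ratio S1/S0
    denom = 1 + r + Fraction(E, K_E) + r * Fraction(F, K_F)
    S0 = Fraction(S_total) / denom
    S1 = r * S0
    return S0, S1
```

    Evaluating the rational parameterization at given free enzyme levels replaces the integration of the full mass-action ODE system, which is the "complexity collapse" the abstract describes.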

  1. Structural test of the parameterized-backbone method for protein design.

    PubMed

    Plecs, Joseph J; Harbury, Pehr B; Kim, Peter S; Alber, Tom

    2004-09-03

    Designing new protein folds requires a method for simultaneously optimizing the conformation of the backbone and the side-chains. One approach to this problem is the use of a parameterized backbone, which allows the systematic exploration of families of structures. We report the crystal structure of RH3, a right-handed, three-helix coiled coil that was designed using a parameterized backbone and detailed modeling of core packing. This crystal structure was determined using another rationally designed feature, a metal-binding site that permitted experimental phasing of the X-ray data. RH3 adopted the intended fold, which has not been observed previously in biological proteins. Unanticipated structural asymmetry in the trimer was a principal source of variation within the RH3 structure. The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11 amino acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.

  2. Model-driven harmonic parameterization of the cortical surface: HIP-HOP.

    PubMed

    Auzias, G; Lefèvre, J; Le Troter, A; Fischer, C; Perrot, M; Régis, J; Coulon, O

    2013-05-01

    In the context of inter-subject brain surface matching, we present a parameterization of the cortical surface constrained by a model of cortical organization. The parameterization is defined via a harmonic mapping of each hemisphere surface to a rectangular planar domain that integrates a representation of the model. As opposed to previous landmark-based registration methods, we do not match folds between individuals but instead optimize the fit between cortical sulci and specific iso-coordinate axes in the model. This strategy overcomes some limitations of sulcus-based registration techniques, such as the topological variability of sulcal landmarks across subjects. Experiments on 62 subjects with manually traced sulci are presented and compared with the results of the Freesurfer software. The evaluation involves a measure of the dispersion of sulci together with angular and area distortions. We show that the model-based strategy leads to a natural, efficient, and very fast (less than 5 min per hemisphere) method for defining inter-subject correspondences. We discuss how this approach also reduces the problems inherent in anatomically defined landmarks and opens the way to the investigation of cortical organization through the notions of orientation and alignment of structures across the cortex.

  3. Linearized inversion of multiple scattering seismic energy

    NASA Astrophysics Data System (ADS)

    Aldawood, Ali; Hoteit, Ibrahim; Zuberi, Mohammad

    2014-05-01

    curvature information is modified at every iteration by a low-rank update based on gradient changes at every step. At each iteration, the data residual is imaged using GT to determine the model update. Application of the linearized inversion to synthetic data to image a vertical fault plane demonstrates the effectiveness of this methodology in properly delineating the vertical fault plane, and it gives better amplitude information than the standard migrated image formed with the adjoint operator that takes internal multiples into account. Thus, least-squares imaging of multiple scattering enhances the spatial resolution of the events illuminated by internally scattered energy. It also deconvolves the source signature and helps remove the fingerprint of the acquisition geometry. The final image is obtained by the superposition of the least-squares solution based on the single-scattering assumption and the least-squares solution based on the double-scattering assumption.

  4. Introducing Convective Cloud Microphysics to a Deep Convection Parameterization Facilitating Aerosol Indirect Effects

    NASA Astrophysics Data System (ADS)

    Alapaty, K.; Zhang, G. J.; Song, X.; Kain, J. S.; Herwehe, J. A.

    2012-12-01

    Short-lived pollutants such as aerosols play an important role in modulating not only the radiative balance but also cloud microphysical properties and precipitation rates. In the past, several cloud-resolving modeling studies were conducted to understand the interactions of aerosols with clouds. These studies indicated that, in the presence of anthropogenic aerosols, single-phase deep convective precipitation is reduced or suppressed. On the other hand, anthropogenic aerosol pollution led to enhanced precipitation for mixed-phase deep convective clouds. To date, there have not been many efforts to incorporate such aerosol indirect effects (AIE) in mesoscale or global models that use parameterization schemes for deep convection. Thus, the objective of this work is to implement a diagnostic cloud microphysical scheme directly into a deep convection parameterization, facilitating aerosol indirect effects in the WRF-CMAQ integrated modeling system. The major research issues addressed in this study are: What is the sensitivity of a deep convection scheme to cloud microphysical processes represented by a bulk double-moment scheme? How close are the simulated cloud water paths to observations? Does increased aerosol pollution lead to increased precipitation for mixed-phase clouds? These questions are addressed by performing several WRF simulations using the Kain-Fritsch convection parameterization and a diagnostic cloud microphysical scheme. In the first set of simulations (control simulations), the WRF model is used to simulate two scenarios of deep convection over the continental U.S. during two summer periods at 36 km grid resolution. In the second set, these simulations are repeated after incorporating a diagnostic cloud microphysical scheme to study the impact of including cloud microphysical processes. Finally, in the third set, aerosol concentrations simulated by the CMAQ modeling system are supplied to the embedded cloud microphysical

  5. New Concepts for Refinement of Cumulus Parameterization in GCM's the Arakawa-Schubert Framework

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Walker, G. K.; Lau, William (Technical Monitor)

    2002-01-01

    Several state-of-the-art models, including the one employed in this study, use the Arakawa-Schubert framework for moist convection and the Sundqvist formulation of stratiform clouds for moist physics, in-cloud condensation, and precipitation. Despite a variety of cloud parameterization methodologies developed by several modelers, including the authors, most of the parameterized cloud models have similar deficiencies. These consist of: (a) not enough shallow clouds; (b) too many deep clouds; (c) several layers of clouds in a vertically discretized model as opposed to only a few levels of observed clouds; and (d) a higher than normal incidence of double ITCZ (Inter-tropical Convergence Zone). Even after several upgrades, consisting of sophisticated cloud microphysics and sub-grid scale orographic precipitation, to the Data Assimilation Office (DAO)'s atmospheric model (called GEOS-2 GCM) at two different resolutions, we found that the above deficiencies remained. The two empirical solutions often used to counter these deficiencies consist of (a) diffusion of moisture and heat within the lower troposphere to artificially force shallow clouds, and (b) arbitrarily invoking evaporation of in-cloud water for low-level clouds. Even though helpful, these implementations lack a strong physical rationale. Our research shows that two missing physical conditions can ameliorate these cloud-parameterization deficiencies. First, requiring an ascending cloud airmass to be saturated at its starting point will not only make the cloud instantly buoyant throughout its ascent, but also provide the essential work function (buoyancy energy) that would promote more shallow clouds. Second, we argue that entraining clouds that are unstable to a finite vertical displacement, even if neutrally buoyant in their ambient environment, must continue to rise and entrain, causing evaporation of in-cloud water. These concepts have not been invoked in any of the cloud

  6. Modes of mantle convection and the removal of heat from the earth's interior

    NASA Technical Reports Server (NTRS)

    Spohn, T.; Schubert, G.

    1982-01-01

    Thermal histories for two-layer and whole-mantle convection models are calculated and presented, based on a parameterization of convective heat transport. The model is composed of two concentric spherical shells surrounding a spherical core. The models were constrained to yield the observed present-day surface heat flow and mantle viscosity, in order to determine parameters. These parameters were varied to determine their effects on the results. Studies show that whole-mantle convection removes three times more primordial heat from the earth's interior and six times more from the core than does two-layer convection (in 4.5 billion years). Mantle volumetric heat generation rates for both models are comparable to that of a potassium-depleted chondrite, and thus surface heat-flux balance does not require potassium in the core. Whole and two-layer mantle convection differences are primarily due to lower mantle thermal insulation and the lower heat removal efficiency of the upper mantle as compared with that of the whole mantle.
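The parameterized-convection approach replaces a full convection calculation with an ordinary differential equation for mean mantle temperature, with heat loss following a Nu ~ Ra^beta scaling. A nondimensional toy version (all constants are placeholders, not the Spohn & Schubert values):

```python
import math

def thermal_history(T0=1.0, beta=0.3, h0=0.5, lam=1.0, n_steps=1000, t_end=5.0):
    """Nondimensional parameterized cooling: dT/dt = h(t) - T**(1+beta),
    where h(t) = h0*exp(-lam*t) is decaying radiogenic heating and the loss
    term mimics a Nu ~ Ra^beta convective heat-flux law (Ra rising with T
    through a temperature-dependent viscosity).  Explicit Euler integration."""
    dt = t_end / n_steps
    T, t = T0, 0.0
    out = [T]
    for _ in range(n_steps):
        T += dt * (h0 * math.exp(-lam * t) - T ** (1.0 + beta))
        t += dt
        out.append(T)
    return out
```

With heating weaker than the initial convective loss, the mantle cools monotonically toward a slowly decaying equilibrium, the qualitative behavior such thermal-history models produce.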

  7. CloudSat 2C-ICE product update with a new Ze parameterization in lidar-only region.

    PubMed

    Deng, Min; Mace, Gerald G; Wang, Zhien; Berry, Elizabeth

    2015-12-16

    The CloudSat 2C-ICE data product is derived from a synergetic ice cloud retrieval algorithm that takes as input a combination of CloudSat radar reflectivity (Ze) and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation lidar attenuated backscatter profiles. The algorithm uses a variational method for retrieving profiles of visible extinction coefficient, ice water content, and ice particle effective radius in ice or mixed-phase clouds. Because of the nature of the measurements and to maintain consistency in the algorithm numerics, we choose to parameterize (with appropriately large specification of uncertainty) Ze and lidar attenuated backscatter in the regions of a cirrus layer where only the lidar provides data and where only the radar provides data, respectively. To improve the Ze parameterization in the lidar-only region, the relations among Ze, extinction, and temperature have been more thoroughly investigated using Atmospheric Radiation Measurement long-term millimeter cloud radar and Raman lidar measurements. This Ze parameterization provides a first-order estimation of Ze as a function of extinction and temperature in the lidar-only regions of cirrus layers. The effects of this new parameterization have been evaluated for consistency using radiation closure methods where the radiative fluxes derived from retrieved cirrus profiles compare favorably with Clouds and the Earth's Radiant Energy System measurements. Results will be made publicly available for the entire CloudSat record (since 2006) in the most recent product release known as R05.
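A parameterization of Ze as a function of extinction and temperature is typically a power law with temperature-dependent coefficients. A sketch of that functional form, with placeholder coefficients rather than the fitted 2C-ICE values:

```python
import math

def ze_from_extinction(ext, temp_c, a0=-1.0, a1=0.05, b=1.5):
    """Toy power-law of the form log10(Ze) = a(T) + b*log10(extinction),
    with a temperature-dependent intercept a(T) = a0 + a1*T.
    ext: visible extinction coefficient (1/km); temp_c: temperature (deg C).
    Returns Ze in mm^6/m^3.  Coefficients are illustrative placeholders."""
    log_ze = (a0 + a1 * temp_c) + b * math.log10(ext)
    return 10.0 ** log_ze
```

Such a first-order estimate increases with extinction and, via the intercept, with temperature, which is the qualitative behavior the abstract describes for the lidar-only region.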

  8. Constructing IGA-suitable planar parameterization from complex CAD boundary by domain partition and global/local optimization

    NASA Astrophysics Data System (ADS)

    Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.

    2018-01-01

    In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations including B\\'ezier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information generation of quadrilateral decomposition, the optimal placement of interior B\\'ezier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch-partition with high quality. Finally, after the imposition of C1=G1-continuity constraints on the interface of neighboring B\\'ezier patches with respect to each quad in the quadrangulation, the high-quality B\\'ezier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples which are compared to results obtained by the skeleton-based parameterization approach.

  9. A Heuristic Parameterization for the Integrated Vertical Overlap of Cumulus and Stratus

    NASA Astrophysics Data System (ADS)

    Park, Sungsu

    2017-10-01

    The author developed a heuristic parameterization to handle the contrasting vertical overlap structures of cumulus and stratus in an integrated way. The parameterization assumes that cumulus is maximum-randomly overlapped with adjacent cumulus; stratus is maximum-randomly overlapped with adjacent stratus; and radiation and precipitation areas at each model interface are grouped into four categories, that is, convective, stratiform, mixed, and clear areas. For simplicity, thermodynamic scalars within individual portions of cloud, radiation, and precipitation areas are assumed to be internally homogeneous. The parameterization was implemented into the Seoul National University Atmosphere Model version 0 (SAM0) in an offline mode and tested over the globe. The offline control simulation reasonably reproduces the online surface precipitation flux and longwave cloud radiative forcing (LWCF). Although the cumulus fraction is much smaller than the stratus fraction, cumulus dominantly contributes to precipitation production in the tropics. For radiation, however, stratus is dominant. Compared with the maximum overlap, the random overlap of stratus produces stronger LWCF and, surprisingly, more precipitation flux due to less evaporation of convective precipitation. Compared with the maximum overlap, the random overlap of cumulus simulates stronger LWCF and weaker precipitation flux. Compared with the control simulation with separate cumulus and stratus, the simulation with a single-merged cloud substantially enhances the LWCF in the tropical deep convection and midlatitude storm track regions. The process-splitting treatment of convective and stratiform precipitation with an independent precipitation approximation (IPA) simulates weaker surface precipitation flux than the control simulation in the tropical region.
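The maximum-random overlap bookkeeping described above can be made concrete with the classical total-cloud-cover recursion, in which adjacent cloudy layers overlap maximally and layers separated by clear air overlap randomly. This is the generic Geleyn-Hollingsworth form, not the paper's four-category SAM0 scheme:

```python
def total_cloud_fraction_max_random(fracs, eps=1e-12):
    """Total cloud cover of a column under maximum-random overlap:
    clear-sky fraction is multiplied layer by layer by
    (1 - max(c_k, c_{k-1})) / (1 - c_{k-1}), with c_0 = 0.
    fracs: layer cloud fractions ordered top-down (or bottom-up)."""
    clear = 1.0
    prev = 0.0
    for c in fracs:
        clear *= (1.0 - max(c, prev)) / max(1.0 - prev, eps)
        prev = c
    return 1.0 - clear
```

Two adjacent layers of fractions 0.5 and 0.3 give a total cover of 0.5 (maximum overlap), while the same layers separated by a clear layer give 0.65 (random overlap).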

  10. Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.

    PubMed

    Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

    The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R2) and the regression slope between simulated and measured annualized loads across all site years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
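The Nash-Sutcliffe efficiency used as the performance measure here is one minus the ratio of the model error variance to the variance of the observations; NSE = 1 is a perfect fit and NSE = 0 means the model is no better than predicting the observed mean:

```python
def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - sum((sim-obs)^2) / sum((obs-mean_obs)^2).
    1.0 is perfect; 0.0 means the model matches only the mean of the
    observations; negative values mean the mean is a better predictor."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var
```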

  11. Parameterization of aerosol scavenging due to atmospheric ionization under varying relative humidity

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Tinsley, Brian A.

    2017-05-01

    Simulations and parameterizations of the modulation of aerosol scavenging by electric charges on particles and droplets have been made for droplets of 3 μm radius and a wide range of particle radii, for different relative humidities. For droplets and particles with opposite-sign charges, the attractive Coulomb force increases the collision rate coefficients above the values due to other forces. With same-sign charges, the repulsive Coulomb force decreases the rate coefficients, and the short-range attractive image forces become important. The phoretic forces are attractive for relative humidity less than 100% and repulsive for relative humidity greater than 100%, and have an increasing overall effect for particle radii up to about 1 μm. There is an analytic solution for the rate coefficients if only inverse-square forces are present, but due to the presence of image forces and, for larger particles, the intercept, weight, and the flow around the particle affecting the droplet trajectory, the simulated results usually depart far from the analytic solution. We give simple empirical parameterization formulas for some cases and more complex parameterizations for more exact fits to the simulated results. The results can be used in cloud models with growing droplets, as in updrafts, as well as with evaporating droplets in downdrafts. Little scavenging of uncharged ice-forming nuclei is considered to occur in updrafts, but with charged ice-forming nuclei it is possible for scavenging in updrafts in cold clouds to produce contact ice nucleation. Scavenging in updrafts below the freezing level produces immersion nuclei that promote enhanced freezing as droplets rise above that level.
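The inverse-square-only limit mentioned above has a closed form in the continuum (diffusive) regime: the neutral collision-rate coefficient is multiplied by a Fuchs factor that depends only on the electrostatic energy at contact relative to kT. This textbook factor is a baseline, not the authors' full parameterization with image and phoretic forces:

```python
import math

def coulomb_correction(n1, n2, r1, r2, T=273.15):
    """Continuum-limit Fuchs factor for two particles carrying n1 and n2
    elementary charges: W = kappa / (exp(kappa) - 1), with
    kappa = n1*n2*e^2 / (4*pi*eps0*(r1+r2)*k*T).
    kappa < 0 (opposite signs) enhances the rate (W > 1); kappa > 0
    suppresses it (W < 1).  r1, r2 in meters, T in kelvin."""
    E = 1.602176634e-19      # elementary charge (C)
    EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
    KB = 1.380649e-23        # Boltzmann constant (J/K)
    kappa = n1 * n2 * E * E / (4.0 * math.pi * EPS0 * (r1 + r2) * KB * T)
    if abs(kappa) < 1e-12:
        return 1.0  # neutral limit
    return kappa / math.expm1(kappa)
```

For a 3 μm droplet and a 0.1 μm particle each carrying ten elementary charges, the factor is roughly 2 for attraction and roughly 0.3 for repulsion, illustrating the sign asymmetry the abstract describes.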

  12. Applications of Quantum Theory of Atomic and Molecular Scattering to Problems in Hypersonic Flow

    NASA Technical Reports Server (NTRS)

    Malik, F. Bary

    1995-01-01

    The general status of a grant to investigate the applications of quantum theory in atomic and molecular scattering problems in hypersonic flow is summarized. Abstracts of five articles and eleven full-length articles published or submitted for publication are included as attachments. The following topics are addressed in these articles: fragmentation of heavy ions (HZE particles); parameterization of absorption cross sections; light ion transport; emission of light fragments as an indicator of equilibrated populations; quantum mechanical, optical model methods for calculating cross sections for particle fragmentation by hydrogen; evaluation of NUCFRG2, the semi-empirical nuclear fragmentation database; investigation of the single- and double-ionization of He by proton and anti-proton collisions; Bose-Einstein condensation of nuclei; and a liquid drop model in HZE particle fragmentation by hydrogen.

  13. Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model

    EPA Science Inventory

    This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.

  14. Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Moroz, I.; Palmer, T.

    2015-12-01

    It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved subgrid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I
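The two perturbed-parameter variants differ only in when the random draw happens: once per ensemble member (fixed), or continuously during the forecast (stochastically varying). A schematic sketch of both, with a Gaussian distribution and AR(1) persistence as placeholder choices rather than EPPES output:

```python
import math
import random

def fixed_perturbed(n_members, mu=1.0, sigma=0.2, seed=0):
    """Fixed perturbed-parameter scheme: each ensemble member draws its
    parameter value once and holds it for the whole forecast."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n_members)]

def stochastic_varying(n_steps, mu=1.0, sigma=0.2, phi=0.95, seed=0):
    """Stochastically varying scheme for one member: the parameter follows
    an AR(1) process about mu, so it changes from time step to time step
    while keeping stationary mean mu and standard deviation sigma."""
    rng = random.Random(seed)
    p = mu
    path = []
    for _ in range(n_steps):
        p = mu + phi * (p - mu) + sigma * math.sqrt(1.0 - phi ** 2) * rng.gauss(0.0, 1.0)
        path.append(p)
    return path
```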

  15. The impact of lake and reservoir parameterization on global streamflow simulation.

    PubMed

    Zajac, Zuzanna; Revilla-Romero, Beatriz; Salamon, Peter; Burek, Peter; Hirpa, Feyera A; Beck, Hylke

    2017-05-01

    Lakes and reservoirs affect the timing and magnitude of streamflow, and are therefore essential hydrological model components, especially in the context of global flood forecasting. However, the parameterization of lake and reservoir routines on a global scale is subject to considerable uncertainty due to lack of information on lake hydrographic characteristics and reservoir operating rules. In this study we estimated the effect of lakes and reservoirs on global daily streamflow simulations of a spatially-distributed LISFLOOD hydrological model. We applied state-of-the-art global sensitivity and uncertainty analyses for selected catchments to examine the effect of uncertain lake and reservoir parameterization on model performance. Streamflow observations from 390 catchments around the globe and multiple performance measures were used to assess model performance. Results indicate a considerable geographical variability in the lake and reservoir effects on the streamflow simulation. Nash-Sutcliffe Efficiency (NSE) and Kling-Gupta Efficiency (KGE) metrics improved for 65% and 38% of catchments respectively, with median skill score values of 0.16 and 0.2 while scores deteriorated for 28% and 52% of the catchments, with median values -0.09 and -0.16, respectively. The effect of reservoirs on extreme high flows was substantial and widespread in the global domain, while the effect of lakes was spatially limited to a few catchments. As indicated by global sensitivity analysis, parameter uncertainty substantially affected uncertainty of model performance. Reservoir parameters often contributed to this uncertainty, although the effect varied widely among catchments. The effect of reservoir parameters on model performance diminished with distance downstream of reservoirs in favor of other parameters, notably groundwater-related parameters and channel Manning's roughness coefficient. 
This study underscores the importance of accounting for lakes and, especially, reservoirs and
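The Kling-Gupta efficiency reported alongside NSE decomposes model skill into correlation, variability ratio, and bias ratio. A direct implementation of the 2009 formulation:

```python
import math

def kling_gupta(sim, obs):
    """Kling-Gupta efficiency (Gupta et al. 2009):
    KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), with
    r = linear correlation, alpha = std(sim)/std(obs), beta = mean(sim)/mean(obs).
    KGE = 1 is a perfect fit."""
    n = len(obs)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = math.sqrt(sum((x - ms) ** 2 for x in sim) / n)
    so = math.sqrt(sum((x - mo) ** 2 for x in obs) / n)
    r = sum((s - ms) * (o - mo) for s, o in zip(sim, obs)) / (n * ss * so)
    alpha, beta = ss / so, ms / mo
    return 1.0 - math.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)
```

A simulation that doubles every observation keeps perfect correlation but has alpha = beta = 2, giving KGE = 1 - sqrt(2), which illustrates why KGE and NSE can rank catchments differently.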

  16. Developing a new parameterization framework for the heterogeneous ice nucleation of atmospheric aerosol particles

    NASA Astrophysics Data System (ADS)

    Ullrich, Romy; Hiranuma, Naruki; Hoose, Corinna; Möhler, Ottmar; Niemand, Monika; Steinke, Isabelle; Wagner, Robert

    2014-05-01

    Aerosols of different nature induce microphysical processes of importance for the Earth's atmosphere. They not only directly affect the radiative budget; more importantly, they essentially influence the formation and life cycles of clouds. Hence, aerosols and their ice nucleating ability are a fundamental input parameter for weather and climate models. During the previous years, the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber was used to extensively measure, under nearly realistic conditions, the ice nucleating properties of different aerosols. Numerous experiments were performed with a broad variety of aerosol types and under different freezing conditions. A reanalysis of these experiments offers the opportunity to develop a uniform parameterization framework of ice formation for many atmospherically relevant aerosols in a broad temperature and humidity range. The analysis includes both deposition nucleation and immersion freezing. The aim of this study is to develop this comprehensive parameterization for heterogeneous ice formation mainly by using the ice nucleation active site (INAS) approach. Niemand et al. (2012) already developed a temperature-dependent parameterization of the INAS density for immersion freezing on desert dust particles. In addition to a reanalysis of the ice nucleation behaviour of desert dust (Niemand et al. (2012)), volcanic ash (Steinke et al. (2010)) and organic particles (Wagner et al. (2010, 2011)), this contribution will also show new results for the immersion freezing and deposition nucleation of soot aerosols. The next step will be the implementation of the parameterizations into the COSMO-ART model in order to test and demonstrate the usability of the framework. Hoose, C. and Möhler, O. (2012) Atmos
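The INAS approach ties the frozen fraction of a particle population to a surface-site density n_s(T) via f = 1 - exp(-n_s*A). A sketch using the exponential functional form of Niemand et al. (2012); the coefficients below are quoted from memory and should be checked against the paper before use:

```python
import math

def inas_density_dust(temp_c):
    """Ice-nucleation-active-site (INAS) density n_s(T) for desert dust,
    exponential form n_s = exp(a*T + b) in m^-2 with T in deg C.
    Coefficients are illustrative values of the Niemand et al. (2012) type."""
    return math.exp(-0.517 * temp_c + 8.934)

def frozen_fraction(temp_c, surface_area_m2):
    """Fraction of particles (each with surface area A) that nucleate ice,
    f = 1 - exp(-n_s(T) * A): the standard singular-description link
    between INAS density and the observed ice fraction."""
    return 1.0 - math.exp(-inas_density_dust(temp_c) * surface_area_m2)
```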

  17. Toward Improved Parameterization of a Meso-Scale Hydrologic Model in a Discontinuous Permafrost, Boreal Forest Ecosystem

    NASA Astrophysics Data System (ADS)

    Endalamaw, A. M.; Bolton, W. R.; Young, J. M.; Morton, D.; Hinzman, L. D.

    2013-12-01

    The sub-arctic environment can be characterized as being located in the zone of discontinuous permafrost. Although the distribution of permafrost is site specific, it dominates many of the hydrologic and ecologic responses and functions including vegetation distribution, stream flow, soil moisture, and storage processes. In this region, the boundaries that separate the major ecosystem types (deciduous dominated and coniferous dominated ecosystems) as well as permafrost (permafrost versus non-permafrost) occur over very short spatial scales. One of the goals of this research project is to improve parameterizations of meso-scale hydrologic models in this environment. Using the Caribou-Poker Creeks Research Watershed (CPCRW) as the test area, simulations of the headwater catchments of varying permafrost and vegetation distributions were performed. CPCRW, located approximately 50 km northeast of Fairbanks, Alaska, is located within the zone of discontinuous permafrost and the boreal forest ecosystem. The Variable Infiltration Capacity (VIC) model was selected as the hydrologic model. In CPCRW, permafrost and coniferous vegetation are generally found on north facing slopes and valley bottoms. Permafrost-free soils and deciduous vegetation are generally found on south facing slopes. In this study, hydrologic simulations using fine scale vegetation and soil parameterizations - based upon slope and aspect analysis at a 50 meter resolution - were conducted. Simulations were also conducted using downscaled vegetation from the Scenarios Network for Alaska and Arctic Planning (SNAP) (1 km resolution) and soil data sets from the Food and Agriculture Organization (FAO) (approximately 9 km resolution). Preliminary simulation results show that soil and vegetation parameterizations based upon fine scale slope/aspect analysis increase the R2 values (0.5 to 0.65 in the high permafrost (53%) basin; 0.43 to 0.56 in the low permafrost (2%) basin) relative to parameterization based on

  18. Development of a water-jet assisted laser paint removal process

    NASA Astrophysics Data System (ADS)

    Madhukar, Yuvraj K.; Mullick, Suvradip; Nath, Ashish K.

    2013-12-01

    The laser paint removal process usually leaves behind traces of combustion products, i.e., ash, on the surface. An additional post-processing step, such as light brushing or wiping by some mechanical means, is required to remove the residual ash. In order to strip the paint completely from the surface in a single step, a water-jet assisted laser paint removal process has been investigated. The 1.07 μm wavelength of Yb-fiber laser radiation has low absorption in water; therefore, a high power fiber laser was used in the experiment. The laser beam was delivered onto the paint surface along with a water jet to remove the paint and residual ash effectively. The specific energy, defined as the laser energy required to remove a unit volume of paint, was found to be marginally more than that for the gas-jet assisted laser paint removal process. However, complete paint removal was achieved only with the water-jet assist. The relatively higher specific energy in the case of water-jet assist is mainly due to the scattering of the laser beam in the turbulent flow of the water jet.
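The specific energy defined above is just laser energy divided by removed paint volume. For a single scanned stripe it can be estimated from power, scan speed, and stripe geometry; the geometry below is an illustrative simplification, not necessarily the paper's exact definition:

```python
def specific_energy(power_w, scan_speed_mm_s, paint_thickness_mm, track_width_mm):
    """Specific energy of laser paint removal in J/mm^3 for one stripe:
    energy per unit length is P/v (J/mm), removed volume per unit length is
    thickness * width (mm^2), so E_spec = P / (v * t * w)."""
    return power_w / (scan_speed_mm_s * paint_thickness_mm * track_width_mm)
```

For example, 1 kW scanned at 10 mm/s over a 2 mm wide, 0.1 mm thick paint layer gives 500 J/mm^3.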

  19. Impact of Snow Grain Shape and Internal Mixing with Black Carbon Aerosol on Snow Optical Properties for use in Climate Models

    NASA Astrophysics Data System (ADS)

    He, C.; Liou, K. N.; Takano, Y.; Yang, P.; Li, Q.; Chen, F.

    2017-12-01

    A set of parameterizations is developed for spectral single-scattering properties of clean and black carbon (BC)-contaminated snow based on geometric-optic surface-wave (GOS) computations, which explicitly resolves BC-snow internal mixing and various snow grain shapes. GOS calculations show that, compared with nonspherical grains, volume-equivalent snow spheres show up to 20% larger asymmetry factors and hence stronger forward scattering, particularly at wavelengths <1 μm. In contrast, snow grain sizes have a rather small impact on the asymmetry factor at wavelengths <1 μm, whereas size effects are important at longer wavelengths. The snow asymmetry factor is parameterized as a function of effective size, aspect ratio, and shape factor, and shows excellent agreement with GOS calculations. According to GOS calculations, the single-scattering coalbedo of pure snow is predominantly affected by grain sizes, rather than grain shapes, with higher values for larger grains. The snow single-scattering coalbedo is parameterized in terms of the effective size that combines shape and size effects, with an accuracy of >99%. Based on GOS calculations, BC-snow internal mixing enhances the snow single-scattering coalbedo at wavelengths <1 μm, but it does not alter the snow asymmetry factor. The BC-induced enhancement ratio of snow single-scattering coalbedo, independent of snow grain size and shape, is parameterized as a function of BC concentration with an accuracy of >99%. Overall, in addition to snow grain size, both BC-snow internal mixing and snow grain shape play critical roles in quantifying BC effects on snow optical properties. The present parameterizations can be conveniently applied to snow, land surface, and climate models including snowpack radiative transfer processes.
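The asymmetry-factor parameterization has the structure g = f(effective size, aspect ratio, shape factor). A toy functional form with that signature; the saturating-size dependence and all coefficients are placeholders, not He et al.'s fitted parameterization:

```python
import math

def snow_asymmetry(r_eff_um, aspect_ratio=1.0, shape_factor=0.5,
                   g_min=0.75, g_max=0.89, r_scale=100.0):
    """Illustrative snow asymmetry factor: saturating growth with effective
    grain size, damped by nonsphericity (aspect ratio far from 1 and a
    smaller shape factor both lower g).  Bounded between g_min and g_max."""
    size_term = 1.0 - math.exp(-r_eff_um / r_scale)
    nonspher = shape_factor / (1.0 + abs(math.log(aspect_ratio)))
    return g_min + (g_max - g_min) * size_term * nonspher
```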

  20. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
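The fASKS shortcut is to perform the scatter-kernel convolution in Fourier space under a linearity approximation. A sketch with a single, spatially invariant kernel; real ASKS/fASKS use thickness-adaptive kernel groups, so the fixed kernel here is a placeholder:

```python
import numpy as np

def fasks_scatter(projection, kernel):
    """Fourier-space scatter convolution: with one spatially invariant
    kernel, scatter = IFFT(FFT(projection) * FFT(kernel)).  Both arrays are
    zero-padded to the full linear-convolution size to avoid circular
    wrap-around, then the result is cropped back to 'same' size."""
    ny, nx = projection.shape
    ky, kx = kernel.shape
    sy, sx = ny + ky - 1, nx + kx - 1           # linear-convolution size
    P = np.fft.rfft2(projection, s=(sy, sx))
    K = np.fft.rfft2(kernel, s=(sy, sx))
    full = np.fft.irfft2(P * K, s=(sy, sx))
    y0, x0 = ky // 2, kx // 2                   # crop centered on kernel origin
    return full[y0:y0 + ny, x0:x0 + nx]
```

Replacing the spatial-domain sum with FFTs is what makes fASKS fast; the adaptive (ASKS) variant pays the full spatial-convolution cost to let the kernel shape vary with local thickness.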

  1. The Parameterization of PBL height with Helicity and preliminary Application in Tropical Cyclone Prediction

    NASA Astrophysics Data System (ADS)

    Ma, Leiming

    2015-04-01

    Planetary Boundary Layer (PBL) plays an important role in transferring the energy and moisture from ocean to tropical cyclone (TC). Thus, the accuracy of PBL parameterization determines the performance of numerical models on TC prediction to a large extent. Among various components of PBL parameterization, the definition of the PBL height is the first issue to be addressed, as it determines the vertical scale of the PBL and the associated turbulence processes at different scales. However, up to now there is no consensus in the TC research community on how to define the height of the PBL. The PBL heights represented by current numerical models usually exhibit significant differences from TC observations (e.g., Zhang et al., 2011; Storm et al., 2008), leading to the rapid growth of error in TC prediction. In an effort to narrow the gap between PBL parameterization and reality, this study presents a new parameterization scheme for the definition of PBL height. Instead of using the traditional definition of PBL height based on the Richardson number, which recent observational studies have shown to be inappropriate for the strongly sheared structure of the TC PBL, the new scheme employs a dynamical definition based on the concept of helicity. In this sense the spiral structures associated with the inflow layer and rolls are expected to be represented in the PBL parameterization. By defining the PBL height at each grid point, the new scheme also avoids assuming the symmetric inflow layer that is usually adopted in observational studies. The new scheme is applied to the Yonsei University (YSU) scheme in the Weather Research and Forecasting (WRF) model of the US National Center for Atmospheric Research (NCAR) and verified with numerical experiments on TC Morakot (2009), which brought torrential rainfall and disaster to Taiwan and mainland China during landfall. The Morakot case is selected in this study to examine the performance of the new scheme in representing various structures of PBL
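The traditional Richardson-number definition the abstract argues against diagnoses the PBL top as the lowest level where a bulk Richardson number exceeds a critical value. A sketch of that conventional baseline (the helicity-based alternative is not specified in the abstract, so only the baseline is shown):

```python
def pbl_height_bulk_richardson(z, theta_v, u, v, ric=0.25):
    """Conventional bulk-Richardson PBL height: the lowest level where
    Rib(z) = g * (theta_v(z) - theta_v(sfc)) * z / (theta_v(sfc) * (u^2 + v^2))
    first reaches the critical value ric (0.25 is a common choice).
    Inputs are profiles ordered from the surface upward; z in m,
    theta_v (virtual potential temperature) in K, u/v in m/s."""
    G = 9.81
    th0 = theta_v[0]
    for k in range(1, len(z)):
        wind2 = u[k] ** 2 + v[k] ** 2
        rib = G * (theta_v[k] - th0) * z[k] / (th0 * max(wind2, 1e-6))
        if rib >= ric:
            return z[k]
    return z[-1]  # no critical level found below the profile top
```

In the strongly sheared TC boundary layer, wind^2 in the denominator stays large, which is one reason this diagnostic misplaces the PBL top there.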

  2. Comparison of different objective functions for parameterization of simple respiration models

    Treesearch

    M.T. van Wijk; B. van Putten; D.Y. Hollinger; A.D. Richardson

    2008-01-01

    The eddy covariance measurements of carbon dioxide fluxes collected around the world offer a rich source for detailed data analysis. Simple, aggregated models are attractive tools for gap filling, budget calculation, and upscaling in space and time. Key in the application of these models is their parameterization and a robust estimate of the uncertainty and reliability...

  3. Dynamically Consistent Parameterization of Mesoscale Eddies

    NASA Astrophysics Data System (ADS)

    Berloff, P. S.

    2016-12-01

    This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with explicitly resolved vigorous eddy field and in the non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces eastward jet extension of the western boundary currents and its adjacent recirculation zones. The parameterization locally approximates transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced eddy forcing exerted on the large-scale flow. We find that spatial pattern and amplitude of each footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. Thus, the assumed ensemble of plunger solutions can be viewed as a simple model for the cumulative effect of the stochastic eddy forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.

  4. Data-driven parameterization of the generalized Langevin equation

    DOE PAGES

    Lei, Huan; Baker, Nathan A.; Li, Xiantao

    2016-11-29

    We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grained variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
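The memoryless embedding can be illustrated for the lowest-order rational approximation, a single exponential kernel K(t) = (g/tau)*exp(-t/tau): one auxiliary variable replaces the history integral. A deterministic sketch (free particle, noise omitted; this is the standard textbook embedding, not the authors' general construction):

```python
import math

def simulate_extended(g=1.0, tau=0.5, m=1.0, v0=1.0, dt=1e-3, n=1000):
    """Markovian embedding of the GLE with K(t) = (g/tau)*exp(-t/tau):
    the auxiliary variable z = -(1/m) * history integral obeys
        v' = z,   z' = -z/tau - g/(m*tau) * v,
    so no memory integral is needed.  Explicit Euler; returns v(n*dt)."""
    v, z = v0, 0.0
    for _ in range(n):
        v, z = v + dt * z, z + dt * (-z / tau - g / (m * tau) * v)
    return v

def simulate_gle(g=1.0, tau=0.5, m=1.0, v0=1.0, dt=1e-3, n=1000):
    """Direct integration of m*v'(t) = -int_0^t K(t-s) v(s) ds with the
    same exponential kernel, evaluating the memory integral explicitly
    (O(n^2) cost), for comparison with the embedded form."""
    vs = [v0]
    for k in range(n):
        t = k * dt
        I = 0.0
        for j, vj in enumerate(vs):
            I += (g / tau) * math.exp(-(t - j * dt) / tau) * vj * dt
        vs.append(vs[-1] - dt * I / m)
    return vs[-1]
```

Both integrations agree with the analytic solution v(t) = exp(-t)(cos t + sin t) for these parameters, while the embedded form avoids the quadratic-cost history sum; higher-order rational kernels add more auxiliary variables in the same way.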

  5. Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5

    DOE PAGES

    Wang, Yong; Zhang, Guang J.

    2016-09-29

    In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m² in the standard CAM5 to -48.86 W/m², close to -47.16 W/m² in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.

  7. Intercomparison of Martian Lower Atmosphere Simulated Using Different Planetary Boundary Layer Parameterization Schemes

    NASA Technical Reports Server (NTRS)

    Natarajan, Murali; Fairlie, T. Duncan; Dwyer Cianciolo, Alicia; Smith, Michael D.

    2015-01-01

    We use the mesoscale modeling capability of the Mars Weather Research and Forecasting (MarsWRF) model to study the sensitivity of the simulated Martian lower atmosphere to differences in the parameterization of the planetary boundary layer (PBL). Characterization of the Martian atmosphere and realistic representation of processes such as mixing of tracers like dust depend on how well the model reproduces the evolution of the PBL structure. MarsWRF is based on the NCAR WRF model and retains some of the PBL schemes available in the Earth version. Published studies have examined the performance of different PBL schemes in NCAR WRF with the help of observations. Currently such assessments are not feasible for Martian atmospheric models due to the lack of observations. It is nevertheless of interest to study the sensitivity of the model to PBL parameterization. Typically, for standard Martian atmospheric simulations, we have used the Medium Range Forecast (MRF) PBL scheme, which considers a correction term to the vertical gradients to incorporate nonlocal effects. For this study, we have also used two other parameterizations, a non-local closure scheme called the Yonsei University (YSU) PBL scheme and a turbulent kinetic energy closure scheme called the Mellor-Yamada-Janjic (MYJ) PBL scheme. We will present intercomparisons of the near surface temperature profiles, boundary layer heights, and wind obtained from the different simulations. We plan to use available temperature observations from the Mini-TES instrument onboard the rovers Spirit and Opportunity in evaluating the model results.

  8. A Comparative Study of Nucleation Parameterizations: 2. Three-Dimensional Model Application and Evaluation

    EPA Science Inventory

    Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...

  9. Uncertainty in Modeling Dust Mass Balance and Radiative Forcing from Size Parameterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Chun; Chen, Siyu; Leung, Lai-Yung R.

    2013-11-05

    This study examines the uncertainties in simulating mass balance and radiative forcing of mineral dust due to biases in the aerosol size parameterization. Simulations are conducted quasi-globally (180°W-180°E and 60°S-70°N) using the WRF-Chem model with three different approaches to represent aerosol size distribution (8-bin, 4-bin, and 3-mode). The biases in the 3-mode or 4-bin approaches against a relatively more accurate 8-bin approach in simulating dust mass balance and radiative forcing are identified. Compared to the 8-bin approach, the 4-bin approach simulates similar but coarser size distributions of dust particles in the atmosphere, while the 3-mode approach retains more fine dust particles but fewer coarse dust particles due to its prescribed σg of each mode. Although the 3-mode approach yields up to 10 days longer dust mass lifetime over the remote oceanic regions than the 8-bin approach, the three size approaches produce similar dust mass lifetime (3.2 days to 3.5 days) on quasi-global average, reflecting that the global dust mass lifetime is mainly determined by the dust mass lifetime near the dust source regions. With the same global dust emission (~6000 Tg yr-1), the 8-bin approach produces a dust mass loading of 39 Tg, while the 4-bin and 3-mode approaches produce 3% (40.2 Tg) and 25% (49.1 Tg) higher dust mass loading, respectively. The difference in dust mass loading between the 8-bin approach and the 4-bin or 3-mode approaches has large spatial variations, with generally smaller relative difference (<10%) near the surface over the dust source regions. The three size approaches also result in significantly different dry and wet deposition fluxes and number concentrations of dust. The difference in dust aerosol optical depth (AOD) (a factor of 3) among the three size approaches is much larger than their difference (25%) in dust mass loading. Compared to the 8-bin approach, the 4-bin approach yields stronger dust absorptivity, while the 3

  10. Self-calibration of photometric redshift scatter in weak-lensing surveys

    DOE PAGES

    Zhang, Pengjie; Pen, Ue-Li; Bernstein, Gary

    2010-06-11

    Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose does not rely on cosmological priors or parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurement. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. As a result, galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at several-σ level, but is unlikely to completely invalidate the self-calibration technique.

  11. Potential and costs of carbon dioxide removal by enhanced weathering of rocks

    NASA Astrophysics Data System (ADS)

    Strefler, Jessica; Amann, Thorben; Bauer, Nico; Kriegler, Elmar; Hartmann, Jens

    2018-03-01

    The chemical weathering of rocks currently absorbs about 1.1 Gt CO2 a-1, which is mainly stored as bicarbonate in the ocean. An enhancement of this slow natural process could remove substantial amounts of CO2 from the atmosphere, offsetting some unavoidable anthropogenic emissions in order to comply with the Paris Agreement, while at the same time it may decrease ocean acidification. We provide the first comprehensive assessment of economic costs, energy requirements, technical parameterization, and global and regional carbon removal potential. The crucial parameters defining this potential are the grain size and weathering rates. The main uncertainties about the potential relate to weathering rates and the rock mass that can be integrated into the soil. The discussed results do not specifically address the enhancement of weathering through microbial processes, feedback of geogenic nutrient release, and bioturbation. We assess not only dunite rock, predominantly bearing olivine (in the form of forsterite), the mineral previously proposed as best suited for carbon removal, but also basaltic rock, in order to minimize potential negative side effects. Our results show that enhanced weathering is an option for carbon dioxide removal that could be competitive already at 60 US$ t-1 CO2 removed for dunite, but only at 200 US$ t-1 CO2 removed for basalt. The potential carbon removal on cropland areas could be as large as 95 Gt CO2 a-1 for dunite and 4.9 Gt CO2 a-1 for basalt. The best suited locations are warm and humid areas, particularly in India, Brazil, South-East Asia and China, where almost 75% of the global potential can be realized. This work presents a techno-economic assessment framework, which also allows for the incorporation of further processes.

  12. Sensitivity of single column model simulations of Arctic springtime clouds to different cloud cover and mixed phase cloud parameterizations

    NASA Astrophysics Data System (ADS)

    Zhang, Junhua; Lohmann, Ulrike

    2003-08-01

    The single column model of the Canadian Centre for Climate Modeling and Analysis (CCCma) climate model is used to simulate Arctic spring cloud properties observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. The model is driven by the rawinsonde observations constrained European Center for Medium-Range Weather Forecasts (ECMWF) reanalysis data. Five cloud parameterizations, including three statistical and two explicit schemes, are compared and the sensitivity to mixed phase cloud parameterizations is studied. Using the original mixed phase cloud parameterization of the model, the statistical cloud schemes produce more cloud cover, cloud water, and precipitation than the explicit schemes and in general agree better with observations. The mixed phase cloud parameterization from ECMWF decreases the initial saturation specific humidity threshold of cloud formation. This improves the simulated cloud cover in the explicit schemes and reduces the difference between the different cloud schemes. On the other hand, because the ECMWF mixed phase cloud scheme does not consider the Bergeron-Findeisen process, less ice crystals are formed. This leads to a higher liquid water path and less precipitation than what was observed.

  13. Radar and microphysical characteristics of convective storms simulated from a numerical model using a new microphysical parameterization

    NASA Technical Reports Server (NTRS)

    Ferrier, Brad S.; Tao, Wei-Kuo; Simpson, Joanne

    1991-01-01

    The basic features of a new and improved bulk-microphysical parameterization capable of simulating the hydrometeor structure of convective systems in all types of large-scale environments (with minimal adjustment of coefficients) are studied. Reflectivities simulated from the model are compared with radar observations of an intense midlatitude convective system. Simulated reflectivities obtained with the new four-class ice scheme, including its parameterized rain distribution, are illustrated at 105 min. Preliminary results indicate that this new ice scheme works efficiently in simulating midlatitude continental storms.

  14. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
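
    A toy sketch of the distinction at a single parameter sample (generic NumPy code, not the authors' implementation; the matrix, basis, and weighting below are arbitrary illustrations): Galerkin enforces orthogonality of the residual to the trial subspace, while weighted LSPG minimizes the weighted residual norm directly, so its residual can never be larger in that norm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 40, 5                                        # full and reduced dimensions
A = rng.standard_normal((n, n)) + n * np.eye(n)     # a well-conditioned system matrix
b = rng.standard_normal(n)
Phi = np.linalg.qr(rng.standard_normal((n, k)))[0]  # orthonormal trial basis
W = np.diag(rng.uniform(0.5, 2.0, n))               # weighting defining the target norm

# Galerkin projection: enforce Phi^T (b - A Phi c) = 0
c_gal = np.linalg.solve(Phi.T @ A @ Phi, Phi.T @ b)

# Weighted LSPG: c = argmin || W (b - A Phi c) ||_2, a plain least-squares problem
c_lspg, *_ = np.linalg.lstsq(W @ A @ Phi, W @ b, rcond=None)

def res(c):
    """Weighted residual norm of a candidate reduced solution."""
    return np.linalg.norm(W @ (b - A @ Phi @ c))

assert res(c_lspg) <= res(c_gal) + 1e-12  # LSPG is optimal in the weighted norm
```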

  15. Rapid parameterization of small molecules using the Force Field Toolkit.

    PubMed

    Mayne, Christopher G; Saam, Jan; Schulten, Klaus; Tajkhorshid, Emad; Gumbart, James C

    2013-12-15

    The inability to rapidly generate accurate and robust parameters for novel chemical matter continues to severely limit the application of molecular dynamics simulations to many biological systems of interest, especially in fields such as drug discovery. Although the release of generalized versions of common classical force fields, for example, the General Amber Force Field and the CHARMM General Force Field, has provided guidelines for parameterization of small molecules, many technical challenges remain that have hampered their wide-scale extension. The Force Field Toolkit (ffTK), described herein, minimizes common barriers to ligand parameterization through algorithm and method development, automation of tedious and error-prone tasks, and graphical user interface design. Distributed as a VMD plugin, ffTK facilitates the traversal of a clear and organized workflow resulting in a complete set of CHARMM-compatible parameters. A variety of tools are provided to generate quantum mechanical target data, set up multidimensional optimization routines, and analyze parameter performance. Parameters developed for a small test set of molecules using ffTK were comparable to existing CGenFF parameters in their ability to reproduce experimentally measured values for pure-solvent properties (<15% error from experiment) and free energy of solvation (±0.5 kcal/mol from experiment). Copyright © 2013 Wiley Periodicals, Inc.

  16. Circuit Design Optimization Using Genetic Algorithm with Parameterized Uniform Crossover

    NASA Astrophysics Data System (ADS)

    Bao, Zhiguo; Watanabe, Takahiro

    Evolvable hardware (EHW) is a new research field concerning the use of Evolutionary Algorithms (EAs) to construct electronic systems. EHW refers in a narrow sense to the use of evolutionary mechanisms as the algorithmic drivers for system design, and in a general sense to the capability of the hardware system to develop and improve itself. The Genetic Algorithm (GA) is one typical EA. We propose optimal circuit design using a GA with parameterized uniform crossover (GApuc) and with a fitness function composed of circuit complexity, power, and signal delay. Parameterized uniform crossover is much more likely to distribute its disruptive trials in an unbiased manner over larger portions of the search space, and therefore has more exploratory power than one- and two-point crossover, giving more chances of finding better solutions. Its effectiveness is shown by experiments. From the results, we can see that the best elite fitness, the average fitness of the correct circuits, and the number of correct circuits of GApuc are better than those of the GA with one-point or two-point crossover. The best optimal circuits generated by GApuc are 10.18% and 6.08% better in evaluation value than those generated by the GA with one-point and two-point crossover, respectively.
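
    The crossover operator itself is simple to state: each gene of the offspring is taken from the second parent with a fixed probability p0, with p0 = 0.5 recovering plain uniform crossover. A minimal sketch (function and variable names are illustrative; the paper's fitness function over complexity, power, and delay is not reproduced here):

```python
import random

def parameterized_uniform_crossover(p1, p2, p0=0.5, rng=random):
    """Build one child: each gene comes from p2 with probability p0,
    otherwise from p1. Low p0 biases the child toward parent p1."""
    return [g2 if rng.random() < p0 else g1 for g1, g2 in zip(p1, p2)]

a, b = [0] * 8, [1] * 8
assert parameterized_uniform_crossover(a, b, 0.0) == a  # p0=0 copies parent 1
assert parameterized_uniform_crossover(a, b, 1.0) == b  # p0=1 copies parent 2
```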

  17. A Dynamically Computed Convective Time Scale for the Kain–Fritsch Convective Parameterization Scheme

    EPA Science Inventory

    Many convective parameterization schemes define a convective adjustment time scale τ as the time allowed for dissipation of convective available potential energy (CAPE). The Kain–Fritsch scheme defines τ based on an estimate of the advective time period for deep con...

  18. A new fractional snow-covered area parameterization for the Community Land Model and its effect on the surface energy balance

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Lawrence, D. M.

    2011-11-01

    One function of the Community Land Model (CLM4) is the determination of surface albedo in the Community Earth System Model (CESM1). Because the typical spatial scales of CESM1 simulations are large compared to the scales of variability of surface properties such as snow cover and vegetation, unresolved surface heterogeneity is parameterized. Fractional snow-covered area, or snow-covered fraction (SCF), within a CLM4 grid cell is parameterized as a function of grid cell mean snow depth and snow density. This parameterization is based on an analysis of monthly averaged SCF and snow depth that showed a seasonal shift in the snow depth-SCF relationship. In this paper, we show that this shift is an artifact of the monthly sampling and that the current parameterization does not reflect the relationship observed between snow depth and SCF at the daily time scale. We demonstrate that the snow depth analysis used in the original study exhibits a bias toward early melt when compared to satellite-observed SCF. This bias results in a tendency to overestimate SCF as a function of snow depth. Using a more consistent, higher spatial and temporal resolution snow depth analysis reveals a clear hysteresis between snow accumulation and melt seasons. Here, a new SCF parameterization based on snow water equivalent is developed to capture the observed seasonal snow depth-SCF evolution. The effects of the new SCF parameterization on the surface energy budget are described. In CLM4, surface energy fluxes are calculated assuming a uniform snow cover. To more realistically simulate environments having patchy snow cover, we modify the model by computing the surface fluxes separately for snow-free and snow-covered fractions of a grid cell. In this configuration, the form of the parameterized snow depth-SCF relationship is shown to greatly affect the surface energy budget. The direct exposure of the snow-free surfaces to the atmosphere leads to greater heat loss from the ground during autumn.
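
    The depth/density form of the original CLM4 parameterization follows a Niu-Yang-type tanh shape; a hedged sketch (constants are typical illustrative values, and the paper's new SWE-based hysteresis scheme is not reproduced here):

```python
import math

def snow_cover_fraction(depth_m, rho_sno, z0g=0.01, rho_new=100.0, m=1.6):
    """Niu-Yang-type SCF: grows with grid-cell mean snow depth and shrinks
    as the pack densifies (z0g: ground roughness length in m; rho in kg/m^3)."""
    return math.tanh(depth_m / (2.5 * z0g * (rho_sno / rho_new) ** m))

# A fresh shallow pack covers more area than an equally deep, denser old pack,
# which is the densification effect the hysteresis analysis revisits.
assert snow_cover_fraction(0.05, 100.0) > snow_cover_fraction(0.05, 300.0)
assert 0.0 <= snow_cover_fraction(0.2, 150.0) <= 1.0
```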

  20. Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution

    NASA Astrophysics Data System (ADS)

    Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike

    2011-04-01

    Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks as firstly the reduction of the dose distribution to a histogram results in the loss of spatial information and secondly the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We use a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assessed its predictive power using data from the MRC RT01 trial (ISCTRN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a specifically low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.

  1. Integrated Raman and angular scattering of single biological cells

    NASA Astrophysics Data System (ADS)

    Smith, Zachary J.

    2009-12-01

    Raman, or inelastic, scattering and angle-resolved elastic scattering are two optical processes that have found wide use in the study of biological systems. Raman scattering quantitatively reports on the chemical composition of a sample by probing molecular vibrations, while elastic scattering reports on the morphology of a sample by detecting structure-induced coherent interference between incident and scattered light. We present the construction of a multimodal microscope platform capable of gathering both elastically and inelastically scattered light from a 38 μm² region in both epi- and trans-illumination geometries. Simultaneous monitoring of elastic and inelastic scattering from a microscopic region allows noninvasive characterization of a living sample without the need for exogenous dyes or labels. A sample is illuminated either from above or below with a focused 785 nm TEM00 mode laser beam, with elastic and inelastic scattering collected by two separate measurement arms. The measurements may be made either simultaneously, if identical illumination geometries are used, or sequentially, if the two modalities utilize opposing illumination paths. In the inelastic arm, Stokes-shifted light is dispersed by a spectrograph onto a CCD array. In the elastic scattering collection arm, a relay system images the microscope's back aperture onto a CCD detector array to yield an angle-resolved elastic scattering pattern. Post-processing of the inelastic scattering to remove fluorescence signals yields high quality Raman spectra that report on the sample's chemical makeup. Comparison of the elastically scattered pupil images to generalized Lorenz-Mie theory yields estimated size distributions of scatterers within the sample. In this thesis we will present validations of the IRAM instrument through measurements performed on single beads of a few microns in size, as well as on ensembles of sub-micron particles of known size distributions. 
The benefits and drawbacks of the

  2. Impact of climate seasonality on catchment yield: A parameterization for commonly-used water balance formulas

    NASA Astrophysics Data System (ADS)

    de Lavenne, Alban; Andréassian, Vazken

    2018-03-01

    This paper examines the hydrological impact of the seasonality of precipitation and maximum evaporation: seasonality is, after aridity, a second-order determinant of catchment water yield. Based on a data set of 171 French catchments (where aridity ranged between 0.2 and 1.2), we present a parameterization of three commonly-used water balance formulas (namely, Turc-Mezentsev, Tixeront-Fu and Oldekop formulas) to account for seasonality effects. We quantify the improvement of seasonality-based parameterization in terms of the reconstitution of both catchment streamflow and water yield. The significant improvement obtained (reduction of RMSE between 9 and 14% depending on the formula) demonstrates the importance of climate seasonality in the determination of long-term catchment water balance.
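
    For reference, the standard (non-seasonal) forms of the three formulas can be written as follows. This is a sketch using commonly quoted shape-parameter values; the paper's seasonality-based parameterization of these formulas is not reproduced here.

```python
import math

def turc_mezentsev(P, E0, n=2.0):
    """Actual evaporation E from precipitation P and potential evaporation E0;
    long-term catchment yield is then Q = P - E."""
    return P / (1.0 + (P / E0) ** n) ** (1.0 / n)

def tixeront_fu(P, E0, omega=2.6):
    return P * (1.0 + E0 / P - (1.0 + (E0 / P) ** omega) ** (1.0 / omega))

def oldekop(P, E0):
    return E0 * math.tanh(P / E0)

P, E0 = 900.0, 700.0  # mm/yr, illustrative values
for formula in (turc_mezentsev, tixeront_fu, oldekop):
    E = formula(P, E0)
    assert 0.0 < E < min(P, E0)  # evaporation bounded by supply and demand
```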

  3. Adaptively Parameterized Tomography of the Western Hellenic Subduction Zone

    NASA Astrophysics Data System (ADS)

    Hansen, S. E.; Papadopoulos, G. A.

    2017-12-01

    The Hellenic subduction zone (HSZ) is the most seismically active region in Europe and plays a major role in the active tectonics of the eastern Mediterranean. This complicated environment has the potential to generate both large magnitude (M > 8) earthquakes and tsunamis. Situated above the western end of the HSZ, Greece faces a high risk from these geologic hazards, and characterizing this risk requires detailed understanding of the geodynamic processes occurring in this area. However, despite previous investigations, the kinematics of the HSZ are still controversial. Regional tomographic studies have yielded important information about the shallow seismic structure of the HSZ, but these models only image down to 150 km depth within small geographic areas. Deeper structure is constrained by global tomographic models but with coarser resolution (~200-300 km). Additionally, current tomographic models focused on the HSZ were generated with regularly-spaced gridding, and this type of parameterization often over-emphasizes poorly sampled regions of the model or under-represents small-scale structure. Therefore, we are developing a new, high-resolution image of the mantle structure beneath the western HSZ using an adaptively parameterized seismic tomography approach. By combining multiple, regional travel-time datasets in the context of a global model, with adaptable gridding based on the sampling density of high-frequency data, this method generates a composite model of mantle structure that is being used to better characterize geodynamic processes within the HSZ, thereby allowing for improved hazard assessment. Preliminary results will be shown.

  4. Neutron scattering investigations of frustrated magnets

    NASA Astrophysics Data System (ADS)

    Fennell, Tom

    This thesis describes the experimental investigation of frustrated magnetic systems based on the pyrochlore lattice of corner-sharing tetrahedra. Ho2Ti2O7 and Dy2Ti2O7 are examples of spin ices, in which the manifold of disordered magnetic ground states maps onto that of the proton positions in ice. Using single-crystal neutron scattering to measure Bragg and diffuse scattering, the effect of applying magnetic fields along different directions in the crystal was investigated. Different schemes of degeneracy removal were observed for different directions. Long- and short-range order, and the coexistence of both, could be observed by this technique. The field and temperature dependence of magnetic ordering was studied in Ho2Ti2O7 and Dy2Ti2O7. Ho2Ti2O7 has been more extensively investigated, with the field applied on [00l], [hh0], [hhh] and [hh2h]. Dy2Ti2O7 was studied with the field applied on [00l] and [hh0], but more detailed information about the evolution of the scattering pattern across a large area of reciprocal space was obtained. With the field applied on [00l], both materials showed complete degeneracy removal. A long-range ordered structure was formed. Any magnetic diffuse scattering vanished and was entirely replaced by strong magnetic Bragg scattering. At T = 0.05 K both materials show unusual magnetization curves, with a prominent step and hysteresis. This was attributed to the extremely slow dynamics of spin ice materials at this temperature. Both materials were studied in greatest detail with the field applied on [hh0]. The coexistence of long- and short-range order was observed when the field was raised at T = 0.05 K. The application of a field in this direction separated the spin system into two populations. One could be ordered by the field, and one remained disordered. However, via spin-spin interactions, the field restricted the degeneracy of the disordered spin population. 
The neutron scattering pattern of Dy2Ti2O7 shows that the spin system was separated

  5. Modeling the interplay between sea ice formation and the oceanic mixed layer: Limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-02-01

    The subtle interplay between sea ice formation and ocean vertical mixing is poorly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine released when ice forms is likely to prevail in leads and thin-ice areas, whereas in models it occurs at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection by distributing the salt rejected by sea ice vertically within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and on ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions: if it is, salt rejections play no role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.
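The redistribution idea behind such a brine rejection parameterization is simple to sketch: instead of injecting the whole rejected-salt flux into the top ocean layer, spread it over the mixed-layer levels with a depth-increasing weight. A minimal illustration, assuming a thickness-weighted power-law profile (the weighting form and exponent are illustrative choices, not necessarily NEMO-LIM3's):

```python
def distribute_brine(salt_flux, dz, n=5):
    """Spread a surface salt rejection over mixed-layer levels with a
    weight increasing with depth (here z**n, thickness-weighted), instead
    of putting it all in the top level. The power-law form and exponent n
    are illustrative assumptions, not the model's documented choice.
    dz: layer thicknesses (m), listed top to bottom of the mixed layer.
    Returns per-layer salt tendencies whose sum equals salt_flux."""
    mids, depth = [], 0.0
    for t in dz:
        mids.append(depth + 0.5 * t)          # layer mid-depth
        depth += t
    w = [t * z**n for t, z in zip(dz, mids)]  # deeper layers get more salt
    total = sum(w)
    return [salt_flux * wi / total for wi in w]
```

By construction the column-integrated salt input is unchanged; only its vertical placement differs, which is what weakens the spurious grid-scale convection.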

  6. Modelling the interplay between sea ice formation and the oceanic mixed layer: limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-04-01

    The subtle interplay between sea ice formation and ocean vertical mixing is poorly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine released when ice forms is likely to prevail in leads and thin-ice areas, whereas in models it occurs at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection by distributing the salt rejected by sea ice vertically within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and on ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions: if it is, salt rejections play no role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.

  7. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-05-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2-D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested: the first only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, while the second uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results from the two versions show very good agreement between the modelled ozone distribution, the Total Ozone Mapping Spectrometer (TOMS) satellite data, and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift, and the biases are generally small, of the order of 10%. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in springtime. 
It is concluded that for the study of climate scenarios or the assimilation of
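A Cariolle-type linear scheme expresses the photochemical ozone tendency as a first-order expansion about a 2-D model climatology. A minimal sketch, with placeholder coefficients (the published scheme tabulates latitude-, height-, and month-dependent coefficients and adds a cold-tracer term for heterogeneous polar chemistry; none of those values are reproduced here):

```python
def ozone_tendency(r, T, col, r0, T0, col0,
                   A1=0.0, A2=-1e-7, A3=1e-9, A4=-1e-6):
    """Linearized (Cariolle-type) ozone tendency: production minus loss
    expanded to first order about a climatology (r0, T0, col0) of ozone
    mixing ratio, temperature, and overhead ozone column.
    All coefficients here are illustrative placeholders."""
    return A1 + A2 * (r - r0) + A3 * (T - T0) + A4 * (col - col0)
```

With A2 < 0, the scheme relaxes the mixing ratio toward its climatological value, which is why only a single continuity equation (plus, optionally, the cold-tracer equation) must be integrated.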

  8. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-01-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested: the first only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, while the second uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results show very good agreement between the modelled ozone distribution, the Total Ozone Mapping Spectrometer (TOMS) satellite data, and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift, and the biases are generally small. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in springtime. It is concluded that for the study of climate scenarios or the assimilation of ozone data, the present

  9. Recent developments and assessment of a three-dimensional PBL parameterization for improved wind forecasting over complex terrain

    NASA Astrophysics Data System (ADS)

    Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.

    2017-12-01

    At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional and based on the assumption of horizontal homogeneity. This homogeneity assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated, and applying a one-dimensional PBL parameterization can result in significant errors. For high-resolution mesoscale simulations of flows in complex terrain, we have therefore developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982) and uses a purely algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015 to 2017. We focus on selected cases in which physical phenomena of significance for wind energy applications, such as mountain waves, topographic wakes, and gap flows, were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 x 3000 x 200 grid cells in the zonal, meridional, and vertical directions, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients. 
The presentation will highlight the advantages of the 3

  10. Parameterization and Observability Analysis of Scalable Battery Clusters for Onboard Thermal Management

    DTIC Science & Technology

    2011-12-01

    the designed parameterization scheme and adaptive observer. A cylindrical battery thermal model in Eq. (1) with parameters of an A123 32157 LiFePO4 ...Morcrette, M. and Delacourt, C. (2010) Thermal modeling of a cylindrical LiFePO4/graphite lithium-ion battery. Journal of Power Sources, 195, 2961

  11. Parameterizing the Morse Potential for Coarse-Grained Modeling of Blood Plasma

    PubMed Central

    Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan

    2014-01-01

    Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between the molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales to reproduce viscous blood flow properties, including the density, pressure, viscosity, compressibility, and characteristic flow dynamics of human blood plasma. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as counter-Poiseuille and Couette flows. This demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between the macroscopic flow scales and the cellular scales characterizing blood flow, which continuum-based models fail to handle adequately. PMID:24910470
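The unmodified Morse potential that serves as the starting point has the standard form U(r) = De (1 − e^(−a(r − r0)))²: zero at the equilibrium separation r0 and approaching the well depth De at large separation. A sketch of this baseline form (the paper's effective-mass-scale modification is not reproduced here):

```python
import math

def morse(r, De, a, r0):
    """Standard Morse pair potential U(r) = De * (1 - exp(-a*(r - r0)))**2.
    De sets the well depth, a the well width, r0 the equilibrium separation.
    The CG blood-plasma model of the paper modifies this baseline with
    effective mass scales; that modification is not included here."""
    return De * (1.0 - math.exp(-a * (r - r0))) ** 2
```

The minimum at r0 and the finite dissociation energy De are what make the Morse form attractive for soft, breakable CG interactions compared with, say, a Lennard-Jones potential.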

  12. Harmonic Motion Detection in a Vibrating Scattering Medium

    PubMed Central

    Urban, Matthew W.; Chen, Shigao; Greenleaf, James F.

    2008-01-01

    Elasticity imaging is an emerging medical imaging modality that seeks to map the spatial distribution of tissue stiffness. Ultrasound radiation force excitation and motion tracking using pulse-echo ultrasound have been used in numerous methods. Dynamic radiation force is used in vibrometry to cause an object or tissue to vibrate, and the vibration amplitude and phase can be measured with exceptional accuracy. This paper presents a model that simulates harmonic motion detection in a vibrating scattering medium incorporating 3-D beam shapes for radiation force excitation and motion tracking. A parameterized analysis using this model provides a platform to optimize motion detection for vibrometry applications in tissue. An experimental method that produces a multifrequency radiation force is also presented. Experimental harmonic motion detection of simultaneous multifrequency vibration is demonstrated using a single transducer. This method can accurately detect motion with displacement amplitude as low as 100 to 200 nm in bovine muscle. Vibration phase can be measured within 10° or less. The experimental results validate the conclusions observed from the model and show multifrequency vibration induction and measurements can be performed simultaneously. PMID:18986892

  13. Fast-response and scattering-free polymer network liquid crystals for infrared light modulators

    NASA Astrophysics Data System (ADS)

    Fan, Yun-Hsing; Lin, Yi-Hsin; Ren, Hongwen; Gauza, Sebastian; Wu, Shin-Tson

    2004-02-01

    A fast-response and scattering-free homogeneously aligned polymer network liquid crystal (PNLC) light modulator is demonstrated at the λ=1.55 μm wavelength. Light scattering in the near-infrared region is suppressed by optimizing the polymer concentration such that the network domain sizes are smaller than the wavelength. The strong anchoring of the polymer network helps the LC relax back quickly once the electric field is removed. As a result, the PNLC response time is ˜250× faster than that of the E44 LC mixture, although the threshold voltage is increased by ˜25×.

  14. A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur

    2009-07-01

    For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover, and the projection onto the non-conservative variables, is handled by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme in representing three different boundary layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone, where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserve a realistic evolution of stratocumulus (EUROCS/FIRE).

  15. On parameterization of the inverse problem for estimating aquifer properties using tracer data

    NASA Astrophysics Data System (ADS)

    Kowalsky, M. B.; Finsterle, S.; Williams, K. H.; Murray, C.; Commer, M.; Newcomer, D.; Englert, A.; Steefel, C. I.; Hubbard, S. S.

    2012-06-01

    In developing a reliable approach for inferring hydrological properties through inverse modeling of tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance, as errors in the model structure are partly compensated for by estimating biased property values during the inversion. These biased estimates, while potentially providing an improved fit to the calibration data, may lead to wrong interpretations and conclusions and reduce the ability of the model to make reliable predictions. We consider the estimation of spatial variations in permeability and several other parameters through inverse modeling of tracer data, specifically synthetic and actual field data associated with the 2007 Winchester experiment from the Department of Energy Rifle site. Characterization is challenging due to the real-world complexities associated with field experiments in such a dynamic groundwater system. Our aim is to highlight and quantify the impact on inversion results of various decisions related to parameterization, such as the positioning of pilot points in a geostatistical parameterization; the handling of up-gradient regions; the inclusion of zonal information derived from geophysical data or core logs; extension from 2-D to 3-D; assumptions regarding the gradient direction, porosity, and the semivariogram function; and deteriorating experimental conditions. This work adds to the relatively limited number of studies that offer guidance on the use of pilot points in complex real-world experiments involving tracer data (as opposed to hydraulic head data).

  16. Sensitivity of Coupled Tropical Pacific Model Biases to Convective Parameterization in CESM1

    NASA Astrophysics Data System (ADS)

    Woelfle, M. D.; Yu, S.; Bretherton, C. S.; Pritchard, M. S.

    2018-01-01

    Six-month coupled hindcasts show the development of the central equatorial Pacific cold tongue bias in a GCM to be sensitive to the atmospheric convective parameterization employed. Simulations using the standard configuration of the Community Earth System Model version 1 (CESM1) develop a cold bias in equatorial Pacific sea surface temperatures (SSTs) within the first two months of integration due to anomalous ocean advection driven by overly strong easterly surface wind stress along the equator. Disabling the deep convection parameterization enhances the zonal pressure gradient, leading to stronger zonal wind stress and a stronger equatorial SST bias, highlighting the role of pressure gradients in determining the strength of the cold bias. Superparameterized hindcasts show reduced SST bias in the cold tongue region due to a reduction in surface easterlies, despite simulating an excessively strong low-level jet at 1-1.5 km elevation. This reflects inadequate vertical mixing of zonal momentum from the absence of convective momentum transport in the superparameterized model. Standard CESM1 simulations modified to omit shallow convective momentum transport reproduce the superparameterized low-level wind bias and associated equatorial SST pattern. Further superparameterized simulations using a three-dimensional cloud-resolving model capable of producing realistic momentum transport simulate a cold tongue similar to the default CESM1. These findings imply that convective momentum fluxes may be an underappreciated mechanism for controlling the strength of the equatorial cold tongue. Despite the sensitivity of equatorial SST to these changes in convective parameterization, the east Pacific double-Intertropical Convergence Zone rainfall bias persists in all simulations presented in this study.

  17. The sensitivity of WRF daily summertime simulations over West Africa to alternative parameterizations. Part 2: Precipitation.

    PubMed

    Noble, Erik; Druyan, Leonard M; Fulakeza, Matthew

    2016-01-01

    This paper evaluates the performance of the Weather and Research Forecasting (WRF) model as a regional-atmospheric model over West Africa. It tests WRF sensitivity to 64 configurations of alternative parameterizations in a series of 104 twelve-day September simulations during eleven consecutive years, 2000-2010. The 64 configurations combine WRF parameterizations of cumulus convection, radiation, surface-hydrology, and PBL. Simulated daily and total precipitation results are validated against Global Precipitation Climatology Project (GPCP) and Tropical Rainfall Measuring Mission (TRMM) data. Particular attention is given to westward-propagating precipitation maxima associated with African Easterly Waves (AEWs). A wide range of daily precipitation validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve time-longitude correlations (against GPCP) of between 0.35 and 0.42 and spatiotemporal variability amplitudes only slightly higher than observed estimates. A parallel simulation by the benchmark Regional Model-v.3 achieves a higher correlation (0.52) and realistic spatiotemporal variability amplitudes. The largest favorable impact on WRF precipitation validation is achieved by selecting the Grell-Devenyi convection scheme, resulting in higher correlations against observations than using the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact. Validation statistics for optimized WRF configurations simulating the parallel period during 2000-2010 are more favorable for 2005, 2006, and 2008 than for other years. The selection of some of the same WRF configurations as high scorers in both circulation and precipitation validations supports the notion that simulations of West African daily precipitation benefit from skillful simulations of associated AEW vorticity centers and that simulations of AEWs would benefit from skillful simulations of convective precipitation.

  18. The sensitivity of WRF daily summertime simulations over West Africa to alternative parameterizations. Part 2: Precipitation

    PubMed Central

    Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew

    2018-01-01

    This paper evaluates the performance of the Weather and Research Forecasting (WRF) model as a regional-atmospheric model over West Africa. It tests WRF sensitivity to 64 configurations of alternative parameterizations in a series of 104 twelve-day September simulations during eleven consecutive years, 2000–2010. The 64 configurations combine WRF parameterizations of cumulus convection, radiation, surface-hydrology, and PBL. Simulated daily and total precipitation results are validated against Global Precipitation Climatology Project (GPCP) and Tropical Rainfall Measuring Mission (TRMM) data. Particular attention is given to westward-propagating precipitation maxima associated with African Easterly Waves (AEWs). A wide range of daily precipitation validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve time-longitude correlations (against GPCP) of between 0.35 and 0.42 and spatiotemporal variability amplitudes only slightly higher than observed estimates. A parallel simulation by the benchmark Regional Model-v.3 achieves a higher correlation (0.52) and realistic spatiotemporal variability amplitudes. The largest favorable impact on WRF precipitation validation is achieved by selecting the Grell-Devenyi convection scheme, resulting in higher correlations against observations than using the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact. Validation statistics for optimized WRF configurations simulating the parallel period during 2000–2010 are more favorable for 2005, 2006, and 2008 than for other years. The selection of some of the same WRF configurations as high scorers in both circulation and precipitation validations supports the notion that simulations of West African daily precipitation benefit from skillful simulations of associated AEW vorticity centers and that simulations of AEWs would benefit from skillful simulations of convective precipitation. PMID:29563651

  19. Computing the Edge-Neighbour-Scattering Number of Graphs

    NASA Astrophysics Data System (ADS)

    Wei, Zongtian; Qi, Nannan; Yue, Xiaokui

    2013-11-01

    A set of edges X is subverted from a graph G by removing the closed neighbourhood N[X] from G. We denote the survival subgraph by G/X. An edge-subversion strategy X is called an edge-cut strategy of G if G/X is disconnected, a single vertex, or empty. The edge-neighbour-scattering number of a graph G is defined as ENS(G) = max{ω(G/X)-|X| : X is an edge-cut strategy of G}, where ω(G/X) is the number of components of G/X. This parameter can be used to measure the vulnerability of networks when some edges fail, especially spy networks and virus-infected networks. In this paper, we prove that the problem of computing the edge-neighbour-scattering number of a graph is NP-complete and give some upper and lower bounds for this parameter.
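Although the abstract shows the general problem is NP-complete, the definition can be evaluated directly by brute force on small graphs. A sketch (exponential in the number of edges, for illustration only):

```python
from itertools import combinations

def ens(vertices, edges):
    """Brute-force edge-neighbour-scattering number of a small graph:
    ENS(G) = max{ omega(G/X) - |X| : X an edge-cut strategy of G },
    where subverting X removes the closed neighbourhood N[X]."""
    verts = set(vertices)
    adj = {v: set() for v in verts}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def n_components(keep):
        seen, count = set(), 0
        for s in keep:
            if s in seen:
                continue
            count += 1
            stack = [s]
            while stack:
                x = stack.pop()
                if x in seen:
                    continue
                seen.add(x)
                stack.extend(adj[x] & keep)
        return count

    best = None
    for k in range(1, len(edges) + 1):
        for X in combinations(edges, k):
            removed = set()
            for u, v in X:                    # closed neighbourhood N[X]
                removed |= {u, v} | adj[u] | adj[v]
            keep = verts - removed
            comps = n_components(keep)
            # X is an edge-cut strategy iff G/X is disconnected,
            # a single vertex, or empty
            if len(keep) <= 1 or comps > 1:
                score = comps - k
                if best is None or score > best:
                    best = score
    return best
```

For the 5-cycle, subverting any single edge leaves a single surviving vertex, so ENS(C5) = 1 − 1 = 0.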

  20. An improvement of quantum parametric methods by using SGSA parameterization technique and new elementary parametric functionals

    NASA Astrophysics Data System (ADS)

    Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.

    A systematic improvement of parametric quantum methods (PQM) is performed by considering: (a) a new application of the parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPF) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized, and comparison with MOPAC/2007 (PM6) and MINDO/SR was performed for a set of molecules with C-C, C-H, and H-H bonds. Results showed better accuracy than MINDO/SR and MOPAC/2007 for a selected trial set of molecules.
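The idea behind an annealing-based parameterization is to treat the semiempirical parameters as a vector and minimize the error against reference data by stochastic search. A generic simulated-annealing sketch conveys this; note it uses a plain Gaussian proposal and a simple 1/(1+k) cooling schedule, not the generalized visiting/acceptance distributions that distinguish SGSA:

```python
import math
import random

def anneal(loss, x0, steps=5000, t0=1.0, step=0.1, seed=1):
    """Generic simulated annealing over a real parameter vector.
    A sketch only: the SGSA method referenced in the abstract uses a
    generalized scheme that this plain version does not reproduce."""
    rng = random.Random(seed)
    x = list(x0)
    fx = loss(x)
    best, fbest = list(x), fx
    for k in range(steps):
        t = t0 / (1.0 + k)                       # simple cooling schedule
        y = [xi + rng.gauss(0.0, step) for xi in x]
        fy = loss(y)
        # always accept downhill moves; accept uphill with Boltzmann probability
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest
```

In a real parameterization, loss(x) would run the semiempirical method with parameters x on the trial molecule set and return the deviation from reference energies or geometries.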

  1. Modelling and parameterizing the influence of tides on ice-shelf melt rates

    NASA Astrophysics Data System (ADS)

    Jourdain, N.; Molines, J. M.; Le Sommer, J.; Mathiot, P.; de Lavergne, C.; Gurvan, M.; Durand, G.

    2017-12-01

    Significant Antarctic ice sheet thinning is observed in several sectors of Antarctica, in particular in the Amundsen Sea sector, where warm circumpolar deep waters affect basal melting. The latter has the potential to trigger marine ice sheet instabilities, with an associated potential for rapid sea level rise. It is therefore crucial to simulate and understand the processes that set ice-shelf melt rates. In particular, the absence of a representation of tides remains a caveat of numerous ocean hindcasts and climate projections. In the Amundsen Sea, tides are relatively weak and the melt-induced circulation is stronger than the tidal circulation. Using a regional 1/12° ocean model of the Amundsen Sea, we nonetheless find that tides can increase melt rates by up to 36% in some ice-shelf cavities. Among the processes that can possibly affect melt rates, the most important is an increased exchange at the ice/ocean interface resulting from the presence of strong tidal currents along the ice drafts. Approximately a third of this effect is compensated by a decrease in thermal forcing along the ice draft, which is related to enhanced vertical mixing in the ocean interior in the presence of tides. Parameterizing the effect of tides is an alternative to representing explicit tides in an ocean model, and has the advantage of not requiring any filtering of ocean model outputs. We therefore explore different ways to parameterize the effects of tides on ice shelf melt. First, we compare several methods to impose tidal velocities along the ice draft. We show that obtaining a realistic spatial distribution of tidal velocities is important, and that it can be deduced from the barotropic velocities of a tide model. Then, we explore several aspects of parameterized tidal mixing to reproduce the tide-induced decrease in thermal forcing along the ice drafts.
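The tidal enhancement of the ice/ocean exchange described above can be illustrated with a bulk scaling in which the friction velocity under the ice gains a tidal contribution and melt scales with friction velocity times thermal forcing. This is a sketch under stated assumptions: the quadratic combination of mean and tidal currents and all constants below are illustrative, not the study's calibrated parameterization:

```python
def melt_rate(u_mean, u_tide, thermal_forcing, cd=2.5e-3, gamma_t=1.1e-2):
    """Sketch of a velocity-dependent basal melt scaling:
        ustar**2 = Cd * (u_mean**2 + u_tide**2)
        melt ∝ gamma_t * ustar * (T - T_freeze)
    Drag coefficient cd and transfer coefficient gamma_t are
    illustrative placeholder values."""
    ustar = (cd * (u_mean ** 2 + u_tide ** 2)) ** 0.5
    return gamma_t * ustar * thermal_forcing
```

The sketch reproduces the qualitative competition in the abstract: adding a tidal velocity raises the exchange (more melt), while any tide-induced reduction of the thermal forcing acts in the opposite direction.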

  2. The removal kinetics of dissolved organic matter and the optical clarity of groundwater

    USGS Publications Warehouse

    Chapelle, Francis H.; Shen, Yuan; Strom, Eric W.; Benner, Ronald

    2016-01-01

    Concentrations of dissolved organic matter (DOM) and ultraviolet/visible light absorbance decrease systematically as groundwater moves through the unsaturated zones overlying aquifers and along flowpaths within aquifers. These changes occur over distances of tens of meters (m) implying rapid removal kinetics of the chromophoric DOM that imparts color to groundwater. A one-compartment input-output model was used to derive a differential equation describing the removal of DOM from the dissolved phase due to the combined effects of biodegradation and sorption. The general solution to the equation was parameterized using a 2-year record of dissolved organic carbon (DOC) concentration changes in groundwater at a long-term observation well. Estimated rates of DOC loss were rapid and ranged from 0.093 to 0.21 micromoles per liter per day (μM d−1), and rate constants for DOC removal ranged from 0.0021 to 0.011 per day (d−1). Applying these removal rate constants to an advective-dispersion model illustrates substantial depletion of DOC over flow-path distances of 200 m or less and in timeframes of 2 years or less. These results explain the low to moderate DOC concentrations (20–75 μM; 0.26–1 mg L−1) and ultraviolet absorption coefficient values (a254 < 5 m−1) observed in groundwater produced from 59 wells tapping eight different aquifer systems of the United States. The nearly uniform optical clarity of groundwater, therefore, results from similarly rapid DOM-removal kinetics exhibited by geologically and hydrologically dissimilar aquifers.
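Along a flow path, the one-compartment model above reduces to first-order decay, C(t) = C0 exp(−kt). A minimal sketch using the rate constants reported in the abstract (k = 0.0021–0.011 d⁻¹):

```python
import math

def doc_remaining(c0, k_per_day, days):
    """DOC remaining after first-order removal, C(t) = C0 * exp(-k*t),
    the solution of dC/dt = -k*C from the one-compartment model.
    k_per_day: removal rate constant (d^-1), per the reported range."""
    return c0 * math.exp(-k_per_day * days)
```

Even at the slow end of the reported range (k = 0.0021 d⁻¹), roughly three quarters of the initial DOC is removed over a 2-year (730-day) travel time, consistent with the substantial depletion the abstract describes.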

  3. Evolution of the transfer function characterization of surface scatter phenomena

    NASA Astrophysics Data System (ADS)

    Harvey, James E.; Pfisterer, Richard N.

    2016-09-01

    Based upon the empirical observation that BRDF measurements of smooth optical surfaces exhibited shift-invariant behavior when plotted versus β - β0, the original Harvey-Shack (OHS) surface scatter theory was developed as a scalar linear systems formulation in which scattered light behavior was characterized by a surface transfer function (STF) reminiscent of the optical transfer function (OTF) of modern image formation theory (1976). This shift-invariant behavior, combined with the inverse power law behavior when plotting log BRDF versus log(β - β0), was quickly incorporated into several optical analysis software packages. Although there was no explicit smooth-surface approximation in the OHS theory, there was a limitation on both the incident and scattering angles. In 1988 the modified Harvey-Shack (MHS) theory removed the limitation on the angle of incidence; however, a moderate-angle scattering limitation remained. Clearly, for large incident angles the BRDF was no longer shift-invariant, as a different STF was now required for each incident angle. In 2011 the generalized Harvey-Shack (GHS) surface scatter theory, characterized by a two-parameter family of STFs, evolved into a practical modeling tool to calculate BRDFs from optical surface metrology data for situations that violate the smooth-surface approximation inherent in the Rayleigh-Rice theory and/or the moderate-angle limitation of the Beckmann-Kirchhoff theory. Finally, the STF can be multiplied by the classical OTF to provide a complete linear systems formulation of image quality as degraded by diffraction, geometrical aberrations, and surface scatter effects from residual optical fabrication errors.

  4. The removal kinetics of dissolved organic matter and the optical clarity of groundwater

    NASA Astrophysics Data System (ADS)

    Chapelle, Francis H.; Shen, Yuan; Strom, Eric W.; Benner, Ronald

    2016-09-01

    Concentrations of dissolved organic matter (DOM) and ultraviolet/visible light absorbance decrease systematically as groundwater moves through the unsaturated zones overlying aquifers and along flowpaths within aquifers. These changes occur over distances of tens of meters (m) implying rapid removal kinetics of the chromophoric DOM that imparts color to groundwater. A one-compartment input-output model was used to derive a differential equation describing the removal of DOM from the dissolved phase due to the combined effects of biodegradation and sorption. The general solution to the equation was parameterized using a 2-year record of dissolved organic carbon (DOC) concentration changes in groundwater at a long-term observation well. Estimated rates of DOC loss were rapid and ranged from 0.093 to 0.21 micromoles per liter per day (μM d−1), and rate constants for DOC removal ranged from 0.0021 to 0.011 per day (d−1). Applying these removal rate constants to an advective-dispersion model illustrates substantial depletion of DOC over flow-path distances of 200 m or less and in timeframes of 2 years or less. These results explain the low to moderate DOC concentrations (20–75 μM; 0.26–1 mg L−1) and ultraviolet absorption coefficient values (a254 < 5 m−1) observed in groundwater produced from 59 wells tapping eight different aquifer systems of the United States. The nearly uniform optical clarity of groundwater, therefore, results from similarly rapid DOM-removal kinetics exhibited by geologically and hydrologically dissimilar aquifers.
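
    As a rough sketch of how such rate constants translate into downgradient depletion, first-order removal along a steady flowpath (advection only, dispersion neglected) gives C = C₀·exp(−k·x/v). The pore velocity below is an assumed illustrative value, not one from the study:

```python
import math

def doc_remaining(c0_um, k_per_day, distance_m, velocity_m_per_day):
    """First-order DOC removal along a flowpath (advection only, no dispersion):
    travel time t = x / v, then C = C0 * exp(-k * t)."""
    t_days = distance_m / velocity_m_per_day
    return c0_um * math.exp(-k_per_day * t_days)

# With the study's upper-bound rate constant (0.011 /d) and an assumed pore
# velocity of 0.5 m/d, a 200 m flowpath (400 d travel time) removes ~99% of
# the initial DOC, consistent with the low concentrations observed in wells.
print(doc_remaining(100.0, 0.011, 200.0, 0.5))
```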

  5. Radius anomaly in the diffraction model for heavy-ion elastic scattering

    NASA Astrophysics Data System (ADS)

    Pandey, L. N.; Mukherjee, S. N.

    1984-04-01

    The elastic scattering of heavy ions, 20Ne on 208Pb, 20Ne on 235U, 84Kr on 208Pb, and 84Kr on 232Th, is examined within the framework of Frahn's diffraction model. An analysis of the experiment using the "quarter point recipe" of the expected Fresnel cross sections yields a larger radius for 208Pb than the radii for 235U and 232Th. It is shown that inclusion of the nuclear deformation in the model removes the above anomaly in the radii, and the assumption of smooth cutoff of the angular momentum simultaneously leads to a better fit to elastic scattering data, compared to those obtained by the earlier workers on the assumption of sharp cutoff. [NUCLEAR REACTIONS Elastic scattering, 20Ne+208Pb (161.2 MeV), 20Ne+235U (175 MeV), 84Kr+208Pb (500 MeV), 84Kr+232Th (500 MeV), diffraction model, nuclear deformation.]
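
    The "quarter point recipe" extracts a strong-absorption radius from the angle where the elastic cross section falls to one quarter of the Rutherford value; a minimal sketch assuming the standard Fresnel-regime formula R = (η/k)[1 + 1/sin(θ¼/2)], with Sommerfeld parameter η and wavenumber k (treat this as the textbook form, not necessarily the exact recipe used here):

```python
import math

def quarter_point_radius(eta, k_wavenumber, theta_quarter_rad):
    """Strong-absorption (interaction) radius from the quarter-point angle:
    R = (eta / k) * (1 + 1 / sin(theta_1/4 / 2)). Illustrative sketch only."""
    return (eta / k_wavenumber) * (1.0 + 1.0 / math.sin(theta_quarter_rad / 2.0))
```

A smaller quarter-point angle implies a larger extracted radius, which is how anomalies between targets show up in such analyses.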

  6. Separation of Intercepted Multi-Radar Signals Based on Parameterized Time-Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Lu, W. L.; Xie, J. W.; Wang, H. M.; Sheng, C.

    2016-09-01

    Modern radars use complex waveforms to obtain high detection performance and low probabilities of interception and identification. Signals intercepted from multiple radars overlap considerably in both the time and frequency domains and are difficult to separate with primary time parameters. Time-frequency analysis (TFA), as a key signal-processing tool, can provide better insight into the signal than conventional methods. In particular, among the various types of TFA, parameterized time-frequency analysis (PTFA) has shown great potential to investigate the time-frequency features of such non-stationary signals. In this paper, we propose a procedure for PTFA to separate overlapped radar signals; it includes five steps: initiation, parameterized time-frequency analysis, demodulating the signal of interest, adaptive filtering and recovering the signal. The effectiveness of the method was verified with simulated data and an intercepted radar signal received in a microwave laboratory. The results show that the proposed method has good performance and has potential in electronic reconnaissance applications, such as electronic intelligence, electronic warfare support measures, and radar warning.

  7. Quantum Wronskian approach to six-point gluon scattering amplitudes at strong coupling

    NASA Astrophysics Data System (ADS)

    Hatsuda, Yasuyuki; Ito, Katsushi; Satoh, Yuji; Suzuki, Junji

    2014-08-01

    We study the six-point gluon scattering amplitudes in 𝒩 = 4 super Yang-Mills theory at strong coupling based on the twisted ℤ4-symmetric integrable model. The lattice regularization allows us to derive the associated thermodynamic Bethe ansatz (TBA) equations as well as the functional relations among the Q-/T-/Y-functions. The quantum Wronskian relation for the Q-/T-functions plays an important role in determining a series of the expansion coefficients of the T-/Y-functions around the UV limit, including the dependence on the twist parameter. Studying the CFT limit of the TBA equations, we derive the leading analytic expansion of the remainder function for the general kinematics around the limit where the dual Wilson loops become regular-polygonal. We also compare the rescaled remainder functions at strong coupling with those at two, three and four loops, and find that they are close to each other along the trajectories parameterized by the scale parameter of the integrable model.

  8. A method for coupling a parameterization of the planetary boundary layer with a hydrologic model

    NASA Technical Reports Server (NTRS)

    Lin, J. D.; Sun, Shu Fen

    1986-01-01

    Deardorff's parameterization of the planetary boundary layer is adapted to drive a hydrologic model. The method converts the atmospheric conditions measured at the anemometer height at one site to the mean values in the planetary boundary layer; it then uses the planetary boundary layer parameterization and the hydrologic variables to calculate the fluxes of momentum, heat and moisture at the atmosphere-land interface for a different site. A simplified hydrologic model is used for a simulation study of soil moisture and ground temperature on three different land surface covers. The results indicate that this method can be used to drive a spatially distributed hydrologic model by using observed data available at a meteorological station located on or nearby the site.

  9. Malachite green "a cationic dye" and its removal from aqueous solution by adsorption

    NASA Astrophysics Data System (ADS)

    Raval, Nirav P.; Shah, Prapti U.; Shah, Nisha K.

    2017-11-01

    Adsorption can be efficiently employed for the removal of various toxic dyes from water and wastewater. In this article, the authors reviewed a variety of adsorbents used by various researchers for the removal of malachite green (MG) dye from an aqueous environment. The main aim of this review article was to assemble the scattered available information on adsorbents used for the removal of MG and to highlight their potential. In addition to this, various optimal experimental conditions (solution pH, equilibrium contact time, amount of adsorbent and temperature) as well as adsorption isotherms, kinetics and thermodynamics data of different adsorbents towards MG were also analyzed and tabulated. Finally, it was concluded that the agricultural solid wastes and biosorbents such as biopolymers and biomass adsorbents have demonstrated outstanding adsorption capabilities for removal of MG dye.
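
    The equilibrium data tabulated in such reviews are most commonly fit with the Langmuir isotherm; a minimal sketch (the parameter values in the test are placeholders, not MG-specific constants from any particular adsorbent):

```python
def langmuir_qe(ce, q_max, k_l):
    """Langmuir isotherm: equilibrium uptake qe = q_max * K_L * Ce / (1 + K_L * Ce).
    ce: equilibrium dye concentration, q_max: monolayer capacity,
    k_l: affinity constant. Uptake saturates at q_max for large Ce."""
    return q_max * k_l * ce / (1.0 + k_l * ce)
```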

  10. Comparative study of transient hydraulic tomography with varying parameterizations and zonations: Laboratory sandbox investigation

    NASA Astrophysics Data System (ADS)

    Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.

    2017-11-01

    Transient hydraulic tomography (THT) is a robust method of aquifer characterization to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly-parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information are calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.

  11. Cone and seed traits of two Juniperus species influence roles of frugivores and scatter-hoarding rodents as seed dispersal agents

    NASA Astrophysics Data System (ADS)

    Dimitri, Lindsay A.; Longland, William S.; Vander Wall, Stephen B.

    2017-11-01

    Seed dispersal in Juniperus is generally attributed to frugivores that consume the berry-like female cones. Some juniper cones are fleshy and resinous such as those of western juniper (Juniperus occidentalis), while others are dry and leathery such as those of Utah juniper (J. osteosperma). Rodents have been recorded harvesting Juniperus seeds and cones but are mostly considered seed predators. Our study sought to determine if rodents play a role in dispersal of western and Utah juniper seeds. We documented rodent harvest of cones and seeds of the locally-occurring juniper species and the alternate (non-local) juniper species in removal experiments at a western juniper site in northeastern California and a Utah juniper site in western Nevada. Characteristics of western and Utah juniper cones appeared to influence removal, as cones from the local juniper species were preferred at both sites. Conversely, removal of local and non-local seeds was similar. Piñon mice (Peromyscus truei) were responsible for most removal of cones and seeds at both sites. We used radioactively labeled seeds to follow seed fate and found many of these seeds in scattered caches (western juniper: 415 seeds in 82 caches, 63.0% of seeds found; Utah juniper: 458 seeds in 127 caches, 39.5% of seeds found) most of which were attributed to piñon mice. We found little evidence of frugivores dispersing Utah juniper seeds, thus scatter-hoarding rodents appear to be the main dispersal agents. Western juniper cones were eaten by frugivores, and scatter-hoarding is a complementary or secondary form of seed dispersal. Our results support the notion that Utah juniper has adapted to xeric environments by conserving water through the loss of fleshy fruits that attract frugivores and instead relies on scatter-hoarding rodents as effective dispersal agents.

  12. WE-EF-207-10: Striped Ratio Grids: A New Concept for Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsieh, S

    2015-06-15

    Purpose: To propose a new method for estimating scatter in x-ray imaging. We propose the “striped ratio grid,” an anti-scatter grid with alternating stripes of high scatter rejection (attained, for example, by high grid ratio) and low scatter rejection. To minimize artifacts, stripes are oriented parallel to the direction of the ramp filter. Signal discontinuities at the boundaries between stripes provide information on local scatter content, although these discontinuities are contaminated by variation in primary radiation. Methods: We emulated a striped ratio grid by imaging phantoms with two sequential CT scans, one with and one without a conventional grid, and processed them together to mimic a striped ratio grid. Two phantoms were scanned with the emulated striped ratio grid and compared with a conventional anti-scatter grid and a fan-beam acquisition, which served as ground truth. A nonlinear image processing algorithm was developed to mitigate the problem of primary variation. Results: The emulated striped ratio grid reduced scatter more effectively than the conventional grid alone. Contrast is thereby improved in projection imaging. In CT imaging, cupping is markedly reduced. Artifacts introduced by the striped ratio grid appear to be minimal. Conclusion: Striped ratio grids could be a simple and effective evolution of conventional anti-scatter grids. Unlike several other approaches currently under investigation for scatter management, striped ratio grids require minimal computation, little new hardware (at least for systems which already use removable grids) and impose few assumptions on the nature of the object being scanned.
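
    The stripe-boundary idea can be sketched as a two-equation system: if the primary signal P is locally equal on both sides of a boundary, and the two stripes transmit scatter with different (assumed, illustrative) fractions, the two measured signals determine both P and the scatter S:

```python
def estimate_primary_and_scatter(i_low, i_high, ts_low=0.9, ts_high=0.1):
    """At a stripe boundary, assume equal primary P on both sides:
        i_low  = P + ts_low  * S   (low scatter-rejection stripe)
        i_high = P + ts_high * S   (high scatter-rejection stripe)
    Solve the 2x2 system for (P, S). Transmission fractions are
    illustrative assumptions, not values from the abstract."""
    s = (i_low - i_high) / (ts_low - ts_high)
    p = i_low - ts_low * s
    return p, s
```

In practice, as the abstract notes, primary variation contaminates the boundary discontinuity; the authors' nonlinear processing step addresses that, and this sketch omits it.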

  13. Wavelet signatures of K-splitting of the Isoscalar Giant Quadrupole Resonance in deformed nuclei from high-resolution (p,p′) scattering off 146, 148, 150Nd

    NASA Astrophysics Data System (ADS)

    Kureba, C. O.; Buthelezi, Z.; Carter, J.; Cooper, G. R. J.; Fearick, R. W.; Förtsch, S. V.; Jingo, M.; Kleinig, W.; Krugmann, A.; Krumbolz, A. M.; Kvasil, J.; Mabiala, J.; Mira, J. P.; Nesterenko, V. O.; von Neumann-Cosel, P.; Neveling, R.; Papka, P.; Reinhard, P.-G.; Richter, A.; Sideras-Haddad, E.; Smit, F. D.; Steyn, G. F.; Swartz, J. A.; Tamii, A.; Usman, I. T.

    2018-04-01

    The phenomenon of fine structure of the Isoscalar Giant Quadrupole Resonance (ISGQR) has been studied with high energy-resolution proton inelastic scattering at iThemba LABS in the chain of stable even-mass Nd isotopes covering the transition from spherical to deformed ground states. A wavelet analysis of the background-subtracted spectra in the deformed 146, 148, 150Nd isotopes reveals characteristic scales in correspondence with scales obtained from a Skyrme RPA calculation using the SVmas10 parameterization. A semblance analysis shows that these scales arise from the energy shift between the main fragments of the K = 0 , 1 and K = 2 components.

  14. Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application

    NASA Astrophysics Data System (ADS)

    Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni

    2018-06-01

    Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique to deal with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms, and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.

  15. A Parameterization for the Triggering of Landscape Generated Moist Convection

    NASA Technical Reports Server (NTRS)

    Lynn, Barry H.; Tao, Wei-Kuo; Abramopoulos, Frank

    1998-01-01

    A set of relatively high resolution three-dimensional (3D) simulations were produced to investigate the triggering of moist convection by landscape-generated mesoscale circulations. The local accumulated rainfall varied monotonically (linearly) with the size of individual landscape patches, demonstrating the need to develop a trigger function that is sensitive to the size of individual patches. A new triggering function that includes the effect of landscape-generated mesoscale circulations over patches of different sizes consists of a parcel's perturbations in vertical velocity (ν₀), temperature (θ₀), and moisture (q₀). Each variable in the triggering function was also sensitive to soil moisture gradients, atmospheric initial conditions, and moist processes. The parcel's vertical velocity, temperature, and moisture perturbations were partitioned into mesoscale and turbulent components. Budget equations were derived for θ₀ and q₀. Of the many terms in this set of budget equations, the turbulent, vertical flux of the mesoscale temperature and moisture contributed most to the triggering of moist convection through the impact of these fluxes on the parcel's temperature and moisture profile. These fluxes needed to be parameterized to obtain θ₀ and q₀. The mesoscale vertical velocity also affected the profile of ν₀. We used similarity theory to parameterize these fluxes as well as the parcel's mesoscale vertical velocity.
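
    Schematically, a trigger function of this kind asks whether a parcel's mesoscale-enhanced energy overcomes the inhibition to convection. Everything below (the function name, the CIN comparison, and the conversion coefficients) is a hypothetical sketch for illustration, not the authors' actual formulation:

```python
def convection_triggers(w_pert, theta_pert, q_pert, cin, c_theta=300.0, c_q=800.0):
    """Hypothetical trigger: kinetic energy from the parcel's vertical-velocity
    perturbation, plus buoyant energy credited to warm (theta) and moist (q)
    perturbations via assumed coefficients (J/kg per K, J/kg per kg/kg),
    must exceed the convective inhibition cin (J/kg)."""
    parcel_energy = 0.5 * w_pert ** 2 + c_theta * theta_pert + c_q * q_pert
    return parcel_energy > cin
```

Because the perturbations grow with patch size, a trigger built on them inherits the patch-size sensitivity the simulations demonstrate.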

  16. Modeling the energy balance in Marseille: Sensitivity to roughness length parameterizations and thermal admittance

    NASA Astrophysics Data System (ADS)

    Demuzere, M.; De Ridder, K.; van Lipzig, N. P. M.

    2008-08-01

    During the ESCOMPTE campaign (Experience sur Site pour COntraindre les Modeles de Pollution atmospherique et de Transport d'Emissions), a 4-day intensive observation period was selected to evaluate the Advanced Regional Prediction System (ARPS), a nonhydrostatic meteorological mesoscale model that was optimized with a parameterization for thermal roughness length to better represent urban surfaces. The evaluation shows that the ARPS model is able to correctly reproduce temperature, wind speed, and direction for one urban and two rural measurement stations. Furthermore, simulated heat fluxes show good agreement compared to the observations, although simulated sensible heat fluxes were initially too low for the urban stations. In order to improve the latter, different roughness length parameterization schemes were tested, combined with various thermal admittance values. This sensitivity study showed that the Zilitinkevich scheme combined with an intermediate value of thermal admittance performs best.

  17. Preferences for tap water attributes within couples: An exploration of alternative mixed logit parameterizations

    NASA Astrophysics Data System (ADS)

    Scarpa, Riccardo; Thiene, Mara; Hensher, David A.

    2012-01-01

    Preferences for attributes of complex goods may differ substantially among members of households. Some of these goods, such as tap water, are jointly supplied at the household level. This issue of jointness poses a series of theoretical and empirical challenges to economists engaged in empirical nonmarket valuation studies. While a series of results have already been obtained in the literature, the issue of how to empirically measure these differences, and how sensitive the results are to choice of model specification from the same data, is yet to be clearly understood. In this paper we use data from a widely employed form of stated preference survey for multiattribute goods, namely choice experiments. The salient feature of the data collection is that the same choice experiment was applied to both partners of established couples. The analysis focuses on models that simultaneously handle scale as well as preference heterogeneity in marginal rates of substitution (MRS), thereby isolating true differences between members of couples in their MRS, by removing interpersonal variation in scale. The models employed are different parameterizations of the mixed logit model, including the willingness to pay (WTP)-space model and the generalized multinomial logit model. We find that in this sample there is some evidence of significant statistical differences in values between women and men, but these are of small magnitude and only apply to a few attributes.

  18. Local Minima Free Parameterized Appearance Models

    PubMed Central

    Nguyen, Minh Hoai; De la Torre, Fernando

    2010-01-01

    Parameterized Appearance Models (PAMs) (e.g. Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternate approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly optimizing that the local minima occur at and only at the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs. Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches. PMID:21804750

  19. Parameterizing Aggregation Rates: Results of cold temperature ice-ash hydrometeor experiments

    NASA Astrophysics Data System (ADS)

    Courtland, L. M.; Dufek, J.; Mendez, J. S.; McAdams, J.

    2014-12-01

    Recent advances in the study of tephra aggregation have indicated that (i) far-field effects of tephra sedimentation are not adequately resolved without accounting for aggregation processes that preferentially remove the fine ash fraction of volcanic ejecta from the atmosphere as constituent pieces of larger particles, and (ii) the environmental conditions (e.g. humidity, temperature) prevalent in volcanic plumes may significantly alter the types of aggregation processes at work in different regions of the volcanic plume. The current research extends these findings to explore the role of ice-ash hydrometeor aggregation in various plume environments. Laboratory experiments utilizing an ice nucleation chamber allow us to parameterize tephra aggregation rates under the cold (0 to −50 °C) conditions prevalent in the upper regions of volcanic plumes. We consider the interaction of ice-coated tephra of variable thickness grown in a controlled environment. The ice-ash hydrometeors interact collisionally and the interaction is recorded by a number of instruments, including high-speed video to determine if aggregation occurs. The electric charge on individual particles is examined before and after collision to examine the role of electrostatics in the aggregation process and to examine the charge exchange process. We are able to examine how sticking efficiency is related to both the relative abundance of ice on a particle as well as to the magnitude of the charge carried by the hydrometeor. We here present preliminary results of these experiments, the first to constrain aggregation efficiency of ice-ash hydrometeors, a parameter that will allow tephra dispersion models to use near-real-time meteorological data to better forecast particle residence time in the atmosphere.

  20. Influence of the vertical mixing parameterization on the modeling results of the Arctic Ocean hydrology

    NASA Astrophysics Data System (ADS)

    Iakshina, D. F.; Golubeva, E. N.

    2017-11-01

    The vertical distribution of the hydrological characteristics in the upper ocean layer is mostly formed under the influence of turbulent and convective mixing, which are not resolved in the system of equations for the large-scale ocean. Therefore it is necessary to include additional parameterizations of these processes into the numerical models. In this paper we carry out a comparative analysis of the different vertical mixing parameterizations in simulations of climatic variability of the Arctic water and sea ice circulation. The 3D regional numerical model for the Arctic and North Atlantic developed in the ICMMG SB RAS (Institute of Computational Mathematics and Mathematical Geophysics of the Siberian Branch of the Russian Academy of Science) and the package GOTM (General Ocean Turbulence Model, http://www.gotm.net/) were used as the numerical instruments. NCEP/NCAR reanalysis data were used for determination of the surface fluxes related to ice and ocean. The following turbulence closure schemes were used for the vertical mixing parameterizations: 1) an integration scheme based on the Richardson criterion (RI); 2) a second-order TKE scheme with Canuto-A coefficients (CANUTO); 3) a first-order TKE scheme with Schumann and Gerz coefficients (TKE-1); 4) the KPP scheme (KPP). In addition we investigated some important characteristics of the Arctic Ocean state including the intensity of Atlantic water inflow, ice cover state and fresh water content in the Beaufort Sea.
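
    Richardson-criterion mixing schemes of the kind listed as scheme 1 are often written in the Pacanowski-Philander form, where eddy viscosity decays with increasing Ri; the constants below are commonly quoted textbook values, assumed here for illustration and not necessarily those used in this model:

```python
def eddy_viscosity(ri, nu0=1.0e-2, nu_b=1.0e-4, alpha=5.0, n=2):
    """Ri-dependent vertical eddy viscosity (m^2/s), Pacanowski-Philander form:
    nu = nu0 / (1 + alpha*Ri)^n + nu_b. Mixing shuts down as stratification
    (large Ri) suppresses shear-driven turbulence; nu_b is a weak background."""
    ri = max(ri, 0.0)  # treat statically unstable columns (Ri < 0) as well mixed
    return nu0 / (1.0 + alpha * ri) ** n + nu_b
```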

  1. GHI calculation sensitivity on microphysics, land- and cumulus parameterization in WRF over the Reunion Island

    NASA Astrophysics Data System (ADS)

    De Meij, A.; Vinuesa, J.-F.; Maupas, V.

    2018-05-01

    The sensitivity of different microphysics and dynamics schemes on calculated global horizontal irradiation (GHI) values in the Weather Research Forecasting (WRF) model is studied. 13 sensitivity simulations were performed for which the microphysics, cumulus parameterization schemes and land surface models were changed. Firstly we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for the Reunion Island for 2014. In general, the model calculates the largest bias during the austral summer. This indicates that the model is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Secondly, the model sensitivity on changing the microphysics, cumulus parameterization and land surface models on calculated GHI values is evaluated. The sensitivity simulations showed that changing the microphysics from the Thompson scheme (or Single-Moment 6-class scheme) to the Morrison double-moment scheme, the relative bias improves from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts the mass and number concentrations of five hydrometeors, which help to improve the calculation of the densities, size and lifetime of the cloud droplets. While the single moment schemes only predicts the mass for less hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on GHI calculations.

  2. Finescale parameterizations of energy dissipation in a region of strong internal tides and sheared flow, the Lucky-Strike segment of the Mid-Atlantic Ridge

    NASA Astrophysics Data System (ADS)

    Pasquet, Simon; Bouruet-Aubertot, Pascale; Reverdin, Gilles; Turnherr, Andreas; Laurent, Lou St.

    2016-06-01

    The relevance of finescale parameterizations of dissipation rate of turbulent kinetic energy is addressed using finescale and microstructure measurements collected in the Lucky Strike segment of the Mid-Atlantic Ridge (MAR). There, high amplitude internal tides and a strongly sheared mean flow sustain a high level of dissipation rate and turbulent mixing. Two sets of parameterizations are considered: the first ones (Gregg, 1989; Kunze et al., 2006) were derived to estimate dissipation rate of turbulent kinetic energy induced by internal wave breaking, while the second one aimed to estimate dissipation induced by shear instability of a strongly sheared mean flow and is a function of the Richardson number (Kunze et al., 1990; Polzin, 1996). The latter parameterization has low skill in reproducing the observed dissipation rate when shear-unstable events are resolved, presumably because there is no scale separation between the duration of unstable events and the inverse growth rate of unstable billows. Instead, GM-based parameterizations were found to be relevant, although slight biases were observed. Part of these biases result from the small value of the upper vertical wavenumber integration limit in the computation of shear variance in the Kunze et al. (2006) parameterization, which does not take into account internal-wave signals at high vertical wavenumbers. We showed that significant improvement is obtained when the upper integration limit is set using a signal-to-noise ratio criterion and that the spatial structure of dissipation rates is reproduced with this parameterization.
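
    The GM-based (Gregg 1989-type) estimate referred to above scales a reference dissipation by the buoyancy frequency and the squared ratio of observed to Garrett-Munk shear variance. The sketch below uses often-quoted reference constants and omits the refinements (e.g. shear/strain ratio corrections) that real implementations such as Kunze et al. (2006) include; treat the exact form as an assumption:

```python
def finescale_epsilon(shear_var, shear_var_gm, n2, eps0=7.8e-10, n0=5.24e-3):
    """Schematic finescale dissipation estimate (W/kg):
    epsilon ~ eps0 * (N^2 / N0^2) * (Sh^2 / Sh_GM^2)^2,
    with eps0 a reference dissipation and N0 a reference buoyancy frequency.
    shear_var / shear_var_gm is the observed-to-GM shear variance ratio."""
    return eps0 * (n2 / n0 ** 2) * (shear_var / shear_var_gm) ** 2
```

The quadratic dependence on the shear-variance ratio is what makes the choice of the upper wavenumber integration limit, discussed above, so consequential.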

  3. Dissipative particle dynamics parameterization and simulations to predict negative volume excess and structure of PEG and water mixtures

    NASA Astrophysics Data System (ADS)

    Kacar, Gokhan

    2017-12-01

    We report the results of dissipative particle dynamics (DPD) parameterization and simulations of a mixture of a hydrophilic polymer, PEG 400, and water, which is known to exhibit a negative volume excess upon mixing. The addition of a Morse potential to the conventional DPD potential mimics the hydrogen bond attraction, where the parameterization takes the internal chemistry of the beads into account. The results indicate that the mixing of PEG and water is maintained by the influence of hydrogen bonds, and the mesoscopic structure is characterized by the trade-off of enthalpic and entropic effects.
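
    The combined interaction described above can be sketched as the standard soft DPD conservative force plus a Morse term supplying the hydrogen-bond attraction; all parameter values below are generic illustrations, not the fitted values from the study:

```python
import math

def dpd_conservative_force(r, a_ij=25.0, r_c=1.0):
    """Standard soft DPD repulsion: F = a_ij * (1 - r/r_c) for r < r_c, else 0."""
    return a_ij * (1.0 - r / r_c) if r < r_c else 0.0

def morse_force(r, d_e=1.0, alpha=2.0, r_0=0.8):
    """Radial force -dU/dr for the Morse potential
    U(r) = D_e * (1 - exp(-alpha*(r - r_0)))^2:
    negative (attractive) beyond r_0, mimicking hydrogen-bond attraction."""
    e = math.exp(-alpha * (r - r_0))
    return -2.0 * d_e * alpha * (1.0 - e) * e

def total_radial_force(r):
    """Conventional DPD force plus the added Morse term."""
    return dpd_conservative_force(r) + morse_force(r)
```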

  4. Use of Cloud Computing to Calibrate a Highly Parameterized Model

    NASA Astrophysics Data System (ADS)

    Hayley, K. H.; Schumacher, J.; MacMillan, G.; Boutin, L.

    2012-12-01

    We present a case study using cloud computing to facilitate the calibration of a complex and highly parameterized model of regional groundwater flow. The calibration dataset consisted of many (~1500) measurements or estimates of static hydraulic head, a high resolution time series of groundwater extraction and disposal rates at 42 locations and pressure monitoring at 147 locations with a total of more than one million raw measurements collected over a ten year pumping history, and base flow estimates at 5 surface water monitoring locations. This modeling project was undertaken to assess the sustainability of groundwater withdrawal and disposal plans for insitu heavy oil extraction in Northeast Alberta, Canada. The geological interpretations used for model construction were based on more than 5,000 wireline logs collected throughout the 30,865 km2 regional study area (RSA), and resulted in a model with 28 slices, and 28 hydro stratigraphic units (average model thickness of 700 m, with aquifers ranging from a depth of 50 to 500 m below ground surface). The finite element FEFLOW model constructed on this geological interpretation had 331,408 nodes and required 265 time steps to simulate the ten year transient calibration period. This numerical model of groundwater flow required 3 hours to run on a server with two 2.8 GHz processors and 16 GB RAM. Calibration was completed using PEST. Horizontal and vertical hydraulic conductivity as well as specific storage for each unit were independent parameters. For the recharge and the horizontal hydraulic conductivity in the three aquifers with the most transient groundwater use, a pilot point parameterization was adopted. A 7*7 grid of pilot points was defined over the RSA that defined a spatially variable horizontal hydraulic conductivity or recharge field. A 7*7 grid of multiplier pilot points that perturbed the more regional field was then superimposed over the 3,600 km2 local study area (LSA). The pilot point

  5. Impacts of Light Use Efficiency and fPAR Parameterization on Gross Primary Production Modeling

    NASA Technical Reports Server (NTRS)

    Cheng, Yen-Ben; Zhang, Qingyuan; Lyapustin, Alexei I.; Wang, Yujie; Middleton, Elizabeth M.

    2014-01-01

This study examines the impact of the parameterization of two variables, light use efficiency (LUE) and the fraction of absorbed photosynthetically active radiation (fPAR or fAPAR), on gross primary production (GPP) modeling. Carbon sequestration by terrestrial plants is a key factor for a comprehensive understanding of the carbon budget at the global scale. In this context, accurate measurements and estimates of GPP will allow us to achieve improved carbon monitoring and to quantitatively assess impacts from climate changes and human activities. Spaceborne remote sensing observations can provide a variety of land surface parameterizations for modeling photosynthetic activities at various spatial and temporal scales. This study utilizes a simple GPP model based on the LUE concept and different land surface parameterizations to evaluate the model and monitor GPP. Two maize-soybean rotation fields in Nebraska, USA and the Bartlett Experimental Forest in New Hampshire, USA were selected for study. Tower-based eddy-covariance carbon exchange and PAR measurements were collected from the FLUXNET Synthesis Dataset. For the model parameterization, we utilized different values of LUE and the fPAR derived from various algorithms. We adapted the approach and parameters from the MODIS MOD17 Biome Properties Look-Up Table (BPLUT) to derive LUE. We also used a site-specific analytic approach with tower-based Net Ecosystem Exchange (NEE) and PAR to estimate maximum potential LUE (LUEmax) to derive LUE. For the fPAR parameter, the MODIS MOD15A2 fPAR product was used. We also utilized fAPARchl, a parameter accounting for the fAPAR linked to the chlorophyll-containing canopy fraction. fAPARchl was obtained by inversion of a radiative transfer model, which used the MODIS-based reflectances in bands 1-7 produced by the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm.
fAPARchl exhibited seasonal dynamics more similar to the flux-tower-based GPP than MOD15A2 fPAR, especially
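The LUE-based model referred to above multiplies a (possibly down-regulated) light use efficiency by fPAR and incoming PAR. A minimal sketch of GPP = LUE × fPAR × PAR (the scalar values are illustrative, not MOD17 BPLUT entries):

```python
def gpp_lue(par, fpar, lue_max, t_scalar=1.0, vpd_scalar=1.0):
    """GPP = LUE * fPAR * PAR, with the maximum light use efficiency
    LUE_max down-regulated by temperature and VPD scalars in [0, 1],
    in the spirit of MOD17-style formulations."""
    return lue_max * t_scalar * vpd_scalar * fpar * par

# PAR in MJ m-2 d-1, LUE_max in g C MJ-1 (illustrative numbers)
gpp = gpp_lue(par=10.0, fpar=0.6, lue_max=1.0, t_scalar=0.9, vpd_scalar=0.8)
```

Swapping the fPAR argument (MOD15A2 fPAR vs. fAPARchl) or the LUE source (BPLUT vs. tower-derived LUEmax) is then a one-parameter change, which is essentially the comparison the study performs.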

  6. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    NASA Astrophysics Data System (ADS)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields, so that the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (at daily and monthly scales) were analyzed for a 5-year period from 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is scarcely captured, such as the Southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding parameterization scheme performance, every set provides very

  7. Parameterized hardware description as object oriented hardware model implementation

    NASA Astrophysics Data System (ADS)

    Drabik, Pawel K.

    2010-09-01

The paper introduces a novel model for the design, visualization and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and software applications, and is developed from parameterized hardware description research. The establishment of a stable link between hardware and software, the purpose of the designed and realized work, is presented, together with a novel programming framework model for the environment, named Graphic-Functional-Components. The aim of the paper is to present object-oriented hardware modeling with the mentioned features. A possible model implementation in FPGA chips and its management by object-oriented software in Java is described.

  8. Rayleigh Scattering.

    ERIC Educational Resources Information Center

    Young, Andrew T.

    1982-01-01

The correct usage of such terminology as "Rayleigh scattering," "Rayleigh lines," "Raman lines," and "Tyndall scattering" is resolved during an historical excursion through the physics of light-scattering by gas molecules. (Author/JN)

  9. Rainfall intensity effects on removal of fecal indicator bacteria from solid dairy manure applied over grass-covered soil.

    PubMed

    Blaustein, Ryan A; Hill, Robert L; Micallef, Shirley A; Shelton, Daniel R; Pachepsky, Yakov A

    2016-01-01

The rainfall-induced release of pathogens and microbial indicators from land-applied manure and their subsequent removal with runoff and infiltration precedes the impairment of surface and groundwater resources. It has been assumed that rainfall intensity and changes in intensity during rainfall do not affect microbial removal when expressed as a function of rainfall depth. The objective of this work was to test this assumption by measuring the removal of Escherichia coli, enterococci, total coliforms, and chloride ion from dairy manure applied in soil boxes containing fescue, under 3, 6, and 9 cm h(-1) of rainfall. Runoff and leachate were collected at increasing time intervals during rainfall, and post-rainfall soil samples were taken at 0, 2, 5, and 10 cm depths. Three kinetic-based models were fitted to the data on manure-constituent removal with runoff. Rainfall intensity appeared to have positive effects on rainwater partitioning to runoff, and removal with this effluent type occurred in two stages. While rainfall intensity generally did not impact the parameters of runoff-removal models, it had significant, inverse effects on the numbers of bacteria remaining in soil after rainfall. As rainfall intensity and soil profile depth increased, the numbers of indicator bacteria tended to decrease. The cumulative removal of E. coli from manure exceeded that of enterococci, especially in the form of removal with infiltration. This work may be used to improve the parameterization of models for bacteria removal with runoff and to advance estimations of depths of bacteria removal with infiltration, both of which are critical to risk assessment of microbial fate and transport in the environment. Published by Elsevier B.V.
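The depth-based removal assumption being tested can be sketched with the simplest of the kinetic forms: first-order removal in cumulative rainfall depth. The numbers below are synthetic, and a plain exponential stands in for the paper's fitted two-stage kinetics:

```python
import numpy as np

# fraction of a manure constituent remaining vs cumulative rainfall
# depth (cm), assuming first-order removal: remaining = exp(-k * depth)
depth = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
k_true = 0.35                        # illustrative removal coefficient, cm-1
remaining = np.exp(-k_true * depth)

# log-linear least squares recovers the removal coefficient from data
k_fit = -np.polyfit(depth, np.log(remaining), 1)[0]
```

Expressing removal against depth rather than time is exactly what makes the intensity-independence hypothesis testable: under the hypothesis, k_fit should not vary across the 3, 6, and 9 cm h(-1) treatments.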

  10. Acoustical scattering by multilayer spherical elastic scatterer containing electrorheological layer.

    PubMed

    Cai, Liang-Wu; Dacol, Dacio K; Orris, Gregory J; Calvo, David C; Nicholas, Michael

    2011-01-01

A computational procedure for analyzing acoustical scattering by multilayer concentric spherical scatterers having an arbitrary mixture of acoustic and elastic materials is proposed. The procedure is then used to analyze the scattering by a spherical scatterer consisting of a solid shell and a solid core encasing an electrorheological (ER) fluid layer, and the tunability in the scattering characteristics afforded by the ER layer is explored numerically. Tunable scatterers with two different ER fluids are analyzed. One, corn starch in peanut oil, shows that a significant increase in scattering cross-section is possible at moderate frequencies. The other, fine poly(methyl methacrylate) (PMMA) beads in dodecane, shows only a slight change in scattering cross-sections overall. However, when the shell is thin, a noticeable local resonance peak can appear near ka = 1, and this resonance can be turned on or off by the external electric field.

  11. Effectiveness of removals of the invasive lionfish: how many dives are needed to deplete a reef?

    PubMed

    Usseglio, Paolo; Selwyn, Jason D; Downey-Wall, Alan M; Hogan, J Derek

    2017-01-01

Introduced Indo-Pacific red lionfish (Pterois volitans/miles) have spread throughout the greater Caribbean and are associated with a number of negative impacts on reef ecosystems. Human interventions, in the form of culling activities, are becoming common to reduce their numbers and mitigate the negative effects associated with the invasion. However, marine managers must often decide how to best allocate limited resources. Previous work has identified the population size thresholds needed to limit the negative impacts of lionfish. Here we develop a framework that allows managers to predict the removal effort required to achieve specific targets (represented as the percent of lionfish remaining on the reef). We found an important trade-off between time spent removing and achieving an increasingly smaller lionfish density. The model used in our suggested framework requires relatively little data to parameterize, allowing its use with already existing data, permitting managers to tailor their culling strategy to maximize efficiency and rate of success.
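If each dive removes a roughly constant fraction of the lionfish present, density declines geometrically with effort and the dives needed to reach a target fraction follow directly. This is a sketch of that reasoning only; the per-dive removal fraction is illustrative, not the authors' fitted model:

```python
import math

def dives_needed(target_fraction, per_dive_removal):
    """Dives required to reduce lionfish density to target_fraction of
    its initial value when each dive removes a fixed fraction q of the
    fish present, i.e. the smallest n with (1 - q)**n <= target."""
    return math.ceil(math.log(target_fraction) / math.log(1.0 - per_dive_removal))

# e.g. reach 10% of initial density when each dive removes 30% of fish
n = dives_needed(0.10, 0.30)
```

The diminishing returns the authors report fall out of the geometry: each halving of the target density costs a fixed number of additional dives, so very low targets become disproportionately expensive.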

  12. Effectiveness of removals of the invasive lionfish: how many dives are needed to deplete a reef?

    PubMed Central

    Downey-Wall, Alan M.; Hogan, J. Derek

    2017-01-01

    Introduced Indo-Pacific red lionfish (Pterois volitans/miles) have spread throughout the greater Caribbean and are associated with a number of negative impacts on reef ecosystems. Human interventions, in the form of culling activities, are becoming common to reduce their numbers and mitigate the negative effects associated with the invasion. However, marine managers must often decide how to best allocate limited resources. Previous work has identified the population size thresholds needed to limit the negative impacts of lionfish. Here we develop a framework that allows managers to predict the removal effort required to achieve specific targets (represented as the percent of lionfish remaining on the reef). We found an important trade-off between time spent removing and achieving an increasingly smaller lionfish density. The model used in our suggested framework requires relatively little data to parameterize, allowing its use with already existing data, permitting managers to tailor their culling strategy to maximize efficiency and rate of success. PMID:28243542

  13. Effects of pre-existing ice crystals on cirrus clouds and comparison between different ice nucleation parameterizations with the Community Atmosphere Model (CAM5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai

In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmosphere Model version 5.3 (CAM5.3), the effects of pre-existing ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of the cirrus cloud rather than in the whole area of the cirrus cloud. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The pre-existing ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL) are implemented in CAM5.3 for the comparison. In-cloud ice crystal number concentration, percentage contribution from heterogeneous ice nucleation to total ice crystal number, and pre-existing ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial times) in global annual mean column ice number concentration from the KL parameterization (3.24 × 10⁶ m⁻²) is less than that from the LP (8.46 × 10⁶ m⁻²) and BN (5.62 × 10⁶ m⁻²) parameterizations. As a result, the experiment using the KL parameterization predicts a much smaller anthropogenic aerosol long-wave indirect forcing (0.24 W m⁻²) than that using the LP (0.46 W m⁻²) and BN (0.39 W m⁻²) parameterizations.

  14. Effects of pre-existing ice crystals on cirrus clouds and comparison between different ice nucleation parameterizations with the Community Atmosphere Model (CAM5)

    DOE PAGES

    Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai

    2015-02-11

In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmosphere Model version 5.3 (CAM5.3), the effects of pre-existing ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of the cirrus cloud rather than in the whole area of the cirrus cloud. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The pre-existing ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL) are implemented in CAM5.3 for the comparison. In-cloud ice crystal number concentration, percentage contribution from heterogeneous ice nucleation to total ice crystal number, and pre-existing ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial times) in global annual mean column ice number concentration from the KL parameterization (3.24 × 10⁶ m⁻²) is less than that from the LP (8.46 × 10⁶ m⁻²) and BN (5.62 × 10⁶ m⁻²) parameterizations. As a result, the experiment using the KL parameterization predicts a much smaller anthropogenic aerosol long-wave indirect forcing (0.24 W m⁻²) than that using the LP (0.46 W m⁻²) and BN (0.39 W m⁻²) parameterizations.

  15. Impact of APEX parameterization and soil data on runoff, sediment, and nutrients transport assessment

    USDA-ARS?s Scientific Manuscript database

    Hydrological models have become essential tools for environmental assessments. This study’s objective was to evaluate a best professional judgment (BPJ) parameterization of the Agricultural Policy and Environmental eXtender (APEX) model with soil-survey data against the calibrated model with either ...

  16. Single-scan patient-specific scatter correction in computed tomography using peripheral detection of scatter and compressed sensing scatter retrieval

    PubMed Central

    Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.

    2013-01-01

Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that utilizes peripheral detection of scatter during the patient scan to simultaneously acquire image and patient-specific scatter information in a single scan, in conjunction with a proposed compressed sensing scatter recovery technique to reconstruct and correct for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV) followed by measurement-based compressed sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. The design enables acquisition of projection data on the unblocked central region and scatter data at the blocked boundary regions. For the initial scatter estimation on the central FOV, a prior consisting of a hybrid scatter model that combines the scatter interpolation method and the scatter convolution model is estimated using the acquired scatter distribution on the boundary region. With the hybrid scatter estimation model, compressed sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained by the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme. Results
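The core idea, recovering a full scatter map from edge measurements by penalizing the L1 norm of its DCT, can be sketched in 1-D with a generic ISTA (soft-thresholding) solver. This is a toy stand-in for the paper's 2-D hybrid-prior implementation; the profile, mask and penalty weight are all illustrative:

```python
import numpy as np
from scipy.fft import dct, idct

def recover_scatter(y, mask, n, lam=0.005, n_iter=1500):
    """ISTA sketch: find a profile x of length n that matches samples y
    at the True entries of mask while keeping the DCT of x sparse
    (scatter is smooth, hence compressible in the DCT basis)."""
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = np.zeros(n)
        grad[mask] = x[mask] - y                 # gradient of 0.5*||x[mask]-y||^2
        c = dct(x - grad, norm='ortho')          # gradient step, then to DCT domain
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)   # soft-threshold (L1 prox)
        x = idct(c, norm='ortho')
    return x

# smooth "scatter" profile observed only near the detector edges
n = 128
t = np.linspace(0.0, 1.0, n)
truth = 1.0 + 0.5 * np.cos(2.0 * np.pi * t)
mask = (t < 0.15) | (t > 0.85)
x_hat = recover_scatter(truth[mask], mask, n)
```

Because the scatter profile is dominated by a few low-frequency DCT modes, the edge samples are enough to pin them down, which is the compressed-sensing argument the method rests on.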

  17. Collaborative Research: Using ARM Observations to Evaluate GCM Cloud Statistics for Development of Stochastic Cloud-Radiation Parameterizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Samuel S. P.

    2013-09-01

The long-range goal of several past and current projects in our DOE-supported research has been the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data, and the implementation and testing of these parameterizations in global models. The main objective of the present project being reported on here has been to develop and apply advanced statistical techniques, including Bayesian posterior estimates, to diagnose and evaluate features of both observed and simulated clouds. The research carried out under this project has been novel in two important ways. The first is that it is a key step in the development of practical stochastic cloud-radiation parameterizations, a new category of parameterizations that offers great promise for overcoming many shortcomings of conventional schemes. The second is that this work has brought powerful new tools to bear on the problem, because it has been an interdisciplinary collaboration between a meteorologist with long experience in ARM research (Somerville) and a mathematician who is an expert on a class of advanced statistical techniques that are well-suited for diagnosing model cloud simulations using ARM observations (Shen). The motivation and long-term goal underlying this work is the utilization of stochastic radiative transfer theory (Lane-Veron and Somerville, 2004; Lane et al., 2002) to develop a new class of parametric representations of cloud-radiation interactions and closely related processes for atmospheric models. The theoretical advantage of the stochastic approach is that it can accurately calculate the radiative heating rates through a broken cloud layer without requiring an exact description of the cloud geometry.

  18. Parameterizing the Dust Around Herbig Ae/Be Stars: Multiwavelength Imaging, Radiative Transfer Modeling, and Near-Infrared Instrumentation

    NASA Astrophysics Data System (ADS)

    Doering, Ryan L.

    2009-01-01

Determining Herbig Ae/Be star dust parameters provides constraints for planet formation theory, and yields information about the matter around intermediate-mass stars as they approach the main sequence. In this dissertation talk, I present the results of a multiwavelength imaging and radiative transfer modeling study of Herbig Ae/Be stars, and a near-infrared instrumentation project, with the aim of parameterizing the dust in these systems. The Hubble Space Telescope was used to search for optical light scattered by dust in a sample of young stars. This survey provided the first scattered-light image of the circumstellar environment around the Herbig Ae/Be star HD 97048. Structure is observed in the dust distribution similar to that seen in other Herbig Ae/Be systems. A ground-based near-infrared imaging study of Herbig Ae/Be candidates was also carried out. Photometry was collected for spectral energy distribution construction, and binary candidates were resolved. Detailed dust modeling of HD 97048 and HD 100546 was carried out with a two-component geometry consisting of a flared disk and an extended envelope. The models achieve a reasonable global fit to the spectral energy distributions, and produce images with the desired geometry. The disk midplane densities are found to go as r^-0.5 and r^-1.8, giving disk dust masses of 3.0 × 10^-4 and 5.9 × 10^-5 Msun for HD 97048 and HD 100546, respectively. A gas-to-dust mass ratio lower limit of 3.2 was calculated for HD 97048. Furthermore, I have participated in the development of the WIYN High Resolution Infrared Camera. The instrument operates in the near-infrared (0.8-2.5 microns), includes 13 filters, and has a pixel size of 0.1 arcsec, resulting in a field of view of 3 arcmin × 3 arcmin. An angular resolution of 0.25 arcsec is anticipated. I provide an overview of the instrument and report performance results.

  19. Impact of a Stochastic Parameterization Scheme on El Nino-Southern Oscillation in the Community Climate System Model

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Berner, J.; Sardeshmukh, P. D.

    2017-12-01

Stochastic parameterizations have been used for more than a decade in atmospheric models. They provide a way to represent model uncertainty by representing the variability of unresolved sub-grid processes, and have been shown to have a beneficial effect on the spread and mean state for medium- and extended-range forecasts. There is increasing evidence that stochastic parameterization of unresolved processes can improve biases in the mean and variability, e.g. by introducing a noise-induced drift (nonlinear rectification), and by changing the residence time and structure of flow regimes. We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies (SPPT) scheme in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. SPPT results in a significant improvement in the representation of the El Nino-Southern Oscillation in CAM4, improving the power spectrum as well as both the inter- and intra-annual variability of tropical Pacific sea surface temperatures. We use a Linear Inverse Modelling framework to gain insight into the mechanisms by which SPPT improves ENSO variability.
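The SPPT idea, multiplying the net physics tendency by one plus a correlated random pattern, can be sketched as follows. The pattern here evolves as an AR(1) process in time but is spatially white for brevity (the operational scheme uses spectrally correlated patterns), and phi, sigma and the clipping bound are illustrative, not CAM4's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def sppt_step(tendency, r_prev, phi=0.9, sigma=0.3, clip=0.9):
    """One SPPT update (sketch): evolve the perturbation pattern r as a
    stationary AR(1) process, clip it to keep (1 + r) positive, and
    multiply the parameterized physics tendency by (1 + r)."""
    r = phi * r_prev + sigma * np.sqrt(1.0 - phi ** 2) * rng.standard_normal(r_prev.shape)
    r = np.clip(r, -clip, clip)
    return tendency * (1.0 + r), r

tend = np.ones((8, 8))          # placeholder physics tendency field
r = np.zeros((8, 8))
for _ in range(10):
    perturbed, r = sppt_step(tend, r)
```

Because the perturbation is multiplicative and zero-mean, it leaves the tendency unchanged on average while injecting state-dependent variability, which is the mechanism invoked above for noise-induced drift.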

  20. At-edge minima in elastic photon scattering amplitudes for dilute aqueous ions

    NASA Astrophysics Data System (ADS)

    Bradley, D. A.; Hugtenburg, R. P.; Yusoff, A. L.

    2006-11-01

Elastic photon scattering and absorption in the vicinity of core atomic orbital energies give rise to resonances in the elastic photon scattering cross-section. Of interest is whether a dilute-ion aqueous system provides an environment suitable for testing independent particle approximation (IPA) predictions. Predictions of the energy of these resonances have been determined for a Dirac-Slater exchange potential with a Latter tail. At BM28 (ESRF), tuneable X-rays were obtained at eV resolution using a Si(111) monochromator. From target systems including Cu²⁺ and Zn²⁺, the X-rays were scattered through a high angle from an aqueous medium contained in a thin Perspex cell provided with 8 μm Kapton windows. An energy resolution of ~500 eV from the HPGe detector was adequate to separate the elastic scattering signal from Kα radiation but not from Compton or Kβ contributions. The Compton contribution from the medium was removed assuming validity of the relativistic impulse approximation. The contributions due to Kβ fluorescence and the resonant X-ray Raman scattering process were handled by assuming the branching ratio of the Kα and Kβ contributions to be constant and to be accurately described by fluorescent yields measured above the edge. At ionic concentrations ranging from 0.01 to 0.1 mol/l, resonance structures accord with predictions of elastic scattering cross-sections calculated within the IPA. Amplitudes calculated using modified form factors and anomalous scattering factors computed from a Dirac-Slater exchange potential were convolved with a Lorentzian of several eV (FWHM).
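The final step above, broadening calculated amplitudes with a Lorentzian of several eV FWHM, can be sketched on a uniform energy grid. The grid spacing and FWHM below are illustrative, and a delta-like feature stands in for a calculated resonance:

```python
import numpy as np

def lorentzian_convolve(energy, y, fwhm):
    """Convolve a spectrum y on a uniform energy grid with a unit-area
    Lorentzian of the given FWHM (half-width gamma = fwhm / 2)."""
    de = energy[1] - energy[0]
    gamma = fwhm / 2.0
    k = (np.arange(energy.size) - energy.size // 2) * de  # centered kernel grid
    kern = (gamma / np.pi) / (k ** 2 + gamma ** 2) * de   # discretized Lorentzian
    return np.convolve(y, kern, mode='same')

# a delta-like feature broadened by a 0.5 eV FWHM Lorentzian
e = np.linspace(-25.0, 25.0, 501)
y = np.zeros_like(e)
y[250] = 1.0
broadened = lorentzian_convolve(e, y, 0.5)
```

The kernel is normalized so the integrated intensity is (approximately) preserved; only the tails truncated by the finite grid are lost.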

  1. Strong parameterization and coordination encirclements of graph of Penrose tiling vertices

    NASA Astrophysics Data System (ADS)

    Shutov, A. V.; Maleev, A. V.

    2017-07-01

The coordination encirclements in the graph of Penrose tiling vertices have been investigated based on the analysis of vertex parameters. A strong parameterization of these vertices is developed in the form of a tiling of the parameter set into regions corresponding to different first coordination encirclements of vertices. An algorithm for constructing tilings of the set of parameters determining different coordination encirclements of order n in the graph of Penrose tiling vertices is proposed.

  2. Parameterizing Size Distribution in Ice Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSlover, Daniel; Mitchell, David L.

    2009-09-25

An outstanding problem that contributes considerable uncertainty to Global Climate Model (GCM) predictions of future climate is the characterization of ice particle sizes in cirrus clouds. Recent parameterizations of ice cloud effective diameter differ by a factor of three, which, for overcast conditions, often translates to changes in outgoing longwave radiation (OLR) of 55 W m-2 or more. Much of this uncertainty in cirrus particle sizes is related to the problem of ice particle shattering during in situ sampling of the ice particle size distribution (PSD). Ice particles often shatter into many smaller ice fragments upon collision with the rim of the probe inlet tube. These small ice artifacts are counted as real ice crystals, resulting in anomalously high concentrations of small ice crystals (D < 100 µm) and underestimates of the mean and effective size of the PSD. Half of the cirrus cloud optical depth calculated from these in situ measurements can be due to this shattering phenomenon. Another challenge is the determination of ice and liquid water amounts in mixed-phase clouds. Mixed-phase clouds in the Arctic contain mostly liquid water, and the presence of ice is important for determining their lifecycle. Colder high clouds between -20 and -36 °C may also be mixed phase, but in this case their condensate is mostly ice with low levels of liquid water. Rather than affecting their lifecycle, the presence of liquid dramatically affects the cloud optical properties, which affects cloud-climate feedback processes in GCMs. This project has made advancements in solving both of these problems. Regarding the first problem, PSD in ice clouds are uncertain due to the inability to reliably measure the concentrations of the smallest crystals (D < 100 µm), known as the "small mode". Rather than using in situ probe measurements aboard aircraft, we employed a treatment
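For context, the effective diameter at issue ties the PSD's ice water content to its total projected area; one common (Mitchell-style) definition is De = (3/2)(IWC/ρice)/A. A minimal sketch with illustrative inputs, labeled as such:

```python
RHO_ICE = 917.0  # kg m-3, bulk ice density

def effective_diameter(iwc, proj_area):
    """Mitchell-style effective diameter of an ice PSD:
    De = (3/2) * (IWC / rho_ice) / A, with IWC in kg m-3 and total
    projected area A in m2 m-3; De is returned in metres."""
    return 1.5 * (iwc / RHO_ICE) / proj_area

# illustrative cirrus values: IWC = 10 mg m-3, A = 1e-4 m2 m-3
de = effective_diameter(1.0e-5, 1.0e-4)   # of order 100 um
```

This makes the shattering bias concrete: spurious small crystals inflate A without adding much IWC, so the inferred De drops, which is exactly the underestimate described above.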

  3. Implementation of a Brown Carbon Parameterization in the Community Earth System Model (CESM): Model Validation, Estimation of Brown Carbon Radiative Effect, and Climate Impact

    NASA Astrophysics Data System (ADS)

    Brown, Hunter Y.

A recent development in the representation of aerosols in climate models is the realization that some components of organic carbon (OC), emitted from biomass and biofuel burning, can contribute significantly to shortwave radiation absorption in the atmosphere. The absorbing fraction of OC is referred to as brown carbon (BrC). This study introduces one of the first implementations of BrC into the Community Earth System Model (CESM), using a parameterization for BrC absorption described in Saleh et al. (2014). Nine-year experiments (2003-2011) are run with prescribed emissions and sea surface temperatures to analyze the effect of BrC in the atmosphere. Model validation is conducted via comparison to single-scattering albedo (SSA) and aerosol optical depth from the Aerosol Robotic Network (AERONET), as well as comparison with a laboratory-derived parameterization for SSA dependent on the BC/(BC+OC) ratio (where BC is black carbon) in biomass burning emissions. These comparisons reveal a model underestimation of SSA in biomass burning regions for both the default and BrC model runs. Global annual average radiative effects are calculated due to aerosol-radiation interactions (REari; 0.13 ± 0.021 W m⁻²), aerosol-cloud interactions (REaci; 0.07 ± 0.056 W m⁻²), and surface albedo change (REsac; -0.06 ± 0.035 W m⁻²). REari is similar to other studies' estimations of the BrC direct radiative effect, while REaci indicates a global reduction in low clouds due to the BrC semi-direct effect. REsac suggests increased surface albedo with BrC implementation due to modified snowfall, but does not take into account the warming effect of BrC on snow. Lastly, comparisons of BrC implementation approaches find that this implementation may do a better job of estimating the BrC radiative effect in Arctic regions than previous studies with CESM.

  4. WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'

    NASA Astrophysics Data System (ADS)

    Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne

    2015-10-01

Numerical weather modelling has gained considerable attention in the field of hydrology, especially in un-gauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD) and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3) and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be a better cumulus parameterization choice for the study region, giving a consistently better performance than the other three CPSs, though there are suggestions of underestimation. WSM3 was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. This study analysed spatial variability in WRF performance through categorical indices, including POD, FBI, FAR and CSI, during the York Flood of 1999 under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.
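The categorical indices named above (POD, FBI, FAR, CSI) are standard functions of a 2 × 2 forecast/observation contingency table of rain occurrence; a direct transcription of those definitions (the counts are illustrative):

```python
def categorical_scores(hits, false_alarms, misses):
    """Contingency-table verification scores for event occurrence:
    POD = H/(H+M)        probability of detection
    FAR = FA/(H+FA)      false alarm ratio
    FBI = (H+FA)/(H+M)   frequency bias (1 = unbiased)
    CSI = H/(H+M+FA)     critical success index."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    fbi = (hits + false_alarms) / (hits + misses)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, fbi, csi

pod, far, fbi, csi = categorical_scores(hits=40, false_alarms=10, misses=10)
```

Evaluating these scores per grid cell against the station observations gives exactly the kind of spatial skill map the study uses to compare scheme combinations.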

  5. 3D models mapping optimization through an integrated parameterization approach: case studies from Ravenna

    NASA Astrophysics Data System (ADS)

    Cipriani, L.; Fantini, F.; Bertacchi, S.

    2014-06-01

    Image-based modelling tools based on SfM algorithms have gained great popularity since several software houses provided applications able to produce 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve a better quality of textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for the achievement of a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without native parameterization have to be "flattened" or "unwrapped" into the (u,v) parameter space, with the main objective of mapping them with a single image. This result can be obtained using two different strategies: the former automatic and faster, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, producing a sort of "atlas" of the original model in the parameter space that is in many instances not adequate and negatively affects the overall quality of representation. By using different solutions in synergy, ranging from semantic-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.

  6. A comprehensive parameterization of heterogeneous ice nucleation of dust surrogate: laboratory study with hematite particles and its application to atmospheric models

    NASA Astrophysics Data System (ADS)

    Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.

    2014-12-01

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, constrained by identical experimental conditions, is important for accurately simulating ice nucleation processes in cirrus clouds. The ice nucleation active surface-site density (ns) of hematite particles, used as a proxy for atmospheric dust particles, was derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water-subsaturated conditions. These conditions were achieved by continuously changing the temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled the ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T lower than -60 °C revealed that higher RHice was necessary to maintain a constant ns, whereas T may have played a significant role in ice nucleation at T higher than -50 °C. We implemented the new hematite-derived ns parameterization, which agrees well with previous AIDA measurements of desert dust, into two conceptual cloud models to investigate their sensitivity to the new parameterization in comparison to existing ice nucleation schemes for simulating cirrus cloud properties. Our results show that the new AIDA-based parameterization leads to an order of magnitude higher ice crystal concentrations and to an inhibition of homogeneous nucleation in lower-temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud
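
For context, ns is commonly obtained from the frozen fraction of an aerosol population under the singular (time-independent) approximation; a minimal sketch, with illustrative numbers that are not from the AIDA measurements:

```python
import math

def active_site_density(n_ice, n_total, particle_surface_area_m2):
    """ns = -ln(1 - f_ice) / A, with f_ice the frozen fraction and A the
    surface area per particle (singular approximation)."""
    f_ice = n_ice / n_total
    return -math.log(1.0 - f_ice) / particle_surface_area_m2

# Illustrative: 5% of particles frozen, 1e-13 m^2 surface area each
ns = active_site_density(n_ice=5, n_total=100, particle_surface_area_m2=1e-13)
# ns is on the order of 5e11 m^-2 for these numbers
```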

  7. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    DOE PAGES

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.

  8. Removing Shape-Preserving Transformations in Square-Root Elastic (SRE) Framework for Shape Analysis of Curves

    PubMed Central

    Joshi, Shantanu H.; Klassen, Eric; Srivastava, Anuj; Jermyn, Ian

    2011-01-01

    This paper illustrates and extends an efficient framework, called the square-root-elastic (SRE) framework, for studying shapes of closed curves, that was first introduced in [2]. This framework combines the strengths of two important ideas - elastic shape metric and path-straightening methods - for finding geodesics in shape spaces of curves. The elastic metric allows for optimal matching of features between curves while path-straightening ensures that the algorithm results in geodesic paths. This paper extends this framework by removing two important shape preserving transformations: rotations and re-parameterizations, by forming quotient spaces and constructing geodesics on these quotient spaces. These ideas are demonstrated using experiments involving 2D and 3D curves. PMID:21738385

  9. Development of a golden beam data set for the commissioning of a proton double-scattering system in a pencil-beam dose calculation algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slopsema, R. L., E-mail: rslopsema@floridaproton.org; Flampouri, S.; Yeung, D.

    2014-09-15

    Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as function of the inverse of the residual energy. An additional linear correction term as
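
The linear dependence on inverse residual energy described in the results can be fit with an ordinary least-squares line. A sketch with hypothetical numbers (the energies and SAD values below are made up for illustration, not taken from the GBD):

```python
import numpy as np

# Hypothetical residual energies (MeV) and virtual SAD values (cm),
# generated to follow vsad = a + b / E exactly, with a = 230, b = 1800
e_res = np.array([72.0, 100.0, 150.0, 200.0, 225.0])
vsad = 230.0 + 1800.0 / e_res

# Fit vsad as a linear function of 1 / E_res
b, a = np.polyfit(1.0 / e_res, vsad, deg=1)
# Recovers the intercept a = 230 and slope b = 1800
```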

  10. Improving Convection and Cloud Parameterization Using ARM Observations and NCAR Community Atmosphere Model CAM5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guang J.

    2016-11-07

    The fundamental scientific objectives of our research are to use ARM observations and the NCAR CAM5 to understand the large-scale control on convection, and to develop improved convection and cloud parameterizations for use in GCMs.

  11. Parameterizing Urban Canopy Layer transport in a Lagrangian Particle Dispersion Model

    NASA Astrophysics Data System (ADS)

    Stöckl, Stefan; Rotach, Mathias W.

    2016-04-01

    The percentage of people living in urban areas is rising worldwide; it crossed 50% in 2007 and is even higher in developed countries. High population density and numerous sources of air pollution in close proximity can lead to health issues. It is therefore important to understand the nature of urban pollutant dispersion. In recent decades this field has experienced considerable progress; however, the influence of large roughness elements is complex and has not yet been completely described. Hence, this work studied urban particle dispersion close to the source and the ground. It used an existing, steady-state, three-dimensional Lagrangian particle dispersion model, which includes Roughness Sublayer parameterizations of turbulence and flow. The model is valid for convective and neutral to stable conditions and uses the kernel method for concentration calculation. Like most Lagrangian models, its lower boundary is the zero-plane displacement, which means that roughly the lower two-thirds of the mean building height are not included in the model. This missing layer roughly coincides with the Urban Canopy Layer. An earlier work "traps" particles hitting the lower model boundary for a recirculation period, which is calculated under the assumption of a vortex in skimming flow, before "releasing" them again. The authors hypothesize that improving the lower boundary condition by including Urban Canopy Layer transport could improve model predictions. This was tested herein by not only trapping the particles, but also advecting them with a mean, parameterized flow in the Urban Canopy Layer. The model now calculates the trapping period based on either recirculation due to vortex motion in skimming flow regimes or vertical velocity if no vortex forms, depending on the incidence angle of the wind on a randomly chosen street canyon. The influence of this modification, as well as the model's sensitivity to parameterization constants, was investigated. To reach this goal, the model was

  12. A New Parameterization of H2SO4/H2O Aerosol Composition: Atmospheric Implications

    NASA Technical Reports Server (NTRS)

    Tabazadeh, Azadeh; Toon, Owen B.; Clegg, Simon L.; Hamill, Patrick

    1997-01-01

    Recent results from a thermodynamic model of aqueous sulfuric acid are used to derive a new parameterization for the variation of sulfuric acid aerosol composition with temperature and relative humidity. This formulation is valid for relative humidities above 1% in the temperature range of 185 to 260 K. An expression for calculating the vapor pressure of supercooled liquid water, consistent with the sulfuric acid model, is also presented. We show that the Steele and Hamill [1981] formulation underestimates the water partial pressure over aqueous H2SO4 solutions by up to 12% at low temperatures. This difference results in a corresponding underestimate of the H2SO4 concentration in the aerosol by about 6% of the weight percent at approximately 190 K. In addition, the relation commonly used for estimating the vapor pressure of H2O over supercooled liquid water differs by up to 10% from our derived expression. The combined error can result in a 20% underestimation of the water activity over a H2SO4 solution droplet in the stratosphere, which has implications for the parameterization of heterogeneous reaction rates in stratospheric sulfuric acid aerosols. The influence of aerosol composition on the rate of homogeneous ice nucleation from a H2SO4 solution droplet is also discussed. This parameterization can also be used for homogeneous gas-phase nucleation calculations of H2SO4 solution droplets under various environmental conditions, such as in aircraft exhaust or in volcanic plumes.

  13. On the Parameterized Complexity of Some Optimization Problems Related to Multiple-Interval Graphs

    NASA Astrophysics Data System (ADS)

    Jiang, Minghui

    We show that for any constant t ≥ 2, k-Independent Set and k-Dominating Set in t-track interval graphs are W[1]-hard. This settles an open question recently raised by Fellows, Hermelin, Rosamond, and Vialette. We also give an FPT algorithm for k-Clique in t-interval graphs, parameterized by both k and t, with running time max{t^O(k), 2^O(k log k)} · poly(n), where n is the number of vertices in the graph. This slightly improves the previous FPT algorithm by Fellows, Hermelin, Rosamond, and Vialette. Finally, we use the W[1]-hardness of k-Independent Set in t-track interval graphs to obtain the first parameterized intractability result for a recent bioinformatics problem called Maximal Strip Recovery (MSR). We show that MSR-d is W[1]-hard for any constant d ≥ 4 when the parameter is either the total length of the strips, or the total number of adjacencies in the strips, or the number of strips in the optimal solution.

  14. Simultaneous neutron scattering and Raman scattering.

    PubMed

    Adams, Mark A; Parker, Stewart F; Fernandez-Alonso, Felix; Cutler, David J; Hodges, Christopher; King, Andrew

    2009-07-01

    The capability to make simultaneous neutron and Raman scattering measurements at temperatures between 1.5 and 450 K has been developed. The samples to be investigated are attached to one end of a custom-made center-stick suitable for insertion into a 100 mm-bore cryostat. The other end of the center-stick is fiber-optically coupled to a Renishaw inVia Raman spectrometer incorporating a 300 mW Toptica 785 nm wavelength-stabilized diode laser. The final path for the laser beam is approximately 1.3 m in vacuo within the center-stick, followed by a focusing lens close to the sample. Raman scattering measurements with a resolution of 1 to 4 cm(-1) can be made over a wide range (100-3200 cm(-1)) at the same time as a variety of different types of neutron scattering measurements. In this work we highlight the use of inelastic neutron scattering and neutron diffraction in conjunction with Raman scattering for studies of the globular protein lysozyme.

  15. Framework to parameterize and validate APEX to support deployment of the nutrient tracking tool

    USDA-ARS?s Scientific Manuscript database

    Guidelines have been developed to parameterize and validate the Agricultural Policy Environmental eXtender (APEX) to support the Nutrient Tracking Tool (NTT). This follow-up paper presents 1) a case study to illustrate how the developed guidelines are applied in a headwater watershed located in cent...

  16. Application of conditional simulation of heterogeneous rock properties to seismic scattering and attenuation analysis in gas hydrate reservoirs

    NASA Astrophysics Data System (ADS)

    Huang, Jun-Wei; Bellefleur, Gilles; Milkereit, Bernd

    2012-02-01

    We present a conditional simulation algorithm to parameterize three-dimensional heterogeneities and construct heterogeneous petrophysical reservoir models. The models match the data at borehole locations, simulate heterogeneities at the same resolution as borehole logging data elsewhere in the model space, and simultaneously honor the correlations among multiple rock properties. The model provides a heterogeneous environment in which a variety of geophysical experiments can be simulated. This includes the estimation of petrophysical properties and the study of geophysical response to the heterogeneities. As an example, we model the elastic properties of a gas hydrate accumulation located at Mallik, Northwest Territories, Canada. The modeled properties include compressional and shear-wave velocities that primarily depend on the saturation of hydrate in the pore space of the subsurface lithologies. We introduce the conditional heterogeneous petrophysical models into a finite difference modeling program to study seismic scattering and attenuation due to multi-scale heterogeneity. Similarities between resonance scattering analysis of synthetic and field Vertical Seismic Profile data reveal heterogeneity with a horizontal-scale of approximately 50 m in the shallow part of the gas hydrate interval. A cross-borehole numerical experiment demonstrates that apparent seismic energy loss can occur in a pure elastic medium without any intrinsic attenuation of hydrate-bearing sediments. This apparent attenuation is largely attributed to attenuative leaky mode propagation of seismic waves through large-scale gas hydrate occurrence as well as scattering from patchy distribution of gas hydrate.

  17. Aerodynamic Shape Optimization Design of Wing-Body Configuration Using a Hybrid FFD-RBF Parameterization Approach

    NASA Astrophysics Data System (ADS)

    Liu, Yuefeng; Duan, Zhuoyi; Chen, Song

    2017-10-01

    Aerodynamic shape optimization design aiming at improving the efficiency of an aircraft has always been a challenging task, especially when the configuration is complex. In this paper, a hybrid FFD-RBF surface parameterization approach is proposed for designing a civil transport wing-body configuration. This approach is simple and efficient, with the FFD technique used to parameterize the wing shape and the RBF interpolation approach used to update the wing-body junction. Furthermore, combined with the Cuckoo Search algorithm and a Kriging surrogate model with an expected-improvement adaptive sampling criterion, an aerodynamic shape optimization design system has been established. Finally, aerodynamic shape optimization of the DLR F4 wing-body configuration has been carried out as a study case, and the result shows that the proposed approach is effective.
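
The expected-improvement criterion mentioned above scores a candidate point from the Kriging model's predictive mean and standard deviation; a minimal sketch for a minimization problem (the function and variable names are mine, not from the paper):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI at a candidate point: mu and sigma are the Kriging predictive
    mean and standard deviation; f_best is the best objective so far."""
    if sigma <= 0.0:
        return 0.0  # no predictive uncertainty -> no expected improvement
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_best - mu) * cdf + sigma * pdf

# At a point predicted equal to the incumbent (mu == f_best), EI reduces
# to sigma / sqrt(2*pi), so higher predictive uncertainty is rewarded.
```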

  18. Modeling and parameterization of horizontally inhomogeneous cloud radiative properties

    NASA Technical Reports Server (NTRS)

    Welch, R. M.

    1995-01-01

    One of the fundamental difficulties in modeling cloud fields is the large variability of cloud optical properties (liquid water content, reflectance, emissivity). The stratocumulus and cirrus clouds under special consideration for FIRE exhibit spatial variability on scales of 1 km or less. While it is impractical to model individual cloud elements, the research direction is to model statistical ensembles of cloud elements with mean cloud properties specified. The major areas of this investigation are: (1) analysis of cloud field properties; (2) intercomparison of cloud radiative model results with satellite observations; (3) radiative parameterization of cloud fields; and (4) development of improved cloud classification algorithms.

  19. An experimental investigation of the angular scattering and backscattering behaviors of the simulated clouds of the outer planets

    NASA Technical Reports Server (NTRS)

    Sassen, K.

    1984-01-01

    A cryogenic, 50 liter volume Planetary Cloud Simulation Chamber has been constructed to permit the laboratory study of the cloud compositions which are likely to be found in the atmospheres of the outer planets. On the basis of available data, clouds composed of water ice, carbon dioxide, and liquid and solid ammonia and methane, both pure and in various mixtures, have been generated. Cloud microphysical observations have been permitted through the use of a cloud particle slide injector and photomicrography. Viewports in the lower chamber have enabled the collection of cloud backscattering data using 633 and 838 nm laser light, including linear depolarization ratios and complete Stokes parameterization. The considerable technological difficulties associated with the collection of angular scattering patterns within the chamber, however, could not be completely overcome.

  20. Blind source separation based on time-frequency morphological characteristics for rigid acoustic scattering by underwater objects

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Li, Xiukun

    2016-06-01

    Separation of the components of rigid acoustic scattering by underwater objects is essential for obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with a time-frequency distribution is deduced. Using a morphological filter, the different characteristics observed in a Wigner-Ville Distribution (WVD) for single auto-terms and cross-terms can be exploited to remove cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, in order to analyze the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.
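
Morphological filtering of a thresholded time-frequency image can be sketched as a binary opening: erosion removes small, oscillatory cross-term speckle, and dilation restores the extended auto-term ridges. A self-contained sketch (the toy mask below is illustrative, not data from the paper):

```python
import numpy as np

def binary_opening(mask, size=3):
    """Opening (erosion then dilation) with a size x size square element."""
    pad = size // 2

    def sweep(m, reduce_fn):
        padded = np.pad(m, pad, constant_values=False)
        shifts = [padded[i:i + m.shape[0], j:j + m.shape[1]]
                  for i in range(size) for j in range(size)]
        return reduce_fn(np.stack(shifts), axis=0)

    eroded = sweep(mask, np.min)   # shrink: kills thin/isolated blobs
    return sweep(eroded, np.max)   # grow back the surviving regions

# Toy thresholded WVD mask: one broad auto-term ridge plus a speckle
tf_mask = np.zeros((32, 32), dtype=bool)
tf_mask[10:20, 5:25] = True   # extended auto-term region (kept)
tf_mask[3, 3] = True          # isolated cross-term speckle (removed)
opened = binary_opening(tf_mask)
```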

  1. Regional Climate Model sensitivity to different parameterization schemes with WRF over Spain

    NASA Astrophysics Data System (ADS)

    García-Valdecasas Ojeda, Matilde; Raquel Gámiz-Fortis, Sonia; Hidalgo-Muñoz, Jose Manuel; Argüeso, Daniel; Castro-Díez, Yolanda; Jesús Esteban-Parra, María

    2015-04-01

    The ability of the Weather Research and Forecasting (WRF) model to simulate the regional climate depends on the selection of an adequate combination of parameterization schemes. This study assesses WRF sensitivity to different parameterizations using six different runs that combine three cumulus, two microphysics and three surface/planetary boundary layer schemes in a topographically complex region such as Spain, for the period 1995-1996. Each of the simulations spanned a period of two years and was carried out at a spatial resolution of 0.088° over a domain encompassing the Iberian Peninsula, nested in the coarser EURO-CORDEX domain (0.44° resolution). The experiments were driven by Interim ECMWF Re-Analysis (ERA-Interim) data. In addition, two different spectral nudging configurations were also analysed. The simulated precipitation and maximum and minimum temperatures from WRF were compared with the Spain02 version 4 observational gridded datasets. The comparison was performed at different time scales with the purpose of evaluating the model's capability to capture mean values and high-order statistics. ERA-Interim data were also compared with observations to determine the improvement obtained by dynamical downscaling with respect to the driving data. For this purpose, several parameters were analysed by directly comparing grid points. On the other hand, the observational gridded data were grouped using a multistep regionalization to facilitate the comparison in terms of the monthly annual cycle and the percentiles of daily values. The results confirm that no single configuration performs best, but some combinations that produce better results can be chosen. Concerning temperatures, WRF provides an improvement over ERA-Interim. Overall, model outputs reduce the biases and the RMSE for monthly-mean maximum and minimum temperatures and are more highly correlated with observations than ERA-Interim.
The analysis shows that the Yonsei University planetary boundary layer
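
The evaluation metrics used above (bias, RMSE, and correlation against a gridded observational product) can be sketched per grid point as follows; the temperature values are illustrative, not from Spain02 or the WRF runs:

```python
import numpy as np

# Illustrative monthly-mean temperature series (degrees C) at one grid point
obs = np.array([5.0, 7.5, 12.0, 16.0, 20.5])
wrf = np.array([4.2, 7.0, 13.1, 15.5, 21.3])

bias = np.mean(wrf - obs)                  # mean error
rmse = np.sqrt(np.mean((wrf - obs) ** 2))  # root-mean-square error
corr = np.corrcoef(wrf, obs)[0, 1]         # Pearson correlation
```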

  2. Assessment of two physical parameterization schemes for desert dust emissions in an atmospheric chemistry general circulation model

    NASA Astrophysics Data System (ADS)

    Astitha, M.; Abdel Kader, M.; Pozzer, A.; Lelieveld, J.

    2012-04-01

    Atmospheric particulate matter, and more specifically desert dust, has been the topic of numerous research studies in the past due to its wide range of impacts on the environment and climate and the uncertainty in characterizing and quantifying these impacts on a global scale. In this work we present two physical parameterizations of desert dust production that have been incorporated in the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). The scope of this work is to assess the impact of the two physical parameterizations on the global distribution of desert dust and to highlight the advantages and disadvantages of using either technique. The dust concentration and deposition have been evaluated using the AEROCOM dust dataset for the year 2000, and data from the MODIS and MISR satellites as well as sun-photometer data from the AERONET network were used to compare the modelled aerosol optical depth with observations. The implementation of the two parameterizations and the simulations using relatively high spatial resolution (T106, ~1.1°) have highlighted the large spatial heterogeneity of the dust emission sources as well as the importance of the input parameters (soil size and texture, vegetation, surface wind speed). Also, sensitivity simulations with and without the nudging option, using reanalysis data from ECMWF, have shown remarkable differences for some areas. Both parameterizations have revealed the difficulty of simulating all arid regions with the same assumptions and mechanisms. Depending on the arid region, each emission scheme performs more or less satisfactorily, which leads to the necessity of treating each desert differently. Even though this is a difficult task to accomplish in a global model, some recommendations and ideas for future improvements are given.

  3. Improving and Understanding Climate Models: Scale-Aware Parameterization of Cloud Water Inhomogeneity and Sensitivity of MJO Simulation to Physical Parameters in a Convection Scheme

    NASA Astrophysics Data System (ADS)

    Xie, Xin

    Microphysics and convection parameterizations are two key components of a climate model for simulating realistic climatology and variability of cloud distribution and the cycles of energy and water. When a model has varying grid size, or simulations have to be run at different resolutions, scale-aware parameterization is desirable so that model parameters do not have to be tuned to a particular grid size. The subgrid variability of cloud hydrometeors is known to impact microphysics processes in climate models and is found to depend strongly on spatial scale. A scale-aware liquid cloud subgrid variability parameterization is derived and implemented in the Community Earth System Model (CESM) in this study using long-term radar-based ground measurements from the Atmospheric Radiation Measurement (ARM) program. When used in the default CESM1 with the finite-volume dynamical core, where a constant liquid inhomogeneity parameter was assumed, the newly developed parameterization reduces the cloud inhomogeneity in high latitudes and increases it in low latitudes. This is due both to the smaller grid size in high latitudes and larger grid size in low latitudes in the longitude-latitude grid setting of CESM, and to the variation of the stability of the atmosphere. Single-column model and general circulation model (GCM) sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it in low latitudes. The current CESM1 simulation suffers from both the Pacific double-ITCZ precipitation bias and a weak Madden-Julian Oscillation (MJO). Previous studies show that convective parameterization with multiple plumes may have the capability to alleviate such biases in a more uniform and physical way. A multiple-plume mass-flux convective parameterization is used in the Community Atmosphere Model (CAM) to investigate the sensitivity of MJO simulations. We show that the MJO simulation is sensitive to the entrainment rate

  4. Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Chou, Ming-Dah

    1994-01-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the schemes compute very accurately the clear-sky fluxes and cooling rates from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed, the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.

  5. Stellar Atmospheric Parameterization Based on Deep Learning

    NASA Astrophysics Data System (ADS)

    Pan, R. Y.; Li, X. R.

    2016-07-01

    Deep learning is a typical learning method widely studied in machine learning, pattern recognition, and artificial intelligence. This work investigates the stellar atmospheric parameterization problem by constructing a deep neural network with five layers. The proposed scheme is evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS) and theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 K for the effective temperature T_eff, 0.0058 for lg(T_eff/K), 0.1706 dex for the surface gravity lg(g/(cm·s^-2)), and 0.1294 dex for the metallicity [Fe/H]; on the theoretical spectra, the MAEs are 15.34 K for T_eff, 0.0011 for lg(T_eff/K), 0.0214 dex for lg(g/(cm·s^-2)), and 0.0121 dex for [Fe/H].

  6. Parameterized Linear Longitudinal Airship Model

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics.
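    A generic linearized longitudinal model takes the state-space form x' = Ax + Bu. The sketch below shows that structure only; the state ordering and every stability-derivative value (Xu, Zw, Mq, ...) are hypothetical placeholders, which in the approach described would instead be estimated from geometric and aerodynamic data.

```python
import numpy as np

# Placeholder stability derivatives (NOT from the record; illustrative only).
Xu, Xw = -0.02, 0.01
Zu, Zw = -0.05, -0.30
Mu, Mw, Mq = 0.001, -0.01, -0.15
g = 9.81

# Assumed state x = [u, w, q, theta]: axial velocity, normal velocity,
# pitch rate, pitch angle (a common longitudinal state choice).
A = np.array([
    [Xu,  Xw,  0.0, -g ],
    [Zu,  Zw,  1.0,  0.0],
    [Mu,  Mw,  Mq,   0.0],
    [0.0, 0.0, 1.0,  0.0],
])

# A linearized operating point is locally stable iff every eigenvalue of A
# has negative real part.
eigvals = np.linalg.eigvals(A)
longitudinally_stable = bool(np.all(eigvals.real < 0))
```

    Specializing the model to a given airship then amounts to filling in A (and a control matrix B) from that airship's data rather than from flight-test identification.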

  7. The multifacet graphically contracted function method. II. A general procedure for the parameterization of orthogonal matrices and its application to arc factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, Ron; Brozell, Scott R.; Gidofalvi, Gergely

    2014-08-14

    Practical algorithms are presented for the parameterization of orthogonal matrices Q ∈ R {sup m×n} in terms of the minimal number of essential parameters (φ). Both square n = m and rectangular n < m situations are examined. Two separate kinds of parameterizations are considered, one in which the individual columns of Q are distinct, and the other in which only Span(Q) is significant. The latter is relevant to chemical applications such as the representation of the arc factors in the multifacet graphically contracted function method and the representation of orbital coefficients in SCF and DFT methods. The parameterizations are represented formally using products of elementary Householder reflector matrices. Standard mathematical libraries, such as LAPACK, may be used to perform the basic low-level factorization, reduction, and other algebraic operations. Some care must be taken with the choice of phase factors in order to ensure stability and continuity. The transformation of gradient arrays between the Q and (φ) parameterizations is also considered. Operation counts for all factorizations and transformations are determined. Numerical results are presented which demonstrate the robustness, stability, and accuracy of these algorithms.
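    The core idea, an m × n matrix with orthonormal columns built as a product of elementary Householder reflectors H(v) = I - 2vvᵀ/(vᵀv), can be illustrated directly. The packing of the essential parameters φ and the phase-factor conventions from the paper are not reproduced; the reflector vectors here are just random stand-ins.

```python
import numpy as np

def householder(v):
    """Elementary reflector H = I - 2 v v^T / (v^T v); H is orthogonal."""
    v = np.asarray(v, dtype=float)
    return np.eye(len(v)) - 2.0 * np.outer(v, v) / (v @ v)

def q_from_reflectors(vs, n):
    """Apply each reflector to the identity, keep the first n columns."""
    m = len(vs[0])
    q = np.eye(m)
    for v in vs:
        q = householder(v) @ q
    return q[:, :n]   # m x n, columns orthonormal by construction

rng = np.random.default_rng(1)
vs = [rng.normal(size=5) for _ in range(3)]   # 3 illustrative reflector vectors
Q = q_from_reflectors(vs, 3)                  # 5 x 3 with Q.T @ Q == I_3
```

    Because each reflector is orthogonal, the product is orthogonal, so the retained columns are exactly orthonormal; this is the property that makes the representation attractive for arc factors and orbital coefficients.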

  8. Light scatter on the surface of AcrySof intraocular lenses: part I. Analysis of lenses retrieved from pseudophakic postmortem human eyes.

    PubMed

    Yaguchi, Shigeo; Nishihara, Hitoshi; Kambhiranond, Waraporn; Stanley, Daniel; Apple, David J

    2008-01-01

    To investigate the cause of light scatter measured on the surface of AcrySof intraocular lenses (Alcon Laboratories, Inc., Fort Worth, TX) retrieved from pseudophakic postmortem human eyes. Ten intraocular lenses (Alcon AcrySof Model MA60BM) were retrieved postmortem and analyzed for light scatter before and after removal of surface-bound biofilms. Six of the 10 lenses exhibited light scatter that was clearly above baseline levels. In these 6 lenses, both peak and average pixel density were reduced by approximately 80% after surface cleaning. The current study demonstrates that a coating deposited in vivo on the lens surface is responsible for the light scatter observed when incident light is applied.

  9. Spectral structure of laser light scattering revisited: bandwidths of nonresonant scattering lidars.

    PubMed

    She, C Y

    2001-09-20

    It is well known that scattering lidars, i.e., Mie, aerosol-wind, Rayleigh, high-spectral-resolution, molecular-wind, rotational Raman, and vibrational Raman lidars, are workhorses for probing atmospheric properties, including the backscatter ratio, aerosol extinction coefficient, temperature, pressure, density, and winds. The spectral structure of molecular scattering (strength and bandwidth) and its constituent spectra associated with Rayleigh and vibrational Raman scattering are reviewed. By restoring the correct nomenclature, distinguishing Cabannes scattering from Rayleigh scattering, and by sharpening the definition of each scattering component in the Rayleigh scattering spectrum, the review allows a systematic, logical, and useful comparison in strength and bandwidth between the scattering components, and in receiver bandwidths (for both nighttime and daytime operation) between the various scattering lidars for atmospheric sensing.
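    One bandwidth in such comparisons, the thermal Doppler width of molecular backscatter (the Cabannes line, in the low-pressure limit where Brillouin structure is neglected), can be estimated from first principles. The temperature and wavelength below are illustrative choices, not values from the review.

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J K^-1
M_AIR = 4.81e-26    # mean molecular mass of air, kg (assumed value)

def backscatter_doppler_fwhm_hz(wavelength_m, temp_k, mass_kg=M_AIR):
    """Gaussian FWHM of Doppler-broadened backscatter.

    A molecule with line-of-sight speed v shifts backscattered light by
    2 v / lambda, so FWHM = (2 / lambda) * sqrt(8 ln2 * kT / m).
    """
    return (2.0 / wavelength_m) * math.sqrt(
        8.0 * math.log(2.0) * KB * temp_k / mass_kg)

fwhm = backscatter_doppler_fwhm_hz(532e-9, 250.0)   # of order a few GHz
```

    The GHz-scale result is what forces molecular-wind and high-spectral-resolution receivers to resolve far broader signals than aerosol (Mie) channels.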

  10. A Practical Cone-beam CT Scatter Correction Method with Optimized Monte Carlo Simulations for Image-Guided Radiation Therapy

    PubMed Central

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun

    2015-01-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved a computation time of less than 30 sec including the
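    The projection-domain step described can be shown schematically: the MC-estimated scatter (simulated on the registered planning CT) is smoothed, since scatter is low-frequency, and subtracted from the raw projections before reconstruction. A synthetic constant scatter field stands in for the MC output here; the MC simulation, registration, and reconstruction are all outside this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_projections(raw, mc_scatter, sigma=8.0):
    """Subtract a smoothed scatter estimate from raw CBCT projections.

    raw, mc_scatter: (n_views, n_u, n_v) projection stacks.
    sigma smooths only along detector axes (scatter is low-frequency).
    """
    smooth = gaussian_filter(mc_scatter, sigma=(0.0, sigma, sigma))
    return np.clip(raw - smooth, 0.0, None)   # intensities stay nonnegative

# Toy data: uniform primary signal plus uniform scatter contamination.
primary = np.full((4, 32, 32), 10.0)
scatter = np.full((4, 32, 32), 3.0)
corrected = correct_projections(primary + scatter, scatter)
```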

  11. CORRECTING FOR INTERSTELLAR SCATTERING DELAY IN HIGH-PRECISION PULSAR TIMING: SIMULATION RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palliyaguru, Nipuni; McLaughlin, Maura; Stinebring, Daniel

    2015-12-20

    Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
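    A toy version of the recovery problem clarifies what an IRF is here: the observed profile is the intrinsic pulse convolved with an ISM impulse response (direct path plus delayed scattered echoes), so with a known pulse shape the IRF can be recovered by plain Fourier deconvolution. This is not cyclic spectroscopy, which additionally supplies the noise handling and phase information real pulsar data require.

```python
import numpy as np

def recover_irf(observed, pulse, eps=1e-12):
    """Deconvolve the known pulse out of the observation (noiseless toy)."""
    h = np.fft.fft(observed) / (np.fft.fft(pulse) + eps)  # eps guards zeros
    return np.fft.ifft(h).real

n = 256
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - 40) / 2.0) ** 2)   # intrinsic pulse shape
irf = np.zeros(n)
irf[0], irf[10] = 1.0, 0.4                      # direct path + scattered echo
observed = np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(irf)).real
est = recover_irf(observed, pulse)              # recovers the two-path IRF
```

    The scattering delay a timing pipeline would remove corresponds to the centroid shift the echo at lag 10 induces in the observed profile.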

  12. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Taylor; Guo, Yi; Veers, Paul

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.
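    How a parameterized load spectrum feeds sizing can be illustrated with generic Palmgren-Miner damage accumulation over binned load amplitudes. The S-N constants and spectrum bins below are hypothetical; DriveSE's actual stress and deflection criteria are not reproduced.

```python
import numpy as np

def miner_damage(amplitudes, cycle_counts, s_ref, n_ref, m):
    """Palmgren-Miner damage with power-law S-N curve N(S) = n_ref*(s_ref/S)**m.

    Returns D = sum_i n_i / N(S_i); D >= 1 predicts fatigue failure within
    the design life, i.e. fatigue (not extreme load) governs the sizing.
    """
    n_allow = n_ref * (s_ref / np.asarray(amplitudes, dtype=float)) ** m
    return float(np.sum(np.asarray(cycle_counts, dtype=float) / n_allow))

# Hypothetical two-bin spectrum: many low-amplitude cycles plus a few
# high-amplitude ones.
d = miner_damage([50.0, 100.0], [1.0e6, 1.0e4], s_ref=100.0, n_ref=1.0e5, m=3.0)
```

    Making the bin populations functions of wind conditions, rotor diameter, and design life is exactly what turns this into a parameterized spectrum.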

  13. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Y; Wu, P; Mao, T

    2016-06-15

    Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), to suppress shading artifacts and improve image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template no longer changes. Results: The proposed scheme is evaluated on Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts are effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis is performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies and the results show that the image quality is remarkably improved. The proposed method is efficient and practical to address the poor image quality issue of
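    The iteration can be shown on a 1D toy problem: segment a template, low-pass filter the (raw - template) difference to estimate scatter, subtract, and repeat until the template stops changing. Reconstruction and forward projection are identity maps here, standing in for FDK and a ray-tracing projector, and the material levels are assumed known.

```python
import numpy as np

LEVELS = np.array([0.0, 1.0, 2.0])   # assumed known material attenuation levels

def segment(image):
    """Snap each pixel to the nearest known level (the template image)."""
    return LEVELS[np.argmin(np.abs(image[:, None] - LEVELS[None, :]), axis=1)]

def lowpass(x, k=31):
    """Moving-average low-pass filter with edge padding."""
    pad = np.pad(x, k // 2, mode='edge')
    return np.convolve(pad, np.ones(k) / k, mode='valid')

def correct(raw, n_iter=10):
    corrected = raw.copy()
    template = segment(corrected)
    for _ in range(n_iter):
        scatter = lowpass(raw - template)   # scatter is low-frequency
        corrected = raw - scatter
        new_template = segment(corrected)
        if np.array_equal(new_template, template):
            break                           # template no longer changes
        template = new_template
    return corrected

n = 400
primary = np.where((np.arange(n) > 100) & (np.arange(n) < 300), 1.0, 0.0)
primary[180:220] = 2.0                              # two-material "object"
scatter_true = 0.3 + 0.1 * np.sin(np.arange(n) / 60.0)   # smooth contamination
corrected = correct(primary + scatter_true)
```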

  14. The Sensitivity of WRF Daily Summertime Simulations over West Africa to Alternative Parameterizations. Part 1: African Wave Circulation

    NASA Technical Reports Server (NTRS)

    Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew

    2014-01-01

    The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional-atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of simulations in September. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. The 64 combinations combine WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in the second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year, while a parallel-benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations, but less realistic spatiotemporal variability. The largest favorable impact on WRF-vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, resulting in higher correlations against reanalysis than simulations using the Kain-Fritsch convection scheme. Other parameterizations have less-obvious impact, although WRF configurations incorporating one surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.

  15. Analyses of the stratospheric dynamics simulated by a GCM with a stochastic nonorographic gravity wave parameterization

    NASA Astrophysics Data System (ADS)

    Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo

    2016-04-01

    The effects of the propagation and breaking of atmospheric gravity waves have long been considered crucial for their impact on the circulation, especially in the stratosphere and mesosphere, between heights of 10 and 110 km. These waves, which in the Earth's atmosphere originate from surface orography (OGWs) or from transient (nonorographic) phenomena such as fronts and convective processes (NOGWs), have horizontal wavelengths between 10 and 1000 km, vertical wavelengths of several km, and frequencies spanning from minutes to hours. Orographic and nonorographic GWs must be accounted for in climate models to obtain a realistic simulation of the stratosphere in both hemispheres, since they can have a substantial impact on circulation and temperature, and hence an important role in ozone chemistry for chemistry-climate models. Several types of parameterization are currently employed in models, differing in formulation and in the values assigned to parameters, but the common aim is to quantify the effect of wave breaking on large-scale wind and temperature patterns. In the last decade, both global observations from satellite-borne instruments and the outputs of very high resolution climate models have provided insight into the variability and properties of the gravity wave field, and these results can be used to constrain some of the empirical parameters present in most parameterization schemes. A feature of the NOGW forcing that clearly emerges is its intermittency, linked with the nature of the sources: this property is absent in the majority of models, in which NOGW parameterizations are uncoupled from other atmospheric phenomena, leading to results that display lower variability than observations. In this work, we analyze the climate simulated in AMIP runs of the MAECHAM5 model, which uses the Hines NOGW parameterization and a vertical resolution fine enough to capture the effects of wave-mean flow interaction. We compare the results obtained with two

  16. A Discrete Scatterer Technique for Evaluating Electromagnetic Scattering from Trees

    DTIC Science & Technology

    2016-09-01

    ARL-TR-7799 ● SEP 2016 ● US Army Research Laboratory. Technical Report; dates covered: 2015–2016.

  17. Synthesizing 3D Surfaces from Parameterized Strip Charts

    NASA Technical Reports Server (NTRS)

    Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri

    2004-01-01

    We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods to allow for rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows for trends and events to be identified. Strip charts can be overlaid when multiple parameters need to be visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area to illustrate the differences in the parameters and the overlaid strip charts, and it can rapidly be scanned by humans to gain insight. The selection of the third dimension must be a parallel or parameterized homogeneous resource in the target domain, defined using a finite, ordered, enumerated type, and not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark and drilling down to source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.
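    The construction described reduces to a simple data step: N strip charts of the same metric, indexed by a homogeneous enumerated parameter (e.g. computer #1..#N), are stacked into a height field z[parameter, time] that any 3D plotting API can render as a surface. The series below are synthetic placeholders.

```python
import numpy as np

def strip_charts_to_surface(charts):
    """Stack an ordered list of equal-length time series into a (P, T) array.

    Rows index the enumerated parameter (the surface's third dimension),
    columns index time; the values become the surface height z.
    """
    return np.vstack([np.asarray(c, dtype=float) for c in charts])

t = np.linspace(0.0, 10.0, 200)
charts = [np.sin(t + 0.3 * i) for i in range(12)]   # 12 hypothetical parameters
z = strip_charts_to_surface(charts)                  # shape (12, 200)
```

    The requirement that the third axis be a finite, ordered, enumerated type is what makes the row ordering of `z` meaningful.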

  18. Parameterization of norfolk sandy loam properties for stochastic modeling of light in-wheel motor UGV

    USDA-ARS?s Scientific Manuscript database

    To accurately develop a mathematical model for an In-Wheel Motor Unmanned Ground Vehicle (IWM UGV) on soft terrain, parameterization of terrain properties is essential to stochastically model tire-terrain interaction for each wheel independently. Operating in off-road conditions requires paying clos...

  19. Land cover characterization and land surface parameterization research

    USGS Publications Warehouse

    Steyaert, Louis T.; Loveland, Thomas R.; Parton, William J.

    1997-01-01

    The understanding of land surface processes and their parameterization in atmospheric, hydrologic, and ecosystem models has been a dominant research theme over the past decade. For example, many studies have demonstrated the key role of land cover characteristics as controlling factors in determining land surface processes, such as the exchange of water, energy, carbon, and trace gases between the land surface and the lower atmosphere. The requirements for multiresolution land cover characteristics data to support coupled-systems modeling have also been well documented, including the need for data on land cover type, land use, and many seasonally variable land cover characteristics, such as albedo, leaf area index, canopy conductance, surface roughness, and net primary productivity. Recently, the developers of land data have worked more closely with the land surface process modelers in these efforts.

  20. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    PubMed

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter-applied images, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.

  1. Implementing a warm cloud microphysics parameterization for convective clouds in NCAR CESM

    NASA Astrophysics Data System (ADS)

    Shiu, C.; Chen, Y.; Chen, W.; Li, J. F.; Tsai, I.; Chen, J.; Hsu, H.

    2013-12-01

    Most cumulus convection schemes use simple empirical approaches to convert cloud liquid mass to rain water or cloud ice to snow, e.g., using a constant autoconversion rate and dividing cloud liquid mass into cloud water and ice as a function of air temperature (e.g., the Zhang and McFarlane scheme in the NCAR CAM model). Only a few studies have tried to use cloud microphysical schemes to better simulate such precipitation processes in the convective schemes of global models (e.g., Lohmann [2008] and Song, Zhang, and Li [2012]). A two-moment warm cloud parameterization (i.e., Chen and Liu [2004]) is implemented into the deep convection scheme of CAM5.2 of the CESM model to treat the conversion of cloud liquid water to rain water. Short-term AMIP-type global simulations are conducted to evaluate the possible impacts of this modification of the physical parameterization. Simulated results are further compared to observational results from the AMWG diagnostic package and CloudSAT data sets. Several sensitivity tests regarding changes in cloud top droplet concentration (as a rough test of aerosol indirect effects) and changes in the detrained cloud size of convective cloud ice are also carried out to understand their possible impacts on the cloud and precipitation simulations.
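    The simple empirical approach being replaced can be written in one line, the Kessler-type autoconversion used in many convective schemes: rain forms at a fixed rate once cloud water exceeds a threshold. The generic form is standard; the coefficients below are illustrative, not those of the Zhang-McFarlane or Chen-Liu [2004] schemes.

```python
def kessler_autoconversion(qc, k=1.0e-3, qc0=5.0e-4):
    """Rain-water source d(qr)/dt in kg/kg/s from cloud water qc (kg/kg).

    k: constant autoconversion rate (s^-1); qc0: cloud-water threshold.
    Illustrative placeholder values, not tuned scheme coefficients.
    """
    return k * max(qc - qc0, 0.0)

rate = kessler_autoconversion(1.0e-3)   # above threshold: nonzero rain source
```

    A two-moment scheme instead predicts both mass and number concentration, which is what lets the droplet-concentration sensitivity tests in this record affect the rain production.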

  2. A comprehensive parameterization of heterogeneous ice nucleation of dust surrogate: laboratory study with hematite particles and its application to atmospheric models

    NASA Astrophysics Data System (ADS)

    Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.

    2014-06-01

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is critical in order to accurately simulate the ice nucleation processes in cirrus clouds. The surface-scaled ice nucleation efficiencies of hematite particles, inferred by ns, were derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water subsaturated conditions that were realized by continuously changing temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T colder than -60 °C revealed that higher RHice was necessary to maintain constant ns, whereas T may have played a significant role in ice nucleation at T warmer than -50 °C. We implemented the new ns parameterizations into two cloud models to investigate their sensitivity and compare with existing ice nucleation schemes for simulating cirrus cloud properties. Our results show that the new AIDA-based parameterizations lead to an order of magnitude higher ice crystal concentrations and to inhibition of homogeneous nucleation in colder temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties such as cloud longevity and initiation when compared to previous parameterizations.
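    How an ns(T, RHice) parameterization is used in a model follows the standard ice nucleation active site bookkeeping: for aerosol of surface area A, the activated fraction is 1 - exp(-ns·A). The arithmetic below shows only that step; the ns value and particle properties are made-up placeholders, not the AIDA-derived fit.

```python
import math

def activated_fraction(ns_per_m2, area_m2):
    """Ice-active fraction 1 - exp(-ns * A) in the singular description."""
    return 1.0 - math.exp(-ns_per_m2 * area_m2)

def ice_number(n_aerosol, ns_per_m2, area_m2):
    """Ice crystal number in the same units as the aerosol number."""
    return n_aerosol * activated_fraction(ns_per_m2, area_m2)

# Hypothetical example: 100 particles per cm^3 with per-particle surface
# area 3e-12 m^2 and an assumed ns of 1e12 m^-2 at the given T, RHice.
n_ice = ice_number(100.0, 1.0e12, 3.0e-12)
```

    An order-of-magnitude increase in ns at fixed A moves this expression from the linear regime (few activated particles) toward saturation, which is how the new fits can raise simulated ice crystal concentrations enough to suppress homogeneous nucleation.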

  3. A Comprehensive Parameterization of Heterogeneous Ice Nucleation of Dust Surrogate: Laboratory Study with Hematite Particles and Its Application to Atmospheric Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiranuma, Naruki; Paukert, Marco; Steinke, Isabelle

    2014-12-10

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 °C to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is critical in order to accurately simulate the ice nucleation processes in cirrus clouds. The surface-scaled ice nucleation efficiencies of hematite particles, inferred by ns, were derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water subsaturated conditions that were realized by continuously changing temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T colder than -60 °C revealed that higher RHice was necessary to maintain constant ns, whereas T may have played a significant role in ice nucleation at T warmer than -50 °C. We implemented the new ns parameterizations into two cloud models to investigate their sensitivity and compare with existing ice nucleation schemes for simulating cirrus cloud properties. Our results show that the new AIDA-based parameterizations lead to an order of magnitude higher ice crystal concentrations and to inhibition of homogeneous nucleation in colder temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties such as cloud longevity and initiation when compared to previous parameterizations.

  4. Orbital Dynamics of Exomoons During Planet–Planet Scattering

    NASA Astrophysics Data System (ADS)

    Hong, Yu-Cian; Lunine, Jonathan I.; Nicholson, Philip; Raymond, Sean N.

    2018-04-01

    Planet–planet scattering is the leading mechanism to explain the broad eccentricity distribution of observed giant exoplanets. Here we study the orbital stability of primordial giant planet moons in this scenario. We use N-body simulations including realistic oblateness and spin evolution for the giant planets. We find that the vast majority (~80%–90% across all our simulations) of orbital parameter space for moons is destabilized. There is a strong radial dependence: more distant moons are systematically removed, while closer-in moons on Galilean-moon-like orbits (<0.04 R_Hill) have a good (~20%–40%) chance of survival. Destabilized moons may undergo a collision with the star or a planet, be ejected from the system, be captured by another planet, be ejected while still orbiting its free-floating host planet, or survive on heliocentric orbits as "planets." The survival rate of moons increases with the host planet mass but is independent of the planet's final (post-scattering) orbit. Based on our simulations, we predict the existence of an abundant galactic population of free-floating (former) moons.

  5. Merged data models for multi-parameterized querying: Spectral data base meets GIS-based map archive

    NASA Astrophysics Data System (ADS)

    Naß, A.; D'Amore, M.; Helbert, J.

    2017-09-01

    Current and upcoming planetary missions deliver a huge amount of different data (remote sensing data, in-situ data, and derived products). In this contribution we present how different data (bases) can be managed and merged to enable multi-parameterized querying based on a common spatial context.

  6. Electromagnetic inverse scattering

    NASA Technical Reports Server (NTRS)

    Bojarski, N. N.

    1972-01-01

    A three-dimensional electromagnetic inverse scattering identity, based on the physical optics approximation, is developed for the monostatic scattered far field cross section of perfect conductors. Uniqueness of this inverse identity is proven. This identity requires complete scattering information for all frequencies and aspect angles. A nonsingular integral equation is developed for the arbitrary case of incomplete frequency and/or aspect angle scattering information. A general closed-form solution to this integral equation is developed, which yields the shape of the scatterer from such incomplete information. A specific practical radar solution is presented. The resolution of this solution is developed, yielding short-pulse target resolution radar system parameter equations. The special cases of two- and one-dimensional inverse scattering and the special case of a priori knowledge of scatterer symmetry are treated in some detail. The merits of this solution over the conventional radar imaging technique are discussed.

  7. Parameterization and Validation of an Integrated Electro-Thermal LFP Battery Model

    DTIC Science & Technology

    2012-01-01

    The parameterization of an integrated electro-thermal model for an A123 26650 LiFePO4 battery is presented. The electrical dynamics of the cell are described by an equivalent-circuit model. The open-circuit voltage is taken as the average of the charge and discharge curves measured at very low current (C/20), since the LiFePO4 cell chemistry is known to exhibit a hysteresis effect.
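
Equivalent-circuit cell models of this kind are typically built from an open-circuit voltage source, a series resistance, and one or more RC pairs. A toy first-order sketch, forward-Euler discretized, with illustrative parameter values (not the fitted A123 values, and with OCV held constant rather than tracked against state of charge and temperature):

```python
def simulate_terminal_voltage(current_a, dt, ocv=3.3, r0=0.01, r1=0.015, c1=2000.0):
    """Terminal-voltage trace for a current profile (A, discharge > 0):
    v = OCV - i*R0 - v1,  with  dv1/dt = -v1/(R1*C1) + i/C1."""
    v1 = 0.0          # voltage across the RC pair
    trace = []
    for i in current_a:
        v1 += dt * (-v1 / (r1 * c1) + i / c1)   # forward-Euler step
        trace.append(ocv - i * r0 - v1)
    return trace

# 100 s constant 2 A discharge, 1 s steps: instant R0 drop, then RC relaxation.
v = simulate_terminal_voltage([2.0] * 100, dt=1.0)
```

The immediate IR drop and the slow RC relaxation are the two signatures such parameterizations fit against pulse tests.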

  8. On the sensitivity of mesoscale models to surface-layer parameterization constants

    NASA Astrophysics Data System (ADS)

    Garratt, J. R.; Pielke, R. A.

    1989-09-01

    The Colorado State University standard mesoscale model is used to evaluate the sensitivity of one-dimensional (1D) and two-dimensional (2D) fields to differences in surface-layer parameterization “constants”. Such differences reflect the range in the published values of the von Karman constant, the Monin-Obukhov stability functions, and the temperature roughness length at the surface. The sensitivity of 1D boundary-layer structure and 2D sea-breeze intensity is generally less than that found in published comparisons of turbulence closure schemes.
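
The kind of sensitivity being tested can be illustrated with the neutral-stability log law, where the diagnosed wind depends directly on the chosen von Karman constant. A minimal sketch; the spread of values used here is indicative of the published range, and all other numbers are illustrative:

```python
import math

def neutral_wind(u_star, z, z0, karman=0.40):
    """Neutral surface-layer log law: U(z) = (u*/k) * ln(z/z0)."""
    return (u_star / karman) * math.log(z / z0)

# Same friction velocity and roughness, two published von Karman values:
u_low  = neutral_wind(0.3, 10.0, 0.01, karman=0.42)
u_high = neutral_wind(0.3, 10.0, 0.01, karman=0.35)
```

Even this idealized case shows a spread of several tenths of a meter per second at 10 m, which is the order of effect the paper quantifies in full 1D/2D simulations.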

  9. Parameterizing Gravity Waves and Understanding Their Impacts on Venus' Upper Atmosphere

    NASA Technical Reports Server (NTRS)

    Brecht, A. S.; Bougher, S. W.; Yigit, Erdal

    2018-01-01

    The complexity of Venus’ upper atmospheric circulation is still being investigated. Simulations of Venus’ upper atmosphere have largely depended on Rayleigh friction (RF) as a driver and as a necessary process to reproduce observations (i.e., temperature, density, nightglow emission). Currently, additional observations provide more constraints to help characterize the driver(s) of the circulation. This work largely focuses on the impact parameterized gravity waves have on Venus’ upper-atmosphere circulation within a three-dimensional hydrodynamic model (the Venus Thermospheric General Circulation Model).

  10. Winter QPF Sensitivities to Snow Parameterizations and Comparisons to NASA CloudSat Observations

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Haynes, John M.; Jedlovec, Gary J.; Lapenta, William M.

    2009-01-01

    Steady increases in computing power have allowed numerical weather prediction models to be initialized and run at high spatial resolution, permitting a transition from larger-scale parameterizations of the effects of clouds and precipitation to the simulation of specific microphysical processes and hydrometeor size distributions. Although still relatively coarse in comparison to true cloud-resolving models, these high-resolution forecasts (on the order of 4 km or less) have demonstrated value in the prediction of severe storm mode and evolution and are being explored for use in winter weather events. Several single-moment bulk water microphysics schemes are available within the latest release of the Weather Research and Forecasting (WRF) model suite, including the NASA Goddard Cumulus Ensemble, which incorporate assumptions about the size distributions of a small number of hydrometeor classes in order to predict their evolution, advection, and precipitation within the forecast domain. Although many of these schemes produce similar forecasts on the synoptic scale, they often differ significantly in the details of precipitation and cloud cover, as well as in the distribution of water mass among the constituent hydrometeor classes. Unfortunately, validating data for cloud-resolving model simulations are sparse. Field campaigns require in-cloud measurements of hydrometeors from aircraft in coordination with extensive and coincident ground-based measurements. Radar remote sensing is utilized to detect the spatial coverage and structure of precipitation. Here, two radar systems characterize the structure of winter precipitation for comparison to equivalent features within a forecast model: a 3 GHz Weather Surveillance Radar-1988 Doppler (WSR-88D) based in Omaha, Nebraska, and the 94 GHz NASA CloudSat Cloud Profiling Radar, a spaceborne instrument and member of the afternoon or "A-Train" constellation of polar-orbiting satellites tasked with cataloguing global cloud properties.

  11. Metal Sorbing Vesicles: Light Scattering Characterization and Metal Sorption Behavior.

    NASA Astrophysics Data System (ADS)

    van Zanten, John Hollis

    1992-01-01

    The research described herein consisted of two parts: light scattering characterization of vesicles and kinetic investigations of metal sorbing vesicles. Static light scattering techniques can be used to determine the geometric size, shape, and apparent molecular weight of phosphatidylcholine vesicles in aqueous suspension. A Rayleigh-Gans-Debye (RGD) approximation analysis of multiangle scattered light intensity data yields the size and degree of polydispersity of the vesicles in solution, while the Zimm plot technique provides the radius of gyration and apparent weight-average molecular weight. Together, the RGD approximation and Zimm plots can be used to confirm the geometric shape of vesicles and, in some cases, give a good estimate of the vesicle wall thickness. Vesicles varying from 40 to 115 nm in diameter have been characterized effectively. The static light scattering measurements indicate that, as expected, phosphatidylcholine vesicles in this size range scatter light as isotropic hollow spheres. Additionally, static and dynamic light scattering measurements have been made and compared with one another. The values for geometric radii determined by static light scattering typically agree with those estimated by dynamic light scattering to within a few percent. Interestingly, however, dynamic measurements suggest that there is a significant degree of polydispersity present in the vesicle dispersions, while static measurements indicate nearly monodisperse dispersions. Metal sorbing vesicles that harbor ionophores, such as antibiotic A23187 and synthetic carriers, in their bilayer membranes have been produced. These vesicles also encapsulate the chelating compound, nitrilotriacetate, to provide the driving force for metal ion uptake. Very dilute dispersions (on the order of 0.03% w/v) of these metal sorbing vesicles were capable of removing Cd ^{2+} and Pb^{2+ } from dilute aqueous solution (5 ppm and less) and concentrating these metal ions several...
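
The "isotropic hollow sphere" treatment above corresponds, in the RGD approximation, to a spherical-shell form factor built from the difference of two homogeneous-sphere amplitudes. A sketch under that standard assumption (the vesicle dimensions below are hypothetical, not the measured ones):

```python
import math

def sphere_amp(x):
    """Normalized scattering amplitude of a homogeneous sphere:
    3*(sin x - x*cos x)/x**3, with x = q*R."""
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x**3

def shell_form_factor(q, r_out, r_in):
    """RGD form factor P(q) of a spherical shell (vesicle wall):
    volume-weighted difference of outer and inner sphere amplitudes, squared."""
    vo, vi = r_out**3, r_in**3
    amp = (vo * sphere_amp(q * r_out) - vi * sphere_amp(q * r_in)) / (vo - vi)
    return amp * amp

# Hypothetical vesicle: 50 nm outer radius, 4 nm wall; q in 1/nm.
p = shell_form_factor(q=0.01, r_out=50.0, r_in=46.0)
```

Fitting this P(q) to multiangle intensity data is what yields the size (and, when the contrast allows, the wall thickness) mentioned in the abstract.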

  12. Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology

    NASA Astrophysics Data System (ADS)

    Jin, Z.; Azzari, G.; Lobell, D. B.

    2016-12-01

    Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations and compare the results to prior estimates based on default phenology routines. Our preliminary results show that the proposed method effectively alleviates the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves SCYM's ability to capture the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM while significantly reducing its uncertainty.

  13. Actual and Idealized Crystal Field Parameterizations for the Uranium Ions in UF4

    NASA Astrophysics Data System (ADS)

    Gajek, Z.; Mulak, J.; Krupa, J. C.

    1993-12-01

    The crystal field parameters for the actual coordination symmetries of the uranium ions in UF4, C2 and C1, and for their idealizations to D2, C2v, D4, D4d, and the Archimedean antiprism point symmetries are given. They have been calculated by means of both the perturbative ab initio model and the angular overlap model, and are referenced to the recent results fitted by Carnall's group. The equivalency of some different sets of parameters has been verified with the standardization procedure. The adequacy of several idealized approaches has been tested by comparison of the corresponding splitting patterns of the 3H4 ground state. Our results support the parameterization given by Carnall. Furthermore, the parameterization of the crystal field potential and the splitting diagram for the symmetryless uranium ion U(C1) are given. Having at our disposal the crystal field splittings for the two kinds of uranium ions in UF4, U(C2) and U(C1), we calculate the model plots of the paramagnetic susceptibility χ(T) and the magnetic entropy associated with the Schottky anomaly ΔS(T) for UF4.

  14. SENSITIVITY OF THE NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION MULTILAYER MODEL TO INSTRUMENT ERROR AND PARAMETERIZATION UNCERTAINTY

    EPA Science Inventory

    The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...

  15. Run-up parameterization and beach vulnerability assessment on a barrier island: a downscaling approach

    NASA Astrophysics Data System (ADS)

    Medellín, G.; Brinkkemper, J. A.; Torres-Freyermuth, A.; Appendini, C. M.; Mendoza, E. T.; Salles, P.

    2016-01-01

    We present a downscaling approach for the study of wave-induced extreme water levels at a location on a barrier island in Yucatán (Mexico). Wave information from a 30-year wave hindcast is validated with in situ measurements at 8 m water depth. The maximum dissimilarity algorithm is employed for the selection of 600 representative cases, encompassing different combinations of wave characteristics and tidal level. The selected cases are propagated from 8 m water depth to the shore using the coupling of a third-generation wave model and a phase-resolving non-hydrostatic nonlinear shallow-water equation model. Extreme wave run-up, R2%, is estimated for the simulated cases and can be further employed to reconstruct the 30-year time series using an interpolation algorithm. Downscaling results show run-up saturation during more energetic wave conditions and modulation owing to tides. The latter suggests that the R2% can be parameterized using a hyperbolic-like formulation with dependency on both wave height and tidal level. The new parametric formulation is in agreement with the downscaling results (r2 = 0.78), allowing a fast calculation of wave-induced extreme water levels at this location. Finally, an assessment of beach vulnerability to wave-induced extreme water levels is conducted at the study area by employing the two approaches (reconstruction/parameterization) and a storm impact scale. The 30-year extreme water level hindcast allows the calculation of beach vulnerability as a function of return periods. It is shown that the downscaling-derived parameterization provides reasonable results as compared with the numerical approach. This methodology can be extended to other locations and can be further improved by incorporating the storm surge contributions to the extreme water level.
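
The abstract does not reproduce the fitted hyperbolic-like formulation, so purely to illustrate the functional idea (run-up saturating in offshore wave height and modulated by tidal level), here is a sketch with made-up coefficients that are not the study's fitted values:

```python
import math

def runup_r2(hs, tide, a=0.5, b=1.2, c=0.8):
    """Hypothetical hyperbolic-tangent-style R2% parameterization:
    grows with significant wave height Hs (m) but saturates, and is
    linearly modulated by tidal level (m).  Coefficients are illustrative."""
    return a * math.tanh(b * hs) * (1.0 + c * tide)

r_low_tide  = runup_r2(hs=2.0, tide=-0.3)
r_high_tide = runup_r2(hs=2.0, tide=+0.3)
```

The two behaviors the downscaling revealed show up directly: doubling the wave height barely changes R2% once saturated, while the tide shifts it up and down.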

  16. Detection of Objects Hidden in Highly Scattering Media Using Time-Gated Imaging Methods

    NASA Technical Reports Server (NTRS)

    Galland, Pierre A.; Wang, L.; Liang, X.; Ho, P. P.; Alfano, R. R.

    2000-01-01

    Non-intrusive and non-invasive optical imaging techniques have generated great interest among researchers for their potential applications to biological study, device characterization, surface defect detection, and jet fuel dynamics. A non-linear optical parametric amplification gate (NLOPG) has been used to detect back-scattered images of objects hidden in diluted Intralipid solutions. To directly detect objects hidden in highly scattering media, the diffusive component of light needs to be separated from the early-arriving ballistic and snake photons. In an optical imaging system, images are collected in transmission or back-scattered geometry. In the transmission approach, the early-arriving photons always carry direct information about the hidden object embedded in the turbid medium. In the back-scattered approach, the result is not so forthcoming: in the presence of a scattering host, the first-arriving photons come directly from the host material. In this work, NLOPG was applied to acquire time-resolved back-scattered images under the phase-matching condition. A time-gated amplified signal was obtained through the NLOPG process, with a system gain of approximately 100. The time gate was achieved through the phase-matching condition, in which only coherent photons retain their phase. As a result, the diffusive photons, which were the primary contributors to the background, were removed. With a large dynamic range and high resolution, time-gated early-light imaging has the potential to improve rocket/aircraft design by determining jet shapes and particle sizes. Refinements to these techniques may enable drop-size measurements in the highly scattering, optically dense region of multi-element rocket injectors. These types of measurements should greatly enhance the design of stable, higher-performing rocket engines.

  17. Polarimetric scattering from layered media with multiple species of scatterers

    NASA Technical Reports Server (NTRS)

    Nghiem, S. V.; Kwok, R.; Yueh, S. H.; Kong, J. A.; Hsu, C. C.; Tassoudji, M. A.; Shin, R. T.

    1995-01-01

    Geophysical media are usually heterogeneous and contain multiple species of scatterers. In this paper a model is presented to calculate effective permittivities and polarimetric backscattering coefficients of multispecies layered media. The same physical description is used consistently in the derivation of both permittivities and scattering coefficients. The strong permittivity fluctuation theory is extended to account for multiple species of scatterers with a general ellipsoidal shape whose orientations are randomly distributed. Under the distorted Born approximation, polarimetric scattering coefficients are obtained. These calculations are applicable to the special cases of spheroidal and spherical scatterers. The model is used to study the effects of scatterer shapes and multispecies mixtures on polarimetric signatures of heterogeneous media. The multispecies model accounts for moisture content in scattering media such as snowpack on an ice sheet. The results indicate a high sensitivity of backscatter to moisture, with a stronger dependence for drier snow, and show that ice grain size is important to the backscatter. For frost-covered saline ice, model results for bare ice are compared with measured data at C band; the frost flower formation is then simulated as a layer of fanlike ice crystals, including brine infiltration, over a rough interface. The results with the frost cover suggest a significant increase in scattering coefficients and a polarimetric signature closer to isotropic characteristics compared with the thin saline ice case.

  18. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important engineering-optimization approach for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in engineering optimization for industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve optimization efficiency for industrial dynamic processes; it employs the costate gradient formulae and a fast approximate scheme to solve the differential equations in dynamic process simulation. Three well-known engineering-optimization benchmark problems for industrial dynamic processes serve as illustrations. The results show that the proposed fast approach performs well, saving at least 90% of the computation time compared with the traditional CVP method, which demonstrates the effectiveness of the proposed fast engineering-optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
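
The CVP idea itself is simple to sketch: discretize the control into piecewise-constant segments, so each candidate control profile becomes a small vector of NLP decision variables whose objective is evaluated by simulating the dynamics. A toy illustration of that structure (a made-up plant and a crude grid search, not the paper's benchmark problems or its fast scheme):

```python
def simulate_objective(controls, x0=1.0, t_final=1.0, steps_per_seg=50):
    """Evaluate one CVP candidate: piecewise-constant controls applied to the
    toy plant dx/dt = -x + u, forward-Euler integrated; the objective is the
    squared distance of the final state from a setpoint x* = 0.5."""
    n_seg = len(controls)
    dt = t_final / (n_seg * steps_per_seg)
    x = x0
    for u in controls:                 # one NLP decision variable per segment
        for _ in range(steps_per_seg):
            x += dt * (-x + u)
    return (x - 0.5) ** 2

# Crude grid refinement over a 2-segment control profile.
best = min(
    ((u1, u2) for u1 in (0.0, 0.25, 0.5, 0.75, 1.0)
              for u2 in (0.0, 0.25, 0.5, 0.75, 1.0)),
    key=lambda c: simulate_objective(list(c)),
)
```

Every objective evaluation requires a full simulation of the differential equations, which is exactly the cost that fast-CVP's approximate integration scheme targets.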

  19. Seasonal Parameterizations of the Tau-Omega Model Using the ComRAD Ground-Based SMAP Simulator

    NASA Technical Reports Server (NTRS)

    O'Neill, P.; Joseph, A.; Srivastava, P.; Cosh, M.; Lang, R.

    2014-01-01

    NASA's Soil Moisture Active Passive (SMAP) mission is scheduled for launch in November 2014. In the prelaunch time frame, the SMAP team has focused on improving retrieval algorithms for the various SMAP baseline data products. The SMAP passive-only soil moisture product depends on accurate parameterization of the tau-omega model to achieve the required accuracy in soil moisture retrieval. During a field experiment (APEX12) conducted under dry conditions in Maryland in the summer of 2012, the Combined Radar/Radiometer (ComRAD) truck-based SMAP simulator collected active/passive microwave time-series data at the SMAP incidence angle of 40 degrees over corn and soybeans throughout the crop growth cycle. A similar experiment was conducted over corn only in 2002 under normal moist conditions. Data from these two experiments will be analyzed and compared to evaluate how changes in vegetation conditions throughout the growing season, in both a drought year and a normal year, can affect parameterizations in the tau-omega model for more accurate soil moisture retrieval.

  20. Angular-domain scattering interferometry.

    PubMed

    Shipp, Dustin W; Qian, Ruobing; Berger, Andrew J

    2013-11-15

    We present an angular-scattering optical method that is capable of measuring the mean size of scatterers in static ensembles within a field of view less than 20 μm in diameter. Using interferometry, the method overcomes the inability of intensity-based models to tolerate the large speckle grains associated with such small illumination areas. By first estimating each scatterer's location, the method can model between-scatterer interference as well as traditional single-particle Mie scattering. Direct angular-domain measurements provide finer angular resolution than digitally transformed image-plane recordings. This increases sensitivity to size-dependent scattering features, enabling more robust size estimates. The sensitivity of these angular-scattering measurements to various sizes of polystyrene beads is demonstrated. Interferometry also allows recovery of the full complex scattered field, including a size-dependent phase profile in the angular-scattering pattern.
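
The between-scatterer interference term mentioned above can be sketched as a coherent sum of per-scatterer fields with position-dependent phases. A minimal 2-D far-field illustration with identical point scatterers (unit scattering amplitude rather than a Mie amplitude; all values hypothetical):

```python
import cmath, math

def angular_intensity(theta, positions, wavelength=0.5):
    """Far-field intensity from identical point scatterers: coherent sum of
    per-scatterer fields with phases q . r_j, where q = k_s - k_i.
    2-D sketch; lengths are in the same units as wavelength."""
    k = 2.0 * math.pi / wavelength
    # Incident along +x; scattered direction at angle theta in the x-y plane.
    qx = k * (math.cos(theta) - 1.0)
    qy = k * math.sin(theta)
    field = sum(cmath.exp(1j * (qx * x + qy * y)) for x, y in positions)
    return abs(field) ** 2

# Two scatterers 2 um apart (units: um) produce fringes versus angle;
# in the forward direction they add in phase (4x a single particle).
pair = [(0.0, -1.0), (0.0, 1.0)]
i_forward = angular_intensity(0.0, pair)
```

Estimating each scatterer's location, as the method does, is what makes these cross terms predictable enough to include alongside the single-particle Mie model.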