Grogan, Brandon Robert
2010-03-01
This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using
Grogan, Brandon R
2010-05-01
This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
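The core idea of the PSRA, stripped of the NMIS-specific geometry, can be sketched in a few lines: once a point scatter function (PScF) has been fit with a Gaussian, the scatter contribution can be removed from a measured transmission profile without rerunning the Monte Carlo simulations. The sketch below is an illustrative reconstruction, not the dissertation's code: the profile shape, the Gaussian parameters, and the simple additive scatter model are all assumptions.

```python
import numpy as np

def gaussian_pscf(x, amplitude, center, sigma):
    """Gaussian model of a point scatter function (PScF), as fit to
    Monte Carlo scatter estimates in the abstract's description."""
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def remove_scatter(measured, x, amplitude, center, sigma):
    """Subtract the parameterized scatter contribution from a
    scatter-contaminated transmission profile."""
    return measured - gaussian_pscf(x, amplitude, center, sigma)

# Synthetic demonstration: a 'true' transmission profile plus a scatter halo.
x = np.linspace(-5, 5, 201)
true_signal = np.where(np.abs(x) < 2.0, 0.3, 1.0)   # attenuating object
scatter = gaussian_pscf(x, 0.1, 0.0, 1.5)           # broad scatter halo
measured = true_signal + scatter

corrected = remove_scatter(measured, x, 0.1, 0.0, 1.5)
print(np.allclose(corrected, true_signal))  # True
```

In the real algorithm the Gaussian parameters would come from fits to simulated PScFs rather than being known exactly, and the corrected values would be compared to scatter-free ideals with a chi-squared test.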
Comment on S-matrix parameterizations in NN-scattering
Mulders, P. J.
1981-08-01
The parameterization of the S-matrix used for the elastic part of the NN-scattering matrix in, for example, the Virginia Polytechnic Institute interactive nucleon-nucleon program SAID, is not general enough to parameterize an arbitrary 2 by 2 submatrix of a unitary matrix.
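The point of the comment is that a fully general 2 by 2 unitary submatrix needs four real parameters. A quick numerical check of one standard four-parameter factorization (an overall phase, a mixing angle, and two relative phases; this particular form is an illustration, not necessarily the one discussed in the paper):

```python
import numpy as np

def unitary_2x2(phi, theta, alpha, beta):
    """General 2x2 unitary matrix from four real parameters: overall
    phase phi, mixing angle theta, and relative phases alpha, beta."""
    u = np.array([
        [np.cos(theta) * np.exp(1j * alpha),  np.sin(theta) * np.exp(1j * beta)],
        [-np.sin(theta) * np.exp(-1j * beta), np.cos(theta) * np.exp(-1j * alpha)],
    ])
    return np.exp(1j * phi) * u

U = unitary_2x2(0.3, 0.7, 1.1, -0.4)
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True: U is unitary
```

A parameterization with fewer free parameters cannot reach every such matrix, which is the deficiency the comment identifies.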
Lobato, I; Van Dyck, D
2015-08-01
The steadily improving experimental possibilities in instrumental resolution, as well as in the sensitivity and quantization of the data recording, put increasingly high demands on the precision of the scattering factors, which are the key ingredients for electron diffraction or high-resolution imaging simulation. In the present study, we systematically investigate the fitting accuracy of the main parameterizations of the electron scattering factor for the calculation of electron diffraction intensities. It is shown that the main parameterizations of the electron scattering factor are consistent for calculating electron diffraction intensities for thin specimens and low-angle scattering. Parameterizations of the electron scattering factor with the correct asymptotic behavior (Lobato and Van Dyck [5], Kirkland [4], and Weickenmeier and Kohl [2]) produce similar results for both the undisplaced lattice model and the frozen phonon model, except for certain thicknesses and reflections.
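As a hedged sketch of the kind of functional form being compared: Kirkland-style parameterizations combine Lorentzian terms, which supply the correct 1/q^2 large-angle fall-off, with Gaussian terms. The coefficients below are made-up placeholders, not published fit values for any element.

```python
import numpy as np

def scattering_factor(q, a, b, c, d):
    """Kirkland-style electron scattering factor: a sum of Lorentzian
    terms (correct 1/q^2 asymptotic behavior) plus Gaussian terms.
    Coefficients are illustrative placeholders only."""
    q2 = q ** 2
    lorentzians = sum(ai / (q2 + bi) for ai, bi in zip(a, b))
    gaussians = sum(ci * np.exp(-di * q2) for ci, di in zip(c, d))
    return lorentzians + gaussians

q = np.linspace(0.0, 12.0, 500)   # scattering vector magnitude (1/Angstrom)
f = scattering_factor(q, a=[0.2, 0.6, 1.1], b=[0.5, 2.0, 9.0],
                      c=[0.3, 0.8, 0.1], d=[0.4, 3.0, 10.0])
# At large q the Gaussians vanish and the Lorentzians give the 1/q^2 tail.
print(f[0] > f[-1])  # True: forward scattering dominates
```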
Cross section parameterizations for cosmic ray nuclei. 1: Single nucleon removal
NASA Technical Reports Server (NTRS)
Norbury, John W.; Townsend, Lawrence W.
1992-01-01
Parameterizations of single nucleon removal from electromagnetic and strong interactions of cosmic rays with nuclei are presented. These parameterizations are based upon the most accurate theoretical calculations available to date. They should be very suitable for use in cosmic ray propagation through interstellar space, the Earth's atmosphere, lunar samples, meteorites, spacecraft walls, and lunar and Martian habitats.
Parameterization of the scattering and absorption properties of individual ice crystals
Yang, Ping; Liou, K. N.; Wyser, Klaus; Mitchell, David
2000-02-27
We present parameterizations of the single-scattering properties of individual ice crystals of various habits based on results computed from accurate light scattering calculations. The projected area, volume, and single-scattering properties of ice crystals with various shapes and sizes are computed for 56 narrow spectral bands covering 0.2-5 μm. The ice crystal habits considered in this study are hexagonal plates, solid and hollow columns, planar and spatial bullet rosettes, and aggregates that are commonly observed in cirrus clouds. Using the observational relationships between the aspect ratios and the sizes of ice crystals, we can define the three-dimensional structure of these ice crystal habits with respect to their maximum dimensions for light scattering calculations. The volume and projected area of ice crystals, expressed in terms of the diameters of the corresponding equivalent spheres, are first parameterized by employing the ice crystal maximum dimensions. Further, various analytical expressions as functions of the effective dimensions of ice crystals have been developed to parameterize the extinction and absorption efficiencies, the asymmetry factor, and the truncation of the forward peak energy in the phase function. The present parameterization scheme provides an efficient approach to obtaining the basic scattering and absorption properties of nonspherical ice crystals. (c) 2000 American Geophysical Union.
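The "analytical expressions as functions of the effective dimensions" can be illustrated with a toy fit. The functional form and coefficients below are placeholders chosen only to show the structure (a geometric-optics limit of 2 plus correction terms in inverse effective dimension), not the published parameterization.

```python
import numpy as np

def extinction_efficiency(d_eff, eta=(2.0, -1.0, 0.5)):
    """Illustrative analytic parameterization of extinction efficiency
    versus effective dimension d_eff (micrometres), in the spirit of
    the paper's fitted expressions: Q_ext tends to the geometric-optics
    limit of 2 for large crystals.  The coefficients eta are
    placeholders, not the published fit values."""
    x = 1.0 / d_eff
    return eta[0] + eta[1] * x + eta[2] * x ** 2

d_eff = np.array([10.0, 50.0, 200.0, 1000.0])
q_ext = extinction_efficiency(d_eff)
print(q_ext)  # approaches 2.0 as d_eff grows
```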
NASA Astrophysics Data System (ADS)
Honeyager, Ryan
High frequency microwave instruments are increasingly used to observe ice clouds and snow. These instruments are significantly more sensitive than conventional precipitation radar, which is ideal for analyzing ice-bearing clouds because ice particles are tenuously distributed and have effective densities far less than that of liquid water. However, at shorter wavelengths, the electromagnetic response of ice particles is no longer solely dependent on particle mass; the shape of the ice particles also plays a significant role. Thus, in order to understand the observations of high frequency microwave radars and radiometers, it is essential to model the scattering properties of snowflakes correctly. Several research groups have proposed detailed models of snow aggregation. These particle models are coupled with computer codes that determine the particles' electromagnetic properties. However, there is a discrepancy between the particle model outputs and the requirements of the electromagnetic models. Snowflakes have countless variations in structure, but we also know that physically similar snowflakes scatter light in much the same manner. Structurally exact electromagnetic models, such as the discrete dipole approximation (DDA), require a high degree of structural resolution. Such methods are slow, spending considerable time processing redundant (i.e., useless) information. Conversely, when techniques incorporate too little structural information, the resultant radiative properties are not physically realistic. We therefore ask: what features are most important in determining scattering? This dissertation develops a general technique that can quickly parameterize the important structural aspects that determine the scattering of many diverse snowflake morphologies. A Voronoi bounding neighbor algorithm is first employed to decompose aggregates into well-defined interior and surface regions. The sensitivity of scattering to interior randomization is then
Evaluating model parameterizations of submicron aerosol scattering and absorption with in situ data from ARCTAS 2008
NASA Astrophysics Data System (ADS)
Alvarado, Matthew J.; Lonsdale, Chantelle R.; Macintyre, Helen L.; Bian, Huisheng; Chin, Mian; Ridley, David A.; Heald, Colette L.; Thornhill, Kenneth L.; Anderson, Bruce E.; Cubison, Michael J.; Jimenez, Jose L.; Kondo, Yutaka; Sahu, Lokesh K.; Dibb, Jack E.; Wang, Chien
2016-07-01
Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosol and Clouds (OPAC v3.1) package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP v2.1) to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10-23 %, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass fraction
Mang, J.T.; Hjelm, R.P.; Skidmore, C.B.; Howe, P.M.
1996-07-01
High explosive materials used in the nuclear stockpile are composites of crystalline high explosives (HE) with binder materials, such as Estane. In such materials, there are naturally occurring density fluctuations (defects) due to cracks, internal (in the HE) and external (in the binder) voids, and other artifacts of preparation. Changes in such defects due to material aging can affect the response of explosives to shock, impact, and thermal loading. Modeling efforts are attempting to provide quantitative descriptions of explosive response from the lowest ignition thresholds to the development of full-blown detonations and explosions; however, adequate descriptions of these processes require accurate measurements of a number of structural parameters of the HE composite. Since different defects are believed to affect explosive sensitivity in different ways, it is necessary to quantitatively differentiate between defect types. The authors report here preliminary results of SANS measurements on surrogates for HE materials. The objective of these measurements was to develop methodologies using SANS techniques to parameterize internal void size distributions in a surrogate material, sugar, to simulate an HE used in the stockpile, HMX. Sugar is a natural choice as a surrogate material, as it has the same crystal structure, similar intragranular voids, and similar mechanical properties to HMX. It is used extensively as a mock material for explosives. Samples were used with two void size distributions: one with a sufficiently small mean particle size that only small occluded voids are present in significant concentrations, and one where the void sizes could be larger. By using methods in small-angle neutron scattering, the authors were able to isolate the scattering arising from particle-liquid interfaces and internal voids.
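Such SANS analyses rest on the form factor of a spherical void. A minimal sketch of how a hypothetical void size distribution maps to a scattering curve, with illustrative radii and weights (not measured values):

```python
import numpy as np

def sphere_form_factor(q, radius):
    """Normalized form factor P(q) of a homogeneous sphere, the basic
    building block for modeling small-angle scattering from voids."""
    qr = q * radius
    return (3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr ** 3) ** 2

def sans_intensity(q, radii, weights):
    """Intensity from a polydisperse set of spherical voids, weighted
    by a (hypothetical) void size distribution; each void contributes
    proportionally to its volume squared."""
    volumes = radii ** 3
    contrib = np.array([w * v ** 2 * sphere_form_factor(q, r)
                        for r, v, w in zip(radii, volumes, weights)])
    return contrib.sum(axis=0)

q = np.linspace(0.001, 0.3, 300)          # 1/Angstrom
radii = np.array([20.0, 50.0, 100.0])     # Angstrom, illustrative
weights = np.array([0.5, 0.3, 0.2])
i_q = sans_intensity(q, radii, weights)
print(i_q[0] > i_q[-1])  # True: forward scattering dominates
```

Fitting such a model to measured I(q) is one way a void size distribution can be parameterized; contrast-matching against a liquid isolates the internal-void contribution, as the abstract describes.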
Laser scattering measurement for laser removal of graffiti
NASA Astrophysics Data System (ADS)
Tearasongsawat, Watcharawee; Kittiboonanan, Phumipat; Luengviriya, Chaiya; Ratanavis, Amarin
2015-07-01
In this contribution, a technical development of laser scattering measurement for laser removal of graffiti is reported. This study concentrates on the removal of graffiti from metal surfaces. Four colored graffiti paints were applied to stainless steel samples, and cleaning efficiency was evaluated with the laser scattering system. Laser removal of graffiti at oblique incidence was also examined to assess the removal process under practical conditions. A Q-switched Nd:YAG laser operating at 1.06 μm with a repetition rate of 1 Hz was used to remove graffiti from the stainless steel samples. Laser fluences from 0.1 J/cm2 to 7 J/cm2 were investigated, and the laser parameters needed for effective removal were determined using the laser scattering system. This study supports further development of potential online surface inspection for laser removal of graffiti.
NASA Astrophysics Data System (ADS)
Räisänen, Petri
1999-02-01
The parameterization of cloud shortwave absorption poses a difficult problem in broadband radiation schemes that treat the near-IR region as a single interval. This problem arises because the spectral variation of the single-scattering co-albedo of cloud droplets and ice crystals is enormous in the near-IR region, and because cloud particle absorption is overlapped by sharply varying water vapor absorption. In this paper, several parameterization methods for the cloud near-IR (0.68-4.00 μm) co-albedo are intercompared using a large set of atmospheric columns generated by a GCM. The methods include 1) linear averaging of the co-albedo, weighted by the TOA solar flux; 2) `thick averaging' by Edwards and Slingo; 3) Fouquart's formula, which expresses the water cloud near-IR co-albedo as a function of optical thickness; and 4) the `correlated' technique by Espinoza and Harshvardhan. An extension of the correlated technique to ice clouds is suggested. In addition, a new `adaptive' broadband parameterization technique is developed and tested. In this method, the near-IR co-albedo of a cloud layer is parameterized in terms of the cloud properties (phase, optical thickness, and effective particle size) and the properties of the overlying atmosphere (slant vapor path and clouds). Two slightly different versions of the method are considered. The results of the intercomparison indicate that the adaptive method yields higher accuracy than the other broadband techniques tested. Linear averaging is by far the least accurate method; in particular, it is shown that linear averaging of the near-IR co-albedo can lead to substantially overestimated absorption also in ice clouds. However, when the near-IR region is subdivided into three bands, the combination of thick averaging for water clouds and linear averaging for ice clouds provides results superior to those of all the broadband methods.
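Method 1, linear averaging weighted by the TOA solar flux, is simple enough to state in a few lines. The band values below are invented for illustration only; the point is that a flux-weighted mean of a quantity varying over orders of magnitude is dominated by the strongly absorbing bands, which is why it can overestimate absorption.

```python
import numpy as np

def flux_weighted_coalbedo(coalbedo, toa_flux):
    """Method 1 from the abstract: linear averaging of the spectral
    single-scattering co-albedo of cloud particles, weighted by the
    top-of-atmosphere solar flux in each near-IR band."""
    return np.sum(coalbedo * toa_flux) / np.sum(toa_flux)

# Illustrative near-IR band values (made up, not from the paper).
coalbedo = np.array([1e-4, 5e-3, 5e-2, 0.3])      # varies by orders of magnitude
toa_flux = np.array([400.0, 250.0, 120.0, 30.0])  # W m^-2 per band

print(flux_weighted_coalbedo(coalbedo, toa_flux))
```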
Scattering Removal for Finger-Vein Image Restoration
Yang, Jinfeng; Zhang, Ben; Shi, Yihua
2012-01-01
Finger-vein recognition has received increased attention recently. However, the finger-vein images are always captured in poor quality. This certainly makes finger-vein feature representation unreliable, and further impairs the accuracy of finger-vein recognition. In this paper, we first give an analysis of the intrinsic factors causing finger-vein image degradation, and then propose a simple but effective image restoration method based on scattering removal. To give a proper description of finger-vein image degradation, a biological optical model (BOM) specific to finger-vein imaging is proposed according to the principle of light propagation in biological tissues. Based on BOM, the light scattering component is sensibly estimated and properly removed for finger-vein image restoration. Finally, experimental results demonstrate that the proposed method is powerful in enhancing the finger-vein image contrast and in improving the finger-vein image matching accuracy. PMID:22737028
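The paper's biological optical model is not reproduced here; as a crude stand-in, the sketch below uses the common approximation that scattered light forms a low-pass (blurred) veiling component that can be partially subtracted to restore contrast. The kernel size, sigma, and subtraction weight are all assumptions, not the published BOM parameters.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def remove_scattering(image, size=9, sigma=3.0, weight=0.5):
    """Estimate the scattering component as a heavily blurred version
    of the image and subtract a fraction of it (illustrative stand-in
    for the paper's BOM-based scattering estimate)."""
    kernel = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode='edge')
    blurred = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return np.clip(image - weight * blurred, 0.0, 1.0)

img = np.random.default_rng(0).random((32, 32))   # stand-in vein image in [0, 1)
out = remove_scattering(img)
print(out.shape)  # (32, 32)
```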
NASA Astrophysics Data System (ADS)
Pokhrel, Rudra P.; Wagner, Nick L.; Langridge, Justin M.; Lack, Daniel A.; Jayarathne, Thilina; Stone, Elizabeth A.; Stockwell, Chelsea E.; Yokelson, Robert J.; Murphy, Shane M.
2016-08-01
Single-scattering albedo (SSA) and absorption Ångström exponent (AAE) are two critical parameters in determining the impact of absorbing aerosol on the Earth's radiative balance. Aerosols emitted by biomass burning represent a significant fraction of absorbing aerosols globally, but it remains difficult to accurately predict SSA and AAE for biomass burning aerosol. Black carbon (BC), brown carbon (BrC), and non-absorbing coatings all make substantial contributions to the absorption coefficient of biomass burning aerosol. SSA and AAE cannot be directly predicted based on fuel type because they depend strongly on burn conditions. It has been suggested that SSA can be effectively parameterized via the modified combustion efficiency (MCE) of a biomass burning event and that this would be useful because emission factors for CO and CO2, from which MCE can be calculated, are available for a large number of fuels. Here we demonstrate, with data from the FLAME-4 experiment, that for a wide variety of globally relevant biomass fuels, over a range of combustion conditions, parameterizations of SSA and AAE based on the elemental carbon (EC) to organic carbon (OC) mass ratio are quantitatively superior to parameterizations based on MCE. We show that both the EC / OC ratio and the EC / (EC + OC) ratio have significantly better correlations with SSA than MCE. Furthermore, the relationship of EC / (EC + OC) with SSA is linear. These improved parameterizations are significant because, similar to MCE, emission factors for EC (or black carbon) and OC are available for a wide range of biomass fuels. Fitting SSA with MCE yields correlation coefficients (Pearson's r) of ~0.65 at the visible wavelengths of 405, 532, and 660 nm, while fitting SSA with EC / OC or EC / (EC + OC) yields a Pearson's r of 0.94-0.97 at these same wavelengths. The strong correlation coefficient at 405 nm (r = 0.97) suggests that parameterizations based on EC / OC or EC / (EC + OC) have good predictive
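The reported EC / (EC + OC) parameterization is a linear fit scored by Pearson's r. A sketch with synthetic data (illustrative numbers, not FLAME-4 measurements) of how such a fit and correlation coefficient are computed:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

# Synthetic burns: SSA decreasing roughly linearly with EC/(EC+OC),
# as the abstract reports; slope, intercept, and noise are invented.
rng = np.random.default_rng(1)
ec_frac = rng.uniform(0.0, 0.6, 40)               # EC / (EC + OC)
ssa = 0.98 - 0.9 * ec_frac + rng.normal(0, 0.02, 40)

slope, intercept = np.polyfit(ec_frac, ssa, 1)
r = pearson_r(ec_frac, ssa)
print(round(r, 2))  # strongly negative, close to -1 for a tight linear relation
```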
NASA Astrophysics Data System (ADS)
Yu, Ting; Chaix, Jean-François; Komatitsch, Dimitri; Garnier, Vincent; Audibert, Lorenzo; Henault, Jean-Marie
2017-02-01
Multiple scattering is important when ultrasound propagates in a heterogeneous medium such as concrete, whose scatterer sizes are on the order of the wavelength. The aim of this work is to build a 2D numerical model of ultrasonic wave propagation that integrates multiple scattering phenomena in the SPECFEM software. The coherent field of multiple scattering can be obtained by averaging numerical wave fields, and it is used to determine the effective phase velocity and attenuation of an equivalent homogeneous medium. After the creation of the numerical model under several assumptions, it is validated for the case of scattering by a single cylinder through comparison with the analytical solution. Two cases of multiple scattering by sets of cylinders at different concentrations are simulated to perform a parametric study (frequency, scatterer concentration, scatterer size). The effective properties are also compared with the predictions of the Waterman-Truell model to verify its validity.
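Coherent-field averaging can be demonstrated in one dimension: averaging wavefields over random scatterer configurations cancels the incoherent part, and the amplitude reduction of the surviving coherent wave encodes the effective attenuation. The Gaussian phase-jitter model below is an illustrative stand-in for the 2D SPECFEM ensemble, not the paper's simulation.

```python
import numpy as np

# Ensemble of wavefields, each with a random phase shift standing in
# for a different scatterer configuration.  For Gaussian phase jitter
# of standard deviation sigma, the coherent (ensemble-mean) amplitude
# is reduced by the factor exp(-sigma**2 / 2).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 400)
sigma = 0.5
fields = []
for _ in range(200):
    jitter = rng.normal(0.0, sigma)          # random phase per configuration
    fields.append(np.sin(2 * np.pi * 5 * t + jitter))
coherent = np.mean(fields, axis=0)           # coherent field

expected_factor = np.exp(-0.5 * sigma ** 2)
# The measured coherent amplitude should agree closely with the prediction.
print(round(np.abs(coherent).max(), 2), round(expected_factor, 2))
```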
Seed removal by scatter-hoarding rodents: the effects of tannin and nutrient concentration.
Wang, Bo; Yang, Xiaolan
2015-04-01
The mutualistic interaction between scatter-hoarding rodents and seed plants has a long co-evolutionary history. Plants are believed to have evolved traits that influence the foraging behavior of rodents, thus increasing the probability of seed removal and caching, which benefits the establishment of seedlings. Tannin and nutrient content in seeds are considered among the most essential factors in this plant-animal interaction. However, most previous studies used seeds of different plant species, making it difficult to tease apart the relative effect of each single nutrient on rodent foraging behavior due to confounding combinations of nutrient contents across seed species. Hence, to further explore how tannin and different nutritional traits of seeds affect scatter-hoarding rodent foraging preferences, we manipulated tannin, fat, protein, and starch content levels, as well as seed size, using an artificial seed system. Our results showed that both tannin and the various nutrients significantly affected rodent foraging preferences, and these effects were also strongly modulated by seed size. In general, rodents preferred to remove seeds with less tannin. Fat addition could counteract the negative effect of tannin on seed removal by rodents, while the effect of protein addition was weaker. Starch by itself had no effect, but it interacted with tannin in a complex way. Our findings shed light on the effects of tannin and nutrient content on seed removal by scatter-hoarding rodents. We therefore believe that these and perhaps other seed traits interactively influence this important plant-rodent interaction. However, how selection operates on seed traits to counterbalance these competing factors merits further study.
Information-theoretic wavelet noise removal for inverse elastic wave scattering theory
NASA Astrophysics Data System (ADS)
van Nevel, Alan J.; Defacio, Brian; Neal, Steven P.
1999-03-01
A discussion of noise removal in ultrasound (elastic wave) scattering for nondestructive evaluation is given. The methods used in this paper include a useful suboptimal Wiener filter, information theory, and orthonormal wavelets. The multiresolution analysis (MRA), due to Mallat, is the key wavelet feature used here. Whereas Fourier transforms have a translational symmetry, wavelets have a dilation or affine symmetry which consists of the semi-direct product of a translation with a change of scale of the variable. The MRA describes the scale change features of orthonormal wavelet families. First, an empirical method of noise removal from scattered elastic waves using wavelets is shown to markedly improve the l1 and l2 error norms. This suggests that the wavelet scale can act as a dial to ``tune out'' noise. Maximization of the Kullback-Leibler information is also shown to provide a scale-dependent noise removal technique that supports (but does not prove) the intuition that certain small-energy coefficients that are retained contain large information content. The wavelet MRA thereby locates ``islands of information'' in the phase space of the signal. It is conjectured that this method holds more generally.
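A minimal MRA-based denoiser in the spirit described: decompose with Haar wavelets, soft-threshold the detail coefficients scale by scale, and reconstruct. The signal, noise level, and threshold are illustrative choices, not the paper's.

```python
import numpy as np

def haar_decompose(signal, levels):
    """One-scale-at-a-time orthonormal Haar decomposition (minimal MRA)."""
    coeffs, approx = [], np.asarray(signal, float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail coefficients
        approx = (even + odd) / np.sqrt(2)         # coarse approximation
    return approx, coeffs

def haar_reconstruct(approx, coeffs):
    """Exact inverse of haar_decompose."""
    for detail in reversed(coeffs):
        out = np.empty(2 * len(approx))
        out[0::2] = (approx + detail) / np.sqrt(2)
        out[1::2] = (approx - detail) / np.sqrt(2)
        approx = out
    return approx

def soft_threshold(x, thresh):
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

# Thresholding small detail coefficients: 'islands of information'
# survive while scale-localized noise is tuned out.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + rng.normal(0, 0.3, 256)
approx, coeffs = haar_decompose(noisy, 4)
coeffs = [soft_threshold(c, 0.5) for c in coeffs]
denoised = haar_reconstruct(approx, coeffs)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)  # True: l2 error norm is improved
```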
Removing source-side scattering for virtual deep seismic sounding (VDSS)
NASA Astrophysics Data System (ADS)
Yu, Chun-Quan; Chen, Wang-Ping; van der Hilst, Robert D.
2013-12-01
We present a method that extends both the applicability and the quality of virtual deep seismic sounding (VDSS)-a technique for estimating crustal thickness that is robust even if the crust-mantle transition is complex or the crustal thickness is large. The results are important for studies of crustal contributions to isostasy and for understanding dynamic topography due to mantle convection. VDSS uses S-to-P conversions beneath seismic stations as virtual sources for large, post-critical reflections off the Moho, that is, the seismic phase SsPmp. Original applications of VDSS rely on deep earthquakes as sources of illumination to circumvent strong, near-source scattering (e.g. depth phases) and are, therefore, limited by the uneven distribution of deep seismicity. The method presented here effectively removes effects of the earthquake source wavelet (SW, including complexities arising from long, complicated source time functions and near-source scattering) and can be applied to signal from shallow and deep earthquakes. It involves two steps. First, based on analyses of particle motion, we separate `pseudo-P' and `pseudo-S' wave trains from the vertical and the radial component of ground motion. The latter is then used as the appropriate reference time-series for the deconvolution of the vertical and the radial component of ground motion. Since the reference time-series contains both the SW and S-type signals due to scattering near the receiver, the deconvolution also effectively removes S-type multiples, such as the phase SsPms and related reverberations. Applying this method to synthetic seismograms verifies that it is robust in removing complex SWs, even in the presence of random or signal-generated noise. The method is further validated using data recorded by the Hi-CLIMB array from both deep and shallow earthquakes. Impulsive signals are now routinely achieved, significantly improving both the quality and quantity of results from VDSS.
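The deconvolution step can be sketched with a water-level spectral division, one common way to stabilize deconvolving a reference time-series from a trace; whether the authors use exactly this stabilization is not stated in the abstract, and the traces below are synthetic.

```python
import numpy as np

def waterlevel_deconvolve(trace, reference, water=0.01):
    """Frequency-domain deconvolution of a reference time-series (e.g.
    a 'pseudo-S' wave train carrying the source wavelet) from a trace
    (e.g. the 'pseudo-P' component), with a water level to stabilize
    division at notches in the reference spectrum."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    R = np.fft.rfft(reference, n)
    denom = (R * np.conj(R)).real
    denom = np.maximum(denom, water * denom.max())
    return np.fft.irfft(T * np.conj(R) / denom, n)

# Synthetic check: the trace is the wavelet delayed by 30 samples, so
# deconvolving the wavelet should collapse it to a spike near lag 30.
n = 256
wavelet = np.zeros(n)
wavelet[:5] = [0.5, 1.0, -0.7, 0.3, -0.1]
trace = np.roll(wavelet, 30)
rf = waterlevel_deconvolve(trace, wavelet)
print(int(np.argmax(np.abs(rf))))
```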
A novel removable shield attached to C-arm units against scattered X-rays from a patient's side.
Mori, Hiroshige; Koshida, Kichiro; Ishigamori, Osamu; Matsubara, Kosuke
2014-08-01
We invented a drape-like shield against scattered X-rays that can safely come into contact with medical equipment or people during fluoroscopically guided procedures. The shield can be easily removed from a C-arm unit using one hand. We evaluated the use of the novel removable shield during the endoscopic retrograde cholangiopancreatography (ERCP) procedure. We measured the dose rate of scattered X-rays around endoscopists with and without this removable shield and surveyed the occupational doses to the ERCP staff. We also examined the endurance of the shield. The removable shield reduced the dose rate of scattered X-rays to one-tenth and reduced the monthly dose to an endoscopist by at least two-fifths. For 2.5 years, there was no damage to the shield and no loosening of the seam. The bonding of the hook-and-loop fasteners did not weaken, although the powerful double-sided tapes made especially for plastic did. The removable shield can reduce radiation exposure to the ERCP staff and may contribute to reducing the exposure to the eye lenses of operators. It would also be possible to expand its use to other fluoroscopically guided procedures besides ERCP because it is a light, simple, and useful device. • We invented a shield that can be removed from C-arm units with one hand. • The removable shield reduces the dose rate of X-rays to one-tenth. • The removable shield reduces operator exposure by two-fifths. • The removable shield is durable, lasting for several years. • The drape-like removable shield is light, simple, and useful.
Rana, R; Jain, A; Shankar, A; Bednarek, D R; Rudin, S
2016-02-27
In radiography, one of the best methods to eliminate image-degrading scatter radiation is the use of anti-scatter grids. However, with high-resolution dynamic imaging detectors, stationary anti-scatter grids can leave grid-line shadows and moiré patterns on the image, depending upon the line density of the grid and the sampling frequency of the x-ray detector. Such artifacts degrade the image quality and may mask small but important details such as small vessels and interventional device features. Appearance of these artifacts becomes increasingly severe as the detector spatial resolution is improved. We have previously demonstrated that, to remove these artifacts by dividing out a reference grid image, one must first subtract the residual scatter that penetrates the grid; however, for objects with anatomic structure, scatter varies throughout the FOV and a spatially differing amount of scatter must be subtracted. In this study, a standard stationary Smit-Rontgen X-ray grid (line density: 70 lines/cm; grid ratio: 13:1) was used with a high-resolution CMOS detector, the Dexela 1207 (pixel size: 75 µm), to image anthropomorphic head phantoms. For a 15 × 15 cm FOV, scatter profiles of the anthropomorphic head phantoms were estimated and then iteratively modified to minimize the structured noise due to the varying grid-line artifacts across the FOV. Images of the anthropomorphic head phantoms taken with the grid, before and after the corrections, were compared, demonstrating almost total elimination of the artifact over the full FOV. Hence, with proper computational tools, anti-scatter grid artifacts can be corrected, even during dynamic sequences.
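The correction described above can be sketched numerically. The following is a minimal, hypothetical illustration (not the authors' code): the scatter estimate is subtracted from both the object image and the flat-field grid reference, and the grid-line pattern is then divided out. All names and values are invented for the demonstration.

```python
import numpy as np

def remove_grid_lines(img, grid_ref, scatter, scatter_ref):
    """Divide out the grid pattern after subtracting residual scatter.

    img         : object image acquired with the anti-scatter grid
    grid_ref    : flat-field reference image acquired with the same grid
    scatter     : estimated residual scatter in the object image
    scatter_ref : estimated residual scatter in the reference image
    """
    primary = img - scatter            # scatter-corrected object image
    pattern = grid_ref - scatter_ref   # scatter-corrected grid pattern
    return primary / pattern

# Synthetic demonstration on a 1-D profile
x = np.linspace(0.0, 1.0, 512)
obj = 1.0 + 0.5 * np.exp(-((x - 0.5) / 0.1) ** 2)    # smooth "anatomy"
grid = 1.0 - 0.2 * (np.cos(2 * np.pi * 70 * x) > 0)  # grid-line shadows
img = obj * grid + 0.3                               # object + residual scatter
grid_ref = grid + 0.25                               # flat field + its scatter

corrected = remove_grid_lines(img, grid_ref, scatter=0.3, scatter_ref=0.25)
assert np.allclose(corrected, obj)  # grid lines and scatter both removed
```

In practice the scatter term varies across the FOV, which is exactly why the abstract describes iteratively refining a spatially varying scatter profile rather than using the constants assumed here.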
Yoon, Yongsu; Morishita, Junji; Park, MinSeok; Kim, Hyunji; Kim, Kihyun; Kim, Jungmin
2016-01-01
The purpose of this study is to investigate the feasibility of a novel indirect flat panel detector (FPD) system for removing scatter radiation. The substrate layer of our FPD system has a Pb net-like structure that matches the ineffective area and blocks the scatter radiation such that only primary X-rays reach the effective area on a thin-film transistor. To evaluate the performance of the proposed system, we used Monte Carlo simulations to derive the scatter fraction and contrast. The scatter fraction of the proposed system is lower than that of a parallel grid system, and the contrast is superior to that of a system without a grid. If the structure of the proposed FPD system is optimized with respect to the specifications of a specific detector, the purpose of the examination, and the energy range used, the FPD can be useful in diagnostic radiology.
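The quantities compared in this abstract, scatter fraction and contrast, can be illustrated with a short sketch. The definitions below (SF = S/(S + P) and a simple Weber-type contrast) are the usual ones, but every numerical value is invented and is not from the paper's Monte Carlo results.

```python
# Scatter fraction SF = S / (S + P), where S is the scattered and P the
# primary signal reaching the detector. Added scatter lifts both the
# background and the feature, which reduces the measured contrast.
def scatter_fraction(primary, scatter):
    return scatter / (scatter + primary)

def contrast(signal_bg, signal_feature, scatter):
    bg = signal_bg + scatter
    feat = signal_feature + scatter
    return (bg - feat) / bg

P_bg, P_feat = 100.0, 60.0   # invented primary signals
S_grid, S_none = 15.0, 40.0  # invented scatter with and without rejection

sf_grid = scatter_fraction(P_bg, S_grid)
sf_none = scatter_fraction(P_bg, S_none)
assert sf_grid < sf_none                                  # rejection lowers SF
assert contrast(P_bg, P_feat, S_grid) > contrast(P_bg, P_feat, S_none)
```

This is the sense in which the proposed FPD's lower scatter fraction translates into contrast superior to a gridless system.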
Radiation properties and emissivity parameterization of high level thin clouds
NASA Technical Reports Server (NTRS)
Wu, M.-L. C.
1984-01-01
To parameterize the emissivity of clouds at 11 microns, a study has been made of the radiation field of thin clouds. The contributions to the intensity and flux from different sources and through different physical processes are calculated using the method of successive orders of scattering. The effective emissivity of thin clouds is decomposed into the effective absorption emissivity, effective scattering emissivity, and effective reflection emissivity. The effective absorption emissivity depends on the absorption and emission of the cloud; it is parameterized in terms of optical thickness. The effective scattering emissivity depends on the scattering properties of the cloud; it is parameterized in terms of optical thickness and single-scattering albedo. The effective reflection emissivity follows the similarity relation, as in the near-infrared case; it is parameterized in terms of the similarity parameter and optical thickness, as well as the temperature difference between the cloud and the ground.
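The decomposition described above can be written compactly. The functional forms below are standard plane-parallel expressions added here for illustration only; they are not the specific fits of the paper.

```latex
\varepsilon_{\mathrm{eff}} \;=\; \varepsilon_{a} + \varepsilon_{s} + \varepsilon_{r},
\qquad
\varepsilon_{a}(\tau) \;\approx\; 1 - e^{-\tau/\mu},
\qquad
s \;=\; \sqrt{\frac{1-\omega_{0}}{1-\omega_{0}\,g}},
```

where τ is the cloud optical thickness, μ the cosine of the viewing zenith angle, ω₀ the single-scattering albedo, g the asymmetry parameter, and s the similarity parameter that governs the reflection term.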
Roy, James R; Sun, Philip; Ison, Glenn; Prasan, Ananth M; Ford, Tom; Hopkins, Andrew; Ramsay, David R; Weaver, James C
2017-06-01
Objectives: The aim of this study was to quantify the radiation dose reduction during coronary angiography and percutaneous coronary intervention (PCI) through removal of the anti-scatter grid (ASG), and to assess its impact on image quality in adult patients with a low body mass index (BMI). Methods: A phantom with different thicknesses of acrylic was used with a Westmead Test Object to simulate patient sizes and assess image quality. 129 low-BMI patients underwent coronary angiography or PCI with or without the ASG in situ. Radiation dose was compared between the two patient groups. Results: With the same imaging system and a comparable patient population, ASG removal was associated with a 47% reduction in total dose-area product (DAP) (p < 0.001). Peak skin dose was reduced by 54% (p < 0.001). Operator scatter dose was significantly reduced, to a similar degree. Using an image quality phantom, it was demonstrated that image quality remained satisfactory. Conclusions: Removal of the ASG is a simple and effective method to significantly reduce radiation dose in coronary angiography and PCI. This was achieved while maintaining adequate diagnostic image quality. Selective removal of the ASG is likely to improve the radiation safety of cardiac angiography and interventions.
Werner, Liliana; Morris, Caleb; Liu, Erica; Stallings, Shannon; Floyd, Anne; Ollerton, Andrew; Leishman, Lisa; Bodnar, Zachary
2014-01-01
To assess the potential effect of surface light scattering on light transmittance of 1-piece hydrophobic acrylic intraocular lenses (IOLs) with or without a blue-light filter. John A. Moran Eye Center, University of Utah, Salt Lake City, Utah, USA. Experimental study. Intraocular lenses were obtained from human cadavers (49 IOLs total; 36 with a blue-light filter) and from finished-goods inventory (controls). The IOLs were removed from the cadaver eyes and matched in power and model to unused controls. After surface proteins were removed, the IOLs were hydrated for 24 hours at room temperature. Surface light scattering was measured with a Scheimpflug camera (EAS-1000 Anterior Segment Analysis System). Light transmittance was measured with a Lambda 35 UV/Vis spectrophotometer (single-beam configuration; RSA-PE-20 integrating sphere). Hydrated scatter values ranged from 4.8 to 202.5 computer-compatible tape (CCT) units for explanted IOLs with a blue-light filter versus 1.5 to 11.8 CCT units for their controls, and from 6.0 to 137.5 CCT units for explanted IOLs without a blue-light filter versus 3.5 to 9.6 CCT units for their controls. In both groups, there was a tendency toward increasing scatter values with increasing postoperative time. No differences in light transmittance were observed between explanted IOLs and controls in either group (IOLs with blue-light filter: P=.407; IOLs with no blue-light filter: P=.487; both paired t test). Although surface light scattering of explanted IOLs was significantly higher than that of controls and appeared to increase with time, no effect was observed on light transmittance of 1-piece hydrophobic acrylic IOLs with or without a blue-light filter.
Parameterization of the nuclear Hulthén potentials
Laha, U.; Bhoi, J.
2016-01-15
Within the formalism of the supersymmetry-inspired factorization method, a two-term nuclear Hulthén potential has been developed and parameterized to reproduce the nucleon–nucleon scattering phase shifts for the P and D partial wave states.
Thomas, P J; Midgley, P A
2001-08-01
The increased spectral information obtained by acquiring an EFTEM image-series over several hundred eV allows plural scattering to be removed from loss images using standard deconvolution techniques developed for the quantification of EEL spectra. In this work, both Fourier-log and Fourier-ratio deconvolution techniques have been applied successfully to such image-series. Application of the Fourier-log technique over an energy-loss range of several hundred eV has been achieved by implementation of a novel method that extends the effective dynamic range of EFTEM image-series acquisition by over four orders of magnitude. Experimental results show that the removal of plural scattering from EFTEM image-series gives a significant improvement in quantification for thicker specimen regions. Further, the recovery of the single-scattering distribution using the Fourier-log technique over an extended energy-loss range is shown to result in an increase in both the ionisation-edge jump-ratio and the signal-to-noise ratio.
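The Fourier-ratio idea used above can be sketched in a few lines. This is a generic, idealized illustration (circular convolution, no noise handling, no reconvolution window), not the authors' implementation; in the paper the same deconvolution is applied pixel-by-pixel across an EFTEM image-series.

```python
import numpy as np

def fourier_ratio_deconvolve(core_meas, low_loss):
    """Recover the single-scattering core-loss signal.

    Plural scattering convolves the ideal core-loss spectrum with the
    normalized low-loss spectrum; dividing in Fourier space undoes it.
    """
    i0 = low_loss.sum()  # total low-loss intensity (normalization)
    return np.real(np.fft.ifft(i0 * np.fft.fft(core_meas) / np.fft.fft(low_loss)))

# Synthetic test: build a plural-scattering spectrum, then recover the ideal one
n = 256
e = np.arange(n)
ideal = np.where(e >= 80, np.exp(-(e - 80) / 40.0), 0.0)  # idealized edge
low_loss = np.zeros(n)
low_loss[0] = 1.6                                          # zero-loss peak
low_loss[20] = 0.4                                         # plasmon at 20 "eV"
meas = np.real(np.fft.ifft(np.fft.fft(ideal) * np.fft.fft(low_loss))) / low_loss.sum()

recovered = fourier_ratio_deconvolve(meas, low_loss)
assert np.allclose(recovered, ideal, atol=1e-10)
```

Real spectra require noise suppression (the division amplifies high frequencies), which is one reason the extended dynamic range reported above matters.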
Parameterizing turbulence over abrupt topography
NASA Astrophysics Data System (ADS)
Klymak, Jody
2016-11-01
Stratified flow over abrupt topography generates a spectrum of propagating internal waves at large scales and non-linear overturning breaking waves at small scales. For oscillating flows, the large-scale waves propagate away as internal tides; for steady flows, they propagate away as standing "columnar modes". At small scales, the breaking waves appear to be similar for either oscillating or steady flows, so long as, in the oscillating case, the topography is significantly steeper than the internal tide angle of propagation. The size of, and energy lost to, the breaking waves can be predicted relatively well by assuming that internal modes that propagate horizontally more slowly than the barotropic internal tide speed are arrested and their energy goes to turbulence. This leads to a recipe for dissipation of internal tides at abrupt topography that is quite robust for both the local internal tide generation problem (barotropic forcing) and the scattering problem (internal tides incident on abrupt topography). Limitations arise when linear generation models break down, an example of which is interference between two ridges. A single "super-critical" ridge is well modeled by a single knife-edge topography, regardless of its actual shape, but two supercritical ridges in close proximity demonstrate interference of the high modes that makes knife-edge approximations invalid. A future direction of this research will be to use more complicated linear models to estimate the local dissipation. Of course, despite the large local dissipation, many ridges radiate most of their energy into the deep ocean, so tracking this low-mode radiated energy is very important; it means that dissipation in the open ocean due to these sinks from the surface tide cannot be parameterized locally to where the energy is lost from the surface tide, but instead leads to non-local parameterizations. US Office of Naval Research; Canadian National Science and
Satellite-Based Model Parameterization of Diabatic Heating
NASA Technical Reports Server (NTRS)
Pielke, Roger, Sr.; Stokowski, David; Wang, Jih-Wang; Vukicevic, Tomislava; Leoncini, Giovanni; Matsui, Toshihisa; Castro, Christopher L.; Niyogi, Dev; Kishtawal, Chandra M.; Biazar, Arastoo; Doty, Kevin; McNider, Richard T.; Nair, Udaysankar; Tao, Wei-Kuo
2007-01-01
Future meteorological satellites are expected to provide much needed fine-scale information that can improve the accuracy of weather and climate models. As one application of this improved capability, we introduce the concept of a generalized parameterization framework using satellite datasets that will increase the accuracy and the computational efficiency of weather and climate modeling. In an atmospheric model, several different parameterizations usually are used to reproduce the various physical processes. However, it is generally unrealistic to separate the processes in this artificial way since the observations and physics make no such artificial separation. Thus, we propose a new unified parameterization framework to remove the unrealistic separation between parameterizations.
The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)
2000-01-01
The microphysical parameterization of clouds and rain cells plays a central role in atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the rain drop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 0.55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that, in general, all but two parameterizations produce calculated T(sub B)'s that fall within the observed clear-air minima and maxima. The exceptions are parameterizations that enhance the scattering characteristics of frozen hydrometeors.
Yoon, Y; Park, M; Kim, H; Kim, K; Kim, J; Morishita, J
2015-06-15
Purpose: This study aims to identify the feasibility of a novel cesium-iodide (CsI)-based flat-panel detector (FPD) for removing scatter radiation in diagnostic radiology. Methods: The indirect FPD comprises three layers: a substrate, a scintillation, and a thin-film-transistor (TFT) layer. The TFT layer has a matrix structure with pixels. There are ineffective dimensions on the TFT layer, such as the voltage and data lines; therefore, we devised a new FPD system having net-like lead in the substrate layer, matching the ineffective area, to block the scatter radiation so that only primary X-rays could reach the effective dimension. To evaluate the performance of this new FPD system, we conducted a Monte Carlo simulation using MCNPX 2.6.0 software. Scatter fractions (SFs) were acquired using no grid, a parallel grid (8:1 grid ratio), and the new system, and the performances were compared. Two systems having different thicknesses of lead in the substrate layer, 10 and 20 μm, were simulated. Additionally, we examined the effects of different pixel sizes (153 × 153 and 163 × 163 μm) on the image quality, while keeping the effective area of the pixels constant (143 × 143 μm). Results: In the case of 10 μm lead, the SFs of the new system (∼11%) were lower than those of the other systems (∼27% with no grid, ∼16% with the parallel grid) at 40 kV. However, as the tube voltage increased, the SF of the new system (∼19%) became higher than that of the parallel grid (∼18%) at 120 kV. In the case of 20 μm lead, the SFs of the new system were lower than those of the other systems over the whole tube voltage range (40-120 kV). Conclusion: The novel CsI-based FPD system for removing scatter radiation is feasible for improving the image contrast but must be optimized with respect to the lead thickness, considering the system's purposes and the tube voltage ranges used in diagnostic radiology. This study was supported by a grant (K1422651) from the Institute of Health Science, Korea University.
Dynamic Parameterization of IPSEC
2001-12-01
By providing dynamic parameterization to IPsec, government and military security systems will be able to
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
Keywords: computational fluid dynamics; radiation; clouds; turbulence; convection; gravity waves; surface interaction; radiation interaction; cloud and aerosol microphysics; complexity (vegetation, biogeochemistry); radiation versus turbulence/convection; stochastic approach; non-linearities; Monte Carlo; high resolutions; large-eddy simulations; cloud structure; plumes; saturation in tropics; forecasting; parameterizations; stochastic; radiation-cloud interaction; hurricane forecasts
Advanced Surface Flux Parameterization
2001-09-30
Within PE 0602435N are BE-35-2-18, for the Mesoscale Modeling of the Atmosphere and Aerosols, and BE-35-2-19, for the Exploratory Data Assimilation ... Methods. A related project at NPS is N0001401WR20242, for Evaluating Surface Flux and Boundary Layer Parameterizations in Mesoscale Models Using
Lightweight Parameterized Suffix Array Construction
NASA Astrophysics Data System (ADS)
Tomohiro, I.; Deguchi, Satoshi; Bannai, Hideo; Inenaga, Shunsuke; Takeda, Masayuki
We present the first algorithm for direct construction of parameterized suffix arrays and parameterized longest common prefix arrays for non-binary strings. Experimental results show that our algorithm is much faster than naïve methods.
Parameterization of the scavenging coefficient for particle scavenging by drops
NASA Astrophysics Data System (ADS)
Fredericks, Steven; Saylor, J. R.
2014-11-01
The removal of particles by drops occurs in many environmentally relevant scenarios such as particle fallout from rain, as well as in many industrial applications such as sprays for dust control in mines. In applications like these the ability of a drop to scavenge a particle is quantified by the scavenging coefficient, E, which is the fraction of particles removed. Though the physics controlling particle scavenging by drops suggests that E is controlled by several dimensionless groups, E is typically correlated to just the Stokes number. A survey of published experimental data shows significant scatter in plots of E versus the Stokes number, occasionally exceeding three orders of magnitude. There is also a large discrepancy between the published theories for E. A parameterization study was conducted to ascertain if and how inclusion of other dimensionless groups could better collapse the extant data for E and the results of that study are presented in this talk. Brief mention will also be made of recent experiments by the authors where E was measured for a liquid drop suspended in an ultrasonic standing wave field, where the drop diameter and gas velocity can be independently varied unlike the more typical experiments where these quantities are coupled.
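For reference, one common convention for the Stokes number used in drop-scavenging correlations is sketched below. The definition (particle relaxation time over drop flow timescale, with 18 in the denominator) follows the usual inertial-impaction convention, and the numerical values are invented, not taken from the survey above.

```python
def stokes_number(rho_p, d_p, u_rel, mu_g, d_drop):
    """St = rho_p * d_p**2 * u_rel / (18 * mu_g * d_drop)  (one common convention).

    rho_p  : particle density [kg/m^3]
    d_p    : particle diameter [m]
    u_rel  : relative gas-drop velocity [m/s]
    mu_g   : gas dynamic viscosity [Pa s]
    d_drop : drop diameter [m]
    """
    return rho_p * d_p**2 * u_rel / (18.0 * mu_g * d_drop)

# A 2 um unit-density particle and a 1 mm drop falling at ~4 m/s in air
st = stokes_number(rho_p=1000.0, d_p=2e-6, u_rel=4.0, mu_g=1.8e-5, d_drop=1e-3)
assert 0.01 < st < 1.0  # inertial impaction is weak but not negligible here
```

The scatter in published E-versus-Stokes plots noted above is one indication that additional dimensionless groups (e.g. Reynolds number, diameter ratio) matter beyond St alone.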
[Characteristics and Parameterization for Atmospheric Extinction Coefficient in Beijing].
Chen, Yi-na; Zhao, Pu-sheng; He, Di; Dong, Fan; Zhao, Xiu-juan; Zhang, Xiao-ling
2015-10-01
In order to study the characteristics of the atmospheric extinction coefficient in Beijing, systematic measurements of atmospheric visibility, PM2.5 concentration, scattering coefficient, black carbon, reactive gases, and meteorological parameters were carried out from 2013 to 2014. Based on these data, we compared several published fitting schemes for the aerosol light scattering enhancement factor [f(RH)] and discussed the characteristics and key influence factors of the atmospheric extinction coefficient. A set of parameterization models of the atmospheric extinction coefficient for different seasons and different pollution levels was then established. The results showed that aerosol scattering accounted for more than 94% of total light extinction. In summer and autumn, aerosol hygroscopic growth caused by high relative humidity increased the aerosol scattering coefficient by 70 to 80 percent. The parameterization models could reflect the influence of aerosol and relative humidity upon ambient light extinction and describe the seasonal variations of aerosol light extinction ability.
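A minimal sketch of the kind of parameterization described above: dry scattering enhanced by a hygroscopic-growth factor, plus absorption. The power-law form of f(RH) below is one common choice from the literature, and the coefficients are invented, not the fitted Beijing values.

```python
def f_rh(rh, gamma=0.6):
    """Scattering enhancement factor, f(RH) = (1 - RH/100)**(-gamma)."""
    return (1.0 - rh / 100.0) ** (-gamma)

def extinction(b_scat_dry, b_abs, rh, gamma=0.6):
    """Ambient extinction: hygroscopically enhanced scattering + absorption."""
    return b_scat_dry * f_rh(rh, gamma) + b_abs

b_dry, b_abs = 300.0, 20.0   # invented dry scattering / absorption, Mm^-1
b40 = extinction(b_dry, b_abs, rh=40.0)
b85 = extinction(b_dry, b_abs, rh=85.0)
assert b85 > b40          # humid air extinguishes more light
assert b_abs / b85 < 0.06 # scattering dominates, as the abstract reports (>94%)
```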
Parameterization of solar cells
NASA Technical Reports Server (NTRS)
Appelbaum, J.; Chait, A.; Thompson, D.
1992-01-01
The aggregation (sorting) of the individual solar cells into an array is commonly based on a single operating point on the current-voltage (I-V) characteristic curve. An alternative approach for cell performance prediction and cell screening is provided by modeling the cell using an equivalent electrical circuit, in which the parameters involved are related to the physical phenomena in the device. These analytical models may be represented by a double exponential I-V characteristic with seven parameters, by a double exponential model with five parameters, or by a single exponential equation with four or five parameters. In this article we address issues concerning methodologies for the determination of solar cell parameters based on measured data points of the I-V characteristic, and introduce a procedure for screening of solar cells for arrays. We show that common curve fitting techniques, e.g., least squares, may produce many combinations of parameter values while maintaining a good fit between the fitted and measured I-V characteristics of the cell. Therefore, techniques relying on curve fitting criteria alone cannot be directly used for cell parameterization. We propose a consistent procedure which takes into account the entire set of parameter values for a batch of cells. This procedure is based on a definition of a mean cell representing the batch, and takes into account the relative contribution of each parameter to the overall goodness of fit. The procedure is demonstrated on a batch of 50 silicon cells for Space Station Freedom.
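The single-exponential (five-parameter) model mentioned above can be sketched as follows. The implicit current equation is solved here by simple bisection, and the parameter values are illustrative assumptions, not fitted to any real cell.

```python
import math

def diode_current(v, i_ph, i_0, n_vt, r_s, r_sh):
    """Solve I = I_ph - I_0*(exp((V + I*R_s)/(n*V_t)) - 1) - (V + I*R_s)/R_sh
    for I by bisection (the equation is implicit in I because of R_s)."""
    def residual(i):
        return (i_ph - i_0 * (math.exp((v + i * r_s) / n_vt) - 1.0)
                - (v + i * r_s) / r_sh - i)
    lo, hi = -i_ph, 2.0 * i_ph         # bracket; residual decreases with i
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters for a single silicon cell
params = dict(i_ph=3.0, i_0=1e-9, n_vt=0.035, r_s=0.01, r_sh=50.0)
i_sc = diode_current(0.0, **params)    # short-circuit current
i_mid = diode_current(0.4, **params)
i_hi = diode_current(0.65, **params)   # approaching open circuit
assert abs(i_sc - params["i_ph"]) < 0.05  # I_sc is close to I_ph
assert i_sc > i_mid > i_hi                # current falls as voltage rises
```

As the abstract cautions, fitting these parameters by least squares alone is ill-posed: many parameter combinations reproduce the same I-V curve, which motivates the batch-consistent screening procedure described.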
A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals
NASA Technical Reports Server (NTRS)
VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.
2014-01-01
A parameterization is presented that provides the extinction cross section σ_e, single-scattering albedo ω, and asymmetry parameter g of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave. Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion levels. In the standard geometric optics implementation used here, σ_e is always twice the particle projected area. It is shown that ω is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of g with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and ω are investigated and factors are determined to scale the parameterized g to provide values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 micron, revealing absolute differences with reference calculations of ω and g that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences with reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.
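For reference, the geometric optics relations implied above can be written out. These are standard identities; the paper's specific absorption size parameter definition is not reproduced here.

```latex
\sigma_e = 2A,
\qquad
\sigma_s = \omega\,\sigma_e,
\qquad
\sigma_a = (1-\omega)\,\sigma_e,
```

where A is the particle projected area and σ_s, σ_a are the scattering and absorption cross sections.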
Assessment of Mixed Layer Mesoscale Parameterization in Eddy Resolving Simulations.
NASA Astrophysics Data System (ADS)
Clayson, C. A.; Luneva, M. V.; Dubovikov, M. S.
2014-12-01
In eddy-resolving simulations we test a mixed layer mesoscale parameterization developed recently by Canuto and Dubovikov (2011). The parameterization yields the horizontal and vertical mesoscale fluxes in terms of coarse-resolution fields and eddy kinetic energy. An expression for the latter in terms of mean fields has also been found, yielding a closed parameterization in terms of the mean fields only. In 40 numerical experiments we simulated two types of flows: idealized flows driven by baroclinic instabilities only, and more realistic flows driven by wind and surface fluxes as well as by inflow-outflow in shallow and narrow straits. The diagnosed quasi-instantaneous horizontal and vertical mesoscale buoyancy fluxes (averaged over 1°-2° and 10 days) demonstrate a strong scatter typical of turbulent flows; however, the fluxes are highly correlated with the parameterization. After averaging over 3-4 months, diffusivities diagnosed from the eddy-resolving simulations are quite consistent with the parameterization for a broad range of parameters. Diagnosed vertical mesoscale fluxes restratify the mixed layer and are in good agreement with the parameterization unless vertical turbulent mixing in the upper layer becomes strong enough to compare with mesoscale advection. In the latter case, numerical simulations demonstrate that the deviation of the fluxes from the parameterization is controlled by the dimensionless parameter γ, which estimates the ratio of the vertical diffusion term to mesoscale advection. An empirical dependence of the vertical flux on γ is found. An analysis using a modified omega-equation reveals that the effect of the vertical mixing of vorticity is responsible for the two- to three-fold amplification of the vertical mesoscale flux. Possible physical mechanisms responsible for the amplification of the vertical mesoscale flux are discussed.
NASA Technical Reports Server (NTRS)
Hong, Byungsik; Maung, Khin Maung; Wilson, John W.; Buck, Warren W.
1989-01-01
The derivations of the Lippmann-Schwinger equation and the Watson multiple scattering series are given. A simple optical potential is found to be the first term of that series. The harmonic-well and Woods-Saxon number density distribution models of the nucleus are used without a t-matrix taken from the scattering experiments. The parameterized two-body inputs, which are kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to the imaginary part of the forward elastic scattering amplitude, are presented. The eikonal approximation was chosen as the solution method to estimate the total and absorptive cross sections for kaon-nucleus scattering.
Summary of Cumulus Parameterization Workshop
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Starr, David OC.; Hou, Arthur; Newman, Paul; Sud, Yogesh
2002-01-01
A workshop on cumulus parameterization took place at the NASA Goddard Space Flight Center from December 3-5, 2001. The major objectives of this workshop were (1) to review the problem of representation of moist processes in large-scale models (mesoscale models, Numerical Weather Prediction models and Atmospheric General Circulation Models), (2) to review the state-of-the-art in cumulus parameterization schemes, and (3) to discuss the need for future research and applications. There were a total of 31 presentations and about 100 participants from the United States, Japan, the United Kingdom, France and South Korea. The specific presentations and discussions during the workshop are summarized in this paper.
Parameterization of photon beam dosimetry for a linear accelerator.
Lebron, Sharon; Lu, Bo; Yan, Guanghua; Kahler, Darren; Li, Jonathan G; Barraclough, Brendan; Liu, Chihray
2016-02-01
In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS-modeled data, (3) the quality assurance process of a linear accelerator's (Linac) beam characteristics, (4) the establishment of a standard data set for comparison with other data, etc. Parameterization of the photon beam dosimetry creates a data set that is portable and easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon beam dosimetric quantities, including percentage depth doses (PDDs), profiles, and total scatter output factors (S(cp)). S(cp), PDDs, and profiles for different field sizes, depths, and energies were measured for a Linac using a cylindrical 3D water scanning system. All data were smoothed for the analysis, and profile data were also centered, symmetrized, and geometrically scaled. The S(cp) data were analyzed using an exponential function. The inverse square factor was removed from the PDD data before modeling, and the data were subsequently analyzed using exponential functions. For profile modeling, one half-side of the profile was divided into three regions described by exponential, sigmoid, and Gaussian equations. All of the analytical functions are field size, energy, depth, and, in the case of profiles, scan direction specific. The model's parameters were determined using the minimal amount of measured data necessary. The model's accuracy was evaluated via the calculation of absolute differences between the measured (processed) and calculated data in low gradient regions and distance-to-agreement analysis in high gradient regions. Finally, the results of dosimetric quantities obtained by the fitted models for a different machine were also assessed. All of the differences in the PDDs' buildup and the profiles' penumbra regions were less than 2 and 0
Parameterization of photon beam dosimetry for a linear accelerator
Lebron, Sharon; Barraclough, Brendan; Lu, Bo; Yan, Guanghua; Kahler, Darren; Li, Jonathan G.; Liu, Chihray
2016-02-15
Purpose: In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS modeled data, (3) the quality assurance process of a linear accelerator’s (Linac) beam characteristics, (4) the establishment of a standard data set for comparison with other data, etcetera. Parameterization of the photon beam dosimetry creates a data set that is portable and easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon beam dosimetric quantities, including percentage depth doses (PDDs), profiles, and total scatter output factors (S{sub cp}). Methods: S{sub cp}, PDDs, and profiles for different field sizes, depths, and energies were measured for a Linac using a cylindrical 3D water scanning system. All data were smoothed for the analysis and profile data were also centered, symmetrized, and geometrically scaled. The S{sub cp} data were analyzed using an exponential function. The inverse square factor was removed from the PDD data before modeling and the data were subsequently analyzed using exponential functions. For profile modeling, one halfside of the profile was divided into three regions described by exponential, sigmoid, and Gaussian equations. All of the analytical functions are field size, energy, depth, and, in the case of profiles, scan direction specific. The model’s parameters were determined using the minimal amount of measured data necessary. The model’s accuracy was evaluated via the calculation of absolute differences between the measured (processed) and calculated data in low gradient regions and distance-to-agreement analysis in high gradient regions. Finally, the results of dosimetric quantities obtained by the fitted models for a different machine were also assessed. Results: All of the differences in the PDDs’ buildup and the
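The three-region half-profile model described above (exponential in-field, sigmoid penumbra, Gaussian tail) can be sketched as follows; the functional forms follow the abstract, but the parameter names and piece boundaries are illustrative, not the authors' fitted equations:

```python
import numpy as np

def half_profile(x, edge, a_in, b_in, s, w, a_out, b_out):
    """Piecewise half-side profile model: exponential in the low-gradient
    in-field region, sigmoid across the penumbra around the field edge,
    and a Gaussian-like tail outside the field. All parameters illustrative."""
    y = np.empty_like(x, dtype=float)
    infield = x < edge - w                      # low-gradient, inside field
    penumbra = (x >= edge - w) & (x <= edge + w)  # high-gradient region
    outfield = x > edge + w                     # out-of-field tail
    y[infield] = a_in * np.exp(-b_in * x[infield])
    y[penumbra] = 1.0 / (1.0 + np.exp((x[penumbra] - edge) / s))
    y[outfield] = a_out * np.exp(-b_out * (x[outfield] - edge) ** 2)
    return y
```

By construction the sigmoid piece passes through 0.5 at the field edge, which is where distance-to-agreement analysis is most relevant.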
NASA Astrophysics Data System (ADS)
Roh, Y. H.; Yoon, Y.; Kim, K.; Kim, J.; Kim, J.; Morishita, J.
2016-10-01
Scattered radiation is the main reason for the degradation of image quality and the increased patient exposure dose in diagnostic radiology. In an effort to reduce scattered radiation, a novel structure of an indirect flat panel detector has been proposed. In this study, a performance evaluation of the novel system in terms of image contrast as well as an estimation of the number of photons incident on the detector and the grid exposure factor were conducted using Monte Carlo simulations. The image contrast of the proposed system was superior to that of the no-grid system but slightly inferior to that of the parallel-grid system. The number of photons incident on the detector and the grid exposure factor of the novel system were higher than those of the parallel-grid system but lower than those of the no-grid system. The proposed system exhibited the potential for reduced exposure dose without image quality degradation; additionally, it can be further improved through a structural optimization that takes the manufacturer's lead-content specifications into account.
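The two figures of merit named in the abstract can be computed directly from simulated photon counts; a minimal sketch with the usual definitions (the exact definitions used in the study may differ):

```python
def image_contrast(n_background, n_object):
    """Image contrast from detected photon counts in a background region
    and behind the object: C = (N_bg - N_obj) / N_bg."""
    return (n_background - n_object) / n_background

def grid_exposure_factor(n_detected_no_grid, n_detected_with_grid):
    """Grid (Bucky) exposure factor: the factor by which incident exposure
    must be raised after inserting a grid to restore the detector signal,
    estimated as the ratio of photons detected without vs. with the grid."""
    return n_detected_no_grid / n_detected_with_grid
```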
Parameterizing Plasmaspheric Hiss Wave Power by Plasmapause Location
NASA Astrophysics Data System (ADS)
Malaspina, D.; Jaynes, A. N.; Boule, C.; Bortnik, J.; Thaller, S. A.; Ergun, R.; Kletzing, C.; Wygant, J. R.
2016-12-01
Plasmaspheric hiss is a superposition of electromagnetic whistler-mode waves largely confined within the plasmasphere, the cold plasma torus surrounding Earth. Hiss plays an important role in radiation belt dynamics by pitch angle scattering electrons for a wide range of electron energies (tens of keV to > 1 MeV), which can result in their loss to the atmosphere. This interaction is often included in predictive models of radiation belt dynamics using statistical hiss wave power distributions derived from observations. However, the traditional approach to creating these distributions parameterizes hiss power by L-parameter (e.g. McIlwain L, dipole L, or L*) and a geomagnetic index (e.g. Dst or AE). Such parameterization introduces spatial averaging of dissimilar wave power radial profiles, resulting in heavily smoothed wave power distributions. This work instead parameterizes hiss wave power distributions using plasmapause location and distance from the plasmapause. Using Van Allen Probes data and these new parameterizations, previously unreported and highly repeatable features of the hiss wave power distribution become apparent. These features include: (1) The highest amplitude hiss wave power is concentrated over a narrower range of L than previous studies have indicated, and (2) the location of the peak in hiss wave power is determined by the plasmapause location, occurring at a consistent standoff distance Earthward of the plasmapause. Based on these features, parameterizing hiss using the plasmapause location and distance from the plasmapause may shed new light on hiss generation and propagation physics, as well as serve to improve the parameterization of hiss in predictive models of the radiation belts.
Haag's Theorem and Parameterized Quantum Field Theory
NASA Astrophysics Data System (ADS)
Seidewitz, Edwin
2017-01-01
"Haag's theorem is very inconvenient; it means that the interaction picture exists only if there is no interaction." In traditional quantum field theory (QFT), Haag's theorem states that any field unitarily equivalent to a free field must itself be a free field. But the derivation of the Dyson series perturbation expansion relies on the use of the interaction picture, in which the interacting field is unitarily equivalent to the free field, but which must still account for interactions. So, the usual derivation of the scattering matrix in QFT is mathematically ill-defined. Nevertheless, perturbative QFT is currently the only practical approach for addressing realistic scattering, and it has been very successful in making empirical predictions. This success can be understood through an alternative derivation of the Dyson series in a covariant formulation of QFT using an invariant, fifth path parameter in addition to the usual four position parameters. The parameterization provides an additional degree of freedom that allows Haag's Theorem to be avoided, permitting the consistent use of a form of interaction picture in deriving the Dyson expansion. The extra symmetry so introduced is then broken by the choice of an interacting vacuum.
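For reference, the Dyson series whose derivation is at issue is the standard time-ordered expansion of the scattering operator in the interaction picture (textbook form, not an equation from this paper):

```latex
S \;=\; \mathrm{T}\exp\!\left(-i\int_{-\infty}^{\infty} dt\, H_I(t)\right)
  \;=\; \sum_{n=0}^{\infty} \frac{(-i)^{n}}{n!}
        \int dt_1 \cdots dt_n\;
        \mathrm{T}\{H_I(t_1)\cdots H_I(t_n)\},
```

where $H_I(t)$ is the interaction Hamiltonian in the interaction picture and $\mathrm{T}$ denotes time ordering.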
NASA Technical Reports Server (NTRS)
Hong, Byungsik; Buck, Warren W.; Maung, Khin M.
1989-01-01
Two kinds of number density distributions of the nucleus, harmonic well and Woods-Saxon models, are used with the t-matrix that is taken from the scattering experiments to find a simple optical potential. The parameterized two-body inputs, which are kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to imaginary part of the forward elastic scattering amplitude, are shown. The eikonal approximation was chosen as the solution method to estimate the total and absorptive cross sections for kaon-nucleus scattering.
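In the eikonal approximation, those parameterized two-body inputs enter through a profile function, and the total and absorptive cross sections follow from impact-parameter integrals (standard form; notation is ours, not necessarily the authors'):

```latex
\sigma_{\mathrm{tot}} = 2\int d^{2}b\,\bigl[\,1 - \operatorname{Re} e^{i\chi(b)}\bigr],
\qquad
\sigma_{\mathrm{abs}} = \int d^{2}b\,\bigl[\,1 - \bigl|e^{i\chi(b)}\bigr|^{2}\bigr],
```

where the eikonal phase $\chi(b)$ is obtained by folding the parameterized kaon-nucleon amplitude with the nuclear number density along a straight-line trajectory at impact parameter $b$.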
NASA Astrophysics Data System (ADS)
Casamayou-Boucau, Yannick; Ryder, Alan G.
2017-09-01
Anisotropy resolved multidimensional emission spectroscopy (ARMES) provides valuable insights into multi-fluorophore proteins (Groza et al 2015 Anal. Chim. Acta 886 133-42). Fluorescence anisotropy adds to the multidimensional fluorescence dataset information about the physical size of the fluorophores and/or the rigidity of the surrounding micro-environment. The first ARMES studies used standard thin film polarizers (TFP) that had negligible transmission between 250 and 290 nm, preventing accurate measurement of intrinsic protein fluorescence from tyrosine and tryptophan. Replacing TFP with pairs of broadband wire grid polarizers enabled standard fluorescence spectrometers to accurately measure anisotropies between 250 and 300 nm, which was validated with solutions of perylene in the UV and Erythrosin B and Phloxine B in the visible. In all cases, anisotropies were accurate to better than ±1% when compared to literature measurements made with Glan Thompson or TFP polarizers. Better dual wire grid polarizer UV transmittance and the use of excitation-emission matrix measurements for ARMES required complete Rayleigh scatter elimination. This was achieved by chemometric modelling rather than classical interpolation, which enabled the acquisition of pure anisotropy patterns over wider spectral ranges. In combination, these three improvements permit the accurate implementation of ARMES for studying intrinsic protein fluorescence.
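Steady-state anisotropy in such measurements is computed from the two polarized intensity components using the standard G-factor-corrected formula; a minimal sketch (not code from the paper):

```python
def anisotropy(i_vv, i_vh, g_factor):
    """Steady-state fluorescence anisotropy
    r = (I_VV - G*I_VH) / (I_VV + 2*G*I_VH),
    where G corrects for the polarization-dependent detection efficiency
    of the emission channel (G = I_HV / I_HH)."""
    return (i_vv - g_factor * i_vh) / (i_vv + 2.0 * g_factor * i_vh)
```

An isotropic emitter gives r = 0; the theoretical single-photon limit for parallel absorption and emission dipoles is r = 0.4.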
Parameterized post-Newtonian cosmology
NASA Astrophysics Data System (ADS)
Sanghai, Viraj A. A.; Clifton, Timothy
2017-03-01
Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).
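For orientation, the two best-known PPN parameters enter the weak-field metric around a mass $M$ as (standard textbook form, not an equation taken from this paper):

```latex
g_{00} = -1 + \frac{2GM}{rc^{2}} - 2\beta\left(\frac{GM}{rc^{2}}\right)^{2},
\qquad
g_{ij} = \left(1 + 2\gamma\,\frac{GM}{rc^{2}}\right)\delta_{ij},
```

with $\gamma = \beta = 1$ in general relativity; $\gamma$ measures the spatial curvature produced by unit rest mass, and $\beta$ the nonlinearity in the superposition of gravity. The PPNC framework promotes such constants to four functions of time tied to the cosmological background.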
A Thermal Infrared Radiation Parameterization for Atmospheric Studies
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)
2001-01-01
This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996-version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes the absorption due to major gaseous absorption (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches of computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed either using the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of the high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
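The k-distribution transmission named above replaces a line-by-line spectral integral with a short weighted sum over absorption-coefficient bins; a minimal sketch (the k-values and weights below are illustrative placeholders, not the scheme's actual tables):

```python
import numpy as np

def band_transmission(u, k, w):
    """Band-averaged transmission via the k-distribution method:
    T(u) = sum_i w_i * exp(-k_i * u), where u is the absorber amount,
    k_i are representative absorption coefficients for the band, and
    w_i are their probability weights (summing to 1)."""
    k = np.asarray(k, dtype=float)
    w = np.asarray(w, dtype=float)
    return float(np.sum(w * np.exp(-k * u)))
```

With normalized weights, T(0) = 1 and T decreases monotonically with absorber amount, mimicking the band's line-by-line behavior at a tiny fraction of the cost.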
NASA Astrophysics Data System (ADS)
Smith, Helen R.; Baran, Anthony J.; Hesse, Evelyn; Hill, Peter G.; Connolly, Paul J.; Webb, Ann
2016-11-01
A single habit parameterization for the shortwave optical properties of cirrus is presented. The parameterization utilizes a hollow particle geometry, with stepped internal cavities as identified in laboratory and field studies. This particular habit was chosen as both experimental and theoretical results show that the particle exhibits lower asymmetry parameters when compared to solid crystals of the same aspect ratio. The aspect ratio of the particle was varied as a function of maximum dimension, D, in order to adhere to the same physical relationships assumed in the microphysical scheme in a configuration of the Met Office atmosphere-only global model, concerning particle mass, size and effective density. Single scattering properties were then computed using T-Matrix, Ray Tracing with Diffraction on Facets (RTDF) and Ray Tracing (RT) for small, medium, and large size parameters respectively. The scattering properties were integrated over 28 particle size distributions as used in the microphysical scheme. The fits were then parameterized as simple functions of Ice Water Content (IWC) for 6 shortwave bands. The parameterization was implemented into the GA6 configuration of the Met Office Unified Model along with the current operational long-wave parameterization. The GA6 configuration is used to simulate the annual twenty-year short-wave (SW) fluxes at top-of-atmosphere (TOA) and also the temperature and humidity structure of the atmosphere. The parameterization presented here is compared against the current operational model and a more recent habit mixture model.
A Solar Radiation Parameterization for Atmospheric Studies. Volume 15
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J. (Editor)
1999-01-01
The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes the absorption by water vapor, O3, O2, CO2, clouds, and aerosols and the scattering by clouds, aerosols, and gases. Depending upon the nature of absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.
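The Delta-Eddington approximation mentioned above rescales the cloud optical properties to fold the strong forward-scattering peak into the direct beam; the standard similarity transformation, with forward-peak fraction f = g², can be sketched as:

```python
def delta_eddington_scale(tau, omega, g):
    """Delta-Eddington scaling of optical thickness tau, single-scattering
    albedo omega, and asymmetry factor g (Joseph, Wiscombe & Weinman 1976).
    The fraction f = g**2 of the phase function is treated as unscattered
    forward radiation, and the remaining properties are rescaled."""
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s
```

For a conservatively scattering cloud (omega = 1), the scaled albedo stays 1 while the scaled optical thickness and asymmetry factor shrink, which is what makes two-stream flux calculations accurate for strongly forward-peaked phase functions.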
Infrared radiation parameterizations in numerical climate models
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Kratz, David P.; Ridgway, William
1991-01-01
This study presents various approaches to parameterizing the broadband transmission functions for utilization in numerical climate models. One-parameter scaling is applied to approximate a nonhomogeneous path with an equivalent homogeneous path, and the diffuse transmittances are either interpolated from precomputed tables or fit by analytical functions. Two-parameter scaling is applied to parameterizing the carbon dioxide and ozone transmission functions in both the lower and middle atmosphere. Parameterizations are given for the nitrous oxide and methane diffuse transmission functions.
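The one-parameter scaling mentioned above approximates a nonhomogeneous atmospheric path by a homogeneous one at reference pressure $p_r$ and temperature $T_r$ through a scaled absorber amount; a common form (the exponent $m$ and the temperature factor are fitted per gas and spectral band) is:

```latex
\tilde{u} = \int \left(\frac{p}{p_{r}}\right)^{m} f\!\left(T, T_{r}\right)\, du ,
```

after which the diffuse transmittance is evaluated, by table interpolation or an analytical fit, as that of a homogeneous path with absorber amount $\tilde{u}$ at $(p_r, T_r)$.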
Spherical Parameterization Balancing Angle and Area Distortions.
Nadeem, Saad; Su, Zhengyu; Zeng, Wei; Kaufman, Arie; Gu, Xianfeng
2017-06-01
This work presents a novel framework for spherical mesh parameterization. An efficient angle-preserving spherical parameterization algorithm is introduced, which is based on dynamic Yamabe flow and the conformal welding method with solid theoretic foundation. An area-preserving spherical parameterization is also discussed, which is based on discrete optimal mass transport theory. Furthermore, a spherical parameterization algorithm, which is based on the polar decomposition method, balancing angle distortion and area distortion is presented. The algorithms are tested on 3D geometric data and the experiments demonstrate the efficiency and efficacy of the proposed methods.
Depth Edge Filtering Using Parameterized Structured Light Imaging
Zheng, Ziqi; Bae, Seho; Yi, Juneho
2017-01-01
This research features parameterized depth edge detection using structured light imaging that exploits a single color stripes pattern and an associated binary stripes pattern. By parameterized depth edge detection, we refer to the detection of all depth edges in a given range of distances with depth difference greater than or equal to a specific value. While previous research has not properly dealt with shadow regions, which result in double edges, we effectively remove shadow regions using statistical learning through effective identification of color stripes in the structured light images. We also provide a much simpler control of involved parameters. We have compared the depth edge filtering performance of our method with that of the state-of-the-art method and depth edge detection from the Kinect depth map. Experimental results clearly show that our method finds the desired depth edges most correctly while the other methods cannot. PMID: 28368350
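The notion of parameterized depth edges (minimum depth jump d_min, distance range [z_near, z_far]) can be made concrete on a plain depth map; this is an illustrative re-implementation of the idea, not the paper's structured-light algorithm:

```python
import numpy as np

def depth_edges(depth, d_min, z_near, z_far):
    """Boolean map of parameterized depth edges: flag pixels whose depth
    jump to a horizontal or vertical neighbour is >= d_min, restricted to
    pixels whose own depth lies within [z_near, z_far]."""
    dz_x = np.abs(np.diff(depth, axis=1))   # jumps to right neighbour
    dz_y = np.abs(np.diff(depth, axis=0))   # jumps to lower neighbour
    edges = np.zeros(depth.shape, dtype=bool)
    edges[:, :-1] |= dz_x >= d_min
    edges[:-1, :] |= dz_y >= d_min
    in_range = (depth >= z_near) & (depth <= z_far)
    return edges & in_range
```

Varying d_min and the distance range filters the edge map exactly as the abstract's parameterization describes: small d_min keeps fine structure, large d_min keeps only major occlusion boundaries.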
Parameterization of precipitating shallow convection
NASA Astrophysics Data System (ADS)
Seifert, Axel
2015-04-01
Shallow convective clouds play a decisive role in many regimes of the atmosphere. They are abundant in the trade wind regions and essential for the radiation budget in the sub-tropics. They are also an integral part of the diurnal cycle of convection over land leading to the formation of deeper modes of convection later on. Errors in the representation of these small and seemingly unimportant clouds can lead to misforecasts in many situations. Especially for high-resolution NWP models at 1-3 km grid spacing which explicitly simulate deeper modes of convection, the parameterization of the sub-grid shallow convection is an important issue. Large-eddy simulations (LES) can provide the data to study shallow convective clouds and their interaction with the boundary layer in great detail. In contrast to observation, simulations provide a complete and consistent dataset, which may not be perfectly realistic due to the necessary simplifications, but nevertheless enables us to study many aspects of those clouds in a self-consistent way. Today's supercomputing capabilities make it possible to use domain sizes that not only span several NWP grid boxes, but also allow for mesoscale self-organization of the cloud field, which is an essential behavior of precipitating shallow convection. By coarse-graining the LES data to the grid of an NWP model, the sub-grid fluctuations caused by shallow convective clouds can be analyzed explicitly. These fluctuations can then be parameterized in terms of a PDF-based closure. The necessary choices for such schemes like the shape of the PDF, the number of predicted moments, etc., will be discussed. For example, it is shown that a universal three-parameter distribution of total water may exist at scales of O(1 km) but not at O(10 km). In a next step the variance budgets of moisture and temperature in the cloud-topped boundary layer are studied. What is the role and magnitude of the microphysical correlation terms in these equations, which
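The coarse-graining step described above (LES fields averaged onto NWP-sized boxes, with the within-box fluctuations retained for the PDF closure) can be sketched as (illustrative, assuming a 2D field whose dimensions divide evenly into blocks):

```python
import numpy as np

def subgrid_variance(field, n):
    """Coarse-grain a 2D LES field onto n-by-n blocks (one block standing
    in for one NWP grid box) and return the block means together with the
    sub-grid variance within each block, i.e. the fluctuations a PDF-based
    closure must represent."""
    ny, nx = field.shape
    blocks = field.reshape(ny // n, n, nx // n, n)
    mean = blocks.mean(axis=(1, 3))
    var = blocks.var(axis=(1, 3))
    return mean, var
```

Applying this at several block sizes is how scale dependence of the closure is diagnosed, e.g. whether a proposed total-water distribution holds at O(1 km) but not at O(10 km).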
Parameterization of solar flare dose
Lamarche, A.H.; Poston, J.W.
1996-12-31
A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP).
New Approaches to Parameterizing Convection
NASA Technical Reports Server (NTRS)
Randall, David A.; Lappen, Cara-Lyn
1999-01-01
Many general circulation models (GCMs) currently use separate schemes for planetary boundary layer (PBL) processes, shallow and deep cumulus (Cu) convection, and stratiform clouds. The conventional distinctions among these processes are somewhat arbitrary. For example, in the stratocumulus-to-cumulus transition region, stratocumulus clouds break up into a combination of shallow cumulus and broken stratocumulus. Shallow cumulus clouds may be considered to reside completely within the PBL, or they may be regarded as starting in the PBL but terminating above it. Deeper cumulus clouds often originate within the PBL but can also originate aloft. To the extent that our models separately parameterize physical processes which interact strongly on small space and time scales, the currently fashionable practice of modularization may be doing more harm than good.
Control of Shortwave Radiation Parameterization on Tropical Climate Simulation
NASA Astrophysics Data System (ADS)
Crétat, J.; Masson, S. G.; Berthet, S.; Samson, G.; Terray, P.; Dudhia, J.; Pinsard, F.; Hourdin, C.
2015-12-01
SST-forced tropical-channel simulations are used to quantify the control of shortwave (SW) parameterization on the mean tropical climate compared to other major model settings (convection, boundary layer turbulence, vertical and horizontal resolutions). The physical mechanisms whereby this control manifests are explored by the means of a large set of simulations with two widely used SW schemes. Analyses focus on the spatial distribution and magnitude of the net SW radiation budget at the surface (SWnet_SFC), latent heat fluxes, and rainfall at the annual timescale. The model skill and sensitivity to the settings tested are quantified relative to observations and reanalyses and using an ensemble approach. Model skill is mainly controlled by SW parameterization, especially the magnitude of SWnet_SFC and rainfall and both the spatial distribution and magnitude of latent heat fluxes over ocean. On the other hand, the spatial distribution of continental rainfall (SWnet_SFC) is mainly influenced by convection parameterization and horizontal resolution (boundary layer parameterization and orography). Physical understanding of both the control of SW parameterization and sensitivity to SW schemes is addressed by analyzing the thermal structure of the atmosphere and conducting sensitivity experiments to O3 absorption and SW scattering coefficient. SW parameterization shapes the stability of the atmosphere in two different ways according to whether surface is coupled to atmosphere or not, while O3 absorption has minor effects in our simulations. Over SST-prescribed regions, increasing the amount of SW absorption warms the atmosphere only because surface temperatures are fixed, resulting in increased atmospheric stability. Over surface-atmosphere coupled regions (i.e., land points in our simulations), increasing SW absorption warms both atmospheric and surface temperatures, leading to a shift towards a warmer state and a more intense hydrological cycle. This turns in reversal
Visibility Parameterization For Forecasting Model Applications
NASA Astrophysics Data System (ADS)
Gultepe, I.; Milbrandt, J.; Binbin, Z.
2010-07-01
In this study, the visibility parameterizations developed during the Fog Remote Sensing And Modeling (FRAM) projects, conducted in central and eastern Canada, will be summarized and their use for forecasting/nowcasting applications will be discussed. Parameterizations developed for reductions in visibility due to 1) fog, 2) rain, 3) snow, and 4) relative humidity (RH) during FRAM will be given and uncertainties in the parameterizations will be discussed. Comparisons made between the Canadian GEM NWP model (with 1 and 2.5 km horizontal grid spacing) and observations collected during the Science of Nowcasting Winter Weather for Vancouver 2010 (SNOW-V10) project and the FRAM projects, using the new parameterizations, will be given. Observations used in this study were obtained using a fog measuring device (FMD) for the fog parameterization; a Vaisala all-weather precipitation sensor (FD12P) for the rain and snow parameterizations and for visibility measurements; a total precipitation sensor (TPS); and OTT ParSiVel and Laser Precipitation Measurement (LPM) disdrometers for rain/snow particle spectra. The results from the three SNOW-V10 sites suggested that visibility values given by the GEM model using the new parameterizations were comparable with observed visibility values when model-based input parameters such as liquid water content, RH, and precipitation rate for the visibility parameterizations were predicted accurately.
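As one concrete example of this family, the published FRAM fog parameterization relates visibility to the product of liquid water content and droplet number concentration; a hedged sketch (the power-law constants are quoted from Gultepe et al. 2006 and should be treated as illustrative here):

```python
def fog_visibility_km(lwc, nd):
    """Fog visibility (km) from liquid water content lwc (g m^-3) and
    droplet number concentration nd (cm^-3), using the FRAM-style
    power-law fit Vis = 1.002 / (LWC * Nd)**0.6473."""
    return 1.002 / (lwc * nd) ** 0.6473
```

The key point of the FRAM work is that using LWC alone (without Nd) badly under-constrains visibility; the product captures both how much water is present and how it is partitioned into droplets.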
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the effective particle size of a mixture of ice habits, the ice water amount, and spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits that the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Different from many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, but not the effective size of individual ice habits.
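Parameterizations of this kind typically express the band-averaged bulk coefficients as low-order polynomials in the inverse effective size; a generic sketch of the form (the coefficients a0, a1 are placeholders, not the fitted values from this work):

```python
def ice_extinction_coefficient(iwc, d_eff, a0, a1):
    """Bulk extinction coefficient of an ice cloud layer in the generic
    fitted form beta = IWC * (a0 + a1 / De), with IWC the ice water
    content, De the effective particle size of the habit mixture, and
    a0, a1 band-specific fitted coefficients (placeholders here)."""
    return iwc * (a0 + a1 / d_eff)
```

Analogous fits in 1/De are used for the single-scattering albedo and asymmetry factor, which is why a consistently defined effective size for the habit mixture matters so much for the flux accuracy discussed above.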
An approach for parameterizing mesoscale precipitating systems
Weissbluth, M.J.; Cotton, W.R.
1991-12-31
A cumulus parameterization laboratory has been described which uses a reference numerical model to fabricate, calibrate and verify a cumulus parameterization scheme suitable for use in mesoscale models. Key features of this scheme include resolution independence and the ability to provide hydrometeor source functions to the host model. Thus far, only convective scale drafts have been parameterized, limiting the use of the scheme to those models which can resolve the mesoscale circulations. As it stands, the scheme could probably be incorporated into models having a grid resolution greater than 50 km with results comparable to the existing schemes for the large-scale models. We propose, however, to quantify the mesoscale circulations through the use of the cumulus parameterization laboratory. The inclusion of these mesoscale drafts in the existing scheme will hopefully allow the correct parameterization of the organized mesoscale precipitating systems.
A Two-Habit Ice Cloud Optical Property Parameterization for GCM Application
NASA Technical Reports Server (NTRS)
Yi, Bingqi; Yang, Ping; Minnis, Patrick; Loeb, Norman; Kato, Seiji
2014-01-01
We present a novel ice cloud optical property parameterization based on a two-habit ice cloud model that has proved optimal for remote sensing applications. The two-habit ice model is developed with state-of-the-art numerical methods for light scattering property calculations involving individual columns and column aggregates, with the habit fractions constrained by in-situ measurements from various field campaigns. Band-averaged bulk ice cloud optical properties, including the single-scattering albedo, the mass extinction/absorption coefficients, and the asymmetry factor, are parameterized as functions of the effective particle diameter for the spectral bands involved in broadband radiative transfer models. Compared with other parameterization schemes, the two-habit scheme generally has lower asymmetry factor values (around 0.75 at visible wavelengths). The two-habit parameterization scheme was tested extensively with broadband radiative transfer models (e.g., the Rapid Radiative Transfer Model, GCM version) and general circulation models (GCMs; e.g., the Community Atmosphere Model, version 5). Global ice cloud radiative effects at the top of the atmosphere are also analyzed from the GCM simulation using the two-habit parameterization scheme in comparison with CERES satellite observations.
Parameterized Linear Longitudinal Airship Model
NASA Technical Reports Server (NTRS)
Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph
2010-01-01
A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on single-scattering optical properties pre-computed using an improved geometric optics method, the bulk mass absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the mean effective particle size of a mixture of ice habits. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. Compared to the parameterization for a single habit of hexagonal columns, the solar heating of clouds computed with the parameterization for a mixture of habits is smaller due to a smaller co-single-scattering albedo, whereas the net downward fluxes at the TOA and surface are larger due to a larger asymmetry factor. The maximum difference in the cloud heating rate is approximately 0.2 C per day, which occurs in clouds with an optical thickness greater than 3 and a solar zenith angle less than 45 degrees. The flux difference is less than 10 W per square meter for optical thicknesses ranging from 0.6 to 10 and the entire range of solar zenith angles. The maximum flux difference is approximately 3%, which occurs around an optical thickness of 1 and at high solar zenith angles.
Conformal Surface Parameterization for Texture Mapping
1999-03-25
Steven Haker, Department of Electrical and Computer Engineering, University of Minnesota.
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean is of scales a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the
Multiple parameterization for hydraulic conductivity identification.
Tsai, Frank T-C; Li, Xiaobao
2008-01-01
Hydraulic conductivity identification remains a challenging inverse problem in ground water modeling because of the inherent nonuniqueness and lack of flexibility in parameterization methods. This study introduces maximum weighted log-likelihood estimation (MWLLE) along with multiple generalized parameterization (GP) methods to identify hydraulic conductivity and to address nonuniqueness and inflexibility problems in parameterization. A scaling factor for information criteria is suggested to obtain reasonable weights of parameterization methods for the MWLLE and the model averaging method. The scaling factor is a statistical parameter relating to a desired significance level in Occam's window and the variance of the chi-square distribution of the fitting error. Through model averaging with multiple GP methods, the conditional estimate of hydraulic conductivity and its total conditional covariances are calculated. A numerical example illustrates the issue arising from Occam's window in estimating model weights and shows the usefulness of the scaling factor in obtaining reasonable model weights. Moreover, the numerical example demonstrates the advantage of using multiple GP methods over the zonation and interpolation methods, because GP provides better models in the model averaging method. The methodology is applied to the Alamitos Gap area, California, to identify the hydraulic conductivity field. The results show that the use of the scaling factor is necessary in order to incorporate good parameterization methods and to avoid a dominant parameterization method.
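The role of the scaling factor in weighting competing parameterizations can be illustrated with a generic information-criterion weighting of the exp(-ΔIC/(2s)) form. This is a sketch of the general idea only, not the paper's exact MWLLE weighting; the function name and form are assumptions:

```python
import math

def model_weights(ic_values, scale=1.0):
    """Weights for model averaging from information-criterion values.
    Generic exp(-delta_IC / (2*s)) form with a scaling factor s; the
    exact weighting in the MWLLE paper may differ -- illustrative only.
    A small s concentrates weight on the best models (a narrow Occam's
    window); a larger s spreads weight across competing parameterizations."""
    best = min(ic_values)
    raw = [math.exp(-(ic - best) / (2.0 * scale)) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]
```

Increasing `scale` flattens the weights, which is how a scaling factor lets more than one parameterization method contribute to the averaged estimate.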
Brain surface conformal parameterization with algebraic functions.
Wang, Yalin; Gu, Xianfeng; Chan, Tony F; Thompson, Paul M; Yau, Shing-Tung
2006-01-01
In medical imaging, parameterized 3D surface models are of great interest for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on algebraic functions. By solving the Yamabe equation with the Ricci flow method, we can conformally map a brain surface to a multi-hole disk. The resulting parameterizations do not have any singularities and are intrinsic and stable. To illustrate the technique, we computed parameterizations of several types of anatomical surfaces in MRI scans of the brain, including the hippocampi and the cerebral cortices with various landmark curves labeled. For the cerebral cortical surfaces, we show the parameterization results are consistent with selected landmark curves and can be matched to each other using constrained harmonic maps. Unlike previous planar conformal parameterization methods, our algorithm does not introduce any singularity points. It also offers a method to explicitly match landmark curves between anatomical surfaces such as the cortex, and to compute conformal invariants for statistical comparisons of anatomy.
The parameterization of microchannel-plate-based detection systems
NASA Astrophysics Data System (ADS)
Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Giles, Barbara L.; Pollock, Craig J.
2016-10-01
The most common instrument for low-energy plasmas consists of a top-hat electrostatic analyzer (ESA) geometry coupled with a microchannel-plate-based (MCP-based) detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Here we develop a comprehensive mathematical description of particle detection systems. As a function of instrument azimuthal angle, we parameterize (1) particle scattering within the ESA and at the surface of the MCP, (2) the probability distribution of MCP gain for an incident particle, (3) electron charge cloud spreading between the MCP and anode board, and (4) capacitive coupling between adjacent discrete anodes. Using the Dual Electron Spectrometers on the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission as an example, we demonstrate a method for extracting these fundamental detection system parameters from laboratory calibration. We further show that parameters that will evolve in flight, namely, MCP gain, can be determined through application of this model to specifically tailored in-flight calibration activities. This methodology provides a robust characterization of sensor suite performance throughout mission lifetime. The model developed in this work is not only applicable to existing sensors but also can be used as an analytical design tool for future particle instrumentation.
Avoiding Haag's Theorem with Parameterized Quantum Field Theory
NASA Astrophysics Data System (ADS)
Seidewitz, Ed
2017-03-01
Under the normal assumptions of quantum field theory, Haag's theorem states that any field unitarily equivalent to a free field must itself be a free field. Unfortunately, the derivation of the Dyson series perturbation expansion relies on the use of the interaction picture, in which the interacting field is unitarily equivalent to the free field but must still account for interactions. Thus, the traditional perturbative derivation of the scattering matrix in quantum field theory is mathematically ill defined. Nevertheless, perturbative quantum field theory is currently the only practical approach for addressing scattering for realistic interactions, and it has been spectacularly successful in making empirical predictions. This paper explains this success by showing that Haag's Theorem can be avoided when quantum field theory is formulated using an invariant, fifth path parameter in addition to the usual four position parameters, such that the Dyson perturbation expansion for the scattering matrix can still be reproduced. As a result, the parameterized formalism provides a consistent foundation for the interpretation of quantum field theory as used in practice and, perhaps, for better dealing with other mathematical issues.
Parameterization of continental boundary layer clouds
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhao, Wei
2008-05-01
Large eddy simulations (LESs) of continental boundary layer clouds (BLCs) observed at the southern Great Plains (SGP) are used to study issues associated with the parameterization of sub-grid BLCs in large scale models. It is found that liquid water potential temperature θl and total specific humidity qt, which are often used as parameterization predictors in statistical cloud schemes, do not share the same probability distribution in the cloud layer with θl skewed to the left (negatively skewed) and qt skewed to the right (positively skewed). The skewness and kurtosis change substantially in time and space when the development of continental BLCs undergoes a distinct diurnal variation. The wide range of skewness and kurtosis of θl and qt can hardly be described by a single probability distribution function. To extend the application of the statistical cloud parameterization approach, this paper proposes an innovative cloud parameterization scheme that uses the boundary layer height and the lifting condensation level as the primary parameterization predictors. The LES results indicate that the probability distribution of these two quantities is relatively stable compared with that of θl and qt during the diurnal variation and nearly follows a Gaussian function. Verifications using LES output and the observations collected at the Atmospheric Radiation Measurement (ARM) Climate Research Facility (ARCF) SGP site indicate that the proposed scheme works well to represent continental BLCs.
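One plausible reading of a statistical scheme built on these two predictors is that sub-grid cloud occurs where the boundary layer top exceeds the lifting condensation level, so that with a Gaussian distribution of their difference the cloud fraction is a Gaussian CDF evaluated at zero. This is an illustrative sketch only, not the scheme's published formulation; all names and the functional form are assumptions:

```python
import math

def cloud_fraction(mean_pbl, mean_lcl, sigma):
    """Sub-grid cloud fraction as the probability that the boundary layer
    height exceeds the lifting condensation level (LCL), assuming their
    difference is Gaussian with standard deviation `sigma` (all in meters).
    Illustrative sketch of a Gaussian statistical cloud scheme; NOT the
    paper's actual parameterization."""
    z = (mean_pbl - mean_lcl) / sigma
    # Gaussian CDF at 0 for the difference distribution
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

When the mean boundary layer top sits at the LCL the fraction is 0.5, and it saturates toward 0 or 1 as the separation grows relative to `sigma`.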
Brain surface parameterization using Riemann surface structure.
Wang, Yalin; Gu, Xianfeng; Hayashi, Kiralee M; Chan, Tony F; Thompson, Paul M; Yau, Shing-Tung
2005-01-01
We develop a general approach that uses holomorphic 1-forms to parameterize anatomical surfaces with complex (possibly branching) topology. Rather than evolve the surface geometry to a plane or sphere, we instead use the fact that all orientable surfaces are Riemann surfaces and admit conformal structures, which induce special curvilinear coordinate systems on the surfaces. Based on Riemann surface structure, we can then canonically partition the surface into patches. Each of these patches can be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable. To illustrate the technique, we computed conformal structures for several types of anatomical surfaces in MRI scans of the brain, including the cortex, hippocampus, and lateral ventricles. We found that the resulting parameterizations were consistent across subjects, even for branching structures such as the ventricles, which are otherwise difficult to parameterize. Compared with other variational approaches based on surface inflation, our technique works on surfaces with arbitrary complexity while guaranteeing minimal distortion in the parameterization. It also offers a way to explicitly match landmark curves in anatomical surfaces such as the cortex, providing a surface-based framework to compare anatomy statistically and to generate grids on surfaces for PDE-based signal processing.
Optical closure of parameterized bio-optical relationships
NASA Astrophysics Data System (ADS)
He, Shuangyan; Fischer, Jürgen; Schaale, Michael; He, Ming-xia
2014-03-01
An optical closure study on bio-optical relationships was carried out using the matrix operator method radiative transfer model developed by Freie Universität Berlin. As a case study, the optical closure of bio-optical relationships empirically parameterized with in situ data for the East China Sea was examined. Remote-sensing reflectance (Rrs) was computed from the inherent optical properties predicted by these bio-optical relationships and compared with published in situ data. It was found that the simulated Rrs was overestimated for turbid water. To achieve optical closure, the bio-optical relationships for the absorption and scattering coefficients of suspended particulate matter were adjusted. Furthermore, the results show that the Fournier-Forand phase functions obtained from the adjusted relationships perform better than the Petzold phase function. Therefore, before bio-optical relationships are used for a local sea area, their optical closure should be examined.
Rana, R; Bednarek, D; Rudin, S
2016-06-15
Purpose: To demonstrate the effectiveness of an anti-scatter grid artifact minimization method by removing the grid-line artifacts for three different grids used with a high-resolution CMOS detector. Method: Three different stationary x-ray grids were used with a high-resolution CMOS x-ray detector (Dexela 1207, 75 µm pixels, sensitive area 11.5 cm × 6.5 cm) to image a simulated artery block phantom (Nuclear Associates, Stenosis/Aneurysm Artery Block 76–705) combined with a frontal head phantom used as the scattering source. The x-ray parameters were 98 kVp, 200 mA, and 16 ms for all grids. With each of the three grids, two images were acquired: the first a scatter-free flat field including the grid, and the second of the object with the grid, which may still have some scatter transmission. Because scatter has a low-spatial-frequency distribution, it was represented by an estimated constant value as an initial approximation and subtracted from the image of the object with the grid before dividing by an average frame of the grid flat field with no scatter. The constant value was iteratively changed to minimize the residual grid-line artifact. This artifact minimization process was used for all three grids. Results: Anti-scatter grid-line artifacts were successfully eliminated in all three final images taken with the three different grids. The image contrast and CNR were compared before and after the correction, and also with those from the image of the object when no grid was used. The corrected images showed an increase in CNR of approximately 28%, 33%, and 25% for the three grids, compared to the images when no grid was used. Conclusion: Anti-scatter grid artifact minimization works effectively irrespective of the specifications of the grid when it is used with a high-spatial-resolution detector. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
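The correction loop described in this abstract (subtract a trial constant scatter value, divide by the grid flat field, iterate the constant to minimize the residual grid lines) can be sketched as follows. The function name, candidate-value grid, and stripe-variance score are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def remove_grid_lines(obj_img, flat_img, scatter_grid=np.linspace(0, 500, 51)):
    """Choose a constant scatter estimate S so that (object - S) / flat
    minimizes the residual grid-line contrast. Hypothetical sketch of the
    method described above; the artifact score and search grid are assumed."""
    best_s, best_score, best_img = None, np.inf, None
    flat = flat_img / flat_img.mean()          # normalized scatter-free grid flat field
    for s in scatter_grid:                     # candidate constant scatter values
        corrected = (obj_img - s) / flat
        # grid lines run along columns here: score by the spread of the
        # per-column mean profile (residual stripes raise this spread)
        score = corrected.mean(axis=0).std()
        if score < best_score:
            best_s, best_score, best_img = s, score, corrected
    return best_img, best_s
```

On a synthetic object with multiplicative grid stripes plus a constant scatter level, the search recovers the scatter constant and returns a stripe-free image.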
POET: Parameterized Optimization for Empirical Tuning
Yi, Q; Seymour, K; You, H; Vuduc, R; Quinlan, D
2007-01-29
The excessive complexity of both machine architectures and applications has made it difficult for compilers to statically model and predict application behavior. This observation motivates the recent interest in performance tuning using empirical techniques. We present a new embedded scripting language, POET (Parameterized Optimization for Empirical Tuning), for parameterizing complex code transformations so that they can be empirically tuned. The POET language aims to significantly improve the generality, flexibility, and efficiency of existing empirical tuning systems. We have used the language to parameterize and empirically tune three loop optimizations (interchange, blocking, and unrolling) for two linear algebra kernels. We show experimentally that the time required to tune these optimizations using POET, which does not require any program analysis, is significantly shorter than when using a full compiler-based source-code optimizer that performs sophisticated program analysis and optimizations.
Basic Theory Behind Parameterizing Atmospheric Convection
NASA Astrophysics Data System (ADS)
Plant, R. S.; Fuchs, Z.; Yano, J. I.
2014-04-01
Last fall, a network of the European Cooperation in Science and Technology (COST), called "Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models" (COST Action ES0905; see http://w3.cost.esf.org/index.php?id=205&action_number=ES0905), organized a 10-day training course on atmospheric convection and its parameterization. The aim of the workshop, held on the island of Brac, Croatia, was to help young scientists develop an in-depth understanding of the core theory underpinning convection parameterizations. The speakers also sought to impart an appreciation of the various approximations, compromises, and ansätze necessary to translate theory into operational practice for numerical models.
Surfaces with Rational Chord Length Parameterization
NASA Astrophysics Data System (ADS)
Bastl, Bohumír; Jüttler, Bert; Lávička, Miroslav; Šír, Zbyněk
We consider a rational triangular Bézier surface of degree n and study the conditions under which it is rationally parameterized by chord lengths (an RCL surface) with respect to the reference circle. The distinguishing property of these surfaces is that the ratios of the three distances of a point to the three vertices of an arbitrary triangle inscribed in the reference circle and the ratios of the distances of the parameter point to the three vertices of the corresponding domain triangle are identical. This RCL property, which extends an observation from [10,13] about rational curves parameterized by chord lengths, was first observed in the surface case for patches on spheres in [2]. In the present paper, we analyze the entire family of RCL surfaces, provide their general parameterization, and thoroughly investigate their properties.
Constraints to Dark Energy Using PADE Parameterizations
NASA Astrophysics Data System (ADS)
Rezaei, M.; Malekjani, M.; Basilakos, S.; Mehrabi, A.; Mota, D. F.
2017-07-01
We put constraints on dark energy (DE) properties using the Padé parameterization, and compare it to the same constraints using the Chevallier-Polarski-Linder (CPL) and ΛCDM models, at both the background and the perturbation levels. The DE equation-of-state parameter of the models is derived following the mathematical treatment of the Padé expansion. Unlike the CPL parameterization, the Padé approximation provides different forms of the equation-of-state parameter that avoid divergence in the far future. Initially we perform a likelihood analysis in order to put constraints on the model parameters using solely background expansion data, and we find that all parameterizations are consistent with each other. Then, combining the expansion and the growth rate data, we test the viability of the Padé parameterizations and compare them with the CPL and ΛCDM models, respectively. Specifically, we find that the growth rate of the current Padé parameterizations is lower than that of the ΛCDM model at low redshifts, while the differences among the models are negligible at high redshifts. In this context, we provide for the first time a growth index of linear matter perturbations in Padé cosmologies. Considering that DE is homogeneous, we recover the well-known asymptotic value of the growth index, γ∞ = 3(w∞ − 1)/(6w∞ − 5), while in the case of clustered DE we obtain γ∞ ≃ 3w∞(3w∞ − 5)/[(6w∞ − 5)(3w∞ − 1)]. Finally, we generalize the growth index analysis to the case where γ is allowed to vary with redshift, and we find that the form of γ(z) in the Padé parameterization extends that of the CPL and ΛCDM cosmologies, respectively.
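The two asymptotic growth-index expressions quoted in this abstract can be checked numerically; for w∞ = −1 the homogeneous formula recovers the familiar ΛCDM value 6/11 ≈ 0.545:

```python
def gamma_homogeneous(w_inf):
    """Asymptotic growth index for homogeneous dark energy,
    gamma = 3(w - 1) / (6w - 5), as quoted in the abstract."""
    return 3.0 * (w_inf - 1.0) / (6.0 * w_inf - 5.0)

def gamma_clustered(w_inf):
    """Asymptotic growth index for clustered dark energy,
    gamma ~ 3w(3w - 5) / ((6w - 5)(3w - 1)), as quoted in the abstract."""
    return 3.0 * w_inf * (3.0 * w_inf - 5.0) / ((6.0 * w_inf - 5.0) * (3.0 * w_inf - 1.0))
```

Evaluating both at w∞ = −1 gives 6/11, so the clustered and homogeneous limits coincide for a cosmological constant.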
Order-Sorted Parameterization and Induction
NASA Astrophysics Data System (ADS)
Meseguer, José
Parameterization is one of the most powerful features to make specifications and declarative programs modular and reusable, and our best hope for scaling up formal verification efforts. This paper studies order-sorted parameterization at three different levels: (i) its mathematical semantics; (ii) its operational semantics by term rewriting; and (iii) the inductive reasoning principles that can soundly be used to prove properties about such specifications. It shows that achieving the desired properties at each of these three levels is a considerably subtler matter than for many-sorted specifications, but that such properties can be attained under reasonable conditions.
Approaches for Subgrid Parameterization: Does Scaling Help?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-04-01
Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, scaling laws have been slow to enter "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows in both the atmosphere and the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of variables at subgrid scale in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally-constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. The mode
An Infrared Radiative Transfer Parameterization For A Venus General Circulation Model
NASA Astrophysics Data System (ADS)
Eymet, Vincent; Fournier, R.; Lebonnois, S.; Bullock, M. A.; Dufresne, J.; Hourdin, F.
2006-09-01
A new 3-dimensional General Circulation Model (GCM) of Venus' atmosphere is currently under development at the Laboratoire de Meteorologie Dynamique, in the context of the Venus Express mission. Special attention was devoted to the parameterization of infrared radiative transfer: this parameterization has to be both very fast and sufficiently accurate in order to provide valid results over extended periods of time. We have developed at the Laboratoire d'Energetique a Monte Carlo code for computing reference radiative transfer results for optically thick, inhomogeneous, scattering planetary atmospheres over the IR spectrum. This code (named KARINE) is based on a Net-Exchange Rates formulation and uses a k-distribution spectral model. The Venus spectral data, compiled at the Southwest Research Institute, account for gaseous absorption and scattering, typical cloud absorption and scattering, as well as the CO2 and H2O absorption continua. We will present the Net-Exchange Rates matrix that was computed using the Monte Carlo approach. We will also show how this matrix has been used to produce a first-order radiative transfer parameterization that is used in the LMD Venus GCM. In addition, we will present how the proposed radiative transfer model was used in a simple convective-radiative equilibrium model in order to reproduce the main features of Venus' temperature profile.
Luchies, Adam C.; Ghoshal, Goutam; O’Brien, William D.; Oelze, Michael L.
2012-01-01
Quantitative ultrasound (QUS) techniques that parameterize the backscattered power spectrum have demonstrated significant promise for ultrasonic tissue characterization. Some QUS parameters, such as the effective scatterer diameter (ESD), require the assumption that the examined medium contains uniform diffuse scatterers. Structures that invalidate this assumption can significantly affect the estimated QUS parameters and decrease performance when classifying disease. In this work, a method was developed to reduce the effects of echoes that invalidate the assumption of diffuse scattering. To accomplish this task, backscattered signal sections containing non-diffuse echoes were identified and removed from the QUS analysis. Parameters estimated from the generalized spectrum (GS) and the Rayleigh SNR parameter were compared for detecting data blocks with non-diffuse echoes. Simulations and experiments were used to evaluate the effectiveness of the method. Experiments consisted of estimating QUS parameters from spontaneous fibroadenomas in rats and from beef liver samples. Results indicated that the method was able to significantly reduce or eliminate the effects of non-diffuse echoes that might exist in the backscattered signal. For example, the average reduction in the relative standard deviation of ESD estimates from simulation, rat fibroadenomas, and beef liver samples were 13%, 30%, and 51%, respectively. The Rayleigh SNR parameter performed best at detecting non-diffuse echoes for the purpose of removing and reducing ESD bias and variance. The method provides a means to improve the diagnostic capabilities of QUS techniques by allowing separate analysis of diffuse and non-diffuse scatterers. PMID:22622974
Parameterization guidelines and considerations for hydrologic models
R. W. Malone; G. Yagow; C. Baffaut; M.W Gitau; Z. Qi; Devendra Amatya; P.B. Parajuli; J.V. Bonta; T.R. Green
2015-01-01
Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) are important and difficult tasks. An exponential...
Modified-Dewan Optical Turbulence Parameterizations
2007-11-02
Kea Observatories on the Island of Hawaii (Businger et al. 2002) by converting standard Numerical Weather Prediction (NWP) forecast model output into...describing optical turbulence. The Dewan parameterization is also being used to forecast optical seeing conditions for ground-based telescopes at the Mauna
Parameterization guidelines and considerations for hydrologic models
USDA-ARS?s Scientific Manuscript database
Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) is an important and difficult task. An exponential increase in literature has been devoted to the use and develo...
Almost isometric mesh parameterization through abstract domains.
Pietroni, Nico; Tarini, Marco; Cignoni, Paolo
2010-01-01
In this paper, we propose a robust, automatic technique to build a global high-quality parameterization of a two-manifold triangular mesh. An adaptively chosen 2D domain of the parameterization is built as part of the process. The produced parameterization exhibits very low isometric distortion, because it is globally optimized to preserve both areas and angles. The domain is a collection of equilateral triangular 2D regions enriched with explicit adjacency relationships (it is abstract in the sense that no 3D embedding is necessary). It is tailored to minimize isometric distortion, resulting in excellent parameterization quality, even when meshes with complex shape and topology are mapped into domains composed of a small number of large continuous regions. Moreover, this domain is, in turn, remapped into a collection of 2D square regions, unlocking many advantages found in quad-based domains (e.g., ease of packing). The technique is tested on a variety of cases, including challenging ones, and compares very favorably with known approaches. An open-source implementation is made available.
Parameterizing cloud condensation nuclei concentrations during HOPE
NASA Astrophysics Data System (ADS)
Hande, Luke B.; Engler, Christa; Hoose, Corinna; Tegen, Ina
2016-09-01
An aerosol model was used to simulate the generation and transport of aerosols over Germany during the HD(CP)2 Observational Prototype Experiment (HOPE) field campaign of 2013. The aerosol number concentrations and size distributions were evaluated against observations, which shows satisfactory agreement in the magnitude and temporal variability of the main aerosol contributors to cloud condensation nuclei (CCN) concentrations. From the modelled aerosol number concentrations, number concentrations of CCN were calculated as a function of vertical velocity using a comprehensive aerosol activation scheme which takes into account the influence of aerosol chemical and physical properties on CCN formation. There is a large amount of spatial variability in aerosol concentrations; however the resulting CCN concentrations vary significantly less over the domain. Temporal variability is large in both aerosols and CCN. A parameterization of the CCN number concentrations is developed for use in models. The technique involves defining a number of best fit functions to capture the dependence of CCN on vertical velocity at different pressure levels. In this way, aerosol chemical and physical properties as well as thermodynamic conditions are taken into account in the new CCN parameterization. A comparison between the parameterization and the CCN estimates from the model data shows excellent agreement. This parameterization may be used in other regions and time periods with a similar aerosol load; furthermore, the technique demonstrated here may be employed in regions dominated by different aerosol species.
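As an illustration of the fitting step, CCN concentrations at a single pressure level could be fitted with a Twomey-style power law CCN(w) = c·w^k by least squares in log space. The power-law form is an assumption chosen for illustration; the paper derives its own best-fit functions per pressure level:

```python
import numpy as np

def fit_ccn_powerlaw(w, ccn):
    """Fit CCN(w) = c * w**k at one pressure level by linear least squares
    in log-log space. The power-law functional form is an assumption for
    illustration, not the parameterization's actual best-fit functions."""
    # np.polyfit returns [slope, intercept] for degree 1
    k, log_c = np.polyfit(np.log(w), np.log(ccn), 1)
    return np.exp(log_c), k
```

On synthetic data generated from a known power law, the fit recovers the prefactor and exponent, which is the basic sanity check one would run before fitting modelled CCN(w) curves.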
Empirical parameterization of setup, swash, and runup
Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.
2006-01-01
Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a −17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
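The empirical form just described can be written compactly. The sketch below follows the published Stockdon et al. (2006) 2% exceedence expression, combining the setup term with the incident and infragravity swash terms; the variable names (`beta_f`, `H0`, `L0`) are this sketch's, not the paper's:

```python
import math

def runup_2pct(beta_f, H0, L0):
    """2% exceedence runup (m) per Stockdon et al. (2006).

    beta_f : foreshore beach slope (dimensionless)
    H0     : deep-water significant wave height (m)
    L0     : deep-water wavelength (m)
    """
    setup = 0.35 * beta_f * math.sqrt(H0 * L0)                # time-averaged wave setup
    swash = math.sqrt(H0 * L0 * (0.563 * beta_f ** 2 + 0.004))  # incident + infragravity swash
    return 1.1 * (setup + swash / 2.0)

# Example: 2 m waves, 100 m deep-water wavelength, foreshore slope 0.1
r2 = runup_2pct(0.1, 2.0, 100.0)
```

As the abstract notes, on dissipative beaches the slope-dependent terms become small and runup is controlled by the offshore wave height and wavelength alone.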
Parameterizing time in electronic health record studies.
Hripcsak, George; Albers, David J; Perotte, Adler
2015-07-01
Fields like nonlinear physics offer methods for analyzing time series, but many methods require that the time series be stationary (no change in properties over time). Medicine is far from stationary, but the challenge may be ameliorated by reparameterizing time, because clinicians tend to measure patients more frequently when they are ill and their values are more likely to vary. We compared time parameterizations, measuring variability of rate of change and magnitude of change, and looking for homogeneity of bins of temporal separation between pairs of time points. We studied four common laboratory tests drawn from 25 years of electronic health records on 4 million patients. We found that sequence time (simply counting the number of measurements from some start) produced more stationary time series, better explained the variation in values, and had more homogeneous bins than either traditional clock time or a recently proposed intermediate parameterization. Sequence time produced more accurate predictions in a single Gaussian process model experiment. Of the three parameterizations, sequence time appeared to produce the most stationary series, possibly because clinicians adjust their sampling to the acuity of the patient. Parameterizing by sequence time may be applicable to association and clustering experiments on electronic health record data. A limitation of this study is that laboratory data were derived from only one institution. Sequence time appears to be an important potential parameterization. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
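The sequence-time idea is simple to state in code: replace each measurement's clock time with its rank in the series. A minimal sketch (function and variable names are illustrative, not from the study):

```python
def to_sequence_time(timestamps, values):
    """Reparameterize an irregular clinical time series by measurement
    order ("sequence time"): the i-th measurement in chronological order
    is placed at t = i, regardless of the clock interval between draws."""
    ordered = sorted(zip(timestamps, values))
    return [(i, v) for i, (_, v) in enumerate(ordered)]

# Irregularly sampled lab values: dense while the patient is acutely ill,
# then sparse after discharge (clock times in hours)
clock = [0.0, 0.5, 1.0, 30.0, 180.0]
vals = [140, 138, 136, 139, 141]
seq = to_sequence_time(clock, vals)
```

Downstream models (e.g., a Gaussian process) are then fit against the sequence index rather than clock time, which is the comparison the study performs.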
Control of shortwave radiation parameterization on tropical climate SST-forced simulation
NASA Astrophysics Data System (ADS)
Crétat, Julien; Masson, Sébastien; Berthet, Sarah; Samson, Guillaume; Terray, Pascal; Dudhia, Jimy; Pinsard, Françoise; Hourdin, Christophe
2016-09-01
SST-forced tropical-channel simulations are used to quantify the control of shortwave (SW) parameterization on the mean tropical climate compared to other major model settings (convection, boundary layer turbulence, vertical and horizontal resolutions), and to pinpoint the physical mechanisms whereby this control manifests. Analyses focus on the spatial distribution and magnitude of the net SW radiation budget at the surface (SWnet_SFC), latent heat fluxes, and rainfall at the annual timescale. The model skill and sensitivity to the tested settings are quantified relative to observations and using an ensemble approach. Persistent biases include overestimated SWnet_SFC and a too intense hydrological cycle. However, model skill is mainly controlled by SW parameterization, especially the magnitude of SWnet_SFC and rainfall and both the spatial distribution and magnitude of latent heat fluxes over the ocean. On the other hand, the spatial distribution of continental rainfall (SWnet_SFC) is mainly influenced by convection parameterization and horizontal resolution (boundary layer parameterization and orography). Physical understanding of the control of SW parameterization is addressed by analyzing the thermal structure of the atmosphere and conducting sensitivity experiments on O3 absorption and the SW scattering coefficient. SW parameterization shapes the stability of the atmosphere in two different ways according to whether the surface is coupled to the atmosphere or not, while O3 absorption has minor effects in our simulations. Over SST-prescribed regions, increasing the amount of SW absorption warms the atmosphere only, because surface temperatures are fixed, resulting in increased atmospheric stability. Over land-atmosphere coupled regions, increasing SW absorption warms both atmospheric and surface temperatures, leading to a shift towards a warmer state and a more intense hydrological cycle. This results in reversed model behavior between land and sea points, with the SW scheme that
ERIC Educational Resources Information Center
Young, Andrew T.
1982-01-01
The correct usage of such terminology as "Rayleigh scattering," "Rayleigh lines," "Raman lines," and "Tyndall scattering" is resolved during an historical excursion through the physics of light scattering by gas molecules. (Author/JN)
Parameterization of daily solar global ultraviolet irradiation.
Feister, U; Jäkel, E; Gericke, K
2002-09-01
Daily values of solar global ultraviolet (UV) B and UVA irradiation as well as erythemal irradiation have been parameterized to be estimated from pyranometer measurements of daily global and diffuse irradiation as well as from atmospheric column ozone. Data recorded at the Meteorological Observatory Potsdam (52 degrees N, 107 m asl) in Germany over the time period 1997-2000 have been used to derive sets of regression coefficients. The validation of the method against independent data sets of measured UV irradiation shows that the parameterization provides a gain of information for UVB, UVA and erythemal irradiation referring to their averages. A comparison between parameterized daily UV irradiation and independent values of UV irradiation measured at a mountain station in southern Germany (Meteorological Observatory Hohenpeissenberg at 48 degrees N, 977 m asl) indicates that the parameterization also holds even under completely different climatic conditions. On a long-term average (1953-2000), parameterized annual UV irradiation values are 15% and 21% higher for UVA and UVB, respectively, at Hohenpeissenberg than they are at Potsdam. Daily global and diffuse irradiation measured at 28 weather stations of the Deutscher Wetterdienst German Radiation Network and grid values of column ozone from the EP-TOMS satellite experiment served as inputs to calculate the estimates of the spatial distribution of daily and annual values of UV irradiation across Germany. Using daily values of global and diffuse irradiation recorded at Potsdam since 1937 as well as atmospheric column ozone measured since 1964 at the same site, estimates of daily and annual UV irradiation have been derived for this site over the period from 1937 through 2000, which include the effects of changes in cloudiness, in aerosols and, at least for the period of ozone measurements from 1964 to 2000, in atmospheric ozone. It is shown that the extremely low ozone values observed mainly after the eruption of Mt
A Simple Parameterization of 3 x 3 Magic Squares
ERIC Educational Resources Information Center
Trenkler, Gotz; Schmidt, Karsten; Trenkler, Dietrich
2012-01-01
In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.
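One standard three-parameter form for 3 x 3 magic squares (a center value c and two offsets a, b; a sketch, not necessarily the exact parameterization used in the article) can be written down and checked directly:

```python
def magic_square(c, a, b):
    """Three-parameter form: every 3x3 magic square with magic constant 3c
    can be written with a center c and two offsets a, b."""
    return [[c + a, c - a - b, c + b],
            [c - a + b, c, c + a - b],
            [c - b, c + a + b, c - a]]

def is_magic(m):
    """Check that all rows, columns, and both diagonals share one sum."""
    s = sum(m[0])
    rows = all(sum(r) == s for r in m)
    cols = all(sum(m[i][j] for i in range(3)) == s for j in range(3))
    diags = (m[0][0] + m[1][1] + m[2][2] == s and
             m[0][2] + m[1][1] + m[2][0] == s)
    return rows and cols and diags

# The Luoshu square arises from (c, a, b) = (5, -1, -3)
luoshu = magic_square(5, -1, -3)
```

Every row, column, and diagonal sums to 3c by construction, which is what makes closed-form expressions for inverses and eigensystems tractable.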
European upper mantle tomography: adaptively parameterized models
NASA Astrophysics Data System (ADS)
Schäfer, J.; Boschi, L.
2009-04-01
We have devised a new algorithm for upper-mantle surface-wave tomography based on adaptive parameterization: i.e., the size of each parameterization pixel depends on the local density of seismic data coverage. The advantage in using this kind of parameterization is that a high resolution can be achieved in regions with dense data coverage while a lower (and cheaper) resolution is kept in regions with low coverage. This way, the parameterization is everywhere optimal, both in terms of its computational cost and of model resolution. This is especially important for data sets with inhomogeneous data coverage, as is usually the case for global seismic databases. The data set we use has an especially good coverage around Switzerland and over central Europe. We focus on periods from 35 s to 150 s. The final goal of the project is to determine a new model of seismic velocities for the upper mantle underlying Europe and the Mediterranean Basin, of resolution higher than what is currently found in the literature. Our inversions involve regularization via norm and roughness minimization, and this in turn requires that discrete norm and roughness operators associated with our adaptive grid be precisely defined. The discretization of the roughness damping operator in the case of adaptive parameterizations is not as trivial as it is for uniform ones; important complications arise from the significant lateral variations in the size of pixels. We chose to first define the roughness operator in a spherical harmonic framework, and subsequently translate it to discrete pixels via a linear transformation. Since the smallest pixels we allow in our parameterization have a size of 0.625°, the spherical-harmonic roughness operator has to be defined up to harmonic degree 899, corresponding to 810,000 harmonic coefficients. This results in considerable computational costs: we conduct the harmonic-pixel transformations on a small Beowulf cluster. We validate our implementation of adaptive
NASA Astrophysics Data System (ADS)
White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John
2017-08-01
Computer models of hydrologic systems are frequently used to investigate the hydrologic response to land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response to land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to the Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush
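The behavioral screening step can be illustrated with a minimal GLUE-style filter using the Nash-Sutcliffe efficiency; the 0.5 threshold and all data below are illustrative, not values from the study:

```python
def nse(obs, sim):
    """Nash-Sutcliffe model efficiency coefficient (1 = perfect fit)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def behavioral(realizations, obs, threshold=0.5):
    """GLUE-style screening: keep Monte Carlo realizations whose simulated
    streamflow reproduces observations acceptably well (NSE >= threshold)."""
    return [sim for sim in realizations if nse(obs, sim) >= threshold]

obs = [1.0, 2.0, 4.0, 3.0, 2.0]     # observed daily mean streamflow
good = [1.1, 2.1, 3.8, 3.0, 2.1]    # realization close to observations
bad = [3.0, 3.0, 3.0, 3.0, 3.0]     # flat realization, no better than the mean
kept = behavioral([good, bad], obs)
```

In practice the same screening would also apply percent-bias and coefficient-of-determination criteria, as the abstract lists; the study's point is that realizations passing such streamflow-based filters can still disagree widely on the ET outcome.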
Parameterization of cloud effects on the absorption of solar radiation
NASA Technical Reports Server (NTRS)
Davies, R.
1983-01-01
A radiation parameterization for the NASA Goddard climate model was developed, tested, and implemented. Interactive and off-line experiments with the climate model to determine the limitations of the present parameterization scheme are summarized. The parameterization of cloud absorption in terms of solar zenith angle, column water vapor above the cloud top, and cloud liquid water content is discussed.
Parameterization of Cumulus Convective Cloud Systems in Mesoscale Forecast Models
2013-09-30
DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. ...parameterization of cumulus convective clouds in mesoscale numerical weather prediction models. OBJECTIVES: Conduct detailed studies of cloud microphysical processes in order to develop a unified parameterization of boundary layer stratocumulus and trade wind cumulus convective clouds. Develop
Turbulent Mixing Parameterizations for Oceanic Flows and Student Support
2014-09-30
...projects is to formulate robust turbulence parameterizations that are applicable for a wide range of oceanic flow conditions. OBJECTIVES: The primary objectives of these projects are to bridge the gap between parameterizations/models for small-scale turbulent mixing developed from fundamental...
A GCM parameterization for the shortwave radiative properties of water clouds
NASA Technical Reports Server (NTRS)
Slingo, A.
1990-01-01
A new parameterization was developed for predicting the shortwave radiative properties of water clouds, suitable for inclusion in general circulation models (GCMs). The parameterization makes use of the simple relationships found by Slingo and Schrecker, giving the three input parameters required to calculate the cloud radiative properties (the optical depth, single scatter albedo and asymmetry parameter) in terms of the liquid water path and equivalent radius of the drop size distribution. The input parameters are then used to derive the cloud radiative properties, using standard two-stream equations for a single layer. The relationships were originally derived for fairly narrow spectral bands but it was found that it is possible to average the coefficients so as to use a much smaller number of bands, without sacrificing accuracy in calculating the cloud radiative properties. This makes the parameterization fast enough to be included in GCMs. The parameterization was programmed into the radiation scheme used in the U.K. Meteorological Office GCM. This scheme and the 24 band Slingo/Schrecker scheme were compared with each other and with observations, using a variety of published datasets. There is good agreement between the two schemes for both cloud albedo and absorption, even when only four spectral bands are employed in the GCM.
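The Slingo/Schrecker relationships referenced above are linear in 1/r_e for optical depth and linear in r_e for co-albedo and asymmetry, per spectral band. The sketch below shows the functional forms only; the coefficients `a`..`f` are band-dependent and must come from the published tables, so the values used here are placeholders:

```python
def cloud_sw_properties(lwp, r_e, a, b, c, d, e, f):
    """Slingo-style shortwave cloud optics for one spectral band.

    lwp : liquid water path (g m^-2)
    r_e : equivalent drop radius (micrometres)
    a-f : band-dependent fit coefficients (placeholders here)
    """
    tau = lwp * (a + b / r_e)     # optical depth
    omega = 1.0 - (c + d * r_e)   # single scatter albedo (1 - co-albedo)
    g = e + f * r_e               # asymmetry parameter
    return tau, omega, g

# Illustrative call; coefficient values are NOT from the paper's tables
tau, omega, g = cloud_sw_properties(lwp=100.0, r_e=10.0,
                                    a=0.02, b=1.3, c=1e-7, d=1e-6,
                                    e=0.85, f=0.001)
```

The three outputs are exactly the inputs a two-stream solver needs for a single cloud layer, which is why averaging the coefficients down to a few bands makes the scheme cheap enough for a GCM.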
Rapid Parameterization Schemes for Aircraft Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu
2012-01-01
A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
Dielectric function parameterization by penalized splines
NASA Astrophysics Data System (ADS)
Likhachev, Dmitriy V.
2017-06-01
In this article, we investigate the penalized spline (P-spline) approach to restrict the flexibility of dielectric function parameterization by B-splines and prevent overfitting of the ellipsometric data. The penalty degree is easily controlled by a certain smoothing parameter. The P-spline approach offers a number of advantages over the well-established B-spline parameterization. First of all, it typically uses an equidistant knot arrangement, which simplifies the construction of the roughness penalties and makes it computationally efficient. Since P-splines possess the "power of the penalty" property, a selection of the number of knots is no longer crucial, as long as there is a minimum knot number to capture all significant spatial variability of the data curves. We demonstrate the proposed approach with a real-data application using ellipsometric spectra from an aluminum-coated sample.
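The P-spline idea is a rich basis on equidistant knots plus a difference penalty on the coefficients, solved as penalized least squares. The sketch below uses a linear hat-function basis for brevity (real P-splines typically use cubic B-splines) and an illustrative smoothing value:

```python
import numpy as np

def pspline_fit(x, y, knots, lam, order=2):
    """P-spline sketch: hat-function basis on equidistant knots plus a
    difference penalty of the given order on the coefficients.
    Solves (B'B + lam * D'D) a = B'y and returns the fitted values."""
    h = knots[1] - knots[0]
    # Hat (linear B-spline) basis evaluated at the data points
    B = np.maximum(0.0, 1.0 - np.abs((x[:, None] - knots[None, :]) / h))
    D = np.diff(np.eye(len(knots)), n=order, axis=0)  # difference operator
    a = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ a

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)          # stand-in for a smooth dispersion curve
knots = np.linspace(0.0, 1.0, 12)  # equidistant knots, no placement tuning
fit = pspline_fit(x, y, knots, lam=1e-3)
```

Raising `lam` pulls the coefficients toward the penalty's null space (here, linear trends), which is the single-knob smoothness control the abstract describes.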
Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.
Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot
2013-10-01
Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities which yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
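One way to realize such an iterative parameterization relies on conduction velocity scaling approximately with the square root of bulk conductivity, so each iteration rescales the conductivity by the squared velocity ratio. This is a hedged sketch, not the paper's algorithm; the `model` function stands in for a full tissue simulation:

```python
def tune_conductivity(g0, v_target, simulate_velocity, tol=1e-3, max_iter=20):
    """Iteratively adjust bulk conductivity g until the simulated conduction
    velocity matches v_target, using the approximate v ~ sqrt(g) scaling:
    g_new = g_old * (v_target / v_sim)**2."""
    g = g0
    for _ in range(max_iter):
        v = simulate_velocity(g)
        if abs(v - v_target) <= tol * v_target:
            break
        g *= (v_target / v) ** 2
    return g

def model(g):
    """Stand-in for a monodomain/bidomain run; has the sqrt scaling built in
    (0.6 m/s at g = 1.0). A real workflow would launch a simulation here."""
    return 0.6 * g ** 0.5

g_fit = tune_conductivity(1.0, 0.48, model)
```

Because the stand-in model obeys the scaling exactly, the loop converges in one update; with a real simulator the relation is only approximate, which is why the iteration is needed at all.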
Invariant box-parameterization of neutrino oscillations
Weiler, Thomas J.; Wagner, DJ
1998-10-19
The model-independent 'box' parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing-matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n ≥ 3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing matrix elements.
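A box can be written as a rephasing-invariant quartic product of mixing-matrix elements. The sketch below uses one common convention for both the box and the standard mixing-matrix parameterization, and checks two of the advertised properties: invariance under rephasing, and equality (for 3 flavors, by unitarity) of the imaginary parts of boxes, i.e., the Jarlskog invariant:

```python
import cmath
import math

def mixing_matrix(t12, t13, t23, delta):
    """3x3 mixing matrix in the standard angle/phase parameterization
    (one common convention; angles and phase in radians)."""
    s12, c12 = math.sin(t12), math.cos(t12)
    s13, c13 = math.sin(t13), math.cos(t13)
    s23, c23 = math.sin(t23), math.cos(t23)
    e = cmath.exp(-1j * delta)
    ec = e.conjugate()
    return [
        [c12 * c13, s12 * c13, s13 * e],
        [-s12 * c23 - c12 * s23 * s13 * ec, c12 * c23 - s12 * s23 * s13 * ec, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ec, -c12 * s23 - s12 * c23 * s13 * ec, c23 * c13],
    ]

def box(U, a, b, i, j):
    """'Box': rephasing-invariant quartic U_ai U_bj U*_aj U*_bi."""
    return U[a][i] * U[b][j] * U[a][j].conjugate() * U[b][i].conjugate()

U = mixing_matrix(0.59, 0.15, 0.79, 1.2)
# Imaginary parts of distinct boxes agree (up to sign) by unitarity
J1 = box(U, 0, 1, 0, 1).imag
J2 = box(U, 1, 2, 1, 2).imag
# Rephasing row 0 leaves every box unchanged (the invariance property)
U_rephased = [[cmath.exp(0.7j) * u for u in U[0]]] + U[1:]
b_orig = box(U, 0, 1, 0, 1)
b_reph = box(U_rephased, 0, 1, 0, 1)
```

The phase on row 0 enters once directly and once conjugated, so it cancels in the quartic; this is precisely why boxes are observables independent of mixing-matrix conventions.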
Numerical Archetypal Parameterization for Mesoscale Convective Systems
NASA Astrophysics Data System (ADS)
Yano, J. I.
2015-12-01
Vertical shear tends to organize atmospheric moist convection into multiscale coherent structures. Especially, the counter-gradient vertical transport of horizontal momentum by organized convection can enhance the wind shear and transport kinetic energy upscale. However, this process is not represented by traditional parameterizations. The present paper sets the archetypal dynamical models, originally formulated by the second author, into a parameterization context by utilizing a nonhydrostatic anelastic model with segmentally-constant approximation (NAM-SCA). Using a two-dimensional framework as a starting point, NAM-SCA spontaneously generates propagating tropical squall-lines in a sheared environment. A high numerical efficiency is achieved through a novel compression methodology. The numerically-generated archetypes produce vertical profiles of convective momentum transport that are consistent with the analytic archetype.
Parameterization of Terrain in Army Combat Analysis
1976-03-01
This study presents and evaluates a methodology for parameterizing terrain... interpretation. However, for those studies which do not require exact representation of terrain, a less costly and time-consuming method can be used. In... unique realizations of a type of terrain. This capability overcomes the sensitivity of Army study results to a single sample of terrain. When used for
A Survey of Shape Parameterization Techniques
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.
Parameterizing surface wind speed over complex topography
NASA Astrophysics Data System (ADS)
Helbig, N.; Mott, R.; Herwijnen, A.; Winstral, A.; Jonas, T.
2017-01-01
Subgrid parameterizations are used in coarse-scale meteorological and land surface models to account for the impact of unresolved topography on wind speed. While various parameterizations have been suggested, these were generally validated on a limited number of measurements in specific geographical areas. We used high-resolution wind fields to investigate which terrain parameters most affect near-surface wind speed over complex topography under neutral conditions. Wind fields were simulated using the Advanced Regional Prediction System (ARPS) on Gaussian random fields as model topographies to cover a wide range of terrain characteristics. We computed coarse-scale wind speed, i.e., a spatial average over the large grid cell accounting for the influence of unresolved topography, using a previously suggested subgrid parameterization for the sky view factor. Only the correlation length of subgrid topographic features and the mean square slope in the coarse grid cell are required. Computed coarse-scale wind speed compared well with domain-averaged ARPS wind speed. To further statistically downscale coarse-scale wind speed, we use local, fine-scale topographic parameters, namely, the Laplacian of terrain elevations and the mean square slope. Both parameters showed large correlations with fine-scale ARPS wind speed. Comparing downscaled numerical weather prediction wind speed with measurements from a large number of stations throughout Switzerland resulted in overall improved correlations and distribution statistics. Since we used a large number of model topographies to derive the subgrid parameterization and the downscaling framework, neither is scale dependent nor bound to a specific geographic region. Both can readily be implemented since they are based on easy-to-derive terrain parameters.
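Both fine-scale predictors named above, the mean square slope and the Laplacian of terrain elevations, are cheap to compute from a DEM by finite differences. A minimal sketch, verified on an illustrative uniform-slope surface:

```python
import numpy as np

def terrain_parameters(z, dx):
    """Mean-square slope over a grid cell and the pointwise Laplacian of
    terrain elevations, from central finite differences on a DEM.

    z  : 2D array of elevations (m)
    dx : grid spacing (m), assumed equal in x and y
    """
    dzdx = np.gradient(z, dx, axis=1)
    dzdy = np.gradient(z, dx, axis=0)
    ms_slope = float(np.mean(dzdx ** 2 + dzdy ** 2))
    laplacian = np.gradient(dzdx, dx, axis=1) + np.gradient(dzdy, dx, axis=0)
    return ms_slope, laplacian

# Illustrative DEM: a uniform 10% slope on a 10 m grid
x = np.arange(20) * 10.0
X, _ = np.meshgrid(x, x)
z = 0.1 * X
ms, lap = terrain_parameters(z, 10.0)
```

For the plane, the mean square slope is exactly 0.01 and the Laplacian vanishes; on real terrain the Laplacian distinguishes ridges (negative) from gullies (positive), which is what makes it a useful local wind-speed predictor.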
Aerosol water parameterization: a single parameter framework
NASA Astrophysics Data System (ADS)
Metzger, S.; Steil, B.; Abdelkader, M.; Klingmüller, K.; Xu, L.; Penner, J. E.; Fountoukis, C.; Nenes, A.; Lelieveld, J.
2015-11-01
We introduce a framework to efficiently parameterize the aerosol water uptake for mixtures of semi-volatile and non-volatile compounds, based on the coefficient νi. This solute-specific coefficient was introduced in Metzger et al. (2012) to accurately parameterize the single-solution hygroscopic growth, considering the Kelvin effect and accounting for the water uptake of concentrated nanometer-sized particles up to dilute solutions, i.e., from the compound's relative humidity of deliquescence (RHD) up to supersaturation (Köhler theory). Here we extend the νi-parameterization from single to mixed solutions. We evaluate our framework at various levels of complexity, by considering the full gas-liquid-solid partitioning for a comprehensive comparison with reference calculations using the E-AIM, EQUISOLV II, and ISORROPIA II models as well as textbook examples. We apply our parameterization in EQSAM4clim, the EQuilibrium Simplified Aerosol Model V4 for climate simulations, implemented in a box model and in the global chemistry-climate model EMAC. Our results show: (i) that the νi-approach enables the entire gas-liquid-solid partitioning and the mixed-solution water uptake to be solved analytically with sufficient accuracy, (ii) that, e.g., pure ammonium nitrate and mixed ammonium nitrate-ammonium sulfate mixtures can be solved with a simple method, and (iii) that the aerosol optical depth (AOD) simulations are in close agreement with remote sensing observations for the year 2005. Long-term evaluation of the EMAC results based on EQSAM4clim and ISORROPIA II will be presented separately.
Unified Parameterization of the Marine Boundary Layer
2010-09-30
...boundary layer closure for the convective boundary layer; an EDMF approach to the vertical transport of TKE in convective boundary layers; EDMF in... implementation and extension to shallow cumulus parameterization is in progress. An integrated TKE-based eddy-diffusivity/mass-flux
Thermonuclear Reaction Rate Parameterization for Nuclear Astrophysics
NASA Astrophysics Data System (ADS)
Sharp, Jacob; Kozub, Raymond L.; Smith, Michael S.; Scott, Jason; Lingerfelt, Eric
2004-10-01
The knowledge of thermonuclear reaction rates is vital to simulate novae, supernovae, X-ray bursts, and other astrophysical events. To facilitate dissemination of this knowledge, a set of tools has been created for managing reaction rates, located at www.nucastrodata.org. One tool is a rate parameterizer, which provides a parameterization for nuclear reaction rate vs. temperature values in the most widely used functional form. Currently, the parameterizer uses the Levenberg-Marquardt method (LMM), which requires an initial estimate of the best-fit parameters. The initial estimate is currently provided randomly from a preselected pool. To improve the quality of fits, a new, active method of selecting parameters has been developed. The parameters of each set in the pool are altered for a few iterations to replicate the input data as closely as possible. Then, the set which most nearly matches the input data (based on chi-squared) is used in the LMM as the initial estimate for the final fitting procedure. A description of the new, active algorithm and its performance will be presented. Supported by the U. S. Department of Energy.
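The "most widely used functional form" for rate-versus-temperature fits is the seven-parameter REACLIB expression. A sketch with an illustrative (non-physical) parameter set; real parameter sets come from fits like the one the tool performs:

```python
import math

def reaclib_rate(T9, a):
    """Seven-parameter REACLIB form for a thermonuclear reaction rate:

        rate(T9) = exp(a0 + a1/T9 + a2*T9^(-1/3) + a3*T9^(1/3)
                       + a4*T9 + a5*T9^(5/3) + a6*ln(T9))

    where T9 is the temperature in units of 10^9 K and a is the
    seven-element parameter list [a0, ..., a6]."""
    exponents = (-1.0, -1.0 / 3.0, 1.0 / 3.0, 1.0, 5.0 / 3.0)
    s = a[0] + sum(ai * T9 ** p for ai, p in zip(a[1:6], exponents))
    s += a[6] * math.log(T9)
    return math.exp(s)

# Illustrative (NOT physical) parameters; at T9 = 1 every power term is 1
# and the log term vanishes, so the rate is exp(a0 + a1 + a2 + a3 + a4 + a5)
a = [1.0, -0.5, 0.0, 0.2, 0.0, 0.0, 1.5]
r = reaclib_rate(1.0, a)
```

A fitting tool such as the one described would adjust `a` (e.g., via Levenberg-Marquardt) so this expression reproduces tabulated rate-versus-temperature points.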
Fire parameterization on a global scale
NASA Astrophysics Data System (ADS)
Pechony, O.; Shindell, D. T.
2009-08-01
We present a convenient physically based global-scale fire parameterization algorithm for global climate models. We indicate environmental conditions favorable for fire occurrence based on calculation of the vapor pressure deficit as a function of location and time. Two ignition models are used. One assumes ubiquitous ignition; the other incorporates natural and anthropogenic sources, as well as anthropogenic fire suppression. Evaluation of the method using Global Precipitation Climatology Project precipitation, National Centers for Environmental Prediction/National Center for Atmospheric Research temperature and relative humidity, and Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf Area Index as a proxy for global vegetation density gives results in remarkable correspondence with global fire patterns observed from the MODIS and Visible and Infrared Scanner satellite instruments. The parameterized fires successfully reproduce the spatial distribution of global fires as well as the seasonal variability. The interannual variability of global fire activity derived from the 20-year Advanced Very High Resolution Radiometer record is well reproduced using Goddard Institute for Space Studies general circulation model climate simulations, as is the response to the climate changes following the eruptions of El Chichon and Mount Pinatubo. In conjunction with climate models and data sets on vegetation changes with time, the suggested fire parameterization offers the possibility to estimate relative variations of global fire activity for past and future climates.
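The fire-favorability indicator is built on the vapor pressure deficit. A minimal sketch using the standard Magnus saturation-vapor-pressure approximation; the constants are the usual Magnus values, but their use here is illustrative rather than the paper's exact implementation:

```python
import math

def vapor_pressure_deficit(temp_c, rh_pct):
    """Vapor pressure deficit (hPa) from air temperature (deg C) and
    relative humidity (%), via the Magnus saturation-pressure formula."""
    e_sat = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))
    return e_sat * (1.0 - rh_pct / 100.0)

# Hot, dry conditions give a large deficit (fire-favorable);
# cool, humid conditions give a small one.
dry = vapor_pressure_deficit(35.0, 20.0)
humid = vapor_pressure_deficit(15.0, 90.0)
```

In a scheme like the one described, grid cells whose deficit exceeds a threshold would be flagged as favorable for fire, with an ignition model then deciding whether a fire actually starts.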
Implicit Shape Parameterization for Kansei Design Methodology
NASA Astrophysics Data System (ADS)
Nordgren, Andreas Kjell; Aoyama, Hideki
Implicit shape parameterization for Kansei design is a procedure that uses 3D models, or concepts, to span a shape space for surfaces in the automotive field. A low-dimensional yet accurate shape descriptor was found by Principal Component Analysis of an ensemble of point clouds, which were extracted from mesh-based surfaces modeled in a CAD program. A theoretical background of the procedure is given along with step-by-step instructions for the required data processing. The results show that complex surfaces can be described very efficiently, and encode design features by an implicit approach that does not rely on error-prone explicit parameterizations. This provides a very intuitive way to explore shapes for a designer, because various design features can simply be introduced by adding new concepts to the ensemble. Complex shapes have been difficult to analyze with Kansei methods due to the large number of parameters involved, but implicit parameterization of design features provides a low-dimensional shape descriptor for efficient data collection, model building and analysis of emotional content in 3D surfaces.
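The descriptor construction, PCA over an ensemble of flattened point clouds, can be sketched in a few lines; the toy contour ensemble below is illustrative, standing in for point clouds sampled from CAD surfaces:

```python
import numpy as np

def shape_descriptor(pointclouds, n_components):
    """PCA of an ensemble of point clouds (each flattened to one row):
    returns the mean shape and the leading principal directions, which
    together form a low-dimensional implicit shape descriptor."""
    X = np.asarray([pc.ravel() for pc in pointclouds])
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

# Toy ensemble: scaled variations of one 2D contour of 50 points
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 50)
base = np.stack([np.cos(theta), np.sin(theta)], axis=1)
clouds = [base * (1.0 + 0.1 * rng.standard_normal()) for _ in range(20)]
mean, modes = shape_descriptor(clouds, n_components=2)
```

New shapes are then expressed as `mean + coefficients @ modes`, so adding a concept to the ensemble simply enriches the span of the descriptor, which is the "implicit" property the abstract emphasizes.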
A Parameterization Invariant Approach to the Statistical Estimation of the CKM Phase alpha
Morris, Robin D.; Cohen-Tanugi, Johann; /SLAC
2008-04-14
In contrast to previous analyses, we demonstrate a Bayesian approach to the estimation of the CKM phase α that is invariant to parameterization. We also show that in addition to computing the marginal posterior in a Bayesian manner, the distribution must also be interpreted from a subjective Bayesian viewpoint. Doing so gives a very natural interpretation to the distribution. We also comment on the effect of removing information about β^{00}.
New Parameterization of Neutron Absorption Cross Sections
NASA Technical Reports Server (NTRS)
Tripathi, Ram K.; Wilson, John W.; Cucinotta, Francis A.
1997-01-01
Recent parameterization of absorption cross sections for any system of charged ion collisions, including proton-nucleus collisions, is extended for neutron-nucleus collisions valid from approx. 1 MeV to a few GeV, thus providing a comprehensive picture of absorption cross sections for any system of collision pairs (charged or uncharged). The parameters are associated with the physics of the problem. At lower energies, optical potential at the surface is important, and the Pauli operator plays an increasingly important role at intermediate energies. The agreement between the calculated and experimental data is better than earlier published results.
A parameterization of the evaporation of rainfall
NASA Technical Reports Server (NTRS)
Schlesinger, Michael E.; Oh, Jai-Ho; Rosenfeld, Daniel
1988-01-01
A general theoretical expression for the rainfall rate and the total evaporation rate as a function of the distance below cloud base is developed, and is then specialized to the gamma raindrop size distribution. The theoretical framework is used to analyze the data of Rosenfeld and Mintz (1988) on the radar observations of the rainfall rate as a function of the distance below cloud base, for rain falling from continental convective cells in central South Africa, obtaining a parameterization for the evaporation of rainfall.
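The building block of such a parameterization is the moment integral of the gamma drop size distribution N(D) = N0 D^mu exp(-lambda D). A sketch of the rainfall-rate integral, assuming an illustrative power-law fall speed v(D) = a D^b (the constants and the simple Riemann-sum integration are assumptions, not the paper's formulation):

```python
import numpy as np

def rain_rate(n0, mu, lam, a=386.6, b=0.67, d_max=8e-3, n_bins=2000):
    """Rainfall rate (m/s) for a gamma drop size distribution
    N(D) = n0 * D**mu * exp(-lam*D), with fall speed v(D) = a*D**b.
    SI units throughout; D is the drop diameter in meters.
    R = (pi/6) * integral of N(D) * D**3 * v(D) dD.
    """
    d = np.linspace(1e-5, d_max, n_bins)
    n = n0 * d**mu * np.exp(-lam * d)
    integrand = (np.pi / 6.0) * n * d**3 * (a * d**b)
    return integrand.sum() * (d[1] - d[0])   # simple Riemann sum
```

The evaporation rate below cloud base is obtained in the paper from an analogous integral over the distribution, with lambda evolving as drops shrink during fall.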
A Physically Based Fractional Cloudiness Parameterization
1990-07-27
[Abstract garbled in the source; the recoverable fragments describe a fractional cloudiness scheme designed for use as a PBL parameterization in large-scale models, building on the "convective mass flux" concept introduced by Arakawa (1969) and on the observations of Caughey et al. (1982) and Nicholls and Turton (1986).]
Lightning parameterization in a storm electrification model
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.; Farley, Richard D.; Wu, Gang
1988-01-01
The parameterization of an intracloud lightning discharge has been implemented in our Storm Electrification Model. The initiation, propagation direction, termination and charge redistribution of the discharge are approximated assuming overall charge neutrality. Various simulations involving differing amounts of charge transferred have been done. The effects of the lightning-produced ions on the hydrometeor charges, electric field components and electrical energy depend strongly on the charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge show favorable agreement.
Parameterization-based tracking for the P2 experiment
NASA Astrophysics Data System (ADS)
Sorokin, Iurii
2017-08-01
The P2 experiment in Mainz aims to determine the weak mixing angle θ_W at low momentum transfer by measuring the parity-violating asymmetry of elastic electron-proton scattering. In order to achieve the intended precision of Δ(sin²θ_W)/sin²θ_W = 0.13% within the planned 10 000 hours of running, the experiment has to operate at a rate of 10^11 detected electrons per second. Although it is not required to measure the kinematic parameters of each individual electron, every attempt is made to achieve the highest possible throughput in the track reconstruction chain. In the present work a parameterization-based track reconstruction method is described. It is a variation of track following, where the results of the computation-heavy steps, namely the propagation of a track to the next detector plane and the fitting, are pre-calculated and expressed in terms of parametric analytic functions. This makes the algorithm extremely fast and well-suited for implementation on an FPGA. The method also implicitly takes into account the actual phase-space distribution of the tracks already at the stage of candidate construction. Compared to a simple algorithm that does not use such information, this allows reducing the combinatorial background by many orders of magnitude, down to O(1) background candidates per signal track. The method is developed specifically for the P2 experiment in Mainz, and the presented implementation is tightly coupled to the experimental conditions.
A new parameterization of spectral and broadband ocean surface albedo.
Jin, Zhonghai; Qiao, Yanli; Wang, Yingjian; Fang, Yonghua; Yi, Weining
2011-12-19
A simple yet accurate parameterization of spectral and broadband ocean surface albedo has been developed. To facilitate the parameterization and its applications, the albedo is parameterized for the direct and diffuse incident radiation separately, and then each of them is further divided into two components: the contributions from surface and water, respectively. The four albedo components are independent of each other, hence, altering one will not affect the others. Such a designed parameterization scheme is flexible for any future update. Users can simply replace any of the adopted empirical formulations (e.g., the relationship between foam reflectance and wind speed) as desired without a need to change the parameterization scheme. The parameterization is validated by in situ measurements and can be easily implemented into a climate or radiative transfer model.
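The four-component decomposition described above can be sketched as a simple linear combination; the function name and the assumption that the components add linearly and are weighted by the direct fraction of incident flux are illustrative, not the paper's exact formulas:

```python
def ocean_albedo(a_dir_surf, a_dir_water, a_dif_surf, a_dif_water, f_dir):
    """Combine the four independent albedo components.

    Direct and diffuse incident radiation are treated separately, and the
    albedo for each is split into a surface part (specular/foam) and a
    water-leaving part. f_dir is the direct fraction of the incident flux.
    """
    a_direct = a_dir_surf + a_dir_water
    a_diffuse = a_dif_surf + a_dif_water
    return f_dir * a_direct + (1.0 - f_dir) * a_diffuse
```

Because each component enters independently, any one empirical piece (for example the foam-reflectance term inside the surface component) can be swapped out without touching the other three, which is the flexibility the abstract emphasizes.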
Extensions and applications of a second-order landsurface parameterization
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1983-01-01
Extensions and applications of a second-order land surface parameterization proposed by Andreou and Eagleson are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested with the model. A sensitivity analysis of the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.
On the factorization and fitting of molecular scattering information
NASA Technical Reports Server (NTRS)
Goldflam, R.; Kouri, D. J.; Green, S.
1977-01-01
The reported analysis is based on the factored IOS T-matrix. It is shown that line shape measurements may be used over a range of temperatures to evaluate inelastic scattering cross sections. Basic factorization or parameterization relations are derived by considering the wavefunction equations. The parameterization of cross sections is considered, taking into account the differential scattering amplitude and cross section, integral cross sections, phenomenological cross sections for general relaxation processes, and viscosity and diffusion cross sections. Thermal averages and rates are discussed, giving attention to integral cross sections and rates, and general phenomenological cross sections. The results of computational studies are also presented.
Parameterization of Solar Global Uv Irradiation
NASA Astrophysics Data System (ADS)
Feister, U.; Jaekel, E.; Gericke, K.
Daily doses of solar global UV-B, UV-A, and erythemal irradiation have been parameterized to be calculated from pyranometer data of global and diffuse irradiation as well as from atmospheric column ozone measured at Potsdam (52 N, 107 m asl). The method has been validated against independent data of measured UV irradiation. A gain of information is provided by use of the parameterization for the three UV components (UV-B, UV-A and erythemal) referring to average values of UV irradiation. Applying the method to UV irradiation measured at the mountain site Hohenpeissenberg (48 N, 977 m asl) shows that the parameterization holds even under completely different climatic conditions. On a long-term average (1953-2000), parameterized annual UV irradiation values are 15% (UV-A) and 21% (UV-B) higher at Hohenpeissenberg than at Potsdam. Using measured input data from 27 German weather stations, the method has also been applied to estimate the spatial distribution of UV irradiation across Germany. Daily global and diffuse irradiation measured at Potsdam (1937-2000) as well as atmospheric column ozone measured at Potsdam between 1964 and 2000 have been used to derive long-term estimates of daily and annual totals of UV irradiation that include the effects of changes in cloudiness, in aerosols and, at least for the period 1964 to 2000, also in atmospheric ozone. It is shown that the extremely low ozone values observed mainly after the volcanic eruption of Mt. Pinatubo in 1991 substantially enhanced UV-B irradiation in the first half of the 1990s. The non-linear long-term changes between 1968 and 2000 amount to +4% to +5% for annual global and UV-A irradiation, mainly due to changing cloudiness, and +14% to +15% for UV-B and erythemal irradiation, due to both changing cloudiness and decreasing column ozone. Estimates of long-term changes in UV irradiation derived from data measured at other German sites are
2012-09-30
[Abstract garbled in the source; the recoverable fragments describe "A framework to evaluate unified parameterizations for seasonal prediction: an LES/SCM parameterization test-bed" (Joao Teixeira, Jet Propulsion Laboratory), with goals that include developing a Single Column Model (SCM) version of the latest operational NOGAPS, simulating GEWEX Cloud cases, and building an integrated framework that uses the NOGAPS SCM and the LES model as a parameterization test-bed.]
Optika: a GUI framework for parameterized applications.
Nusbaum, Kurtis L.
2011-06-01
In the field of scientific computing there are many specialized programs designed for specific applications in areas such as biology, chemistry, and physics. These applications are often very powerful and extraordinarily useful in their respective domains. However, some suffer from a common problem: a non-intuitive, poorly designed user interface. The purpose of Optika is to address this problem and provide a simple, viable solution. Using only a list of parameters passed to it, Optika can dynamically generate a GUI. This allows the user to specify parameter values in a fashion that is much more intuitive than the traditional 'input decks' used by some parameterized scientific applications. By leveraging the power of Optika, these scientific applications will become more accessible and thus allow their designers to reach a much wider audience while requiring minimal extra development effort.
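The idea of deriving a form dynamically from a parameter list can be sketched in a few lines; the widget names and parameter-dict layout below are hypothetical and do not reflect Optika's actual (C++) API, only the pattern it embodies:

```python
def widget_for(param):
    """Map a parameter description to a widget spec.

    Parameters with an explicit choice list become dropdowns; otherwise
    the widget is chosen from the type of the default value.
    """
    if "choices" in param:
        return {"widget": "dropdown", "options": param["choices"]}
    kind = type(param["default"])
    if kind is bool:
        return {"widget": "checkbox", "value": param["default"]}
    if kind in (int, float):
        return {"widget": "spinbox", "value": param["default"]}
    return {"widget": "text", "value": str(param["default"])}

def build_form(params):
    """Build a GUI form description from a flat parameter list."""
    return {p["name"]: widget_for(p) for p in params}
```

A driver would hand this spec to an actual toolkit to render; the point is that the application author supplies only the parameter list, exactly as the abstract describes.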
A parameterization of cloud droplet nucleation
Ghan, S. J.; Chuang, C.; Penner, J. E.
1993-01-01
Droplet nucleation is a fundamental cloud process. The number of aerosols activated to form cloud droplets influences not only the number of aerosols scavenged by clouds but also the size of the cloud droplets. Cloud droplet size influences the cloud albedo and the conversion of cloud water to precipitation. Global aerosol models are presently being developed with the intention of coupling with global atmospheric circulation models to evaluate the influence of aerosols and aerosol-cloud interactions on climate. If these and other coupled models are to address issues of aerosol-cloud interactions, the droplet nucleation process must be adequately represented. Here we introduce a droplet nucleation parameterization that offers certain advantages over the popular Twomey (1959) parameterization.
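For context, the Twomey (1959) approach the abstract compares against rests on the empirical CCN activation spectrum N(s) = C s^k: the number of droplets activated is this spectrum evaluated at the peak supersaturation. A minimal sketch (the parameter values in the usage lines are illustrative air-mass examples, not values from the paper):

```python
def activated_droplets(c, k, s_max):
    """Activated droplet number from a Twomey CCN activation spectrum
    N(s) = c * s**k, evaluated at the peak supersaturation s_max
    (s in percent; c and k are empirical air-mass parameters)."""
    return c * s_max**k

# Illustrative maritime-like vs continental-like spectra at 1% supersaturation:
maritime = activated_droplets(100.0, 0.5, 1.0)
continental = activated_droplets(600.0, 0.5, 1.0)
```

Parameterizations such as the one introduced in the abstract refine this picture by tying activation to the aerosol size distribution and updraft rather than a single empirical power law.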
A Genus Oblivious Approach to Cross Parameterization
Bennett, J C; Pascucci, V; Joy, K I
2008-06-16
In this paper we present a robust approach to construct a map between two triangulated meshes, M and M{prime} of arbitrary and possibly unequal genus. We introduce a novel initial alignment scheme that allows the user to identify 'landmark tunnels' and/or a 'constrained silhouette' in addition to the standard landmark vertices. To describe the evolution of non-landmark tunnels we automatically derive a continuous deformation from M to M{prime} using a variational implicit approach. Overall, we achieve a cross parameterization scheme that is provably robust in the sense that it can map M to M{prime} without constraints on their relative genus. We provide a number of examples to demonstrate the practical effectiveness of our scheme between meshes of different genus and shape.
Climate impacts of parameterized Nordic Sea overflows
NASA Astrophysics Data System (ADS)
Danabasoglu, Gokhan; Large, William G.; Briegleb, Bruce P.
2010-11-01
A new overflow parameterization (OFP) of density-driven flows through ocean ridges via narrow, unresolved channels has been developed and implemented in the ocean component of the Community Climate System Model version 4. It represents exchanges from the Nordic Seas and the Antarctic shelves, associated entrainment, and subsequent injection of overflow product waters into the abyssal basins. We investigate the effects of the parameterized Denmark Strait (DS) and Faroe Bank Channel (FBC) overflows on the ocean circulation, showing their impacts on the Atlantic Meridional Overturning Circulation and the North Atlantic climate. The OFP is based on the Marginal Sea Boundary Condition scheme of Price and Yang (1998), but there are significant differences that are described in detail. Two uncoupled (ocean-only) and two fully coupled simulations are analyzed. Each pair consists of one case with the OFP and a control case without this parameterization. In both uncoupled and coupled experiments, the parameterized DS and FBC source volume transports are within the range of observed estimates. The entrainment volume transports remain lower than observational estimates, leading to lower than observed product volume transports. Due to low entrainment, the product and source water properties are too similar. The DS and FBC overflow temperature and salinity properties are in better agreement with observations in the uncoupled case than in the coupled simulation, likely reflecting surface flux differences. The most significant impact of the OFP is the improved North Atlantic Deep Water penetration depth, leading to a much better comparison with the observational data and significantly reducing the chronic, shallow penetration depth bias in level coordinate models. This improvement is due to the deeper penetration of the southward flowing Deep Western Boundary Current. In comparison with control experiments without the OFP, the abyssal ventilation rates increase in the North
Cumulus parameterizations in chemical transport models
NASA Astrophysics Data System (ADS)
Mahowald, Natalie M.; Rasch, Philip J.; Prinn, Ronald G.
1995-12-01
Global three-dimensional chemical transport models (CTMs) are valuable tools for studying processes controlling the distribution of trace constituents in the atmosphere. A major uncertainty in these models is the subgrid-scale parameterization of transport by cumulus convection. This study seeks to define the range of behavior of moist convective schemes and point toward more reliable formulations for inclusion in chemical transport models. The emphasis is on deriving convective transport from meteorological data sets (such as those from the forecast centers) which do not routinely include convective mass fluxes. Seven moist convective parameterizations are compared in a column model to examine the sensitivity of the vertical profile of trace gases to the parameterization used in a global chemical transport model. The moist convective schemes examined are the Emanuel scheme [Emanuel, 1991], the Feichter-Crutzen scheme [Feichter and Crutzen, 1990], the inverse thermodynamic scheme (described in this paper), two versions of a scheme suggested by Hack [Hack, 1994], and two versions of a scheme suggested by Tiedtke (one following the formulation used in the ECMWF (European Centre for Medium-Range Weather Forecasting) and ECHAM3 (European Centre and Hamburg Max-Planck-Institut) models [Tiedtke, 1989], and one formulated as in the TM2 (Transport Model-2) model (M. Heimann, personal communication, 1992)). These convective schemes vary in the closure used to derive the mass fluxes, as well as the cloud model formulation, giving a broad range of results. In addition, two boundary layer schemes are compared: a state-of-the-art nonlocal boundary layer scheme [Holtslag and Boville, 1993] and a simple adiabatic mixing scheme described in this paper. Three tests are used to compare the moist convective schemes against observations. Although the tests conducted here cannot conclusively show that one parameterization is better than the others, the tests are a good measure of the
Universal Parameterization of Absorption Cross Sections
NASA Technical Reports Server (NTRS)
Tripathi, R. K.; Cucinotta, Francis A.; Wilson, John W.
1997-01-01
This paper presents a simple universal parameterization of total reaction cross sections for any system of colliding nuclei that is valid for the entire energy range from a few AMeV to a few AGeV. The universal picture presented here treats proton-nucleus collision as a special case of nucleus-nucleus collision, where the projectile has charge and mass number of one. The parameters are associated with the physics of the collision system. In general terms, Coulomb interaction modifies cross sections at lower energies, and the effects of Pauli blocking are important at higher energies. The agreement between the calculated and experimental data is better than all earlier published results.
Stellar Atmospheric Parameterization Based on Deep Learning
NASA Astrophysics Data System (ADS)
Pan, R. Y.; Li, X. R.
2016-07-01
Deep learning is a typical learning method widely studied in machine learning, pattern recognition, and artificial intelligence. This work investigates the stellar atmospheric parameterization problem by constructing a deep neural network with five layers. The proposed scheme is evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS) and theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 K for the effective temperature T_eff, 0.0058 for lg(T_eff/K), 0.1706 for surface gravity lg(g/(cm·s^-2)), and 0.1294 dex for metallicity [Fe/H], respectively; on the theoretical spectra, the MAEs are 15.34 K for T_eff, 0.0011 for lg(T_eff/K), 0.0214 for lg(g/(cm·s^-2)), and 0.0121 dex for [Fe/H], respectively.
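The abstract does not give the layer widths or activations, so the following forward-pass sketch of a five-layer fully connected network mapping a (hypothetical 100-pixel) spectrum to the four atmospheric parameters is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(layer_sizes):
    """Random weights and zero biases for a fully connected network;
    layer_sizes lists the width of each layer."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    """Forward pass: ReLU hidden activations, linear output layer."""
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)   # ReLU on hidden layers only
    return x

# Five layers: 100-pixel input, three hidden layers, 4 outputs
# (e.g. T_eff, lg T_eff, lg g, [Fe/H]).
params = init_mlp([100, 64, 32, 16, 4])
spectrum = rng.standard_normal((1, 100))
outputs = forward(params, spectrum)
```

Training such a network against labeled SDSS or NEWODF spectra (by backpropagation, not shown) is what produces the MAEs quoted above.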
Parameterizing the Surfzone Cross-shore Diffusivity
NASA Astrophysics Data System (ADS)
Spydell, M. S.; Suanda, S. H.; Feddersen, F.
2016-02-01
Two-dimensional horizontal surfzone eddies are responsible for surfzone diffusion and contribute to the cross-shore exchange between the surfzone and inner shelf. Similar to other turbulent diffusivities, the surfzone horizontal eddy diffusivity depends on Lagrangian properties of the flow such that K = u′² T_L, where u′² is the turbulent velocity variance and T_L is the Lagrangian decorrelation time. Recent work has determined how the surfzone rotational velocity variance u′² depends on properties of the incident wave field; however, what determines the surfzone Lagrangian time-scale T_L is not completely understood. Two possibilities are explored: 1) the classic frozen-field turbulent scaling, in which T_L is proportional to the time to traverse an eddy, hence T_L ≈ L_E/u′, where L_E is the eddy length-scale; 2) T_L is proportional to the time to traverse the surfzone, hence T_L ≈ L_S/u′, where L_S is the surfzone width. Either case results in a classic mixing length parameterization so that K ≈ u′l, where l is either the eddy size L_E or the surfzone width L_S. These two parameterizations are investigated using Eulerian and Lagrangian statistics of surfzone eddies calculated from simulations (with the model funwaveC) of alongshore homogeneous waves, currents, and bathymetry. Simulations are performed for various incident significant wave heights, incident wave directional spreads, and beach slopes. From this suite of simulations, the dependence of the surfzone cross-shore diffusivity on eddy velocities u′, eddy lengths L_E, and surfzone widths L_S is determined. Surfzone retention rates are also calculated and implications for the exchange of material between the surfzone and inner shelf are discussed.
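Both candidate time scales collapse to the same mixing-length form K ≈ u′l; the only question is which length enters. A minimal sketch (the numerical values in the usage lines are illustrative, not from the paper):

```python
def eddy_diffusivity(u_rot, length):
    """Mixing-length estimate K = u' * l of the cross-shore eddy
    diffusivity. u_rot is the rms rotational eddy velocity (m/s);
    length is either the eddy scale L_E (frozen-turbulence scaling,
    T_L ~ L_E/u') or the surfzone width L_S (T_L ~ L_S/u'). K in m^2/s.
    """
    return u_rot * length

# The two hypotheses differ only in the length scale chosen:
k_eddy = eddy_diffusivity(0.3, 25.0)       # l = L_E = 25 m
k_surfzone = eddy_diffusivity(0.3, 100.0)  # l = L_S = 100 m
```

Comparing diffusivities diagnosed from drifter statistics in the funwaveC runs against these two scalings is what lets the study discriminate between the hypotheses.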
Parameterized reduced order modeling of misaligned stacked disks rotor assemblies
NASA Astrophysics Data System (ADS)
Ganine, Vladislav; Laxalde, Denis; Michalska, Hannah; Pierre, Christophe
2011-01-01
Light and flexible rotating parts of modern turbine engines operating at supercritical speeds necessitate application of more accurate but rather computationally expensive 3D FE modeling techniques. Stacked disks misalignment due to manufacturing variability in the geometry of individual components constitutes a particularly important aspect to be included in the analysis because of its impact on system dynamics. A new parametric model order reduction algorithm is presented to achieve this goal at affordable computational costs. It is shown that the disks misalignment leads to significant changes in nominal system properties that manifest themselves as additional blocks coupling neighboring spatial harmonics in Fourier space. Consequently, the misalignment effects can no longer be accurately modeled as equivalent forces applied to a nominal unperturbed system. The fact that the mode shapes become heavily distorted by extra harmonic content renders the nominal modal projection-based methods inaccurate and thus numerically ineffective in the context of repeated analysis of multiple misalignment realizations. The significant numerical bottleneck is removed by employing an orthogonal projection onto the subspace spanned by first few Fourier harmonic basis vectors. The projected highly sparse systems are shown to accurately approximate the specific misalignment effects, to be inexpensive to solve using direct sparse methods and easy to parameterize with a small set of measurable eccentricity and tilt angle parameters. Selected numerical examples on an industrial scale model are presented to illustrate the accuracy and efficiency of the algorithm implementation.
NASA Technical Reports Server (NTRS)
Chao, Winston C.
2015-01-01
The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.
Evaluation of a GCM cirrus parameterization using satellite observations
NASA Technical Reports Server (NTRS)
Soden, B. J.; Donner, L. J.
1994-01-01
This study applies a simple yet effective methodology to validate a general circulation model parameterization of cirrus ice water path. The methodology combines large-scale dynamic and thermodynamic fields from operational analyses with prescribed occurrence of cirrus clouds from satellite observations to simulate a global distribution of ice water path. The predicted cloud properties are then compared with the corresponding satellite measurements of visible optical depth and infrared cloud emissivity to evaluate the reliability of the parameterization. This methodology enables the validation to focus strictly on the water loading side of the parameterization by eliminating uncertainties involved in predicting the occurrence of cirrus internally within the parameterization. Overall the parameterization performs remarkably well in capturing the observed spatial patterns of cirrus optical properties. Spatial correlations between the observed and the predicted optical depths are typically greater than 0.7 for the tropics and northern hemisphere midlatitudes. The good spatial agreement largely stems from the strong dependence of the ice water path upon the temperature of the environment in which the clouds form. Poorer correlations (r ≈ 0.3) are noted over the southern hemisphere midlatitudes, suggesting that additional processes not accounted for by the parameterization may be important there. Quantitative evaluation of the parameterization is hindered by the present uncertainty in the size distribution of cirrus ice particles. Consequently, it is difficult to determine if discrepancies between the observed and the predicted optical properties are attributable to errors in the parameterized ice water path or to geographic variations in effective radii.
Construction of groupwise consistent shape parameterizations by propagation
NASA Astrophysics Data System (ADS)
Kirschner, Matthias; Wesarg, Stefan
2010-03-01
Prior knowledge can greatly improve the accuracy of segmentation algorithms for 3D medical images. Statistical shape models are a popular method for describing the variability of organ shapes. One of the greatest challenges in statistical shape modeling is to compute a representation of the training shapes as vectors of corresponding landmarks, which is required to train the model. Many algorithms for extracting such landmark vectors work on parameter space representations of the unnormalized training shapes. These algorithms are sensitive to inconsistent parameterizations: if corresponding regions in the training shapes are mapped to different areas of the parameter space, convergence time increases or the algorithms even fail to converge. In order to improve robustness and decrease convergence time, it is crucial that the training shapes are parameterized in a consistent manner. We present a novel algorithm for the construction of groupwise consistent parameterizations for a set of training shapes with genus-0 topology. Our algorithm first computes an area-preserving parameterization of a single reference shape, which is then propagated to all other shapes in the training set. As the parameter space propagation is controlled by approximate correspondences derived from a shape alignment algorithm, the resulting parameterizations are consistent. Additionally, the area-preservation property of the reference parameterization is likewise propagated such that all training shapes can be reconstructed from the generated parameterizations with a simple uniform sampling technique. Though our algorithm considers consistency as an additional constraint, it is faster than computing parameterizations for each training shape independently from scratch.
Parameterization of Cumulus Convective Cloud Systems in Mesoscale Forecast Models
2012-09-30
[Abstract garbled in the source; the recoverable fragments state that the parameterization, which carries moments up to the sixth, was developed and tested using the CIMMS LES explicit warm-rain microphysical model, then implemented into the 3D dynamical framework of the CIMMS LES model, where its errors were assessed in a realistic setting.]
NASA Astrophysics Data System (ADS)
Wong, J.; Noone, D. C.; Barth, M. C.
2011-12-01
Lightning NOx (LNOx) is an important precursor to tropospheric ozone production and monsoonal upper tropospheric ozone enhancement. A parameterization for LNOx emission is designed for convective-parameterized synoptic meteorological-scale predictions in the NCAR Weather Research and Forecasting Model with Chemistry (WRF-Chem). The implementation uses the Price and Rind (1992) flash rate equation to produce a flash density as a function of cloud height. A fixed emission rate of 500 moles NO per flash and Gaussian vertical distributions are then used to produce the predicted LNOx emission. Comparison of the results from a month-long simulation over the continental United States against a multiyear climatology based on the Optical Transient Detector (OTD) computed by Boccippio et al. (2000) shows confidence in reproducing the proper geographical distribution. Regional comparison against National Lightning Detection Network (NLDN) data also shows confidence of using a constant tuning parameter to produce a flash density within the order of magnitude of that observed, with consideration of model bias in convection. The produced tropospheric NO2 column also matches well (reduced χ² = 0.88) with SCIAMACHY NO2 vertical column density. Several sensitivity simulations are also performed to evaluate the model's response to the parameterization in ozone and related species such as isoprene and formaldehyde. Results show that the species-specific sensitivities to LNOx emission are significantly altered by convective detrainment as well as the variability of NOx residence time throughout the troposphere from the prescribed vertical distribution.
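The Price and Rind (1992) scheme ties the flash rate to storm-cloud-top height via a power law (commonly quoted as F = 3.44e-5 H^4.9 flashes/min over land and F = 6.4e-4 H^1.73 over ocean; confirm the marine coefficient against the original paper before relying on it). With the fixed 500 mol NO per flash from the abstract, a column emission sketch looks like:

```python
def flash_rate(cloud_top_height_km, continental=True):
    """Price and Rind (1992) flash rate (flashes per minute) as a
    power law in cloud-top height H (km)."""
    h = cloud_top_height_km
    if continental:
        return 3.44e-5 * h**4.9
    return 6.4e-4 * h**1.73   # marine coefficient: verify against PR92

def lnox_emission(cloud_top_height_km, minutes, moles_no_per_flash=500.0,
                  continental=True):
    """Column LNOx emission (moles NO) over an interval, using the fixed
    per-flash NO yield quoted in the abstract. The vertical spreading
    with Gaussian profiles is omitted here."""
    f = flash_rate(cloud_top_height_km, continental)
    return f * minutes * moles_no_per_flash
```

The steep H^4.9 dependence is why biases in simulated convective cloud-top height translate into large flash-density biases, motivating the constant tuning parameter mentioned above.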
Stochastic parameterization testing with NOAA's developmental Global Ensemble Forecast System
NASA Astrophysics Data System (ADS)
Hamill, Thomas M.
2017-04-01
In the next few years, the US National Weather Service will be switching the production of its global ensemble forecast system (GEFS) from the current spectrally based dynamical core to a finite-volume dynamical core (FV3). A suite of stochastic parameterizations, some developed at other centres, have been developed for the spectral and then adapted for the FV3 dynamical core. The stochastic parameterizations include the SPPT scheme developed at ECMWF and a stochastically perturbed boundary-layer humidity scheme (SHUM) developed within NOAA. The stochastic parameterizations appear more active in the FV3 developmental system with the same parameter settings used in the spectral-based system, and probabilistic skill scores are competitive with or better than with the old spectral core. This talk will review the particular implementation of the stochastic parameterizations in FV3, compare probabilistic forecasts between the old and new system, and discuss the underlying reasons for greater activity of stochastic parameterizations in FV3.
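SPPT, as developed at ECMWF, multiplies the net parameterized tendency by (1 + r), where r is a smooth random pattern evolved as an AR(1) process. The sketch below reduces the pattern to a scalar time series (the real scheme uses spectrally correlated 2D patterns; phi and sigma here are illustrative):

```python
import numpy as np

def sppt_perturb(tendency, r):
    """SPPT core operation: scale the net parameterized tendency by (1 + r)."""
    return (1.0 + r) * tendency

def ar1_pattern(n_steps, phi=0.95, sigma=0.1, seed=0):
    """Scalar AR(1) stand-in for the SPPT random pattern: correlated in
    time, zero-mean, with stationary standard deviation sigma."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n_steps)
    for t in range(1, n_steps):
        r[t] = phi * r[t - 1] + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal()
    return r
```

Because the multiplier rides on whatever tendency the physics produces, a dynamical core whose parameterizations yield different tendency magnitudes (as reported for FV3 above) will naturally show different stochastic activity at identical parameter settings.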
Parameterizing Size Distribution in Ice Clouds
DeSlover, Daniel; Mitchell, David L.
2009-09-25
An outstanding problem that contributes considerable uncertainty to Global Climate Model (GCM) predictions of future climate is the characterization of ice particle sizes in cirrus clouds. Recent parameterizations of ice cloud effective diameter differ by a factor of three, which, for overcast conditions, often translates to changes in outgoing longwave radiation (OLR) of 55 W m-2 or more. Much of this uncertainty in cirrus particle sizes is related to the problem of ice particle shattering during in situ sampling of the ice particle size distribution (PSD). Ice particles often shatter into many smaller ice fragments upon collision with the rim of the probe inlet tube. These small ice artifacts are counted as real ice crystals, resulting in anomalously high concentrations of small ice crystals (D < 100 µm) and underestimates of the mean and effective size of the PSD. Half of the cirrus cloud optical depth calculated from these in situ measurements can be due to this shattering phenomenon. Another challenge is the determination of ice and liquid water amounts in mixed-phase clouds. Mixed-phase clouds in the Arctic contain mostly liquid water, and the presence of ice is important for determining their lifecycle. Colder high clouds between -20 and -36 °C may also be mixed phase, but in this case their condensate is mostly ice with low levels of liquid water. Rather than affecting their lifecycle, the presence of liquid dramatically affects the cloud optical properties, which affects cloud-climate feedback processes in GCMs. This project has made advancements in solving both of these problems. Regarding the first problem, PSDs in ice clouds are uncertain due to the inability to reliably measure the concentrations of the smallest crystals (D < 100 µm), known as the “small mode”. Rather than using in situ probe measurements aboard aircraft, we employed a treatment of ice
Parameterization of Incident and Infragravity Swash Variance
NASA Astrophysics Data System (ADS)
Stockdon, H. F.; Holman, R. A.; Sallenger, A. H.
2002-12-01
By clearly defining the forcing and morphologic controls of swash variance in both the incident and infragravity frequency bands, we are able to derive a more complete parameterization for extreme runup that may be applicable to a wide range of beach and wave conditions. It is expected that the dynamics of the incident and infragravity bands will have different dependencies on offshore wave conditions and local beach slopes. For example, previous studies have shown that swash variance in the incident band depends on foreshore beach slope while the infragravity variance depends more on a weighted mean slope across the surf zone. Because the physics of each band is parameterized differently, the amount that each frequency band contributes to the total swash variance will vary from site to site and, often, at a single site as the profile configuration changes over time. Using water level time series (measured at the shoreline) collected during nine dynamically different field experiments, we test the expected behavior of both incident and infragravity swash and the contribution each makes to total variance. At the dissipative sites (Iribarren number ξ0 < 0.3) located in Oregon and the Netherlands, the incident band swash is saturated with respect to offshore wave height. Conversely, on the intermediate and reflective beaches, the amplitudes of both incident and infragravity swash variance grow with increasing offshore wave height. While infragravity band swash at all sites appears to increase linearly with offshore wave height, the magnitudes of the response are somewhat greater on reflective beaches than on dissipative beaches. This means that for the same offshore wave conditions the swash on a steeper foreshore will be larger than that on a more gently sloping foreshore. The potential control of the surf zone slope on infragravity band swash is examined at Duck, North Carolina, (0.3 < ξ0 < 4.0), where significant differences in the relationship between swash
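The Iribarren number used above to separate dissipative from intermediate/reflective conditions can be computed from the foreshore slope and offshore wave conditions. A sketch using the standard deep-water form; the 0.3 threshold is from the abstract, and the function names are illustrative:

```python
import math

def iribarren(beach_slope, H0, T, g=9.81):
    """Deep-water Iribarren number: xi0 = tan(beta) / sqrt(H0 / L0),
    with deep-water wavelength L0 = g * T**2 / (2 * pi)."""
    L0 = g * T ** 2 / (2.0 * math.pi)
    return beach_slope / math.sqrt(H0 / L0)

def swash_regime(xi0):
    """Classification used in the abstract: dissipative below 0.3."""
    return "dissipative" if xi0 < 0.3 else "intermediate/reflective"
```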
Stellar Atmospheric Parameterization Based on Deep Learning
NASA Astrophysics Data System (ADS)
Pan, Ru-yang; Li, Xiang-ru
2017-07-01
Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, whose node numbers are 3821, 500, 100, 50, and 1, respectively. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model, to automatically estimate three physical parameters: the effective temperature (Teff), surface gravitational acceleration (lg g), and metallicity [Fe/H]. The results show that the stacked-autoencoder deep neural network has a better accuracy for the estimation. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 K for Teff, 0.0058 dex for lg(Teff/K), 0.1706 dex for lg(g/(cm·s-2)), and 0.1294 dex for [Fe/H]; on the theoretical spectra, the MAEs are 15.34 K for Teff, 0.0011 dex for lg(Teff/K), 0.0214 dex for lg(g/(cm·s-2)), and 0.0121 dex for [Fe/H].
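A forward pass through a network with the layer sizes quoted above (3821-500-100-50-1) can be sketched with NumPy. The sigmoid activations and the random initialization are assumptions for illustration; the paper pretrains the weights with stacked autoencoders:

```python
import numpy as np

LAYER_SIZES = [3821, 500, 100, 50, 1]   # flux bins in, one parameter out

def init_weights(sizes, rng):
    """Random (untrained) weights; the paper initializes these via autoencoders."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, params):
    """Forward pass: sigmoid hidden layers, linear output for regression."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = 1.0 / (1.0 + np.exp(-x))
    return x
```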
Parameterized logarithmic framework for image enhancement.
Panetta, Karen; Agaian, Sos; Zhou, Yicong; Wharton, Eric J
2011-04-01
Image processing technologies such as image enhancement generally utilize linear arithmetic operations to manipulate images. Recently, Jourlin and Pinoli successfully used the logarithmic image processing (LIP) model for several applications of image processing such as image enhancement and segmentation. In this paper, we introduce a parameterized LIP (PLIP) model that spans both the linear arithmetic and LIP operations, and all scenarios in between, within a single unified model. We also introduce both frequency- and spatial-domain PLIP-based image enhancement methods, including PLIP Lee's algorithm, PLIP bihistogram equalization, and PLIP alpha rooting. Computer simulations and comparisons demonstrate that the new PLIP model allows the user to obtain improved enhancement performance by changing only the PLIP parameters, to yield better image fusion results by utilizing PLIP addition or image multiplication, to represent a larger span of cases than the LIP and linear arithmetic cases by changing parameters, and to utilize and illustrate the logarithmic exponential operation for image fusion and enhancement.
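Two of the operations in this family can be sketched as follows. Treat the exact forms here as assumptions: they follow commonly quoted LIP-style operations with the range constant replaced by a free parameter gamma, whereas the paper's full PLIP model has several parameters:

```python
def plip_add(g1, g2, gamma):
    """Parameterized LIP-style addition; as gamma -> infinity it tends to g1 + g2."""
    return g1 + g2 - g1 * g2 / gamma

def plip_scalar_mult(c, g, gamma):
    """Parameterized LIP-style multiplication of a gray tone g by a scalar c."""
    return gamma - gamma * (1.0 - g / gamma) ** c
```

With a very large gamma the operations reduce to ordinary linear arithmetic; with gamma tied to the gray-scale range they behave like the LIP model, which is the span-of-models idea the abstract describes.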
Parameterizations with and without Climate Process Teams
NASA Astrophysics Data System (ADS)
Fox-Kemper, B.
2016-12-01
I will contrast the science, the development process, and the applications behind four different parameterizations where I was involved. One, restratification by mixed layer eddies, was developed as part of a Climate Process Team for global models. A second, Langmuir turbulence, was developed through a series of collaborative funding awards (i.e., a self-organizing climate process team) also intent on improving global models. The third, symmetric instability, was developed without direct funding and finalized while on sabbatical. It is suited to submesoscale-permitting simulations. The fourth, a closure for forward potential enstrophy cascades, was begun as a byproduct of a climate process team and then spawned its own follow-on funding. It is appropriate when mesoscale eddies are well resolved, i.e., mesoscale ocean large eddy simulations. The degree of evaluation and depth of understanding differ with the amount of past work on and the difficulty of each problem, but also with the logistics of the collaboration.
Reaction Rate Parameterization for Nuclear Astrophysics Research
NASA Astrophysics Data System (ADS)
Scott, J. P.; Lingerfelt, E. J.; Smith, M. S.; Hix, W. R.; Bardayan, D. W.; Sharp, J. E.; Kozub, R. L.; Meyer, R. A.
2004-11-01
Libraries of thermonuclear reaction rates are used in element synthesis models of a wide variety of astrophysical phenomena, such as exploding stars and the inner workings of our sun. These computationally demanding models are more efficient when the libraries, which may contain over 60,000 rates whose values vary by 20 orders of magnitude, have a uniform parameterization for all rates. We have developed an on-line tool, hosted at www.nucastrodata.org, to obtain REACLIB parameters (F.-K. Thielemann et al., Adv. Nucl. Astrophysics 525, 1 (1987)) that represent reaction rates as a function of temperature. This helps to rapidly incorporate the latest nuclear physics results in astrophysics models. The tool uses numerous techniques and algorithms in a modular fashion to improve the quality of the fits to the rates. Features, modules, and additional applications of this tool will be discussed. * Managed by UT-Battelle, LLC, for the U.S. D.O.E. under contract DE-AC05-00OR22725 + Supported by U.S. D.O.E. under Grant No. DE-FG02-96ER40955
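The uniform REACLIB parameterization referenced above is the standard seven-parameter fit of a rate as a function of temperature, which can be sketched directly:

```python
import math

def reaclib_rate(a, T9):
    """Seven-parameter REACLIB fit of a thermonuclear rate at temperature
    T9 (units of 10**9 K):
    exp(a0 + a1/T9 + a2*T9**(-1/3) + a3*T9**(1/3) + a4*T9 + a5*T9**(5/3) + a6*ln T9)."""
    return math.exp(a[0] + a[1] / T9 + a[2] * T9 ** (-1.0 / 3.0)
                    + a[3] * T9 ** (1.0 / 3.0) + a[4] * T9
                    + a[5] * T9 ** (5.0 / 3.0) + a[6] * math.log(T9))
```

Fitting a rate then means choosing the seven coefficients a0..a6 so this expression tracks the tabulated rate over the temperature range of interest.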
Parameterizing loop fusion for automated empirical tuning
Zhao, Y; Yi, Q; Kennedy, K; Quinlan, D; Vuduc, R
2005-12-15
Traditional compilers are limited in their ability to optimize applications for different architectures because statically modeling the effect of specific optimizations on different hardware implementations is difficult. Recent research has been addressing this issue through the use of empirical tuning, which uses trial executions to determine the optimization parameters that are most effective on a particular hardware platform. In this paper, we investigate empirical tuning of loop fusion, an important transformation for optimizing a significant class of real-world applications. In spite of its usefulness, fusion has attracted little attention from previous empirical tuning research, partially because it is much harder to configure than transformations like loop blocking and unrolling. This paper presents novel compiler techniques that extend conventional fusion algorithms to parameterize their output when optimizing a computation, thus allowing the compiler to formulate the entire configuration space for loop fusion using a sequence of integer parameters. The compiler can then employ an external empirical search engine to find the optimal operating point within the space of legal fusion configurations and generate the final optimized code using a simple code transformation system. We have implemented our approach within our compiler infrastructure and conducted preliminary experiments using a simple empirical search strategy. Our results convey new insights on the interaction of loop fusion with limited hardware resources, such as available registers, while confirming conventional wisdom about the effectiveness of loop fusion in improving application performance.
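The integer-parameter view of the fusion configuration space described above can be illustrated with a toy mapping from a parameter vector to a fusion partition. This is an illustration of the idea only, not the paper's compiler representation:

```python
def fusion_groups(config):
    """Map an integer parameter vector to a fusion partition: config[i] is the
    group id of loop i, and loops sharing a group id are fused together."""
    groups = {}
    for loop, gid in enumerate(config):
        groups.setdefault(gid, []).append(loop)
    return list(groups.values())
```

An empirical search engine can then enumerate or sample such vectors, generate and time the code variant for each legal partition, and keep the best-performing configuration.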
Carbody structural lightweighting based on implicit parameterized model
NASA Astrophysics Data System (ADS)
Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen
2014-05-01
Most recent research on carbody lightweighting has focused on material substitution and new processing technologies rather than structures. However, new materials and processing techniques inevitably lead to higher costs. Also, material substitution and processing lightweighting have to be realized through body structural profiles and locations. In the conventional workflow of lightweighting optimization, model modifications involve heavy manual work and always lead to a large number of iterative calculations. As a new technique in carbody lightweighting, implicit parameterization is used in this paper to optimize the carbody structure and improve the material utilization rate. Implicit parameterized structural modeling enables automatic modification and rapid multidisciplinary design optimization (MDO) of the carbody structure, which is impossible with the traditional, unparameterized finite element method (FEM). The SFE parameterized structural model is built in accordance with the car structural FE model at the concept development stage, and it is validated against structural performance data. The validated SFE parameterized structural model can then be used to rapidly and automatically generate FE models and evaluate different groups of design variables in the integrated MDO loop. The lightweighting of the body-in-white (BIW) after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes an integrated method of implicit parameterized modeling and MDO, which has obvious practical advantages and industrial significance for carbody structural lightweighting design.
Brain surface conformal parameterization using Riemann surface structure.
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M; Chan, Tony F; Toga, Arthur W; Thompson, Paul M; Yau, Shing-Tung
2007-06-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks.
A Gaussian-product stochastic Gent-McWilliams parameterization
NASA Astrophysics Data System (ADS)
Grooms, I.
2016-12-01
The locally-averaged horizontal buoyancy flux by mesoscale eddies is computed from eddy-resolving quasigeostrophic simulations of ocean-mesoscale eddy dynamics. This flux has a very non-Gaussian distribution peaked at zero, not at the mean value. This non-Gaussian flux distribution arises because the flux is a product of zero-mean random variables: the eddy velocity and buoyancy. A framework for stochastic Gent-McWilliams (GM) parameterization based around stochastic parameterization of the horizontal subgrid-scale density flux is presented. Gaussian random field models for subgrid-scale velocity and buoyancy are developed. The product of these Gaussian random fields is used to construct a non-Gaussian stochastic parameterization of the horizontal subgrid-scale density flux, which leads to a non-Gaussian stochastic GM parameterization. This new parameterization is tested in an idealized box ocean model, and compared to a Gaussian approach that simply multiplies the deterministic GM parameterization by a Gaussian random field. The non-Gaussian approach has a significant impact on both the mean and variability of the simulations, more so than the Gaussian approach; for example, the non-Gaussian simulation has a much larger net kinetic energy and a stronger overturning circulation than a comparable Gaussian simulation. Future directions for development of the stochastic GM parameterization and extensions of the Gaussian-product approach are discussed.
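The key statistical point above, that a product of zero-mean Gaussian fields is peaked at zero and heavy-tailed rather than Gaussian, can be checked with a few lines of NumPy. This is illustrative only; the paper's random fields are spatially correlated, whereas the samples here are independent:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(200_000)  # stand-in for subgrid-scale velocity
b = rng.standard_normal(200_000)  # stand-in for subgrid-scale buoyancy
flux = u * b                      # Gaussian-product "eddy flux"

# Excess kurtosis: 0 for a Gaussian, 6 for a product of independent
# standard normals, so the product is strongly non-Gaussian.
excess_kurtosis = ((flux - flux.mean()) ** 4).mean() / flux.var() ** 2 - 3.0
```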
'Arm-based' parameterization for network meta-analysis.
Hawkins, Neil; Scott, David A; Woods, Beth
2016-09-01
We present an alternative to the contrast-based parameterization used in a number of publications for network meta-analysis. This alternative "arm-based" parameterization offers a number of advantages: it allows for a "long" normalized data structure that remains constant regardless of the number of comparators; it can be used to directly incorporate individual patient data into the analysis; the incorporation of multi-arm trials is straightforward and avoids the need to generate a multivariate distribution describing treatment effects; there is a direct mapping between the parameterization and the analysis script in languages such as WinBUGS; and finally, the arm-based parameterization allows simple extension to treatment-specific random treatment effect variances. We validated the parameterization using a published smoking cessation dataset. Network meta-analysis using arm- and contrast-based parameterizations produced comparable results (means and standard deviations within ±0.01) for both fixed and random effects models. We recommend that analysts consider using arm-based parameterization when carrying out network meta-analyses. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
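The interfacing idea, sampling an assumed subgrid PDF and averaging a local process rate over the samples, can be sketched with a toy Gaussian PDF and a linear condensation rate. The actual scheme's multivariate PDF and microphysics are far richer; everything named here is illustrative:

```python
import numpy as np

def subgrid_process_rate(mean_qt, sigma_qt, qsat, rate_fn, n=10_000, rng=None):
    """Grid-mean process rate estimated by Monte Carlo sampling of an assumed
    (here Gaussian) subgrid PDF of total water qt."""
    rng = np.random.default_rng(0) if rng is None else rng
    qt = rng.normal(mean_qt, sigma_qt, n)
    condensate = np.maximum(qt - qsat, 0.0)  # liquid only where saturated
    return rate_fn(condensate).mean()
```

Even when the grid mean is exactly at saturation, the sampled variability yields a nonzero mean condensate, which a scheme evaluated only at the grid-mean state would miss.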
Review of PAR parameterizations in ocean ecosystem models
NASA Astrophysics Data System (ADS)
Byun, Do-Seong; Wang, Xiao Hua; Hart, Deirdre E.; Zavatarelli, Marco
2014-12-01
Commonly-used empirical equations for calculating downward 'photosynthetically available radiation' or PAR were reviewed in order to identify a more theoretically-sound parameterization for application to ocean biogeochemical models. Three different forms of broadband PAR parameterization are currently employed in biogeochemical models, each of them originating from the downward irradiance formulations normally applied to ocean circulation models, which produce poor attenuation estimates for PAR. Two of the PAR formulations, a single-exponential function and a double-exponential function, are parameterized by multiplying surface irradiance by a coefficient determining the portion of underwater PAR. The third formulation uses the second term of the double-exponential function. After elucidating the theoretical problems of modeling PAR using these parameterizations, we suggest an improved, R-modified double-exponential PAR formulation, including Paulson and Simpson's (1977) parameter values. We also newly estimate PAR penetration via least-squares fitting of values digitized from Jerlov's (1976) observations in different oceanic water types, and compare this PAR-observation derived parameterization with our new, theoretical, R-modified parameterization. Finally, we discuss a universal limitation inherent in current theoretical approaches to PAR parameterization.
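The double-exponential irradiance form discussed above, and the second-term PAR approximation, can be sketched as follows. The Jerlov water type I constants (R = 0.58, ζ1 = 0.35 m, ζ2 = 23 m) are assumed values attributed to Paulson and Simpson (1977); function names are illustrative:

```python
import math

# Assumed Paulson and Simpson (1977) Jerlov water type I constants
R, ZETA1, ZETA2 = 0.58, 0.35, 23.0

def downward_irradiance(I0, z):
    """Double-exponential profile: a fast-decaying red/infrared term plus a
    slowly attenuating blue-green term, for depth z in metres."""
    return I0 * (R * math.exp(-z / ZETA1) + (1.0 - R) * math.exp(-z / ZETA2))

def par_second_term(I0, z):
    """PAR approximated by the second (slowly attenuating) term alone."""
    return I0 * (1.0 - R) * math.exp(-z / ZETA2)
```

Below the first few metres the first term is negligible, which is why the second term alone is used as a PAR proxy in the third class of formulations reviewed above.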
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
Dynamically consistent parameterization of mesoscale eddies. Part I: Simple model
NASA Astrophysics Data System (ADS)
Berloff, Pavel
2015-03-01
This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with an explicitly resolved, vigorous eddy field and in the non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization locally approximates the transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced cumulative eddy forcing exerted on the large-scale flow. We find that the spatial pattern and amplitude of the footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
Brain Surface Conformal Parameterization Using Riemann Surface Structure
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung
2011-01-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
Progress on wave-ice interactions: satellite observations and model parameterizations
NASA Astrophysics Data System (ADS)
Ardhuin, Fabrice; Boutin, Guillaume; Dumont, Dany; Stopa, Justin; Girard-Ardhuin, Fanny; Accensi, Mickael
2017-04-01
In the open ocean, numerical wave models have their largest errors near sea ice and, until recently, virtually no wave data were available in the sea ice. Further, wave-ice interaction processes may play an important role in the Earth system. In particular, waves may break up an ice layer into floes, with significant impact on air-sea fluxes. With thinner Arctic ice, this process may contribute to the growing similarity between Arctic and Antarctic sea ice. In return, the ice has a strong damping impact on the waves that is highly variable and not understood. Here we report progress on parameterizations of waves interacting with a single ice layer, as implemented in the WAVEWATCH III model (WW3 Development Group, 2016), based on few in situ observations but extensive data derived from Synthetic Aperture Radars (SARs). Our parameterizations combine three processes. First, a parameterization for the energy-conserving scattering of waves by ice floes (assuming isotropic back-scatter), which has very little effect on dominant waves of periods larger than 7 s, consistent with the observed narrow directional spectra and short travel times. Second, we implemented a basal friction below the ice layer (Stopa et al., The Cryosphere, 2016). Third, we use a secondary creep associated with ice flexure (Cole et al., 1998) adapted to random waves. These three processes (scattering, friction and creep) are strongly dependent on the maximum floe size. We have thus included an estimation of the potential floe size based on an ice-flexure failure estimation adapted from Williams et al. (2013). This combination of dissipation and scattering is tested against measured patterns of wave height and directional spreading, and evidence of ice break-up, all obtained from SAR imagery (Ardhuin et al., 2017), and some in situ data (Collins et al., 2015). The combination of creep and friction is required to reproduce a strong reduction in wave attenuation in broken ice as observed by Collins
Climate and the equilibrium state of land surface hydrology parameterizations
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Eagleson, Peter S.
1991-01-01
For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation for such climatic conditions serves as a measure of the land surface parameterization state under a given forcing. The equilibrium value of this variable for alternate parameterizations of land surface hydrology is determined as a function of climate, and the sensitivity of the surface to shifts and changes in climatic forcing is estimated.
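The equilibrium described above can be made concrete with a toy "bucket" scheme in which evaporation is linear in relative saturation. This is illustrative only, not one of the specific GCM parameterizations compared in the paper:

```python
def equilibrium_saturation(P, Ep):
    """Equilibrium relative soil saturation s* for a linear bucket scheme with
    evaporation E = s * Ep: the balance P = s* * Ep, capped at saturation."""
    return min(P / Ep, 1.0)

def spin_up(P, Ep, s0=0.2, dt=0.01, steps=20_000):
    """Integrate ds/dt = P - s * Ep to equilibrium, keeping s in [0, 1]."""
    s = s0
    for _ in range(steps):
        s = min(max(s + dt * (P - s * Ep), 0.0), 1.0)
    return s
```

Whatever the initial storage, the integration relaxes to the same climatic equilibrium, which is the balancing behavior the abstract describes.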
... taken out at the same time as the tonsils ( tonsillectomy ). Adenoid removal is also called adenoidectomy. The procedure is most often done in children. ... can be removed again if necessary. Alternative Names Adenoidectomy; Removal of ... Instructions Tonsil and adenoid removal - discharge Tonsil removal - what to ...
Brydegaard, Mikkel
2015-01-01
In recent years, the field of remote sensing of birds and insects in the atmosphere (the aerial fauna) has advanced considerably, and modern electro-optic methods now allow the assessment of the abundance and fluxes of pests and beneficials on a landscape scale. These techniques have the potential to significantly increase our understanding of, and ability to quantify and manage, the ecological environment. This paper presents a concept whereby laser radar observations of atmospheric fauna can be parameterized and table values for absolute cross sections can be catalogued to allow for the study of focal species such as disease vectors and pests. Wing-beat oscillations are parameterized with a discrete set of harmonics and the spherical scatter function is parameterized by a reduced set of symmetrical spherical harmonics. A first order spherical model for insect scatter is presented and supported experimentally, showing angular dependence of wing beat harmonic content. The presented method promises to give insights into the flight heading directions of species in the atmosphere and has the potential to shed light onto the km-range spread of pests and disease vectors. PMID:26295706
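The harmonic parameterization of wing-beat oscillations described above can be sketched as a DFT-based estimate of the first few harmonic amplitudes. The fundamental frequency is assumed known here, and the function names are illustrative:

```python
import numpy as np

def harmonic_amplitudes(signal, fs, f0, n_harmonics=3):
    """Amplitudes of the first n harmonics of a wing-beat time series, read
    off the discrete Fourier transform at integer multiples of f0."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return [spectrum[np.argmin(np.abs(freqs - k * f0))]
            for k in range(1, n_harmonics + 1)]
```

The resulting discrete set of amplitudes is the kind of compact descriptor that could be catalogued per species alongside the scattering cross-section parameters.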
Parameterizing Stellar Spectra Using Deep Neural Networks
NASA Astrophysics Data System (ADS)
Li, Xiang-Ru; Pan, Ru-Yang; Duan, Fu-Qing
2017-03-01
Large-scale sky surveys are observing massive amounts of stellar spectra. The large number of stellar spectra makes it necessary to automatically parameterize spectral data, which in turn helps in statistically exploring properties related to the atmospheric parameters. This work focuses on designing an automatic scheme to estimate effective temperature (Teff), surface gravity (log g) and metallicity [Fe/H] from stellar spectra. A scheme based on three deep neural networks (DNNs) is proposed. This scheme consists of the following three procedures: first, the configuration of a DNN is initialized using a series of autoencoder neural networks; second, the DNN is fine-tuned using a gradient descent scheme; third, the three atmospheric parameters Teff, log g and [Fe/H] are estimated using the computed DNNs. The constructed DNN is a neural network with six layers (one input layer, one output layer and four hidden layers), for which the numbers of nodes in the six layers are 3821, 1000, 500, 100, 30 and 1, respectively. This proposed scheme was tested on both real spectra and theoretical spectra from Kurucz's new opacity distribution function models. Test errors are measured with mean absolute errors (MAEs). The errors on real spectra from the Sloan Digital Sky Survey (SDSS) are 0.1477, 0.0048 and 0.1129 dex for log g, log Teff and [Fe/H] (64.85 K for Teff), respectively. Regarding theoretical spectra from Kurucz's new opacity distribution function models, the MAEs of the test errors are 0.0182, 0.0011 and 0.0112 dex for log g, log Teff and [Fe/H] (14.90 K for Teff), respectively.
Parameterization of cloud glaciation by atmospheric dust
NASA Astrophysics Data System (ADS)
Nickovic, Slobodan; Cvetkovic, Bojan; Madonna, Fabio; Pejanovic, Goran; Petkovic, Slavko
2016-04-01
The exponential growth of research interest in ice nucleation (IN) is motivated, inter alia, by the need to improve the generally unsatisfactory representation of cold cloud formation in atmospheric models, and thereby to increase the accuracy of weather and climate predictions, including better forecasting of precipitation. Research shows that mineral dust contributes significantly to cloud ice nucleation. Samples of residual particles in cloud ice crystals collected by aircraft measurements performed in the upper troposphere of regions distant from desert sources indicate that dust particles dominate over other known ice nuclei such as soot and biological particles. In the nucleation process, dust chemical aging had minor effects. The observational evidence on IN processes has improved substantially over the last decade and clearly shows a significant correlation between IN concentrations and the concentrations of coarser aerosol at a given temperature and moisture. Most recently, owing to recognition of the dominant role of dust as ice nuclei, parameterizations for immersion and deposition icing specifically due to dust have been developed. Based on these achievements, we have developed a real-time coupled atmosphere-dust forecasting system capable of operationally predicting the occurrence of cold clouds generated by dust. We have thoroughly validated model simulations against available remote sensing observations. We used the CNR-IMAA Potenza lidar and cloud radar observations to explore the model's capability to represent vertical features of the cloud and aerosol profiles. We also used MSG-SEVIRI and MODIS satellite data to examine the accuracy of the simulated horizontal distribution of cold clouds. Based on the encouraging verification scores obtained, experimental operational prediction of ice clouds nucleated by dust has been introduced at the Serbian Hydrometeorological Service as a publicly available product.
Rich parameterization improves RNA structure prediction.
Zakov, Shay; Goldberg, Yoav; Elhadad, Michael; Ziv-Ukelson, Michal
2011-11-01
Current approaches to RNA structure prediction range from physics-based methods, which rely on thousands of experimentally measured thermodynamic parameters, to machine-learning (ML) techniques. While the methods for parameter estimation are successfully shifting toward ML-based approaches, the model parameterizations so far remained fairly constant. We study the potential contribution of increasing the amount of information utilized by RNA folding prediction models to the improvement of their prediction quality. This is achieved by proposing novel models, which refine previous ones by examining more types of structural elements, and larger sequential contexts for these elements. Our proposed fine-grained models are made practical thanks to the availability of large training sets, advances in machine-learning, and recent accelerations to RNA folding algorithms. We show that the application of more detailed models indeed improves prediction quality, while the corresponding running time of the folding algorithm remains fast. An additional important outcome of this experiment is a new RNA folding prediction model (coupled with a freely available implementation), which results in a significantly higher prediction quality than that of previous models. This final model has about 70,000 free parameters, several orders of magnitude more than previous models. Being trained and tested over the same comprehensive data sets, our model achieves a score of 84% according to the F₁-measure over correctly-predicted base-pairs (i.e., 16% error rate), compared to the previously best reported score of 70% (i.e., 30% error rate). That is, the new model yields an error reduction of about 50%. Trained models and source code are available at www.cs.bgu.ac.il/~negevcb/contextfold.
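The reported scores imply the stated error reduction directly; as a one-line sanity check (the function name is ours, not from the paper):

```python
def error_reduction(f1_old, f1_new):
    """Relative reduction in error rate (1 - F1) between two models."""
    return 1.0 - (1.0 - f1_new) / (1.0 - f1_old)

# Figures from the abstract: previous best F1 = 0.70 (30% error),
# new model F1 = 0.84 (16% error).
red = error_reduction(0.70, 0.84)
print(round(red, 3))  # → 0.467, i.e. "about 50%" fewer errors
```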
NASA Astrophysics Data System (ADS)
Ramaswamy, V.; Freidenreich, S. M.
1992-07-01
Reference radiative transfer solutions in the near-infrared spectrum, which account for the spectral absorption characteristics of the water vapor molecule and the absorbing-scattering features of water drops, are employed to investigate and develop broadband treatments of solar water vapor absorption and cloud radiative effects. The conceptually simple and widely used Lacis-Hansen parameterization for solar water vapor absorption is modified so as to yield excellent agreement in the clear sky heating rates. The problem of single cloud decks over a nonreflecting surface is used to highlight the factors involved in the development of broadband overcast sky parameterizations. Three factors warrant considerable attention: (1) the manner in which the spectrally dependent drop single-scattering values are used to obtain the broadband cloud radiative properties, (2) the effect of the spectral attenuation by the vapor above the cloud on the determination of the broadband drop reflection and transmission, and (3) the broadband treatment of the spectrally dependent absorption due to drops and vapor inside the cloud. The solar flux convergence in clouds is very sensitive to all these considerations. Ignoring effect 2 tends to overestimate the cloud heating, particularly for low clouds, while a poor treatment of effect 3 leads to an underestimate. A new parameterization that accounts for the aforementioned considerations is accurate to within ˜30% over a wide range of overcast sky conditions, including solar zenith angles and cloud characteristics (altitudes, drop models, optical depths, and geometrical thicknesses), with the largest inaccuracies occurring for geometrically thick, extended cloud systems containing large amounts of vapor. Broadband methods that treat improperly one or more of the above considerations can yield substantially higher errors (>35%) for some overcast sky conditions while having better agreements over limited portions of the parameter range. For
Parameterization of cirrus optical depth and cloud fraction
Soden, B.
1995-09-01
This research illustrates the utility of combining satellite observations and operational analysis for the evaluation of parameterizations. A parameterization based on ice water path (IWP) captures the observed spatial patterns of tropical cirrus optical depth. The strong temperature dependence of cirrus ice water path in both the observations and the parameterization is probably responsible for the good correlation where it exists. Poorer agreement is found in Southern Hemisphere mid-latitudes where the temperature dependence breaks down. Uncertainties in effective radius limit quantitative validation of the parameterization (and its inclusion into GCMs). Also, it is found that monthly mean cloud cover can be predicted within an RMS error of 10% using ECMWF relative humidity corrected by TOVS Upper Troposphere Humidity. 1 ref., 2 figs.
Some applications of parameterized Picard-Vessiot theory
NASA Astrophysics Data System (ADS)
Mitschi, C.
2016-02-01
This is an expository article describing some applications of parameterized Picard-Vessiot theory. This Galois theory for parameterized linear differential equations was Cassidy and Singer's contribution to an earlier volume dedicated to the memory of Andrey Bolibrukh. The main results we present here were obtained for families of ordinary differential equations with parameterized regular singularities in joint work with Singer. They include parametric versions of Schlesinger's theorem and of the weak Riemann-Hilbert problem as well as an algebraic characterization of a special type of monodromy evolving deformations illustrated by the classical Darboux-Halphen equation. Some of these results have recently been applied by different authors to solve the inverse problem of parameterized Picard-Vessiot theory, and were also generalized to irregular singularities. We sketch some of these results by other authors. The paper includes a brief history of the Darboux-Halphen equation as well as an appendix on differentially closed fields.
Anisotropic Shear Dispersion Parameterization for Mesoscale Eddy Transport
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Fox-Kemper, B.
2016-02-01
The effects of mesoscale eddies are universally treated isotropically in general circulation models. However, the processes that the parameterization approximates, such as shear dispersion, typically have strongly anisotropic characteristics. The Gent-McWilliams/Redi mesoscale eddy parameterization is extended for anisotropy and tested using 1-degree Community Earth System Model (CESM) simulations. The sensitivity of the model to anisotropy includes a reduction of temperature and salinity biases, a deepening of the southern ocean mixed-layer depth, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. The parameterization is further extended to include the effects of unresolved shear dispersion, which sets the strength and direction of anisotropy. The shear dispersion parameterization is similar to drifter observations in spatial distribution of diffusivity and high-resolution model diagnosis in the distribution of eddy flux orientation.
A simple lightning parameterization for calculating global lightning distributions
NASA Technical Reports Server (NTRS)
Price, Colin; Rind, David
1992-01-01
A simple parameterization has been developed to simulate global lightning distributions. Convective cloud top height is used as the variable in the parameterization, with different formulations for continental and marine thunderstorms. The parameterization has been validated using two lightning data sets: one global and one regional. In both cases the simulated lightning distributions and frequencies are in very good agreement with the observed lightning data. This parameterization could be used for global studies of lightning climatology; the earth's electric circuit; in general circulation models for modeling global lightning activity, atmospheric NO(x) concentrations, and perhaps forest fire distributions for both the present and future climate; and, possibly, even as a short-term forecasting aid.
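The parameterization itself is compact: flash frequency is a power law in convective cloud-top height, with separate continental and marine fits. A sketch of the commonly quoted form follows; the constants are the Price-Rind values as usually cited in the literature and should be verified against the original paper before quantitative use:

```python
def flash_frequency(cloud_top_km, continental=True):
    """Lightning flash frequency (flashes per minute) as a power law in
    convective cloud-top height H (km). Constants are the commonly quoted
    Price-Rind (1992) values; verify against the original before use."""
    if continental:
        return 3.44e-5 * cloud_top_km ** 4.9
    return 6.40e-4 * cloud_top_km ** 1.73

# The steep continental exponent means a deep continental storm flashes
# far more often than a marine storm with the same cloud-top height:
print(flash_frequency(12.0), flash_frequency(12.0, continental=False))
```

The contrast in exponents (4.9 vs 1.73) encodes the observed dominance of lightning over land, which is why the abstract stresses separate formulations for continental and marine thunderstorms.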
Parameterization of Frontal Symmetric Instabilities. I: Theory for Resolved Fronts
NASA Astrophysics Data System (ADS)
Bachman, S. D.; Fox-Kemper, B.; Taylor, J. R.; Thomas, L. N.
2017-01-01
A parameterization is proposed for the effects of symmetric instability (SI) on a resolved front. The parameterization is dependent on external forcing by surface buoyancy loss and/or down-front winds, which reduce potential vorticity (PV) and lead to conditions favorable for SI. The parameterization consists of three parts. The first part is a specification for the vertical eddy viscosity, which is derived from a specified ageostrophic circulation resulting from the balance of the Coriolis force and a Reynolds momentum flux (a turbulent Ekman balance), with a previously proposed vertical structure function for the geostrophic shear production. The vertical structure of the eddy viscosity is constructed to extract the mean kinetic energy of the front at a rate consistent with resolved SI. The second part of the parameterization represents a near-surface convective layer whose depth is determined by a previously proposed polynomial equation. The third part of the parameterization represents diffusive tracer mixing through small-scale shear instabilities and SI. The diabatic, vertical component of this diffusivity is set to be proportional to the eddy viscosity using a turbulent Prandtl number, and the along-isopycnal tracer mixing is represented by an anisotropic diffusivity tensor. Preliminary testing of the parameterization using a set of idealized models shows that the extraction of total energy of the front is consistent with that from SI-resolving LES, while yielding mixed layer stratification, momentum, and potential vorticity profiles that compare favorably to those from an extant boundary layer parameterization (Large et al., 1994). The new parameterization is also shown to improve the vertical mixing of a passive tracer in the LES.
Mcfast, a Parameterized Fast Monte Carlo for Detector Studies
NASA Astrophysics Data System (ADS)
Boehnlein, Amber S.
McFast is a modularized and parameterized fast Monte Carlo program designed to generate physics analysis information for different detector configurations and subdetector designs. McFast is based on simple geometrical object definitions and includes hit generation, parameterized track generation, vertexing, a muon system, electromagnetic calorimetry, and a trigger framework for physics studies. Auxiliary tools include a geometry editor, visualization, and an I/O system.
Numerical Testing of Parameterization Schemes for Solving Parameter Estimation Problems
2008-12-01
NUMERICAL TESTING OF PARAMETERIZATION SCHEMES FOR SOLVING PARAMETER ESTIMATION PROBLEMS. L. Velázquez*, M. Argáez and C. Quintero. The ... performance computing (HPC). 1. INTRODUCTION. In this paper we present the numerical performance of three parameterization approaches (SVD, ... wavelets, and the combination of wavelet-SVD) for solving automated parameter estimation problems based on the SPSA described in previous reports of this
A Gaussian-product stochastic Gent-McWilliams parameterization
NASA Astrophysics Data System (ADS)
Grooms, Ian
2016-10-01
The locally-averaged horizontal buoyancy flux by mesoscale eddies is computed from eddy-resolving quasigeostrophic simulations of ocean-mesoscale eddy dynamics. This flux has a very non-Gaussian distribution peaked at zero, not at the mean value. This non-Gaussian flux distribution arises because the flux is a product of zero-mean random variables: the eddy velocity and buoyancy. A framework for stochastic Gent-McWilliams (GM) parameterization is presented. Gaussian random field models for subgrid-scale velocity and buoyancy are developed. The product of these Gaussian random fields is used to construct a non-Gaussian stochastic parameterization of the horizontal subgrid-scale density flux, which leads to a non-Gaussian stochastic GM parameterization. This new non-Gaussian stochastic GM parameterization is tested in an idealized box ocean model, and compared to a Gaussian approach that simply multiplies the deterministic GM parameterization by a Gaussian random field. The non-Gaussian approach has a significant impact on both the mean and variability of the simulations, more so than the Gaussian approach; for example, the non-Gaussian simulation has a much larger net kinetic energy and a stronger overturning circulation than a comparable Gaussian simulation. Future directions for development of the stochastic GM parameterization and extensions of the Gaussian-product approach are discussed.
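The key statistical point, that a product of zero-mean Gaussian variables is sharply peaked at zero and heavy-tailed rather than Gaussian, is easy to illustrate numerically. The toy below uses independent samples, not the spatially correlated random fields of the actual parameterization:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
u = rng.normal(0.0, 1.0, n)   # stand-in for subgrid eddy velocity
b = rng.normal(0.0, 1.0, n)   # stand-in for subgrid buoyancy
flux = u * b                  # product of zero-mean Gaussians

# The product's distribution peaks at zero (its mean) but is heavy-tailed:
# the theoretical excess kurtosis for independent standard normals is 6,
# versus 0 for a Gaussian.
kurt = np.mean(flux**4) / np.mean(flux**2) ** 2 - 3.0
print(round(kurt, 1))
```

The sample excess kurtosis comes out near the theoretical value of 6, which is why simply multiplying a deterministic GM flux by one Gaussian field (the comparison approach in the abstract) cannot reproduce the observed flux statistics.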
A framework for parameterization of heterogeneous ocean convection
NASA Astrophysics Data System (ADS)
Ilıcak, Mehmet; Adcroft, Alistair J.; Legg, Sonya
2014-10-01
We propose a new framework for parameterization of ocean convection processes. The new framework is termed “patchy convection” since our aim is to represent the heterogeneity of mixing processes that take place within the horizontal scope of a grid cell. We focus on applying this new scheme to represent the effect of pre-conditioning for deep convection by subgrid scale eddy variability. The new parameterization separates the grid-cell into two regions of different stratification, applies convective mixing separately to each region, and then recombines the density profile to produce the grid-cell mean density profile. The scheme depends on two parameters: the areal fraction of the vertically-mixed region within the horizontal grid cell, and the density difference between the mean and the unstratified profiles at the surface. We parameterize this density difference in terms of an unresolved eddy kinetic energy. We illustrate the patchy parameterization using a 1D idealized convection case before evaluating the scheme in two different global ocean-ice simulations with prescribed atmospheric forcing; (i) diagnosed eddy velocity field applied only in the Labrador Sea (ii) diagnosed global eddy velocity field. The global simulation results indicate that the patchy convection scheme improves the warm biases in the deep Atlantic Ocean and Southern Ocean. This proof-of-concept study is a first step in developing the patchy parameterization scheme, which will be extended in future to use a prognostic eddy field as well as to parameterize convection due to under-ice brine rejection.
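The recombination step described above can be sketched in a few lines. This is a minimal illustration under strong simplifying assumptions (a five-level column, full homogenization of the convecting patch, a prescribed surface density excess); the function and variable names are hypothetical, and the real scheme's construction of the unstratified profile differs in detail:

```python
import numpy as np

def patchy_convection(rho_mean, frac_mixed, delta_rho_surf):
    """Sketch of 'patchy convection': split the grid cell into a stratified
    region and a vertically mixed patch covering `frac_mixed` of its area,
    mix the patch, then recombine by area weighting."""
    rho_patch = rho_mean.copy()
    rho_patch[0] += delta_rho_surf      # pre-conditioned (denser) surface water
    rho_patch[:] = rho_patch.mean()     # convective mixing homogenizes the patch
    # Area-weighted recombination into the grid-cell mean profile:
    return (1.0 - frac_mixed) * rho_mean + frac_mixed * rho_patch

rho = np.linspace(1026.0, 1028.0, 5)    # stably stratified column (kg/m^3)
combined = patchy_convection(rho, frac_mixed=0.2, delta_rho_surf=0.5)
print(combined)
```

Only the patch convects, so the grid-cell mean stays stratified, denser near the surface and lighter at depth than before, rather than being fully homogenized as a whole-cell convective adjustment would do.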
Modelling Submesoscale Dynamics: A New Parameterization for Symmetric Instability
NASA Astrophysics Data System (ADS)
Bachman, S.; Thomas, L. N.; Taylor, J. R.; Fox-Kemper, B.
2016-02-01
Next-generation ocean models are expected to routinely resolve dynamics at 1/4 degree or smaller, offering new challenges in modelling subgridscale physics. These models are entering a regime where the unresolved turbulence is less constrained by planetary rotation, requiring a paradigm shift in the way modellers construct turbulence closures. Of particular importance is the representation of submesoscale turbulence, occupying O(1-10) km scales, which plays a leading role in setting the stratification of the surface mixed layer and mediating air-sea fluxes. This talk will introduce the submesoscale parameterization problem by presenting a few extant parameterizations, and will focus on a special type of fluid instability for which no parameterization has previously been developed: symmetric instability (SI). The theory and dynamics of SI will be discussed, from which a new parameterization will be proposed. This parameterization is dependent on external forcing by either surface buoyancy loss or down-front winds, which reduce potential vorticity (PV) and lead to conditions favorable for SI. Preliminary testing of the parameterization using a set of idealized models shows that the induced vertical fluxes of passive tracers and momentum are consistent with those from SI-resolving Large Eddy Simulations.
Parameterization of Cloud Droplet Formation in Global Climate Models
NASA Technical Reports Server (NTRS)
Nenes, A.; Seinfeld, J.H.
2003-01-01
An aerosol activation parameterization has been developed based on a generalized representation of aerosol size and composition within the framework of an ascending adiabatic parcel; this allows for parameterizing the activation of chemically complex aerosol with an arbitrary size distribution and mixing state. The new parameterization introduces the concept of "population splitting", in which the cloud condensation nuclei (CCN) that form droplets are treated as two separate populations: those that have a size close to their critical diameter and those that do not. Explicit consideration of kinetic limitations of droplet growth is introduced. Our treatment of the activation process unravels much of its complexity. As a result, a substantial number of droplet formation conditions can be treated completely free of empirical information or correlations; there are, however, some conditions of droplet activation for which an empirically derived correlation is utilized. Predictions of the parameterization are compared against extensive cloud parcel model simulations for a variety of aerosol activation conditions covering a wide range of chemical variability and CCN concentrations. The parameterization tracks the parcel model simulations closely and robustly. The parameterization presented here is intended to allow for a comprehensive assessment of the aerosol indirect effect in general circulation models.
Impact of different weakening parameterizations on crust and lithosphere deformation
NASA Astrophysics Data System (ADS)
Thielmann, Marcel
2017-04-01
Rocks typically exhibit a decrease in strength with ongoing deformation. This decrease is often related to processes that occur at the grain scale, such as grain size reduction, fluid percolation, and the interconnection of weak phases. Other processes that affect deformation on a larger scale include shear heating and the formation of oriented fault arrays. In numerical geodynamic models, this weakening behavior is usually accounted for by introducing a simple strain-weakening parameterization. However, such parameterizations are mostly ad hoc and do not consider the underlying physical mechanisms that control the amount and transient behavior of the weakening process. Here, I study the impact of different strain-weakening parameterizations on crust and lithosphere deformation using two-dimensional finite element models. Through a parametric study, I show the effect of the different parameters that enter the weakening parameterization. As expected, the stress field of the lithosphere and its transient evolution during extension/compression are strongly affected by the shape of the strain-weakening parameterization. Additionally, many physical processes resulting in weakening do in fact not depend on the amount of strain a rock has experienced, but rather on the deformational work that has been expended to deform the rock. Treating weakening as a work-dependent property also facilitates conservation of energy. For this reason, I also investigate the effect of employing work-weakening parameterizations in numerical models of lithosphere deformation and highlight differences from conventional strain-weakening formulations.
Partwise cross-parameterization via nonregular convex hull domains.
Wu, Huai-Yu; Pan, Chunhong; Zha, Hongbin; Yang, Qing; Ma, Songde
2011-10-01
In this paper, we propose a novel partwise framework for cross-parameterization between 3D mesh models. Unlike most existing methods that use regular parameterization domains, our framework uses nonregular approximation domains to build the cross-parameterization. Once the nonregular approximation domains are constructed for 3D models, different (and complex) input shapes are transformed into similar (and simple) shapes, thus facilitating the cross-parameterization process. Specifically, a novel nonregular domain, the convex hull, is adopted to build shape correspondence. We first construct convex hulls for each part of the segmented model, and then adopt our convex-hull cross-parameterization method to generate compatible meshes. Our method exploits properties of the convex hull, e.g., good approximation ability and linear convex representation for interior vertices. After building an initial cross-parameterization via convex-hull domains, we use compatible remeshing algorithms to achieve an accurate approximation of the target geometry and to ensure a complete surface matching. Experimental results show that the compatible meshes constructed are well suited for shape blending and other geometric applications.
Faster Parameterized Algorithms for Minor Containment
NASA Astrophysics Data System (ADS)
Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.
The theory of Graph Minors by Robertson and Seymour is one of the deepest and most significant theories in modern Combinatorics. This theory has also had a strong impact on the recent development of Algorithms, and several areas, like Parameterized Complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors theory is mainly of theoretical importance. However, it appears that many deep results from Robertson and Seymour's theory can also be used in the design of practical algorithms. Minor containment testing is one of the most algorithmically important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. In order to implement minor containment testing on graphs of bounded branchwidth, Hicks [NETWORKS 04] described an algorithm that, in time O(3^{k^2} · (h+k-1)! · m), decides if a graph G with m edges and branchwidth k contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [J'CTSB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time O(2^{(2k+1) · log k} · h^{2k} · 2^{2h^2} · m). Our approach is based on a combinatorial object called a rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing. Namely, it runs in time 2^{O(k)} · h^{2k} · 2^{O(h)} · n, with n = |V(G)|. Finally, we show that slight modifications of our algorithm permit solving some related problems within the same time bounds, such as induced minor or contraction minor containment.
NASA Astrophysics Data System (ADS)
Ouwersloot, H. G.; van Stratum, B. J.; Vila-Guerau Arellano, J.; Sikma, M.; Krol, M. C.; Lelieveld, J.
2013-12-01
We investigate the vertical transport of moisture and atmospheric chemical reactants from the sub-cloud layer to the cumulus cloud layer related to the kinematic mass flux that is driven by shallow convection over land. The dynamical and chemical assumptions needed for mesoscale and global chemistry-transport model parameterizations are systematically analysed using numerical experiments performed by a Large-Eddy Simulation (LES) model. First, we identify and discuss the four primary feedback mechanisms between sub-cloud layer dynamics and mass-flux transport by shallow cumulus clouds for typical mid-latitude conditions. These feedbacks involve mixed-layer drying and heating, changing the moisture variability at the sub-cloud layer top and adjusting entrainment. Based on this analysis and LES experiments, we design parameterizations for cloud properties and mass-flux transport of air and moisture that can be applied to large-scale models. As an intermediate step, we incorporate the parameterizations in a conceptual mixed-layer model, which enables us to study these interplays in more detail. By comparing the results of this model with LES case studies, we show for a wide range of conditions that the new parameterizations enable the model to reproduce the sub-cloud layer dynamics and the four aforementioned feedbacks. However, by considering heterogeneous sensible and latent heat fluxes at the surface, we demonstrate that the parameterizations are sensitive to specific boundary conditions due to changes in the boundary-layer dynamics. Second, we extend the investigation to determine whether the parameterizations are suitable for tropical conditions and to represent the transport of reactants. The numerical experiments in this analysis are inspired by observations over the Amazon during the dry season. Isoprene, a key atmospheric compound over the tropical rain forest, decreases by 8.5 % hr-1 on average and 15 % hr-1 at maximum due to mass-flux induced removal. The
2013-09-30
Seasonal Prediction: An LES/SCM Parameterization Test-Bed. Joao Teixeira, Jet Propulsion Laboratory, California Institute of Technology, MS 169-237 ... a Single Column Model (SCM) version of the latest operational NAVGEM that can be used to simulate GEWEX Cloud Systems Study (GCSS) case studies; (ii) ... use the NAVGEM SCM and the LES model as a parameterization test-bed. APPROACH: It is well accepted that sub-grid physical processes such as
NASA Astrophysics Data System (ADS)
Becker, Tobias; Stevens, Bjorn; Hohenegger, Cathy
2017-06-01
Radiative-convective equilibrium simulations with the general circulation model ECHAM6 are used to explore to what extent the dependence of large-scale convective self-aggregation on sea-surface temperature (SST) is driven by the convective parameterization. Within the convective parameterization, we concentrate on the entrainment parameter and show that large-scale convective self-aggregation is independent of SST when the entrainment rate for deep convection is set to zero or when the convective parameterization is removed from the model. In the former case, convection always aggregates very weakly, whereas in the latter case, convection always aggregates very strongly. With a nontrivial representation of convective entrainment, large-scale convective self-aggregation depends nonmonotonically on SST. For SSTs below 295 K, convection is more aggregated the smaller the SST because large-scale moisture convergence is relatively small, constraining convective activity to regions with high wind-induced surface moisture fluxes. For SSTs above 295 K, convection is more aggregated the higher the SST because entrainment is most efficient in decreasing updraft buoyancy at high SSTs, amplifying the moisture-convection feedback. When halving the entrainment rate, convection is less efficient in reducing updraft buoyancy, and convection is less aggregated, in particular at high SSTs. Although most early work on self-aggregation highlighted the role of nonconvective processes, we conclude that convective self-aggregation and the global climate state are sensitive to the convective parameterization.
Paluszkiewicz, T.; Hibler, L.F.; Romea, R.D.
1995-01-01
The current generation of ocean general circulation models (OGCMs) uses a convective adjustment scheme to remove static instabilities and to parameterize shallow and deep convection. In simulations used to examine climate-related scenarios, investigators found that in the Arctic regions, the OGCM simulations did not produce a realistic vertical density structure, did not create the correct quantity of deep water, and did not use a time-scale of adjustment that is in agreement with tracer ages or observations. A possible weakness of the models is that the convective adjustment scheme does not represent the process of deep convection adequately. Consequently, a penetrative plume mixing scheme has been developed to parameterize the process of deep open-ocean convection in OGCMs. This new deep convection parameterization was incorporated into the Semtner and Chervin (1988) OGCM. The modified model (with the new parameterization) was run in a simplified Nordic Seas test basin: under a cyclonic wind stress and cooling, stratification of the basin-scale gyre is eroded and deep mixing occurs in the center of the gyre. In contrast, in the OGCM experiment that uses the standard convective adjustment algorithm, mixing is delayed and is widespread over the gyre.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.
2009-01-01
Increases in computational resources have allowed operational forecast centers to pursue experimental, high-resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. Aircraft, surface and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) Model. Widespread snowfall developed across the region on January 22, 2007, forced by the passing of a midlatitude cyclone, and was observed by the dual-polarimetric, C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present. Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
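The inverse-exponential size distributions prescribed in single-moment schemes of this kind, and the bulk water content they imply, can be sketched as follows. The intercept, snow density, and slope values below are illustrative assumptions for demonstration, not the Goddard scheme's actual constants:

```python
import math

def exp_psd(d, n0, lam):
    """Inverse-exponential size distribution N(D) = N0 * exp(-lam * D), in m^-4."""
    return n0 * math.exp(-lam * d)

def snow_water_content(n0, lam, rho_s):
    """Bulk snow water content (kg m^-3) from the analytic third moment:
    integral of (pi/6) * rho_s * D^3 * N(D) dD over [0, inf) = pi * rho_s * N0 / lam^4."""
    return math.pi * rho_s * n0 / lam**4

# Hypothetical single-moment inputs: the scheme holds N0 and rho_s fixed
# and diagnoses the slope lam from the predicted snow mixing ratio.
n0 = 1.0e7      # m^-4, assumed distribution intercept
rho_s = 100.0   # kg m^-3, assumed bulk snow density
lam = 2.0e3     # m^-1, assumed slope
swc = snow_water_content(n0, lam, rho_s)
```

The validation above amounts to asking whether holding `n0` and `rho_s` constant with height is consistent with the aircraft and radar observations.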
Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.
NASA Astrophysics Data System (ADS)
Gubler, S.; Gruber, S.; Purves, R. S.
2012-06-01
As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best-performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as fixed values, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between -2 and 5%, and 7 and 13%, within the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse, SDR. Calibration of LDR parameterizations to local conditions
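As an illustration of the clear-sky LDR parameterization family being calibrated here, a Brunt-type formula relates an effective atmospheric emissivity to near-surface vapor pressure. The coefficients below are generic textbook-style values, not the ones calibrated to the ASRB stations:

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def clear_sky_ldr(t_air_k, vapor_pressure_hpa, a=0.52, b=0.065):
    """Brunt-type clear-sky downward longwave radiation:
    LDR = (a + b * sqrt(e)) * sigma * T^4, with e in hPa and T in K.
    The coefficients a and b are site-calibrated; the defaults are illustrative only."""
    emissivity = a + b * math.sqrt(vapor_pressure_hpa)
    return emissivity * SIGMA * t_air_k**4

# Roughly 256 W m^-2 for 10 degC air and 8 hPa vapor pressure
ldr = clear_sky_ldr(283.0, 8.0)
```

Calibration in the study's sense then means fitting `a` and `b` (or their analogues in the other eleven formulas) to the ASRB measurements at each elevation.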
A hybrid wind farm parameterization for mesoscale and climate models
NASA Astrophysics Data System (ADS)
Pan, Y.; Archer, C. L.
2016-12-01
To better understand the potential impacts of wind farms on weather and climate at the local to regional scale, a new hybrid wind farm parameterization is proposed here for mesoscale models, such as the Weather Research and Forecasting Model (WRF), or climate models, such as the Community Atmosphere Model (CAM). All previous wind farm parameterizations treat all the wind turbines in the same grid cell as identical (i.e., they all share the same upstream wind velocity) and ignore the effect of wind direction. By contrast, the new hybrid model considers each individual wind turbine, based on its position in the layout and on wind direction. The new parameterization is developed starting from large-eddy simulations (LES) of existing wind farms, in which the local flow around each wind turbine is directly simulated at high spatial (~3.5 m) and temporal (~0.1 s) resolutions and the effects of subgrid-scale processes are modeled. Based on analytic and statistical relationships between the LES results and several geometric properties of the wind farm layout (such as blockage ratio and blocking distance), the new hybrid parameterization predicts the local upstream wind speed of each individual wind turbine in the same grid cell, and thus successfully accounts for the effects of layout and wind direction with little computational cost. With the newly predicted upstream velocity, the turbine-induced forces and added turbulence kinetic energy (TKE) in the atmosphere are derived analytically. The wind speed, wind speed deficit, and TKE profiles and power production obtained with the hybrid parameterization for the test case (the 48-turbine Lillgrund wind farm in Sweden) are in better agreement with the LES results than previous parameterizations. Future work includes the insertion of the hybrid parameterization into the WRF code to assess impacts on near-surface properties, such as temperature and heat and momentum fluxes, in the region surrounding the wind farm.
NASA Astrophysics Data System (ADS)
Gladish, James C.; Duncan, Donald D.
2016-05-01
Liquid crystal variable retarders (LCVRs) are computer-controlled birefringent devices that contain nanometer-sized birefringent liquid crystals (LCs). These devices impart retardance effects through a global, uniform orientation change of the LCs, which is based on a user-defined drive voltage input. In other words, the LC structural organization dictates the device functionality. The LC structural organization also produces a spectral scatter component which exhibits an inverse power law dependence. We investigate LC structural organization by measuring the voltage-dependent LC spectral scattering signature with an integrating sphere and then relate this observable to a fractal-Born model based on the Born approximation and a Von Kármán spectrum. We obtain LCVR light scattering spectra at various drive voltages (i.e., different LC orientations) and then parameterize LCVR structural organization with voltage-dependent correlation lengths. The results can aid in determining performance characteristics of systems using LCVRs and can provide insight into interpreting structural organization measurements.
Optimal Aerosol Parameterization for Remote Sensing Retrievals
NASA Technical Reports Server (NTRS)
Newchurch, Michael J.
2004-01-01
discrepancy in the lower stratosphere is attributable to natural variation, and is also seen in comparisons between lidar and ozonesonde measurements. NO2 profiles obtained with our algorithm were compared to those obtained through the SAGE III operational algorithm and exhibited differences of 20-40%. Our retrieved profiles agree with the HALOE NO2 measurements significantly better than those of the operational retrieval. In other work (described below), we are extending our aerosol retrievals into the infrared regime and plan to perform retrievals from combined UV-visible-infrared spectra. This work will allow us to use the spectra to derive the size and composition of aerosols, and we plan to employ our algorithms in the analysis of PSC spectra. We are presently also developing a limb-scattering algorithm to retrieve aerosol data from limb measurements of solar scattered radiation.
Parameterization of 3D brain structures for statistical shape analysis
NASA Astrophysics Data System (ADS)
Zhu, Litao; Jiang, Tianzi
2004-05-01
Statistical Shape Analysis (SSA) is a powerful tool for noninvasive studies of pathophysiology and diagnosis of brain diseases. It also provides a shape constraint for the segmentation of brain structures. There are two key problems in SSA: the representation of shapes and their alignments. The widely used parameterized representations are obtained by preserving angles or areas and the alignments of shapes are achieved by rotating parameter net. However, representations preserving angles or areas do not really guarantee the anatomical correspondence of brain structures. In this paper, we incorporate shape-based landmarks into parameterization of banana-like 3D brain structures to address this problem. Firstly, we get the triangulated surface of the object and extract two landmarks from the mesh, i.e. the ends of the banana-like object. Then the surface is parameterized by creating a continuous and bijective mapping from the surface to a spherical surface based on a heat conduction model. The correspondence of shapes is achieved by mapping the two landmarks to the north and south poles of the sphere and using an extracted origin orientation to select the dateline during parameterization. We apply our approach to the parameterization of lateral ventricle and a multi-resolution shape representation is obtained by using the Discrete Fourier Transform.
Compositional space parameterization for general multi-component multiphase systems
NASA Astrophysics Data System (ADS)
Voskov, Denis; Tchelepi, Hamdi
2007-11-01
We present a general parameterization of the thermodynamic behavior of multiphase, multi-component systems. The phase behavior in the compositional space is represented using a low-dimensional tie-simplex parameterization. This parameterization improves the robustness of the phase behavior representation as well as the efficiency of various types of compositional computations. We demonstrate this Compositional Space Parameterization (CSP) framework for large-scale compositional reservoir simulation. In the standard compositional simulation approach, an Equation of State (EoS) is used to detect the phase state and calculate the phase compositions, if needed. These EoS computations can dominate the overall simulation cost. We compare our adaptive CSP approach with standard EoS-based simulation for several challenging problems of practical interest. The comparisons indicate that the CSP strategy is more robust and computationally efficient. Another type of application is equilibrium flash calculation for systems with a large number of phases. The complexity and strong nonlinear behaviors associated with such problems pose serious difficulties for standard techniques. Here, we describe an effective tie-simplex parameterization for such systems at a fixed pressure and temperature. The preprocessed data can be used in conventional EoS-based calculations as an initial guess to accelerate convergence.
Meshless thin-shell simulation based on global conformal parameterization.
Guo, Xiaohu; Li, Xin; Bao, Yunfan; Gu, Xianfeng; Qin, Hong
2006-01-01
This paper presents a new approach to the physically-based thin-shell simulation of point-sampled geometry via explicit, global conformal point-surface parameterization and meshless dynamics. The point-based global parameterization is founded upon the rigorous mathematics of Riemann surface theory and Hodge theory. The parameterization is globally conformal everywhere except for a minimum number of zero points. Within our parameterization framework, any well-sampled point surface is functionally equivalent to a manifold, enabling popular and powerful surface-based modeling and physically-based simulation tools to be readily adapted for point geometry processing and animation. In addition, we propose a meshless surface computational paradigm in which the partial differential equations (for dynamic physical simulation) can be applied and solved directly over point samples via Moving Least Squares (MLS) shape functions defined on the global parametric domain without explicit connectivity information. The global conformal parameterization provides a common domain to facilitate accurate meshless simulation and efficient discontinuity modeling for complex branching cracks. Through our experiments on thin-shell elastic deformation and fracture simulation, we demonstrate that our integrative method is very natural, and that it has great potential to further broaden the application scope of point-sampled geometry in graphics and relevant fields.
Land surface evaporation: Measurement and parameterization
Schmugge, T.; Andre, J.C.
1991-01-01
This book, which largely addresses issues suggested by its title, is based on papers presented at a workshop at Banyuls, France. This is one of the better books of its type. There is a strong emphasis on the role of land-surface evaporation in connection to the atmospheric and hydrological components of the climate system. The chapters are all well written and complement each other over a wide range of topics. Strong editing is evident; however, the individual chapters have not been closely integrated beyond adequate cross-referencing. A variety of subject matter common to many of the chapters is briefly but redundantly introduced in several individual chapters rather than treated with enough explanation in one place for a beginning student to learn it. This was particularly evident in the various cursory introductions to the Monin-Obukhov similarity theory scattered throughout the book. Also, it is easy to find technical terms that go undefined, for example, mesoscale alpha and beta. Thus, the audience that will be served is advanced graduate students and professionals who are looking for good general reviews of the current status of the materials treated.
Scattering in Quantum Lattice Gases
NASA Astrophysics Data System (ADS)
O'Hara, Andrew; Love, Peter
2009-03-01
Quantum Lattice Gas Automata (QLGA) are of interest for their use in simulating quantum mechanics on both classical and quantum computers. QLGAs are an extension of classical Lattice Gas Automata where the constraint of unitary evolution is added. In the late 1990s, David A. Meyer as well as Bruce Boghosian and Washington Taylor produced similar models of QLGAs. We start by presenting a unified version of these models and study them from the point of view of the physics of wave-packet scattering. We show that the Meyer and Boghosian-Taylor models are actually the same basic model with slightly different parameterizations and limits. We then implement these models computationally using the Python programming language and show that QLGAs are able to replicate the analytic results of quantum mechanics (for example reflected and transmitted amplitudes for step potentials and the Klein paradox).
Cloud-radiation interactions and their parameterization in climate models
1994-11-01
This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models, which met on 18-20 October 1993 in Camp Springs, Maryland, USA. It was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth's surface, and to refine the process models which are used to develop advanced cloud parameterizations.
Parameterized reduced-order models using hyper-dual numbers.
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
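A minimal sketch of the hyper-dual arithmetic behind that methodology: one forward evaluation yields the exact first and second derivatives needed for the parameterization, with no finite-difference truncation error. Only `+` and `*` are overloaded here; a full implementation would cover the remaining operations and transcendental functions:

```python
class HyperDual:
    """Hyper-dual number a + b*e1 + c*e2 + d*e1*e2 with e1^2 = e2^2 = 0 and e1*e2 != 0.
    Seeding b = c = 1, d = 0 makes f(x) carry f, f', and f'' exactly."""

    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)
    __radd__ = __add__

    def __mul__(self, o):
        # Product rule falls out of expanding (a1 + b1*e1 + ...)(a2 + b2*e1 + ...)
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)
    __rmul__ = __mul__

def value_grad_hess(f, x):
    """Exact f(x), f'(x), f''(x) from a single hyper-dual evaluation."""
    r = f(HyperDual(x, 1.0, 1.0, 0.0))
    return r.a, r.b, r.d

val, grad, hess = value_grad_hess(lambda x: x * x * x, 2.0)  # 8.0, 12.0, 12.0
```

In the ROM setting, the same trick applied to the component-mode equations gives the sensitivities of the reduced model to the design parameters.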
Development of a hybrid cloud parameterization for general circulation models
Kao, C.Y.J.; Kristjansson, J.E.; Langley, D.L.
1995-04-01
We have developed a cloud package with state-of-the-art physical schemes that can parameterize low-level stratus or stratocumulus, penetrative cumulus, and high-level cirrus. Such parameterizations will improve cloud simulations in general circulation models (GCMs). The principal tool in this development comprises the physically based Arakawa-Schubert scheme for convective clouds and the Sundqvist scheme for layered, nonconvective clouds. The term "hybrid" addresses the fact that the generation of high-altitude layered clouds can be associated with preexisting convective clouds. Overall, the cloud parameterization package developed should better determine cloud heating and drying effects in the thermodynamic budget, realistic precipitation patterns, cloud coverage and liquid/ice water content for radiation purposes, and the cloud-induced transport and turbulent diffusion for atmospheric trace gases.
Parameterization of and Brine Storage in MOR Hydrothermal Systems
NASA Astrophysics Data System (ADS)
Hoover, J.; Lowell, R. P.; Cummings, K. B.
2009-12-01
Single-pass parameterized models of high-temperature hydrothermal systems at oceanic spreading centers use observational constraints such as vent temperature, heat output, vent field area, and the area of heat extraction from the sub-axial magma chamber to deduce fundamental hydrothermal parameters such as total mass flux Q, bulk permeability k, and the thickness of the conductive boundary layer at the base of the system, δ. Of the more than 300 known systems, constraining data are available for less than 10%. Here we use the single-pass model to estimate Q, k, and δ for all the seafloor hydrothermal systems for which the constraining data are available. Mean values of Q, k, and δ are 170 kg/s, 5.0x10^-13 m^2, and 20 m, respectively, which are similar to results obtained from the generic model. There is no apparent correlation with spreading rate. Using observed vent field lifetimes, the rate of magma replenishment can also be calculated. Essentially all high-temperature hydrothermal systems at oceanic spreading centers undergo phase separation, yielding a low-chlorinity vapor and a high-salinity brine. Some systems such as the Main Endeavour Field on the Juan de Fuca Ridge and the 9°50'N sites on the East Pacific Rise vent low-chlorinity vapor for many years, while the high-density brine remains sequestered beneath the seafloor. In an attempt to further understand brine storage at the EPR, we used the mass flux Q determined above, time series of vent salinity and temperature, and the depth of the magma chamber to determine the rate of brine production at depth. We found thicknesses ranging from 0.32 m to ~57 m over a 1 km^2 area from 1994-2002. These calculations suggest that brine may be stored within the conductive boundary layer without a need for lateral transport or removal by other means. We plan to use the numerical code FISHES to further test this idea.
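The heat balance underlying single-pass estimates of the total mass flux Q can be sketched as follows: the observed heat output is carried by fluid heated from ambient to the vent temperature. The heat output, vent temperature, and heat capacity used below are illustrative assumptions, not values for any particular vent field:

```python
def mass_flux_from_heat(heat_output_w, vent_temp_c, ambient_temp_c=2.0,
                        cp_j_per_kg_k=4.2e3):
    """Total hydrothermal mass flux Q (kg/s) implied by a single-pass heat balance:
    H = Q * cp * (T_vent - T_ambient). All inputs here are illustrative."""
    return heat_output_w / (cp_j_per_kg_k * (vent_temp_c - ambient_temp_c))

# A hypothetical 500 MW field venting at 350 degC implies Q of a few hundred kg/s,
# the same order as the 170 kg/s mean quoted above.
q = mass_flux_from_heat(5.0e8, 350.0)
```

The full model couples this balance to Darcy flow and conduction across the basal boundary layer to recover k and δ as well.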
Waves and Instabilities for Model Tropical Convective Parameterizations.
NASA Astrophysics Data System (ADS)
Majda, Andrew J.; Shefter, Michael G.
2001-04-01
Models of the tropical atmosphere with crude vertical resolution are important as intermediate models for understanding convectively coupled wave hierarchies and also as simplified models for studying various strategies for parameterizing convection and convectively coupled waves. Simplified models are utilized in a detailed analytical study of the waves and instabilities for model convective parameterizations. Three convection schemes are analyzed: a strict quasi-equilibrium (QE) scheme and two schemes that attempt to model the departures from quasi equilibrium by including the shorter timescale effects of penetrative convection, the Lagrangian parcel adjustment (LPA) scheme and a new instantaneous convective available potential energy (CAPE) adjustment (ICAPE) scheme. Unlike the QE parameterization scheme, both the LPA and ICAPE schemes have scale-selective finite bands of unstable wavelengths centered around typical cluster and supercluster scales with virtually identical growth rates and wave structure. However, the LPA scheme has, in addition, two nonphysical superfast parasitic waves that are artifacts of this parameterization, while such waves are completely absent in the new ICAPE parameterization. Another topic studied here is the fashion in which an imposed barotropic mean wind triggers a transition to instability in the Tropics through suitable convectively coupled waves; this is the simplest analytical problem for studying the influence of midlatitudes on convectively coupled waves. For an easterly barotropic mean flow with the effect of rotation included, both supercluster-scale moist Kelvin waves and cluster-scale moist mixed Rossby-gravity waves participate in the transition to instability. The wave and stability properties of the ICAPE parameterization with rotation are studied through a novel procedure involving complete zonal resolution but low-order meridional truncation. Besides moist Kelvin, mixed Rossby-gravity, and equatorial Rossby waves, this
Evaluating gas transfer velocity parameterizations using upper ocean radon distributions
NASA Astrophysics Data System (ADS)
Bender, Michael L.; Kinter, Saul; Cassar, Nicolas; Wanninkhof, Rik
2011-02-01
Sea-air fluxes of gases are commonly calculated from the product of the gas transfer velocity (k) and the departure of the seawater concentration from atmospheric equilibrium. Gas transfer velocities, generally parameterized in terms of wind speed, continue to have considerable uncertainties, partly because of limited field data. Here we evaluate commonly used gas transfer parameterizations using a historical data set of 222Rn measurements at 105 stations occupied on Eltanin cruises and the GEOSECS program. We make this evaluation with wind speed estimates from meteorological reanalysis products (from the National Centers for Environmental Prediction and the European Centre for Medium-Range Weather Forecasts) that were not available when the 222Rn data were originally published. We calculate gas transfer velocities from the parameterizations by taking into account winds in the period prior to the date that 222Rn profiles were sampled. Invoking prior wind speed histories leads to much better agreement than simply calculating parameterized gas transfer velocities from wind speeds on the day of sample collection. For individual samples from the Atlantic Ocean, where reanalyzed winds agree best with observations, three similar recent parameterizations give k values for individual stations with an rms difference of ~40% from values calculated using 222Rn data. Agreement of basin averages is much better. For the global data set, the average difference between k constrained by 222Rn and calculated from the various parameterizations ranges from -0.2 to +0.9 m/d (average, 2.9 m/d). Averaging over large domains, and working with gas data collected in recent years when reanalyzed winds are more accurate, will further decrease the uncertainties in sea-air fluxes.
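A typical wind-speed parameterization of the kind evaluated against the radon data takes a quadratic form with Schmidt-number scaling. The coefficient and example wind speed below are illustrative; published parameterizations differ mainly in this coefficient and in the power of the wind speed:

```python
def gas_transfer_velocity(u10_m_s, schmidt, a=0.31):
    """Quadratic wind-speed parameterization of the gas transfer velocity,
    k = a * u10^2 * (Sc/660)^-0.5 in cm/hr (Wanninkhof-style; the coefficient
    'a' varies between studies). Returned in m/day to match the units quoted
    for the radon comparison above."""
    k_cm_hr = a * u10_m_s**2 * (schmidt / 660.0) ** -0.5
    return k_cm_hr * 24.0 / 100.0  # cm/hr -> m/day

# A 7 m/s wind at Sc = 660 gives k of a few m/day,
# the same order as the 2.9 m/d average quoted above.
k = gas_transfer_velocity(7.0, 660.0)
```

Because k depends roughly quadratically on wind speed, averaging over the wind history prior to sampling (as the study does) matters more than the wind on the sampling day itself.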
Beyond Super-Parameterization: Multiresolutional Analysis Approach: NAM- SCA
NASA Astrophysics Data System (ADS)
Yano, J.
2008-12-01
The use of a CSRM in place of conventional parameterizations, such as in super-parameterization, tends to give a misleading impression that the parameterization problem is resolved in this manner. However, the present session emphasizes that a CSRM is itself built upon various subgrid-scale parameterizations. Thus we should move "beyond" super-parameterization by seeking methodologies (not necessarily parameterizations) for correctly and more efficiently representing complex atmospheric processes of smaller and smaller scales. In order to advance towards this goal, we propose the approach of NAM-SCA: Nonhydrostatic Anelastic Model under Segmentally-Constant Approximation. The idea for this model is inspired by various different sources. First of all, a branch of mathematics called multiresolutional analysis provides a philosophical basis for pursuing this possibility: in the same sense that a wavelet transform can extensively compress an image, multiresolutional analysis provides extensive possibilities for compressing numerical models. Application of this principle in practice leads to very flexible time-dependent mesh refinement or nesting, far more extensive than conventional approaches can provide. A "deconstruction" analysis of the mass flux convective parameterization, on the other hand, reveals that the mass flux decomposition itself can be used for this purpose: NAM (or a CSRM) is simply decomposed into an ensemble of mass flux modes, purely as a geometrical representation, in the spirit of multiresolutional analysis, but without any further approximations. We call this representation SCA due to its geometrical constraint. NAM-SCA can run much more efficiently than a conventional CSRM by adopting high resolutions only where they are required, and potentially it can achieve a much higher resolution than current CSRMs can achieve. A two-dimensional version will be presented, which is ready for operational implementation.
NASA Astrophysics Data System (ADS)
Hayek, Mohamed; Ackerer, Philippe; Sonnendrücker, Éric
2009-02-01
We propose a new refinement indicator (NRI) for adaptive parameterization to determine the diffusion coefficient in an elliptic equation in two-dimensional space. The diffusion coefficient is assumed to be a piecewise-constant space function. The unknowns are both the parameter values and the zonation. Refinement indicators are used to localize parameter discontinuities in order to construct the zonation (parameterization) iteratively. The refinement indicator is usually obtained from the first-order effect on the objective function of removing degrees of freedom for a current set of parameters. In this work, in order to reduce the computational cost, we propose a new refinement indicator based on the second-order effect on the objective function. This new refinement indicator depends on the objective function and its first and second derivatives with respect to the parameter constraints. Numerical experiments show the high efficiency of the new refinement indicator compared to the standard one.
On-Line Construction of Parameterized Suffix Trees
NASA Astrophysics Data System (ADS)
Lee, Taehyung; Na, Joong Chae; Park, Kunsoo
We consider on-line construction of a suffix tree for a parameterized string, where we always have the suffix tree of the input string read so far. This situation often arises from source code management systems where, for example, a source code repository is gradually increasing in its size as users commit new codes into the repository day by day. We present an on-line algorithm which constructs a parameterized suffix tree in randomized O(n) time, where n is the length of the input string. Our algorithm is the first randomized linear time algorithm for the on-line construction problem.
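On-line construction here operates on Baker-style prev-encoded suffixes rather than raw characters; the suffix tree indexes these encodings so that renamings of parameter symbols compare equal. A sketch of that encoding (the parameter alphabet below is hypothetical):

```python
def prev_encode(s, params):
    """Baker's prev-encoding for parameterized strings: each parameter symbol
    becomes the distance to its previous occurrence (0 for the first occurrence);
    static symbols are kept verbatim. Two code fragments are parameterized
    matches iff their prev-encodings are equal."""
    last = {}   # parameter symbol -> index of its most recent occurrence
    out = []
    for i, ch in enumerate(s):
        if ch in params:
            out.append(i - last[ch] if ch in last else 0)
            last[ch] = i
        else:
            out.append(ch)
    return out

# 'x' and 'y' play the role of identifiers; 'a' and 'b' are fixed tokens.
e1 = prev_encode("xaxbx", {"x", "y"})
e2 = prev_encode("yayby", {"x", "y"})
assert e1 == e2 == [0, "a", 2, "b", 2]
```

The subtlety the on-line algorithm must handle is that the encoding of a suffix is not a suffix of the encoding: a parameter's first occurrence re-encodes to 0 when the prefix before it is dropped.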
A parameterization for downward longwave radiation from satellite meteorological data
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Darnell, W. L.; Staylor, W. F.
1983-01-01
An accurate and efficient parameterization is developed which computes clear-atmosphere downward longwave flux in terms of the water vapor burden and water-vapor-weighted average temperature of the surface-700 mb region. The parameterization is based on detailed radiative transfer calculations performed for a wide meteorological data base, consisting of 180 atmospheric models generated from climatological data and 106 radiosonde-measured models. The standard error of the computed fluxes was found to be about 4 percent of the mean value. It is also pointed out that strong heating or cooling of the surface, as shown by many TOVS data sets, caused larger errors in the computed fluxes.
A 3-Component Inverse Depth Parameterization for Particle Filter SLAM
NASA Astrophysics Data System (ADS)
Imre, Evren; Berger, Marie-Odile
The non-Gaussianity of the depth estimate uncertainty degrades the performance of monocular extended Kalman filter SLAM (EKF-SLAM) systems employing a 3-component Cartesian landmark parameterization, especially in low-parallax configurations. Even particle filter SLAM (PF-SLAM) approaches are affected, as they utilize EKF for estimating the map. The inverse depth parameterization (IDP) alleviates this problem through a redundant representation, but at the price of increased computational complexity. The authors show that such a redundancy does not exist in PF-SLAM, hence the performance advantage of the IDP comes almost without an increase in the computational cost.
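For context, the standard (6-component) inverse depth representation stores a landmark as the camera origin at first observation plus a ray direction and an inverse depth, converting to Cartesian as p = origin + m(theta, phi)/rho. The sketch below uses an assumed axis convention and is not the 3-component PF-SLAM variant itself (there, the origin comes from the particle's pose rather than the landmark state):

```python
import math

def idp_to_cartesian(origin, theta, phi, rho):
    """Convert an inverse-depth landmark (first-observation origin, azimuth theta,
    elevation phi, inverse depth rho) to a Cartesian 3D point. The unit ray
    direction m(theta, phi) below follows an assumed z-forward convention."""
    m = (math.cos(phi) * math.sin(theta),   # x: right
         -math.sin(phi),                    # y: down
         math.cos(phi) * math.cos(theta))   # z: forward
    return tuple(o + mi / rho for o, mi in zip(origin, m))

# rho = 0.1 m^-1 places the point 10 m straight ahead of the origin.
p = idp_to_cartesian((0.0, 0.0, 0.0), 0.0, 0.0, 0.1)
```

The representation's appeal is that low-parallax (distant) points correspond to rho near zero, where the uncertainty is far closer to Gaussian than it is in the depth itself.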
NASA Astrophysics Data System (ADS)
Berg, L. K.; Shrivastava, M.; Easter, R. C.; Fast, J. D.; Chapman, E. G.; Liu, Y.
2014-04-01
A new treatment of cloud-aerosol interactions within parameterized shallow and deep convection has been implemented in WRF-Chem that can be used to better understand the aerosol lifecycle over regional to synoptic scales. The modifications to the model to represent cloud-aerosol interactions include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain-Fritsch cumulus parameterization that has been modified to better represent shallow convective clouds. Preliminary testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS) as well as a high-resolution simulation that does not include parameterized convection. The simulation results are used to investigate the impact of cloud-aerosol interactions on regional scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +40% for sulfate in non-precipitating conditions due to the sulfate production in the parameterized clouds. The modifications to WRF-Chem version 3.2.1 are found to account for changes in the cloud drop number concentration (CDNC) and changes in the chemical composition of cloud-drop residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to WRF-Chem version 3.5, and it is anticipated that they
Berg, Larry K.; Shrivastava, ManishKumar B.; Easter, Richard C.; Fast, Jerome D.; Chapman, Elaine G.; Liu, Ying
2015-01-01
A new treatment of cloud-aerosol interactions within parameterized shallow and deep convection has been implemented in WRF-Chem that can be used to better understand the aerosol lifecycle over regional to synoptic scales. The modifications to the model to represent cloud-aerosol interactions include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain-Fritsch cumulus parameterization that has been modified to better represent shallow convective clouds. Preliminary testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS) as well as a high-resolution simulation that does not include parameterized convection. The simulation results are used to investigate the impact of cloud-aerosol interactions on the regional scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +35% for sulfate in non-precipitating conditions due to the sulfate production in the parameterized clouds. The modifications to WRF-Chem version 3.2.1 are found to account for changes in the cloud drop number concentration (CDNC) and changes in the chemical composition of cloud-drop residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to WRF-Chem version 3.5, and it is anticipated that they
Formulation structure of the mass-flux convection parameterization
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2014-09-01
The structure of the mass-flux convection parameterization formulation is re-examined. Many of the equations associated with this formulation are derived in a systematic manner, with the various intermediate steps presented explicitly. The nonhydrostatic anelastic model (NAM) is taken as the starting point of all the derivations. Segmentally constant approximation (SCA) is a basic geometrical constraint imposed on a full system (e.g., NAM) as a first step for deriving the mass-flux formulation. The standard mass-flux convection parameterization, as originally formulated by Ooyama, Fraedrich, Arakawa and Schubert, is re-derived under two additional hypotheses concerning entrainment-detrainment and the environment, and an asymptotic limit of vanishing areas occupied by convection. The model derived at each step of the deduction constitutes a stand-alone subgrid-scale representation by itself, leading to a hierarchy of subgrid-scale schemes. A backward tracing of this deduction process provides paths for generalizing mass-flux convection parameterization. Issues of the high-resolution limit for parameterization are also understood as those of relaxing various traditional constraints. The generalization presented herein can include various other subgrid-scale processes under a mass-flux framework.
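The grid-mean effect of a bulk mass-flux scheme is conventionally written in the top-hat (segmentally constant) form ∂φ̄/∂t|conv = −(1/ρ) ∂[M_c (φ_c − φ̄)]/∂z. A minimal numerical sketch of that standard tendency, with illustrative variable names (this is the generic textbook form, not any one scheme from the hierarchy discussed above):

```python
import numpy as np

def mass_flux_tendency(rho, z, Mc, phi_c, phi_bar):
    """Grid-mean convective tendency of a scalar phi in top-hat form:

        dphi_bar/dt|conv = -(1/rho) d/dz [ Mc * (phi_c - phi_bar) ]

    rho     : air density profile (kg m-3)
    z       : height levels (m)
    Mc      : convective mass flux profile (kg m-2 s-1)
    phi_c   : in-cloud (updraft) value of the scalar
    phi_bar : grid-mean value of the scalar
    Uses centered differences via np.gradient.
    """
    flux = Mc * (phi_c - phi_bar)          # subgrid vertical flux
    return -np.gradient(flux, z) / rho
```

When the flux M_c(φ_c − φ̄) is vertically uniform the tendency vanishes, as it should: convection then only redistributes φ through the layer boundaries.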
Parameterization of Integrated Aerosol Effects in Marine Stratocumulus Clouds
2007-09-30
turbulence. IMPACT: The improved parameterization of the physical processes in marine stratocumulus clouds will lead to more accurate numerical weather predictions for Navy operations. The new retrieval algorithms of cloud and drizzle parameters will allow more accurate initialization of weather
The Gent-McWilliams parameterization: 20/20 hindsight
NASA Astrophysics Data System (ADS)
Gent, Peter R.
It has now been 20 years since the Gent and McWilliams paper on "Isopycnal Mixing in Ocean Circulation Models" was published in the January 1990 issue of the Journal of Physical Oceanography. That paper was highlighted at the CLIVAR Working Group on Ocean Model Development "Workshop on Ocean Mesoscale Eddies", held at the UK Meteorological Office in April 2009, and this review paper is based on the talk given at that Workshop. It contains some hindsight on how the parameterization of the effect of mesoscale eddies on the mean flow came about, which is a question that I am asked quite often. A few important results from including the parameterization in a non-eddy-resolving ocean model are recalled. Including this parameterization, along with other improvements to all the components, in the first version of the Community Climate System Model resulted in the first non-drifting control simulation in a climate model that did not require flux corrections. Also included are brief comments on how the Gent and McWilliams eddy parameterization has been modified and improved since the original proposal in 1990.
Anisotropic shear dispersion parameterization for ocean eddy transport
NASA Astrophysics Data System (ADS)
Reckinger, Scott; Fox-Kemper, Baylor
2015-11-01
The effects of mesoscale eddies are universally treated isotropically in global ocean general circulation models. However, observations and simulations demonstrate that the mesoscale processes that the parameterization is intended to represent, such as shear dispersion, are typified by strong anisotropy. We extend the Gent-McWilliams/Redi mesoscale eddy parameterization to include anisotropy and test the effects of varying levels of anisotropy in 1-degree Community Earth System Model (CESM) simulations. Anisotropy has many effects on the simulated climate, including a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, impacts on the meridional overturning circulation and on ocean energy and tracer uptake, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. A process-based parameterization that approximates the effects of unresolved shear dispersion is also used to set the strength and direction of anisotropy. This shear-dispersion parameterization reproduces the spatial distribution of diffusivity seen in drifter observations and the distribution of eddy-flux orientation diagnosed from high-resolution models.
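An anisotropic eddy transport closure replaces the scalar diffusivity with a symmetric tensor whose principal axes are aligned with the dominant eddy direction, K = R·diag(κ_major, κ_minor)·Rᵀ. A minimal sketch of that construction (parameter names and values are illustrative, not CESM's):

```python
import numpy as np

def anisotropic_diffusivity(kappa_major, kappa_minor, angle):
    """2x2 horizontal eddy diffusivity tensor aligned with `angle`
    (radians, measured from the x-axis):

        K = R(angle) @ diag(kappa_major, kappa_minor) @ R(angle).T

    kappa_major / kappa_minor are the diffusivities along and across
    the principal (e.g. mean-shear) direction, in m^2 s^-1.
    """
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s],
                  [s,  c]])
    return R @ np.diag([kappa_major, kappa_minor]) @ R.T
```

The isotropic Gent-McWilliams/Redi case is recovered when κ_major = κ_minor, for which K reduces to κ·I regardless of angle.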
Validation of an Urban Parameterization in a Mesoscale Model
Leach, M.J.; Chin, H.
2001-07-19
The Atmospheric Science Division at Lawrence Livermore National Laboratory uses the Naval Research Laboratory's Coupled Ocean-Atmosphere Mesoscale Prediction System (COAMPS) for both operations and research. COAMPS is a non-hydrostatic model, designed as a multi-scale simulation system ranging from synoptic down to meso, storm, and local terrain scales. As model resolution increases, the forcing due to small-scale complex terrain features, including urban structures and surfaces, intensifies. An urban parameterization has been added to the Naval Research Laboratory's mesoscale model, COAMPS. The parameterization attempts to incorporate the effects of buildings and urban surfaces without explicitly resolving them, and includes modeling the mean flow to turbulence energy exchange, radiative transfer, the surface energy budget, and the addition of anthropogenic heat. The Chemical and Biological National Security Program's (CBNP) URBAN field experiment was designed to collect data to validate numerical models over a range of length and time scales. The experiment was conducted in Salt Lake City in October 2000. The scales ranged from circulation around single buildings to flow in the entire Salt Lake basin. Data from the field experiment include tracer data as well as observations of mean and turbulence atmospheric parameters. Wind and turbulence predictions from COAMPS are used to drive a Lagrangian particle model, the Livermore Operational Dispersion Integrator (LODI). Simulations with COAMPS and LODI are used to test the sensitivity to the urban parameterization. Data from the field experiment, including the tracer data and the atmospheric parameters, are also used to validate the urban parameterization.
Parameterization of HONO sources in Mega-Cities
NASA Astrophysics Data System (ADS)
Li, Guohui; Zhang, Renyi; Tie, Xuxie; Molina, Luisa
2013-04-01
Nitrous acid (HONO) plays an important role in the photochemistry of the troposphere because the photolysis of HONO is a primary source of the hydroxyl radical (OH) in the early morning. However, the formation and sources of HONO are still poorly understood in the troposphere; hence the representation of HONO sources in chemical transport models (CTMs) lacks comprehensive consideration. In the present study, the HONO, NOx, and aerosols observed at the urban supersite T0 during the MCMA-2006 field campaign in Mexico City are used to interpret HONO formation in association with the HONO sources suggested in the literature. HONO source parameterizations are proposed and incorporated into the WRF-CHEM model. Homogeneous sources of HONO include the reaction of NO with OH and of excited NO2 with H2O. Four heterogeneous HONO sources are considered: NO2 reaction with semivolatile organics, NO2 reaction with freshly emitted soot, and NO2 reactions on aerosol surfaces and on ground surfaces. Four cases are used in the present study to evaluate the proposed HONO parameterizations during four field campaigns in which HONO measurements are available: MCMA-2003 and MCMA-2006 (Mexico City Metropolitan Area, Mexico), MIRAGE-2009 (Shanghai, China), and SHARP (Houston, USA). The WRF-CHEM model with the proposed HONO parameterizations performs moderately well in reproducing the observed diurnal variation of HONO concentrations, showing that the HONO parameterizations in the study are reasonable and potentially useful in improving HONO simulation in CTMs.
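Heterogeneous sources like these are commonly parameterized in CTMs as first-order processes with rate k = γ·v̄·S/4, where γ is the reactive uptake coefficient, v̄ the mean molecular speed, and S the surface area density. A minimal sketch of that standard free-molecular form (the specific γ and S values below are purely illustrative, not the paper's):

```python
import math

def het_uptake_rate(gamma, surface_area_density, T, molar_mass=0.046):
    """First-order heterogeneous rate coefficient (s^-1):

        k = gamma * v_mean * S / 4

    gamma : reactive uptake coefficient (dimensionless)
    surface_area_density : aerosol or ground surface area per volume (m^2 m^-3)
    T : temperature (K)
    molar_mass : kg/mol of the gas (default: NO2, 0.046 kg/mol)
    v_mean = sqrt(8 R T / (pi M)) is the Maxwell-Boltzmann mean speed.
    """
    R = 8.314                                      # J mol^-1 K^-1
    v_mean = math.sqrt(8.0 * R * T / (math.pi * molar_mass))
    return 0.25 * gamma * v_mean * surface_area_density
```

The rate scales linearly with both γ and S, which is why the four heterogeneous pathways above differ mainly in which surface area (organics, soot, aerosol, ground) and which uptake coefficient they assume.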
Parameterization of HONO sources in Mega-Cities
NASA Astrophysics Data System (ADS)
Li, G.; Zhang, R.; Tie, X.; Molina, L. T.
2013-05-01
Nitrous acid (HONO) plays an important role in the photochemistry of the troposphere because the photolysis of HONO is a primary source of the hydroxyl radical (OH) in the early morning. However, the formation and sources of HONO are still poorly understood in the troposphere, and hence the representation of HONO sources in chemical transport models (CTMs) lacks comprehensive consideration. In the present study, the HONO, NOx, and aerosols observed at the urban supersite T0 during the MCMA-2006 field campaign in Mexico City are used to interpret HONO formation in association with the HONO sources suggested in the literature. HONO source parameterizations are proposed and incorporated into the WRF-CHEM model. Homogeneous sources of HONO include the reaction of NO with OH and of excited NO2 with H2O. Four heterogeneous HONO sources are considered: NO2 reaction with semivolatile organics, NO2 reaction with freshly emitted soot, and NO2 reactions on aerosol surfaces and on ground surfaces. Four cases are used in the present study to evaluate the proposed HONO parameterizations during four field campaigns in which HONO measurements are available: MCMA-2003 and MCMA-2006 (Mexico City Metropolitan Area, Mexico), MIRAGE-2009 (Shanghai, China), and SHARP (Houston, USA). The WRF-CHEM model with the proposed HONO parameterizations performs moderately well in reproducing the observed diurnal variation of HONO concentrations, showing that the HONO parameterizations in the study are reasonable and potentially useful in improving HONO simulation in CTMs.
The Project for Intercomparison of Land-surface Parameterization Schemes
NASA Technical Reports Server (NTRS)
Henderson-Sellers, A.; Yang, Z.-L.; Dickinson, R. E.
1993-01-01
The Project for Intercomparison of Land-surface Parameterization Schemes (PILPS) is described and the first stage science plan outlined. PILPS is a project designed to improve the parameterization of the continental surface, especially the hydrological, energy, momentum, and carbon exchanges with the atmosphere. The PILPS Science Plan incorporates enhanced documentation, comparison, and validation of continental surface parameterization schemes by community participation. Potential participants include code developers, code users, and those who can provide datasets for validation and who have expertise of value in this exercise. PILPS is an important activity because existing intercomparisons, although piecemeal, demonstrate that there are significant differences in the formulation of individual processes in the available land surface schemes. These differences are comparable to other recognized differences among current global climate models such as cloud and convection parameterizations. It is also clear that too few sensitivity studies have been undertaken with the result that there is not yet enough information to indicate which simplifications or omissions are important for the near-surface continental climate, hydrology, and biogeochemistry. PILPS emphasizes sensitivity studies with and intercomparisons of existing land surface codes and the development of areally extensive datasets for their testing and validation.
IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION IN MM5
The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (~1-km horizontal grid spacing). The UCP accounts for drag ...
A new framework for parameterization of heterogeneous ocean convection
NASA Astrophysics Data System (ADS)
Ilicak, M.; Adcroft, A.; Legg, S.
2014-12-01
We propose a new framework for parameterization of ocean convection processes. The new framework is termed ''patchy convection'' since our aim is to represent the heterogeneity of mixing processes that take place within the horizontal scope of a grid cell. We focus on applying this new scheme to represent the effect of pre-conditioning for deep convection by subgrid-scale eddy variability. The new scheme relies on the mesoscale eddy kinetic energy field. We illustrate the patchy parameterization using a 1D idealized convection case. Next, the scheme is compared against observations: we ran the 1D case using summertime ARGO floats from the Labrador Sea as initial conditions, applied ECMWF reanalysis atmospheric forcing, and compared our results to wintertime ARGO floats. Finally, we evaluate the scheme in two different global ocean-ice simulations with prescribed atmospheric forcing (CORE-I): (i) a diagnosed eddy velocity field applied only in the Labrador Sea and (ii) a diagnosed global eddy velocity field. The global simulation results indicate that the patchy convection scheme improves the warm biases in the deep Atlantic Ocean and Southern Ocean. This proof-of-concept study is a first step in developing the patchy parameterization scheme, which will be extended in the future to use a prognostic eddy field as well as to parameterize convection due to under-ice brine rejection. This study is funded through CPT 2: Ocean Mixing Processes Associated with High Spatial Heterogeneity in Sea Ice and the Implications for Climate Models.
A new framework for parameterization of heterogeneous ocean convection
NASA Astrophysics Data System (ADS)
Ilicak, Mehmet; Adcroft, Alistair; Legg, Sonya
2015-04-01
We propose a new framework for parameterization of ocean convection processes. The new framework is termed patchy convection. Our aim is to represent the heterogeneity of mixing processes that take place within the horizontal scope of a grid cell. The new scheme is intended to represent the effect of preconditioning for deep convection by sub-grid scale eddy variability. The new parameterization separates the grid cell into two regions of different stratification, applies convective mixing separately to each region, and then recombines the density profiles to produce the grid-cell mean density profile. The scheme depends on two parameters: the areal fraction of the vertically-mixed region within the horizontal grid cell, and the density difference between the mean and the unstratified profiles at the surface. We parameterize this density difference in terms of an unresolved eddy kinetic energy. We illustrate the patchy parameterization using a 1D idealized convection case before evaluating the scheme in two different global ocean-ice simulations with prescribed atmospheric forcing: (i) a diagnosed eddy velocity field applied only in the Labrador Sea and (ii) a diagnosed global eddy velocity field. The global simulation results indicate that the patchy convection scheme improves the warm biases in the deep Atlantic Ocean and Southern Ocean.
Overview of an Urban Canopy Parameterization in COAMPS
Leach, M J; Chin, H S
2006-02-09
The Coupled Atmosphere/Ocean Mesoscale Prediction System (COAMPS) model (Hodur, 1997) was developed at the Naval Research Laboratory. COAMPS has been used at resolutions as small as 2 km to study the role of complex topography in generating mesoscale circulation (Doyle, 1997). The model has been adapted for use in the Atmospheric Science Division at LLNL for both research and operational use. The model is a fully non-hydrostatic model with several options for turbulence parameterization, cloud processes, and radiative transfer. We have recently modified the COAMPS code to include the effects of buildings and other urban surfaces in the mesoscale model by incorporating an urban canopy parameterization (UCP) (Chin et al., 2005). This UCP is a modification of the original parameterization of Brown and Williams (1998), based on Yamada's (1982) forest canopy parameterization, and includes modification of the TKE and mean momentum equations, modification of radiative transfer, and an anthropogenic heat source. COAMPS is parallelized for both shared memory (OpenMP) and distributed memory (MPI) architectures.
CLOUD PARAMETERIZATIONS, CLOUD PHYSICS, AND THEIR CONNECTIONS: AN OVERVIEW.
LIU,Y.; DAUM,P.H.; CHAI,S.K.; LIU,F.
2002-02-12
This paper consists of three parts. The first part is concerned with the parameterization of cloud microphysics in climate models. We demonstrate the crucial importance of spectral dispersion of the cloud droplet size distribution in determining radiative properties of clouds (e.g., effective radius), and underline the necessity of specifying spectral dispersion in the parameterization of cloud microphysics. It is argued that the inclusion of spectral dispersion makes the issue of cloud parameterization essentially equivalent to that of the droplet size distribution function, bringing cloud parameterization to the forefront of cloud physics. The second part is concerned with theoretical investigations into the spectral shape of droplet size distributions in cloud physics. After briefly reviewing the mainstream theories (including entrainment and mixing theories, and stochastic theories), we discuss their deficiencies and the need for a paradigm shift from reductionist approaches to systems approaches. A systems theory that has recently been formulated by utilizing ideas from statistical physics and information theory is discussed, along with the major results derived from it. It is shown that the systems formalism not only easily explains many puzzles that have been frustrating the mainstream theories, but also reveals such new phenomena as scale-dependence of cloud droplet size distributions. The third part is concerned with the potential applications of the systems theory to the specification of spectral dispersion in terms of predictable variables and scale-dependence under different fluctuating environments.
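The role of spectral dispersion can be made concrete with the widely used Liu-Daum relation, in which the effective radius is the mean-volume radius scaled by a factor β(ε) that grows with the relative dispersion ε of the droplet size distribution. A minimal sketch under that relation (treat the exact form as an assumption drawn from the related literature, since the abstract itself gives no formula):

```python
def effective_radius(r_vol, eps):
    """Cloud droplet effective radius from the mean-volume radius.

    r_vol : mean-volume radius (same units returned)
    eps   : relative spectral dispersion of the droplet size
            distribution (standard deviation / mean radius)

    Uses the Liu-Daum-style ratio
        beta = (1 + 2*eps**2)**(2/3) / (1 + eps**2)**(1/3)
    so that r_eff = beta * r_vol, with beta = 1 for a monodisperse
    spectrum and beta > 1 for any broadened spectrum.
    """
    beta = (1.0 + 2.0 * eps**2) ** (2.0 / 3.0) / (1.0 + eps**2) ** (1.0 / 3.0)
    return beta * r_vol
```

This is exactly why the abstract argues dispersion must be specified: two clouds with identical liquid water and droplet number but different ε have different effective radii, and hence different radiative properties.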
Cohesive Sediment Entrainment Rate Functions: Expanding and Quantifying their Parameterizations
2008-04-03
biostabilization on entrainment (de Brouwer et al., 2000; Droppo et al., 2001). The LISP project (Littoral Investigation of Sediment Properties) (Daborn...and exponential entrainment rate functions was performed in order to demonstrate the necessity for physics-based parameterization schemes. It was
Parameterization of Movement Execution in Children with Developmental Coordination Disorder
ERIC Educational Resources Information Center
Van Waelvelde, Hilde; De Weerdt, Willy; De Cock, Paul; Janssens, Luc; Feys, Hilde; Engelsman, Bouwien C. M. Smits
2006-01-01
The Rhythmic Movement Test (RMT) evaluates temporal and amplitude parameterization and fluency of movement execution in a series of rhythmic arm movements under different sensory conditions. The RMT was used in combination with a jumping and a drawing task, to evaluate 36 children with Developmental Coordination Disorder (DCD) and a matched…
Authalic parameterization of general surfaces using Lie advection.
Zou, Guangyu; Hu, Jiaxi; Gu, Xianfeng; Hua, Jing
2011-12-01
Parameterization of complex surfaces constitutes a major means of visualizing highly convoluted geometric structures as well as other properties associated with the surface. It also provides users with the ability to navigate, orient, and focus on regions of interest within a global view and to overcome the occlusions to inner concavities. In this paper, we propose a novel area-preserving surface parameterization method which is rigorous in theory, moderate in computation, yet easily extendable to surfaces of non-disc and closed-boundary topologies. Starting from the distortion induced by an initial parameterization, an area-restoring diffeomorphic flow is constructed as a Lie advection of differential 2-forms along the manifold, which yields equality of the area elements between the domain and the original surface at its final state. Existence and uniqueness of the result are assured through an analytical derivation. Based upon a triangulated surface representation, we also present an efficient algorithm in line with discrete differential modeling. As an exemplar application, the utilization of this method for the effective visualization of brain cortical imaging modalities is presented. Compared with conformal methods, our method can reveal more subtle surface patterns in a quantitative manner. It therefore provides a competitive alternative to the existing parameterization techniques for better surface-based analysis in various scenarios.
Compositional Space Parameterization Approach for Reservoir Flow Simulation
NASA Astrophysics Data System (ADS)
Voskov, D.
2011-12-01
Phase equilibrium calculations are the most challenging part of a compositional flow simulation. For every gridblock and at every time step, the number of phases and their compositions must be computed for the given overall composition, temperature, and pressure conditions. The conventional approach used in the petroleum industry is based on performing a phase-stability test, and solving the fugacity constraints together with the coupled nonlinear flow equations when the gridblock has more than one phase. The multi-phase compositional space can be parameterized in terms of tie-simplexes. For example, a tie-triangle can be used such that its interior encloses the three-phase region, and the edges represent the boundaries with specific two-phase regions. The tie-simplex parameterization can be performed for pressure, temperature, and overall composition. The challenge is that all of these parameters can change considerably during the course of a simulation. It is possible to prove that the tie-simplexes change continuously with respect to pressure, temperature, and overall composition. The continuity of the tie-simplex parameterization allows for interpolation using discrete representations of the tie-simplex space. For variations of composition, a projection to the nearest tie-simplex is used, and if the tie-simplex is within a predefined tolerance, it can be used directly to identify the phase state of this composition. In general, our parameterization approach can be seen as a generalization of the negative-flash idea to systems with two or more phases. The theory of dispersion-free compositional displacements, as well as computational experience with general-purpose compositional flow simulation, indicates that the displacement path in compositional space is determined by a limited number of tie-simplexes. Therefore, only a few tie-simplex tables are required to parameterize the entire displacement. The small number of tie-simplexes needed in the course of a simulation motivates
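The two-phase building block of the negative-flash idea is the Rachford-Rice equation, solved for a vapor fraction V that is allowed outside [0, 1]; extrapolating the tie-line this way is what lets single-phase compositions be associated with a tie-simplex. A minimal sketch (bisection between the asymptotes; a production simulator would use a safeguarded Newton method):

```python
def rachford_rice(z, K, tol=1e-12, max_iter=200):
    """Two-phase 'negative flash': solve

        sum_i z_i (K_i - 1) / (1 + V (K_i - 1)) = 0

    for the vapor fraction V by bisection between the poles
    1/(1 - max(K)) and 1/(1 - min(K)).  V may lie outside [0, 1],
    which extends the tie-line through single-phase compositions.

    z : overall mole fractions, K : equilibrium ratios (K-values),
    with max(K) > 1 > min(K) assumed.
    """
    lo = 1.0 / (1.0 - max(K)) + 1e-12   # f -> +inf here
    hi = 1.0 / (1.0 - min(K)) - 1e-12   # f -> -inf here

    def f(V):
        return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0))
                   for zi, Ki in zip(z, K))

    for _ in range(max_iter):           # f is monotone decreasing in V
        V = 0.5 * (lo + hi)
        if f(V) > 0.0:
            lo = V
        else:
            hi = V
        if hi - lo < tol:
            break
    return V
```

For the symmetric binary z = (0.5, 0.5), K = (2, 0.5) the root is V = 0.5; compositions with V < 0 or V > 1 are single-phase but still lie on a well-defined (extrapolated) tie-line.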
Spline Parameterization of Complex Planar Domains for Isogeometric Analysis
NASA Astrophysics Data System (ADS)
Gondegaon, Sangamesh; Voruganti, Hari K.
2017-03-01
Isogeometric Analysis (IGA) involves unification of modelling and analysis by adopting the same basis functions (splines) for both. Hence, a spline-based parametric model is the starting step for IGA. Representing a complex domain using a parametric geometric model is a challenging task. The parameterization problem can be defined as finding an optimal set of control points of a B-spline model for exact domain modelling. The quality of the parameterization also has a significant effect on IGA. Finding the B-spline control points for any given domain which give accurate results is still an open issue. In this paper, a new planar B-spline parameterization technique, based on a domain mapping method, is proposed. The first step of the methodology is to map an input (non-convex) domain onto a unit circle (convex) with the use of harmonic functions. The unique properties of harmonic functions, the global minimum principle and the mean value property, ensure that the mapping is bijective and has no self-intersections. The next step is to map the unit circle to a unit square to make it apt for B-spline modelling. The square domain is re-parameterized by using the conventional centripetal method. Once the domain is properly parameterized, the required control points are computed by solving the B-spline tensor product equation. The proposed methodology is validated by applying the developed B-spline model to a static structural analysis of a plate using isogeometric analysis. Different domains are modelled to show the effectiveness of the given technique. It is observed that the proposed method is versatile and computationally efficient.
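The first mapping step can be sketched with a discrete harmonic (Tutte-style) map: pin the boundary vertices on the unit circle and relax each interior vertex to the average of its neighbors, a direct use of the mean value property cited above. This uniform-weight Jacobi iteration is a simplified stand-in for solving the harmonic equations, not the authors' exact procedure:

```python
import math

def harmonic_map_to_disk(neighbors, boundary_order, iters=2000):
    """Map a triangulated planar domain onto the unit disk.

    neighbors      : dict, vertex id -> list of adjacent vertex ids
    boundary_order : the boundary loop, in traversal order
    Returns dict vertex id -> (x, y) in the unit disk.

    Boundary vertices are pinned uniformly on the unit circle; each
    interior vertex is repeatedly replaced by the mean of its
    neighbors (discrete mean value property), converging to the
    discrete harmonic map.
    """
    pos = {}
    n = len(boundary_order)
    for k, v in enumerate(boundary_order):        # pin the boundary
        t = 2.0 * math.pi * k / n
        pos[v] = (math.cos(t), math.sin(t))
    interior = [v for v in neighbors if v not in pos]
    for v in interior:
        pos[v] = (0.0, 0.0)                       # initial guess
    for _ in range(iters):                        # Jacobi relaxation
        for v in interior:
            xs = [pos[u][0] for u in neighbors[v]]
            ys = [pos[u][1] for u in neighbors[v]]
            pos[v] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return pos
```

With uniform weights the fixed point is Tutte's barycentric embedding, which is guaranteed injective for a triangulated disk with convex boundary, mirroring the bijectivity argument in the abstract.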
NASA Astrophysics Data System (ADS)
Liu, J.; Chen, Z.; Horowitz, L. W.; Carlton, A. M. G.; Fan, S.; Cheng, Y.; Ervens, B.; Fu, T. M.; He, C.; Tao, S.
2014-12-01
Secondary organic aerosols (SOA) have a profound influence on air quality and climate, but large uncertainties exist in modeling SOA on the global scale. In this study, five SOA parameterization schemes, including a two-product model (TPM), a volatility basis set (VBS), and three cloud SOA schemes (Ervens et al. (2008, 2014), Fu et al. (2008), and He et al. (2013)), are implemented in the global chemical transport model (MOZART-4). For each scheme, model simulations are conducted with identical boundary and initial conditions. The VBS scheme produces the highest global annual SOA production (close to 35 Tg·y-1), followed by the three cloud schemes (26-30 Tg·y-1) and TPM (23 Tg·y-1). Though sharing a similar partitioning theory with the TPM scheme, the VBS approach simulates the chemical aging of multiple generations of VOC oxidation products, resulting in a much larger SOA source, particularly from aromatic species, over Europe, the Middle East, and eastern America. The formation of SOA in VBS, which represents the net partitioning of semi-volatile organic compounds from the vapor to the condensed phase, is highly sensitive to the aging and wet removal processes of vapor-phase organic compounds. The production of SOA from cloud processes (SOAcld) is constrained by the coincidence of liquid cloud water and water-soluble organic compounds. Therefore, all cloud schemes resolve a fairly similar spatial pattern over the tropical and mid-latitude continents. The spatiotemporal diversity among SOA parameterizations is largely driven by differences in precursor inputs. Therefore, a deeper understanding of the evolution, wet removal, and phase partitioning of semi-volatile organic compounds, particularly above remote land and oceanic areas, is critical to better constrain the global-scale distribution and related climate forcing of secondary organic aerosols.
Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.
2009-01-01
In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. PMID:22399956
Electron scattering and mobility in a quantum well heterolayer
NASA Astrophysics Data System (ADS)
Arora, Vijay K.; Naeem, Athar
1984-11-01
The theory of electron-lattice scattering is analyzed for a quantum-well heterolayer under the conditions that the de Broglie wavelength of an electron is comparable to or larger than the width of the layer, and donor impurities are removed to an adjacent nonconducting layer. The mobility due to isotropic scattering by acoustic phonons, point defects, and alloy scattering is found to increase, whereas that due to polar-optic phonon scattering is found to decrease, with increasing thickness.
Parameterizing Coefficients of a POD-Based Dynamical System
NASA Technical Reports Server (NTRS)
Kalb, Virginia L.
2010-01-01
A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation, followed by POD, followed by Galerkin projection to a dynamical system, has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter
eblur/dust: a modular python approach for dust extinction and scattering
NASA Astrophysics Data System (ADS)
Corrales, Lia
2016-03-01
I will present a library of python codes -- github.com/eblur/dust -- which calculate dust scattering and extinction properties from the IR to the X-ray. The modular interface allows for custom defined dust grain size distributions, optical constants, and scattering physics. These codes are currently undergoing a major overhaul to include multiple scattering effects, parallel processing, parameterized grain size distributions beyond power law, and optical constants for different grain compositions. I use eblur/dust primarily to study dust scattering images in the X-ray, but they may be extended to applications at other wavelengths.
A wave-based model for the marginal ice zone including a floe breaking parameterization
NASA Astrophysics Data System (ADS)
Dumont, D.; Kohout, A.; Bertino, L.
2011-04-01
The marginal ice zone (MIZ) is the boundary between the open ocean and ice-covered seas, where sea ice is significantly affected by the onslaught of ocean waves. Waves are responsible for the breakup of ice floes and determine the extent of the MIZ and floe size distribution. When the ice cover is highly fragmented, its behavior is qualitatively different from that of pack ice with large floes. Therefore, it is important to incorporate wave-ice interactions into sea ice-ocean models. In order to achieve this goal, two effects are considered: the role of sea ice as a dampener of wave energy and the wave-induced breakup of ice floes. These two processes act in concert to modify the incident wave spectrum and determine the main properties of the MIZ. A simple but novel parameterization for floe breaking is derived by considering alternatively ice as a flexible and rigid material and by using current estimates of ice critical flexural strain and strength. This parameterization is combined with a wave scattering model in a one-dimensional numerical framework to evaluate the floe size distribution and the extent of the MIZ. The model predicts a sharp transition between fragmented sea ice and the central pack, thus providing a natural definition for the MIZ. Reasonable values are found for the extent of the MIZ given realistic initial and boundary conditions. The numerical setting is commensurate with typical ice-ocean models, with the future implementation into two-dimensional sea ice models in mind.
NASA Astrophysics Data System (ADS)
Mitchell, D. L.
2006-12-01
Sometimes deep physical insights can be gained through the comparison of two theories of light scattering. Comparing van de Hulst's anomalous diffraction approximation (ADA) with Mie theory yielded insights on the behavior of the photon tunneling process that resulted in the modified anomalous diffraction approximation (MADA). (Tunneling is the process by which radiation just beyond a particle's physical cross-section may undergo large angle diffraction or absorption, contributing up to 40% of the absorption when wavelength and particle size are comparable.) Although this provided a means of parameterizing the tunneling process in terms of the real index of refraction and size parameter, it did not predict the efficiency of the tunneling process, where an efficiency of 100% is predicted for spheres by Mie theory. This tunneling efficiency, Tf, depends on particle shape and ranges from 0 to 1.0, with 1.0 corresponding to spheres. Similarly, by comparing absorption efficiencies predicted by the Finite Difference Time Domain Method (FDTD) with efficiencies predicted by MADA, Tf was determined for nine different ice particle shapes, including aggregates. This comparison confirmed that Tf is a strong function of ice crystal shape, including the aspect ratio when applicable. Tf was lowest (< 0.36) for aggregates and plates, and largest (> 0.9) for quasi-spherical shapes. A parameterization of Tf was developed in terms of (1) ice particle shape and (2) mean particle size regarding the large mode (D > 70 μm) of the ice particle size distribution. For the small mode, Tf is only a function of ice particle shape. When this Tf parameterization is used in MADA, absorption and extinction efficiency differences between MADA and FDTD are within 14% over the terrestrial wavelength range 3-100 μm for all size distributions and most crystal shapes likely to be found in cirrus clouds. Using hyperspectral radiances, it is demonstrated that Tf can be retrieved from ice clouds. Since Tf
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Parameterized neural networks for high-energy physics
NASA Astrophysics Data System (ADS)
Baldi, Pierre; Cranmer, Kyle; Faucett, Taylor; Sadowski, Peter; Whiteson, Daniel
2016-05-01
We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results.
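The idea can be sketched end to end with a linear stand-in for the network: train on a measured feature plus the physics parameter θ at θ = 2 and θ = 4 only, then classify at the untrained intermediate value θ = 3. The Gaussian toy data and logistic model below are illustrative, not the paper's benchmarks:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(theta, n):
    """Signal: x ~ N(theta, 1); background: x ~ N(0, 1). The physics
    parameter theta is appended as an input feature (the key idea)."""
    x_sig = rng.normal(theta, 1.0, n)
    x_bkg = rng.normal(0.0, 1.0, n)
    X = np.column_stack([np.concatenate([x_sig, x_bkg]),
                         np.full(2 * n, theta)])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

# Train at theta = 2 and theta = 4 only
Xs, ys = zip(*(make_data(t, 1000) for t in (2.0, 4.0)))
X, y = np.vstack(Xs), np.concatenate(ys)

# Plain logistic regression by gradient descent stands in for the deep network
w, b = np.zeros(2), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

# Evaluate at the intermediate, never-trained value theta = 3
Xt, yt = make_data(3.0, 1000)
pt = 1.0 / (1.0 + np.exp(-(Xt @ w + b)))
accuracy = np.mean((pt > 0.5) == yt)
```

Because θ enters as an input, the single classifier interpolates its decision boundary between the trained parameter points instead of requiring one classifier per θ.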
Parameterized Complexity of k-Anonymity: Hardness and Tractability
NASA Astrophysics Data System (ADS)
Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri
The problem of publishing personal data without giving up privacy is becoming increasingly important. A precise formalization that has been recently proposed is k-anonymity, where the rows of a table are partitioned into clusters of size at least k and all rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is hard even when the stored values are over a binary alphabet or the table consists of a bounded number of columns. In this paper we study how the complexity of the problem is influenced by different parameters. First we show that the problem is W[1]-hard when parameterized by the value of the solution (and k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the number of columns and the number of different values in any column.
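The k-anonymity condition and the suppression objective are easy to state in code. A minimal sketch, where the toy table and the '*' marker for a suppressed entry are illustrative:

```python
from collections import Counter

def is_k_anonymous(rows, k):
    """A table is k-anonymous if every row's tuple occurs at least k times,
    i.e. each row is indistinguishable from at least k-1 others."""
    counts = Counter(tuple(r) for r in rows)
    return all(c >= k for c in counts.values())

def suppression_cost(rows):
    """Number of suppressed entries ('*'), the quantity to be minimized."""
    return sum(cell == '*' for row in rows for cell in row)

# Toy table: two clusters of size 2, four suppressed entries in total
table = [('M', '1980', '*'),
         ('M', '1980', '*'),
         ('F', '*', 'NY'),
         ('F', '*', 'NY')]
```

The hardness results concern finding the cheapest suppressions that make `is_k_anonymous` true; checking a given solution, as here, is trivial.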
Invariant box-parameterization of neutrino oscillations
Weiler, T.J.; Wagner, D.
1998-10-01
The model-independent "box" parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing-matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n ≥ 3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing matrix elements. © 1998 American Institute of Physics.
IR Optics Measurement with Linear Coupling's Action-Angle Parameterization
Luo, Y.; Bai, M.; Pilat, R.; Satogata, T.; Trbojevic, D.
2005-05-16
A parameterization of linear coupling in action-angle coordinates is convenient for analytical calculations and interpretation of turn-by-turn (TBT) beam position monitor (BPM) data. We demonstrate how to use this parameterization to extract the Twiss and coupling parameters in interaction regions (IRs), using BPMs on each side of the long IR drift region. Example TBT BPM data were acquired at the Relativistic Heavy Ion Collider (RHIC), using an AC dipole to excite a single eigenmode. Besides the full treatment, a fast estimate of beta*, the beta function at the interaction point (IP), is provided, along with the phase advance between these BPMs. We also calculate and measure the waist of the beta function and the local optics.
On Parameterization of the Global Electric Circuit Generators
NASA Astrophysics Data System (ADS)
Slyunyaev, N. N.; Zhidkov, A. A.
2016-08-01
We consider the problem of generator parameterization in the global electric circuit (GEC) models. The relationship between the charge density and external current density distributions inside a thundercloud is studied using a one-dimensional description and a three-dimensional GEC model. It is shown that drastic conductivity variations in the vicinity of the cloud boundaries have a significant impact on the structure of the charge distribution inside the cloud. Certain restrictions on the charge density distribution in a realistic thunderstorm are found. The possibility to allow for conductivity inhomogeneities in the thunderstorm regions by introducing an effective external current density is demonstrated. Replacement of realistic thunderstorms with equivalent current dipoles in the GEC models is substantiated, an equation for the equivalent current is obtained, and the applicability range of this equation is analyzed. Relationships between the main GEC characteristics under variable parameterization of GEC generators are discussed.
Toward Stochastic Parameterization Based on Profiler Measurements of Vertical Velocity
NASA Astrophysics Data System (ADS)
Penland, C.; Koepke, A.; Williams, C. R.
2016-12-01
Parameterizations in General Circulation Models (GCMs) that account for uncertainty due to both unresolved, sub-grid scale processes and errors in assumptions made in the formulation of the parameterization itself are needed to represent the full probability distribution function of resolved processes in the model. In this study, we develop a probabilistic description of vertical velocity based on profiler data collected at Darwin during the time period November 2005 to February 2006. Data collected at one-minute resolution are analyzed at the one-minute, ten-minute and hourly timescales, including fits to the Stochastically-Generated Skew (SGS) distributions. The SGS distributions are associated with linear dynamics, including correlated additive and multiplicative noise. As expected, we find that the stochastic approximation to nonlinear dynamics becomes more appropriate as the timescale is increased by coarse-graining.
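An SGS-type distribution arises from linear damping driven by correlated additive and multiplicative noise. A minimal Euler-Maruyama sketch follows; the parameter values are illustrative, and stationarity requires the damping to dominate the multiplicative noise (2λ > g²):

```python
import numpy as np

def simulate_sgs(lam=1.0, g=0.5, b=1.0, dt=0.01, n=200_000, seed=0):
    """Euler-Maruyama integration of dx = -lam*x dt + (g*x + b) dW:
    linear damping with correlated additive and multiplicative noise,
    whose stationary density is a skewed (SGS-type) distribution."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    dw = rng.normal(0.0, np.sqrt(dt), n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] - lam * x[i] * dt + (g * x[i] + b) * dw[i]
    return x

x = simulate_sgs()
skew = np.mean((x - x.mean()) ** 3) / np.std(x) ** 3
```

The stationary skewness carries the sign of g·b, so the family can represent the skewed vertical-velocity distributions seen in the profiler data while remaining a linear stochastic model.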
Development and Evaluation of a Stochastic Cloud-radiation Parameterization
NASA Astrophysics Data System (ADS)
Veron, D. E.; Secora, J.; Foster, M.
2004-12-01
Previous studies have shown that a stochastic cloud-radiation model accurately represents the domain-averaged shortwave fluxes when compared to observations. Using continuously sampled cloud property observations from the three Atmospheric Radiation Measurement (ARM) Program's Clouds and Radiation Testbed (CART) sites, we run a multiple-layer stochastic model and compare the results to those of the single-layer version of the model used in previous studies. In addition, we compare both to plane-parallel model output and independent observations. We will use these results to develop a shortwave cloud-radiation parameterization that will incorporate the influence of the stochastic approach on the calculated radiative fluxes. Initial results using this resulting parameterization in a single-column model will be shown.
An intracloud lightning parameterization scheme for a storm electrification model
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.; Wu, Gang; Farley, Richard D.
1992-01-01
The parameterization of an intracloud lightning discharge has been implemented in the present storm electrification model. The initiation, propagation direction, and termination of the discharge are computed using the magnitude and direction of the electric field vector as the determining criteria. The charge redistribution due to the lightning is approximated assuming the channel to be an isolated conductor with zero net charge over its entire length. Various simulations involving differing amounts of charge transferred and distribution of charges have been done. Values of charge transfer, dipole moment change, and electrical energy dissipation computed in the model are consistent with observations. The effects of the lightning-produced ions on the hydrometeor charges and electric field components depend strongly on the amount of charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge shows favorable agreement. Limitations of the parameterization scheme are discussed.
A parameterization of effective soil temperature for microwave emission
NASA Technical Reports Server (NTRS)
Choudhury, B. J.; Schmugge, T. J.; Mo, T. (Principal Investigator)
1981-01-01
A parameterization of effective soil temperature is discussed which, when multiplied by the emissivity, gives the brightness temperature in terms of surface (T sub o) and deep (T sub infinity) soil temperatures as T = T sub infinity + C (T sub o - T sub infinity). A coherent radiative transfer model and a large data base of observed soil moisture and temperature profiles are used to calculate the best-fit value of the parameter C. For wavelengths of 2.8, 6.0, 11.0, 21.0 and 49.0 cm, the C values are, respectively, 0.802 ± 0.006, 0.667 ± 0.008, 0.480 ± 0.010, 0.246 ± 0.009, and 0.084 ± 0.005. The parameterized equation gives results which are generally within one or two percent of the exact values.
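The parameterized equation is simple enough to state directly. A sketch using the central best-fit values quoted in the abstract (error bars dropped; the emissivity factor follows the abstract's definition):

```python
# Central best-fit C per wavelength (cm), as quoted in the abstract
C_BY_WAVELENGTH_CM = {2.8: 0.802, 6.0: 0.667, 11.0: 0.480,
                      21.0: 0.246, 49.0: 0.084}

def effective_soil_temperature(t_surf, t_deep, wavelength_cm):
    """T = T_inf + C * (T_0 - T_inf). Longer wavelengths sense deeper soil,
    so the surface weighting C shrinks with wavelength."""
    c = C_BY_WAVELENGTH_CM[wavelength_cm]
    return t_deep + c * (t_surf - t_deep)

def brightness_temperature(emissivity, t_surf, t_deep, wavelength_cm):
    """Brightness temperature = emissivity times the effective temperature."""
    return emissivity * effective_soil_temperature(t_surf, t_deep, wavelength_cm)
```

For example, with a 300 K surface over 290 K deep soil, the 2.8 cm channel is weighted strongly toward the surface while the 49 cm channel is dominated by the deep temperature.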
Parameterizing the Marine Silicon Cycle: Effects on Modern Ocean Biogeochemistry
NASA Astrophysics Data System (ADS)
Franklin, M. M.; Winguth, A. M.
2008-12-01
The oceanic biogeochemical silicon cycle plays a significant role in the climate system, due to its strong influence on marine planktonic production. Silicic acid, along with other nutrients, controls production of siliceous plankton, which compete with calcareous plankton. Through this competition, and the biological carbon pump, these organisms have an important impact on the carbon cycle. In this study, parameterizations of the marine silicon cycle are added to a biogeochemical model derived from the Ocean Carbon Model Intercomparison Project (OCMIP-2) biotic carbon model. This is coupled to a model from the Parallel Ocean Program (POP), which is the active ocean component of the Community Climate System Model version 3.0. Three parameterizations of the silicon cycle are tested in three different cases. Each case is initiated with observed modern silicic acid concentrations, provided by the World Ocean Atlas (2005), and is integrated for 1000 model years. In all cases, biogenic silica (opal) production is controlled by light and silicic acid concentrations. In case 1, opal production is not limited by any additional nutrients. In case 2, opal production is limited by phosphate, the original limiting nutrient in OCMIP. In case 3, opal production is limited by iron, which has been included as a limiting nutrient within OCMIP for its incorporation in CCSM POP. The parameterization that yields output closest to observed data is used in a fourth case, in which the silicon cycle modulates the carbon cycle. In this formulation, opal production controls production of calcium carbonate, to represent competition between siliceous and calcareous organisms. This impacts the carbon cycle within the model through changes in dissolved inorganic carbon (DIC) and alkalinity. This case is compared to a control run, which does not contain a silicon cycle parameterization, and to observed data.
Parameterization of Outgoing Infrared Radiation Derived from Detailed Radiative Calculations.
NASA Astrophysics Data System (ADS)
Thompson, Starley L.; Warren, Stephen G.
1982-12-01
State-of-the-art radiative transfer models can calculate outgoing infrared (IR) irradiance at the top of the atmosphere (F) to an accuracy suitable for climate modeling given the proper atmospheric profiles of temperature and absorbing gases and aerosols. However, such sophisticated methods are computationally time consuming and ill-suited for simple vertically-averaged models or diagnostic studies. The alternative of empirical expressions for F is plagued by observational uncertainty which forces the functional forms to be very simple. We develop a parameterization of climatological F by curve-fitting the results of a detailed radiative transfer model. The parameterization comprises clear-sky and cloudy-sky terms. Only two parameters are used to predict clear-sky outgoing IR irradiance: surface air temperature (Ts) and 0-12 km height-mean relative humidity (RH). With this choice of parameters (in particular, the use of RH instead of precipitable water) the outgoing IR irradiance can be estimated without knowledge of the detailed temperature profile or average lapse rate. Comparisons between the clear-sky parameterization and detailed model show maximum errors of 10 W m-2 with average errors of only a few watts per square meter. Single-layer `black' clouds are found to reduce the outgoing IR irradiance (relative to clear-sky values) as a function of Ts - Tc, Tc, and RH, where Tc is the cloud-top temperature. Errors in the parameterization of the cloudy-sky term are comparable to those of the clear-sky term.
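The curve-fitting step can be sketched with a linear surrogate: generate "detailed model" output as a function of Ts and RH, then least-squares fit the two-predictor clear-sky form. The linear functional form and coefficients below are illustrative; the actual parameterization uses more elaborate fits:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "detailed model" output: F = a + b*(Ts - 273.15) + c*RH,
# plus noise standing in for the detailed model's residual structure
a_true, b_true, c_true = 260.0, 2.0, -60.0
ts = rng.uniform(230.0, 310.0, 500)   # surface air temperature (K)
rh = rng.uniform(0.2, 0.9, 500)       # 0-12 km mean relative humidity
f = a_true + b_true * (ts - 273.15) + c_true * rh + rng.normal(0.0, 2.0, 500)

# Least-squares fit of the two-predictor clear-sky form
A = np.column_stack([np.ones_like(ts), ts - 273.15, rh])
coef, *_ = np.linalg.lstsq(A, f, rcond=None)
```

The fitted coefficients recover the generating values, which is the sense in which a simple parameterization can stand in for the detailed model over its sampled range.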
Spatial pattern oriented evaluation of a highly parameterized inversion problem
NASA Astrophysics Data System (ADS)
Danapour, M.; Stisen, S.; Højberg, A. L.; Koch, J.; Mendiguren González, G.
2016-12-01
The overall objective of this study is to develop a new model calibration and evaluation framework by combining distributed model parameterization and regularization with new types of objective functions focusing on spatial patterns rather than individual points or catchment scale features. Transient coupled surface-subsurface models are usually complex and contain a large amount of spatio-temporal information. In the traditional calibration approach, model parameters are adjusted against only a few spatially aggregated observations of discharge or individual point observations of groundwater head. However, this approach does not enable an assessment of spatially explicit predictive model capabilities at the intermediate scale that is relevant for many applications. Pilot points, as an alternative to classical parameterization approaches, introduce greater flexibility when calibrating heterogeneous systems without neglecting expert knowledge. However, highly parameterized optimizations of complex distributed hydrological models at catchment scale are challenging due to the computational burden that comes with them. In this study, the physically-based coupled surface-subsurface model MIKE SHE is calibrated for a 4,700 km² area of central Jylland (Denmark) that is characterized by heterogeneous geology and considerable groundwater flow across topographical catchment boundaries. The calibration of the distributed conductivity fields is carried out with both a traditional zone-based parameterization approach and a pilot point-based approach, implemented using the PEST parameter estimation tool. To enhance the spatial predictability, observation-based maps of e.g. groundwater heads, remotely sensed actual evapotranspiration and discharge-rainfall ratio patterns will define the evaluation targets. In order to extract pattern information we will focus on bias-insensitive spatial performance metrics in the evaluation of the model.
Improved CART Data Products and 6cmm Parameterization for Clouds
Kenneth Sassen
2004-08-23
Reviewed here is the history of the PI's participation in the Atmospheric Radiation Measurement (ARM) Program, with particular emphasis on research performed between 1999 and 2002, before the PI moved from the University of Utah to the University of Alaska, Fairbanks. The research results are divided into the following areas: IOP research, remote sensing algorithm development using datasets and models, cirrus cloud and SCM/GCM parameterizations, student training, and publications.
A framework for understanding drag parameterizations for coral reefs
NASA Astrophysics Data System (ADS)
Rosman, Johanna H.; Hench, James L.
2011-08-01
In a hydrodynamic sense, a coral reef is a complex array of obstacles that exerts a net drag force on water moving over the reef. This drag is typically parameterized in ocean circulation models using drag coefficients (CD) or roughness length scales (z0); however, published CD for coral reefs span two orders of magnitude, posing a challenge to predictive modeling. Here we examine the reasons for the large range in reported CD and assess the limitations of using CD and z0 to parameterize drag on reefs. Using a formal framework based on the 3-D spatially averaged momentum equations, we show that CD and z0 are functions of canopy geometry and velocity profile shape. Using an idealized two-layer model, we illustrate that CD can vary by more than an order of magnitude for the same geometry and flow depending on the reference velocity selected and that differences in definition account for much of the range in reported CD values. Roughness length scales z0 are typically used in 3-D circulation models to adjust CD for reference height, but this relies on spatially averaged near-bottom velocity profiles being logarithmic. Measurements from a shallow backreef indicate that z0 determined from fits to point measurements of velocity profiles can be very different from z0 required to parameterize spatially averaged drag. More sophisticated parameterizations for drag and shear stresses are required to simulate 3-D velocity fields over shallow reefs; in the meantime, we urge caution when using published CD and z0 values for coral reefs.
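The dependence of CD on the reference height for a fixed roughness length follows directly from the logarithmic-profile relation CD = [κ / ln(z_ref/z0)]². A sketch (the z0 value is illustrative, and real reef canopies often violate the log-profile assumption, which is part of the paper's point):

```python
import math

KAPPA = 0.41  # von Karman constant

def cd_from_z0(z_ref, z0):
    """Drag coefficient implied by a log-law profile at reference height z_ref."""
    return (KAPPA / math.log(z_ref / z0)) ** 2

def z0_from_cd(z_ref, cd):
    """Inverse relation: roughness length from a CD quoted at height z_ref."""
    return z_ref * math.exp(-KAPPA / math.sqrt(cd))

# Same roughness (z0 = 0.05 m, illustrative), two reference heights
cd_1m = cd_from_z0(1.0, 0.05)
cd_10m = cd_from_z0(10.0, 0.05)
```

The severalfold difference between the two reference heights comes from the definition alone, before any physical difference between reefs enters, which is one reason published CD values span such a wide range.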
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2016-04-01
In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2017-04-01
In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference Williams PD, Howe NJ, Gregory JM, Smith RS, and Joshi MM (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781. http://dx.doi.org/10
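The injected perturbation can be sketched as AR(1) "red" noise with a prescribed amplitude and decorrelation time, the two knobs varied across the four-member suite. The discretization and parameter values below are illustrative, not the model's actual noise generator:

```python
import numpy as np

def red_noise(n, sigma, tau, dt, seed=0):
    """AR(1) ('red') noise with standard deviation sigma and decorrelation
    time tau: the kind of zero-mean perturbation added to the ocean
    temperature tendency in a stochastic eddy scheme."""
    phi = np.exp(-dt / tau)                 # lag-1 autocorrelation
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma * np.sqrt(1.0 - phi ** 2), n)
    r = np.empty(n)
    r[0] = rng.normal(0.0, sigma)           # start from the stationary density
    for i in range(1, n):
        r[i] = phi * r[i - 1] + eps[i]
    return r

r = red_noise(100_000, sigma=1.0, tau=10.0, dt=1.0)
```

The scaling of the innovation variance by (1 - φ²) keeps the marginal standard deviation at sigma regardless of tau, so amplitude and decorrelation time can be varied independently, as in the sensitivity suite.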
New parameterizations and sensitivities for simple climate models
NASA Technical Reports Server (NTRS)
Graves, Charles E.; Lee, Wan-Ho; North, Gerald R.
1993-01-01
This paper presents a reexamination of the earth radiation budget parameterization of energy balance climate models in light of data collected over the last 12 years. The study consists of three parts: (1) an examination of the infrared terrestrial radiation to space and its relationship to the surface temperature field on time scales from 1 month to 10 years; (2) an examination of the albedo of the earth with special attention to the seasonal cycle of snow and clouds; (3) solutions for the seasonal cycle using the new parameterizations with special attention to changes in sensitivity. While the infrared parameterization is not dramatically different from that used in the past, the albedo in the new data suggests that a stronger latitude dependence be employed. After retuning the diffusion coefficient, the simulation results for the present climate generally show only a slight dependence on the new parameters. Also, the sensitivity parameter for the model is still about the same (1.25 C for a 1 percent increase of solar constant) for the linear models and for the nonlinear models that include a seasonal snow line albedo feedback (1.34 C). One interesting feature is that a clear-sky planet with a snow line albedo feedback has a significantly higher sensitivity (2.57 C) due to the absence of smoothing normally occurring in the presence of average cloud cover.
Data-driven RBE parameterization for helium ion beams.
Mairani, A; Magro, G; Dokic, I; Valle, S M; Tessonnier, T; Galm, R; Ciocca, M; Parodi, K; Ferrari, A; Jäkel, O; Haberer, T; Pedroni, P; Böhlen, T T
2016-01-21
Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose, and the tissue-specific parameter (α/β)ph of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the RBEα = αHe/αph and Rβ = βHe/βph ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival (RBE10) are compared with the experimental ones. Pearson's correlation coefficients were, respectively, 0.85 and 0.84, confirming the soundness of the introduced approach. Moreover, due to the lack of experimental data at low LET, clonogenic experiments have been performed irradiating the A549 cell line with (α/β)ph = 5.4 Gy at the entrance of a 56.4 MeV/u helium beam at the Heidelberg Ion Beam Therapy Center. The proposed parameterization reproduces the measured cell survival within the experimental uncertainties. An RBE formula, which depends only on dose, LET, and (α/β)ph as input parameters, is proposed, allowing a straightforward implementation in a TP system.
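The isoeffective-dose logic behind such an RBE formula can be sketched in the LQ framework. The `rbe_alpha` and `r_beta` inputs below are hypothetical stand-ins for the paper's LET-dependent fits, and the α, β values are illustrative:

```python
import math

def rbe_lq(d_he, alpha_ph, beta_ph, rbe_alpha, r_beta):
    """RBE of a helium dose d_he (Gy) in the linear-quadratic model.
    rbe_alpha = alpha_He/alpha_ph and r_beta = beta_He/beta_ph stand in
    for the paper's LET-dependent fits (values used below are hypothetical)."""
    alpha_he = rbe_alpha * alpha_ph
    beta_he = r_beta * beta_ph
    effect = alpha_he * d_he + beta_he * d_he ** 2       # -ln(survival)
    # photon dose giving the same effect: alpha_ph*D + beta_ph*D^2 = effect
    d_ph = (-alpha_ph + math.sqrt(alpha_ph ** 2 + 4.0 * beta_ph * effect)) / (2.0 * beta_ph)
    return d_ph / d_he

# (alpha/beta)_ph = 5.4 Gy as for the A549 line; alpha_ph and beta_ph are
# chosen only to match that ratio, rbe_alpha and r_beta are illustrative
rbe = rbe_lq(d_he=2.0, alpha_ph=0.54, beta_ph=0.1, rbe_alpha=1.5, r_beta=1.0)
```

A sanity check of the construction: with rbe_alpha = r_beta = 1 the helium and photon responses coincide and the RBE is exactly 1.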
Stochastic parameterization for deep convection in NCAR CAM5
NASA Astrophysics Data System (ADS)
Wang, Yong; Zhang, Guangjun; Craig, George
2017-04-01
Most convective parameterization schemes in current global climate models are deterministic. As the model resolution increases, the stochastic behavior of convection becomes important. In this study, the Plant-Craig (PC) stochastic convective parameterization scheme is implemented into the NCAR Community Atmosphere Model CAM5 to couple with the Zhang-McFarlane (ZM) deterministic convection scheme. To evaluate its effect, simulations were conducted to compare with the standard ZM deterministic convection scheme. Results show that the PC stochastic parameterization alleviates many of the common biases in the climate simulation in CAM5, such as the double intertropical convergence zone (ITCZ), excessive drizzle, and weak intraseasonal variability of precipitation. The stochastic scheme also increases the large-scale precipitation because of more detrained water from convection, bringing it into better agreement with TRMM observations. Low cloud fraction simulated by the stochastic scheme is reduced, resulting in an improvement of shortwave cloud forcing (SWCF). Other climate mean states such as liquid water path (LWP) and precipitable water are also improved.
UQ-Guided Selection of Physical Parameterizations in Climate Models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Debusschere, B.; Ghan, S.; Rosa, D.; Bulaevskaya, V.; Anderson, G. J.; Chowdhary, K.; Qian, Y.; Lin, G.; Larson, V. E.; Zhang, G. J.; Randall, D. A.
2015-12-01
Given two or more parameterizations that represent the same physical process in a climate model, scientists are sometimes faced with difficult decisions about which scheme to choose for their simulations and analysis. These decisions are often based on subjective criteria, such as "which scheme is easier to use, is computationally less expensive, or produces results that look better?" Uncertainty quantification (UQ) and model selection methods can be used to objectively rank the performance of different physical parameterizations by increasing the preference for schemes that fit observational data better, while at the same time penalizing schemes that are overly complex or have excessive degrees-of-freedom. Following these principles, we are developing a perturbed-parameter UQ framework to assist in the selection of parameterizations for a climate model. Preliminary results will be presented on the application of the framework to assess the performance of two alternate schemes for simulating tropical deep convection (CLUBB-SILHS and ZM-trigmem) in the U.S. Dept. of Energy's ACME climate model. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, is supported by the DOE Office of Science through the Scientific Discovery Through Advanced Computing (SciDAC), and is released as LLNL-ABS-675799.
Optimizing EDMF parameterization for stratocumulus-topped boundary layer
NASA Astrophysics Data System (ADS)
Jones, C. R.; Bretherton, C. S.; Witek, M. L.; Suselj, K.
2014-12-01
We present progress in the development of an Eddy Diffusion / Mass Flux (EDMF) turbulence parameterization, with the goal of improving the representation of the cloudy boundary layer in NCEP's Global Forecast System (GFS), as part of a multi-institution Climate Process Team (CPT). Current GFS versions substantially under-predict cloud amount and cloud radiative impact over much of the globe, leading to large biases in the surface and top of atmosphere energy budgets. As part of the effort to correct these biases, the CPT is developing a new EDMF turbulence scheme for GFS, in which local turbulent mixing is represented by an eddy diffusion term while nonlocal shallow convection is represented by a mass flux term. The sum of both contributions provides the total turbulent flux. Our goal is for this scheme to more skillfully simulate cloud radiative properties without negatively impacting other measures of weather forecast skill. One particular challenge faced by an EDMF parameterization is to be able to handle stratocumulus regimes as well as shallow cumulus regimes. In order to isolate the behavior of the proposed EDMF parameterization and aid in its further development, we have implemented the scheme in a portable MATLAB single column model (SCM). We use this SCM framework to optimize the simulation of stratocumulus cloud top entrainment and boundary layer decoupling.
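The EDMF decomposition described above, in which the total turbulent flux is the sum of a local eddy-diffusion term and a nonlocal mass-flux term, can be sketched as follows; all numbers are illustrative:

```python
def edmf_flux(k_eddy, dphi_dz, mass_flux, phi_updraft, phi_mean):
    """Total turbulent flux of a scalar phi in an EDMF scheme: a local,
    down-gradient eddy-diffusion term plus a nonlocal mass-flux term
    carried by shallow-convective updrafts."""
    ed_term = -k_eddy * dphi_dz                      # local eddy diffusion
    mf_term = mass_flux * (phi_updraft - phi_mean)   # nonlocal mass flux
    return ed_term + mf_term

# Illustrative (hypothetical) numbers: K = 10 m^2/s, a potential-temperature
# gradient of -2 K/km, and an updraft 0.5 K warmer than the mean state
flux = edmf_flux(k_eddy=10.0, dphi_dz=-0.002, mass_flux=0.03,
                 phi_updraft=301.0, phi_mean=300.5)
```

In stratocumulus regimes the eddy-diffusion term typically dominates, while in shallow cumulus the mass-flux term carries most of the transport, which is why a single scheme handling both regimes is a stated challenge.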
Does convective aggregation need to be represented in cumulus parameterizations?
NASA Astrophysics Data System (ADS)
Tobin, Isabelle; Bony, Sandrine; Holloway, Chris E.; Grandpeix, Jean-Yves; Sèze, Geneviève; Coppin, David; Woolnough, Steve J.; Roca, Rémy
2013-12-01
Tropical deep convection exhibits a variety of levels of aggregation over a wide range of scales. Based on a multisatellite analysis, the present study shows that, at the mesoscale, different levels of aggregation are statistically associated with differing large-scale atmospheric states, despite similar convective intensity and large-scale forcings. The more aggregated the convection, the drier and less cloudy the atmosphere, the stronger the outgoing longwave radiation, and the lower the planetary albedo. This suggests that mesoscale convective aggregation has the potential to affect couplings between moisture and convection and between convection, radiation, and large-scale ascent. In so doing, aggregation may play a role in phenomena such as "hot spots" or the Madden-Julian Oscillation. These findings support the need for the representation of mesoscale organization in cumulus parameterizations; most parameterizations used in current climate models lack any such representation. The ability of a cloud system-resolving model to reproduce observed relationships suggests that such models may be useful to guide attempts at parameterizations of convective aggregation.
Inverse groundwater modeling with emphasis on model parameterization
NASA Astrophysics Data System (ADS)
Kourakos, George; Mantoglou, Aristotelis
2012-05-01
This study develops an inverse method aiming to circumvent the subjective decision regarding model parameterization and complexity in inverse groundwater modeling. The number of parameters is included as a decision variable along with parameter values. A parameterization based on B-spline surfaces (BSS) is selected to approximate transmissivity, and genetic algorithms are selected to perform error minimization. A transform based on linear least squares (LLS) is developed, so that different parameterizations may be combined by standard genetic algorithm operators. First, three applications, with isotropic, anisotropic, and zoned aquifer parameters, are examined in a single objective optimization problem and the estimated transmissivity is found to be near the true one. Interestingly, in the anisotropic case, the algorithm converged to a solution with an anisotropic distribution of control points. Next, a single objective optimization with regularization, penalizing complex models, is considered, and last, the problem is expressed in a multiobjective optimization framework (MOO), where the goals are simultaneous minimization of calibration error and model complexity. The result of MOO is a Pareto set of potential solutions where the user can examine the tradeoffs between calibration error and model complexity and select the most suitable model. By comparing calibration with prediction errors, it appears that the most promising models are the ones near a region where the rate of decrease of calibration error as model complexity increases drops (bend of the error curve). This is a result of practical interest in real inverse modeling applications.
Parameterization of aerosol scavenging due to atmospheric ionization
NASA Astrophysics Data System (ADS)
Tinsley, Brian A.; Zhou, Limin
2015-08-01
A new approach to parameterizing the modulation of aerosol scavenging by electric charges on particles and droplets gives improved accuracy and is applied over an extended range of droplet and particle radii relevant to cloud microphysical processes. The base level scavenging rates for small particles are dominated by diffusion and for large particles by intercept, weight, and flow effects. For charged particles encountering uncharged droplets, in all cases there is an increase in the scavenging rates, due to the image force. For droplets with charges of opposite sign to those of the particle charge, the rates are further increased, due to the Coulomb force, whereas for droplets with charges of the same sign, the rates are decreased. Increases above the base level (electroscavenging) predominate for the larger particles and occur in the interior of clouds even when no space charge (net charge) is present. Decreases below the base level (electroantiscavenging) occur for same-sign charges with smaller particles. The rates for uncharged droplets are parameterized, and the effect of charges on the droplets is then parameterized as a departure from those rates. The results are convenient for incorporation into cloud models with detailed microphysics, in order to model the electrically induced increases and reductions in cloud condensation nucleus and ice-forming nucleus concentrations, size distributions, and contact ice nucleation rates that affect coagulation, precipitation, and cloud albedo. Implications for effects on weather and climate, due both to externally and internally induced variability in atmospheric ionization, are outlined.
Synthesis of Entrainment and Detrainment formulations for Convection Parameterizations
NASA Astrophysics Data System (ADS)
Siebesma, P.
2015-12-01
Mixing between convective clouds and their environment, usually parameterized in terms of entrainment and detrainment, is among the most important processes determining the strength of climate model sensitivity. This notion has led to a renaissance of research in exploring the mechanisms of these mixing processes and, as a result, to a wide range of seemingly different parameterized formulations. In this study we aim to synthesize these results so as to offer a solid framework for use in parameterized formulations of convection. Detailed LES analyses in which clouds are subsampled according to their size show that entrainment rates are inversely proportional to the typical cloud radius, in accordance with original entraining plume models. These results can be shown analytically to be consistent with entrainment rate formulations of cloud ensembles that decrease inversely with height, by making only mild assumptions about the shape of the associated cloud size distribution. There are additional dependencies of the entrainment rates on the environmental thermodynamics, such as relative humidity and stability, but these are of second order. In contrast, detrainment rates depend to first order on the environmental thermodynamics such as relative humidity and stability. This can be understood by realizing that (i) the details of the cloud size distribution depend on these environmental factors and (ii) detrainment rates have a much stronger dependency on the shape of the cloud size distribution than entrainment rates.
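The two scalings discussed above (entrainment inversely proportional to cloud radius for individual plumes, and inversely proportional to height for the ensemble) can be written as a minimal sketch; the proportionality constants are illustrative, not fitted values:

```python
def entrainment_rate_cloud(radius_m, c_eps=0.4):
    """Fractional entrainment rate (m^-1) of a single plume, inversely
    proportional to cloud radius (c_eps is an illustrative constant)."""
    return c_eps / radius_m

def entrainment_rate_ensemble(height_m, c_z=1.0):
    """Ensemble-mean entrainment rate decreasing inversely with height,
    as follows from a broad cloud-size distribution (c_z illustrative)."""
    return c_z / height_m

narrow = entrainment_rate_cloud(200.0)    # small cloud: strong entrainment
wide = entrainment_rate_cloud(1000.0)     # large cloud: weak entrainment
```

The link between the two functions is the cloud size distribution: averaging the 1/R rates of individual plumes over a plausible size distribution yields an ensemble rate that falls off as 1/z.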
A satellite observation test bed for cloud parameterization development
NASA Astrophysics Data System (ADS)
Lebsock, M. D.; Suselj, K.
2015-12-01
We present an observational test bed of cloud and precipitation properties derived from CloudSat, CALIPSO, and the A-Train. The focus of the test bed is on marine boundary layer clouds, including stratocumulus, cumulus, and the transition between these cloud regimes. Test-bed properties include cloud cover and three-dimensional cloud fraction, along with cloud water path, precipitation water content, and associated radiative fluxes. We also include the subgrid-scale distribution of cloud, precipitation, and radiative quantities, which must be diagnosed by a model parameterization. The test bed further includes meteorological variables from the Modern-Era Retrospective analysis for Research and Applications (MERRA). MERRA variables provide the initialization and forcing datasets needed to run a parameterization in Single Column Model (SCM) mode. We show comparisons of an Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, coupled to microphysics and macrophysics packages and run in SCM mode, with observed clouds. Comparisons are performed regionally in areas of climatological subsidence, as well as stratified by dynamical and thermodynamical variables. Comparisons demonstrate the ability of the EDMF model to capture the observed transitions between subtropical stratocumulus and cumulus cloud regimes.
Yao, M.S.; Stone, P.H.
1987-01-01
The moist convection parameterization used in the GISS 3-D GCM is adapted for use in a two-dimensional (2-D) zonally averaged statistical-dynamical model. Experiments with different versions of the parameterization show that its impact on the general circulation in the 2-D model does not parallel its impact in the 3-D model unless the effect of zonal variations is parameterized in the moist convection calculations. A parameterization of the variations in moist static energy is introduced in which the temperature variations are calculated from baroclinic stability theory, and the relative humidity is assumed to be constant. Inclusion of the zonal variations of moist static energy in the 2-D moist convection parameterization allows just a fraction of a latitude circle to be unstable and enhances the amount of deep convection. This leads to a 2-D simulation of the general circulation very similar to that in the 3-D model. The experiments show that the general circulation is sensitive to the parameterized amount of deep convection in the subsident branch of the Hadley cell. The more there is, the weaker are the Hadley cell circulations and the westerly jets. The experiments also confirm the effects of momentum mixing associated with moist convection found by earlier investigators and, in addition, show that the momentum mixing weakens the Ferrel cell. An experiment in which the moist convection was removed while the hydrological cycle was retained and the eddy forcing was held fixed shows that moist convection by itself stabilizes the tropics, reduces the Hadley circulation, and reduces the maximum speeds in the westerly jets.
NASA Technical Reports Server (NTRS)
Qiu, Jinhuan; Huang, Qirong
1992-01-01
The study of the inversion algorithm for the single scatter lidar equation, for quantitative determination of cloud (or aerosol) optical properties, has received much attention over the last thirty years. Some of the difficulties associated with the solution of this equation remain unsolved. One problem is that the single scatter lidar equation has two unknowns. Because of this, the determination of the far-end boundary value, in the case of Klett's algorithm, is a problem if the atmosphere is optically inhomogeneous. Another difficulty concerns multiple scattering. There is a large error in the extinction distribution solution, in many cases, if only the single scattering component is considered while neglecting the multiple scattering component. However, the use of multiple scattering in the remote sensing of aerosol or cloud optical properties is promising. In an earlier study, an inversion method for simultaneous determination of the cloud (or aerosol) Extinction Coefficient Distribution (ECD) and its Forward Scattering Phase Function (FSPF) was proposed, based on multiply scattered lidar returns with two fields of view for the receiver. The method is based on a parameterized multiple scatter lidar equation. This paper is devoted to further numerical tests and an experimental study of lidar measurements of cloud ECD and FSPF using this method.
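Klett's backward inversion, mentioned above, can be sketched for the single-scatter case (multiple scattering ignored). This is a sketch under simplifying assumptions: a power-law backscatter-extinction relation with exponent k = 1, and a synthetic homogeneous atmosphere to verify the retrieval:

```python
import math

def klett_backward(r, p, sigma_far):
    """Klett backward inversion of the single-scatter lidar equation.
    r: ranges in m (increasing); p: received power; sigma_far: assumed
    far-end extinction (m^-1). Assumes a backscatter-extinction power law
    with exponent k = 1 and ignores multiple scattering."""
    n = len(r)
    s = [math.log(p[i] * r[i] ** 2) for i in range(n)]   # range-corrected log signal
    e = [math.exp(si - s[-1]) for si in s]
    sigma = [0.0] * n
    sigma[-1] = sigma_far
    integral = 0.0
    for i in range(n - 2, -1, -1):
        # trapezoidal accumulation of the integral from r[i] to the far end
        integral += 0.5 * (e[i] + e[i + 1]) * (r[i + 1] - r[i])
        sigma[i] = e[i] / (1.0 / sigma_far + 2.0 * integral)
    return sigma

# Synthetic homogeneous atmosphere: a constant extinction of 1e-3 m^-1
# generates the signal, and the inversion should recover it everywhere
ranges = [1000.0 + 10.0 * i for i in range(201)]
power = [math.exp(-2.0e-3 * ri) / ri ** 2 for ri in ranges]
retrieved = klett_backward(ranges, power, 1.0e-3)
```

The backward (far-to-near) direction is what makes the scheme stable, but it also exposes the difficulty the abstract names: the result depends on the assumed far-end boundary value `sigma_far`.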
Investigation of scattering in lunar seismic coda
NASA Astrophysics Data System (ADS)
Blanchette-Guertin, J.-F.; Johnson, C. L.; Lawrence, J. F.
2012-06-01
We investigate the intrinsic attenuation and scattering properties of the Moon by parameterizing the coda decay of 369 higher-quality lunar seismograms from 72 events via their characteristic rise and decay times. We investigate any dependence of the decay times on source type, frequency, and epicentral distance. Intrinsic attenuation, scattering, and possible focusing of energy in a near-surface, low-velocity layer all contribute to the coda decay. Although it is not possible to quantify the exact contribution of each of these effects in the seismograms, results suggest that scattering in a near-surface global layer dominates the records of shallow events (˜0-200 km depth), particularly at frequencies above 2 Hz, and for increasing epicentral distance. We propose that the scattering layer is the megaregolith and that energy from shallow sources encounters more scatterers as it travels longer distances in the layer, increasing the coda decay times. A size distribution of ejecta blocks that has more small-scale than large-scale scatterers intensifies this effect for increasing frequencies. Deep moonquakes (700-1100 km depth) exhibit no dependence of the decay time on epicentral distance. We suggest that because of their large depths and small amplitudes, deep moonquakes from any distance sample a similar region near a given receiver. Near-station structure and geology may also control the decay times of local events, as evidenced by two natural impact records. This study provides constraints and testable hypotheses for waveform modeling of the lunar interior that includes the effects of intense scattering and shallow, low-velocity layers.
High-precision positioning of radar scatterers
NASA Astrophysics Data System (ADS)
Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.
2016-05-01
Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ˜ 66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
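The error-ellipsoid construction from a variance-covariance matrix can be sketched by eigen-decomposition. The covariance values below are hypothetical, chosen only to mimic the 1/3/213 axis ratio quoted above:

```python
import numpy as np

def error_ellipsoid_axes(cov, scale=3.368):
    """Semi-axis lengths and axis directions of the position error ellipsoid
    for a 3x3 variance-covariance matrix. scale is the square root of the
    chi-square quantile for 3 degrees of freedom; 3.368 corresponds to
    roughly 99% confidence."""
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    semi_axes = scale * np.sqrt(vals)      # ellipsoid semi-axis lengths
    return semi_axes, vecs

# Hypothetical cigar-shaped precision: centimeter-level in azimuth and range,
# meter-level in cross-range (mimicking the quoted 1/3/213 axis ratio)
cov = np.diag([0.01 ** 2, 0.03 ** 2, 2.13 ** 2])   # variances in m^2
axes, directions = error_ellipsoid_axes(cov)
```

The eigenvectors give the ellipsoid's orientation in the local coordinate frame; intersecting this ellipsoid with building models is what links a scatterer to a real-world object.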
Burris, Katy; Kim, Karen
2007-01-01
Tattoos have been a part of costume, expression, and identification in various cultures for centuries. Although tattoos have become more popular in western culture, many people regret their tattoos in later years. In this situation, it is important to be aware of the mechanisms of tattoo removal methods available, as well as their potential short- and long-term effects. Among the myriad of options available, laser tattoo removal is the current treatment of choice, given its safety and efficacy.
A parameterization method and application in breast tomosynthesis dosimetry
Li, Xinhua; Zhang, Da; Liu, Bob
2013-09-15
Purpose: To present a parameterization method based on singular value decomposition (SVD), and to provide analytical parameterization of the mean glandular dose (MGD) conversion factors from eight references for evaluating breast tomosynthesis dose in the Mammography Quality Standards Act (MQSA) protocol and in the UK, European, and IAEA dosimetry protocols. Methods: MGD conversion factors are usually listed in lookup tables as functions of factors such as beam quality, breast thickness, breast glandularity, and projection angle. The authors analyzed multiple sets of MGD conversion factors from the Hologic Selenia Dimensions quality control manual and seven previous papers. Each data set was parameterized using a one- to three-dimensional polynomial function of 2–16 terms. Variable substitution was used to improve accuracy. A least-squares fit was conducted using the SVD. Results: The differences between the originally tabulated MGD conversion factors and the results computed using the parameterization algorithms were (a) 0.08%–0.18% on average and 1.31% maximum for the Selenia Dimensions quality control manual, (b) 0.09%–0.66% on average and 2.97% maximum for the published data by Dance et al. [Phys. Med. Biol. 35, 1211–1219 (1990); ibid. 45, 3225–3240 (2000); ibid. 54, 4361–4372 (2009); ibid. 56, 453–471 (2011)], (c) 0.74%–0.99% on average and 3.94% maximum for the published data by Sechopoulos et al. [Med. Phys. 34, 221–232 (2007); J. Appl. Clin. Med. Phys. 9, 161–171 (2008)], and (d) 0.66%–1.33% on average and 2.72% maximum for the published data by Feng and Sechopoulos [Radiology 263, 35–42 (2012)], excluding one sample in (d) that does not follow the trends in the published data table. Conclusions: A flexible parameterization method is presented in this paper, and was applied to breast tomosynthesis dosimetry. The resultant data offer easy and accurate computations of MGD conversion factors for evaluating mean glandular breast dose in the MQSA
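The SVD-based least-squares fitting at the core of the method can be sketched in one dimension with synthetic data (the paper fits one- to three-dimensional polynomials to tabulated conversion factors; the data below are invented for illustration):

```python
import numpy as np

def svd_polyfit(x, y, degree):
    """Least-squares polynomial fit solved via the SVD pseudo-inverse,
    mirroring the SVD-based fitting step of the parameterization method."""
    a = np.vander(x, degree + 1)               # design matrix, powers descending
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return vt.T @ ((u.T @ y) / s)              # minimum-norm least-squares solution

# Synthetic stand-in for a 1-D table of MGD conversion factors vs. thickness
x = np.linspace(2.0, 8.0, 20)                  # e.g. breast thickness in cm
y = 0.5 - 0.03 * x + 0.002 * x ** 2            # synthetic, exactly quadratic
coeffs = svd_polyfit(x, y, 2)                  # recovers [0.002, -0.03, 0.5]
```

Solving through the SVD rather than the normal equations keeps the fit numerically stable when the design matrix is poorly conditioned, which matters for the higher-order multi-variable fits described in the abstract.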
Liou, K. N.; Takano, Y.; He, Cenlin; Yang, P.; Leung, Lai-Yung R.; Gu, Y.; Lee, W.-L.
2014-06-27
A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2–5 μm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 μm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo is reduced more in the case of multiple inclusion of BC/dust compared to that of an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountains/snow topography.
Adatto, Maurice A; Halachmi, Shlomit; Lapidoth, Moshe
2011-01-01
Over 50,000 new tattoos are placed each year in the United States. Studies estimate that 24% of American college students have tattoos and 10% of male American adults have a tattoo. The rising popularity of tattoos has spurred a corresponding increase in tattoo removal. Not all tattoos are placed intentionally or for aesthetic reasons, though. Traumatic tattoos due to unintentional penetration of exogenous pigments can also occur, as well as the placement of medical tattoos to mark treatment boundaries, for example in radiation therapy. Protocols for tattoo removal have evolved over history. The first evidence of tattoo removal attempts was found in Egyptian mummies dated to around 4,000 BC. Ancient Greek writings describe tattoo removal with salt abrasion or with a paste containing cloves of white garlic mixed with Alexandrian cantharidin. With the advent of Q-switched lasers in the late 1960s, the outcomes of tattoo removal changed radically. In addition to their selective absorption by the pigment, the extremely short pulse duration of Q-switched lasers has made them the gold standard for tattoo removal.
A stochastic parameterization for deep convection using cellular automata
NASA Astrophysics Data System (ADS)
Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.
2012-12-01
Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble-average of the sub-grid convection, and the instantaneous state of the atmosphere in a vertical grid box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements to the parameterizations (for instance: Plant and Craig, 2008, Khouider et al. 2010, Frenkel et al. 2011, Bengtsson et al. 2011, but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automata (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid-boxes, and temporal memory. Thus the CA scheme used in this study contains three interesting components for representation of cumulus convection, which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall-lines. Probabilistic evaluation demonstrates an enhanced spread in
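A minimal sketch of a probabilistic cellular automaton with the three ingredients named above (lateral communication, memory, stochasticity) is given below. The update rules and probabilities are illustrative inventions, not the ALARO scheme:

```python
import random

def step_ca(state, rng, p_spawn=0.3, p_die=0.2):
    """One update of a 1-D probabilistic cellular automaton: active cells (1)
    persist unless they randomly die (memory), and cells next to an active
    cell may activate (lateral communication); both rules are stochastic."""
    n = len(state)
    new = list(state)
    for i in range(n):
        left, right = state[(i - 1) % n], state[(i + 1) % n]  # periodic domain
        if state[i] == 1:
            if rng.random() < p_die:
                new[i] = 0
        elif (left == 1 or right == 1) and rng.random() < p_spawn:
            new[i] = 1
    return new

rng = random.Random(1)
state = [0] * 50
state[10] = state[25] = state[40] = 1      # three convective triggers
for _ in range(30):
    state = step_ca(state, rng)
active_fraction = sum(state) / len(state)
```

In a two-way coupled scheme the CA would run on a finer grid than the model, be seeded where the large-scale state favors convection, and feed its active fraction back into the mass-flux closure.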
Rayleigh scattering [molecular scattering terminology redefined]
NASA Technical Reports Server (NTRS)
Young, A. T.
1981-01-01
The physical phenomena of molecular scattering are examined with the objective of redefining the confusing terminology currently used. The following definitions are proposed: molecular scattering consists of Rayleigh and vibrational Raman scattering; the Rayleigh scattering consists of rotational Raman lines and the central Cabannes line; the Cabannes line is composed of the Brillouin doublet and the central Gross or Landau-Placzek line. The term 'Rayleigh line' should never be used.
Strauss, Keith J; Racadio, John M; Abruzzo, Todd A; Johnson, Neil D; Patel, Manish N; Kukreja, Kamlesh U; den Hartog, Mark J H; Hoonaert, Bart P A; Nachabe, Rami A
2015-09-08
The purpose of this study was to reduce pediatric doses while maintaining or improving image quality scores without removing the grid from the X-ray beam. This study was approved by the Institutional Animal Care and Use Committee. Three piglets (5, 14, and 20 kg) were imaged using six different selectable detector air kerma (Kair) per frame values (100%, 70%, 50%, 35%, 25%, 17.5%) with and without the grid. The number of distal branches visualized with diagnostic confidence relative to the injected vessel defined the image quality score. Five pediatric interventional radiologists evaluated all images. Image quality score and piglet Kair were statistically compared using analysis of variance and receiver operating characteristic curve analysis to define the preferred dose setting and use of the grid for visibility of 2nd- and 3rd-order vessel branches. Grid removal reduced both dose to the subject and image quality by 26%. Third-order branches could only be visualized with the grid present; 100% detector Kair was required for the smallest pig, while 70% detector Kair was adequate for the two larger pigs. Second-order branches could be visualized with the grid at 17.5% detector Kair for all three pig sizes. Without the grid, 50%, 35%, and 35% detector Kair were required for the smallest to largest pig, respectively. Grid removal reduces both dose and image quality score. Image quality scores can be maintained with less dose to the subject with the grid in the beam as opposed to removed. Smaller anatomy requires more dose to the detector to achieve the same image quality score.
Survey of background scattering from materials found in small-angle neutron scattering
Barker, J. G.; Mildner, D. F. R.
2015-01-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088
NASA Astrophysics Data System (ADS)
Brittingham, John; Townsend, Lawrence; Barzilla, Janet; Lee, Kerry
2012-03-01
Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to perform calculations with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm2. Heavy charged ions from Z=1 to Z=26, at energies from 0.1 to 10 GeV/nucleon, were simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared to well-known data and the calculations of other deterministic and Monte Carlo codes. Results will be presented.
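The lookup-table idea can be sketched generically: precompute a quantity such as dose at a set of areal densities, then interpolate between table entries instead of rerunning the transport code. The table structure and linear interpolation below are illustrative assumptions, not the actual FLUKA post-processing.

```python
import bisect

def build_table(depths, values):
    """Pair areal densities (g/cm^2) with a precomputed quantity
    (e.g. dose from a Monte Carlo run), sorted for interpolation."""
    return sorted(zip(depths, values))

def lookup(table, x):
    """Linear interpolation in the table; clamps outside the range."""
    xs = [d for d, _ in table]
    i = bisect.bisect_left(xs, x)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

A real table would be multi-dimensional (ion species, energy, angle, depth), but the interpolate-instead-of-simulate principle is the same.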
NASA Astrophysics Data System (ADS)
Hall, Carlton Raden
A major objective of remote sensing is the determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulation, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(lambda, m^-1), diffuse backscatter b(lambda, m^-1), beam attenuation alpha(lambda, m^-1), and beam-to-diffuse conversion c(lambda, m^-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical, and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle-adjusted leaf
Limits to parameterizing brown carbon absorption in models
NASA Astrophysics Data System (ADS)
Forrister, H.; Liu, J.; Zhang, Y.; Wang, Y.; Dibb, J. E.; Scheuer, E. M.; Anderson, B. E.; Thornhill, K. L., II; Schwarz, J. P.; Perring, A. E.; Jimenez, J. L.; Campuzano-Jost, P.; Diskin, G. S.; Nenes, A.; Weber, R. J.
2016-12-01
Absorbing aerosols emitted from biomass burning, like black carbon (BC) and brown carbon (BrC), affect radiative forcing and photochemical processing by absorbing light in the ultraviolet and visible wavelengths. The degree to which BC affects radiative forcing, as well as its sources and overall concentrations in the atmosphere, has been reasonably well characterized through measurements and models. BrC comprises a multitude of organic aerosol (OA) molecules that absorb light; it is difficult to measure directly, and its sources are not well understood, so its effects on global radiative forcing have not been effectively modeled. Recently, laboratory measurements showed that kOA (the absorption term of the complex refractive index for organic aerosol) can be parameterized for fresh biomass burning emissions using the BC-to-OA ratio and the wavelength dependence (w) of the aerosol. We use SEAC4RS and DC3 airborne filter measurements of BrC and airborne aerosol measurements of BC, OA, and w over the United States to investigate the degree to which this parameterization can be used to predict BrC absorption in the atmosphere at a range of altitudes. The previously suggested parameterization can characterize smoke plumes with fresh emissions, but fails to represent the regional characteristics. We discuss possible reasons behind this disagreement, including different aging mechanisms, semi-volatile properties, and atmospheric processing for BrC that is not consistent with BC or OA. Our findings have important implications for future measurements and models of BrC, as well as for calculations of radiative forcing both regionally and globally.
Towards a new parameterization of ice particles growth
NASA Astrophysics Data System (ADS)
Krakovska, Svitlana; Khotyayintsev, Volodymyr; Bardakov, Roman; Shpyg, Vitaliy
2017-04-01
Ice particles are the main component of polar clouds, in contrast to clouds of warmer regions. That is why correct representation of ice particle formation and growth in NWP and other numerical atmospheric models is crucial for understanding the whole chain of water transformation, including precipitation formation and its further deposition as snow on polar glaciers. Currently, the parameterization of ice in atmospheric models is among the most difficult challenges. We present a renewed theoretical analysis of the evolution of a mixed cloud or cold fog from the moment of ice nuclei activation until complete crystallization. A simplified model is proposed that includes supercooled cloud droplets, initially uniform ice particles, and water vapor. We obtain independent dimensionless input parameters of a cloud and find the main scenarios and stages of evolution of its microphysical state. The characteristic times and particle sizes are found, as well as the peculiarities of the microphysical processes at each stage of evolution. In the future, the proposed original and physically grounded approximations may serve as a basis for new, scientifically substantiated and numerically efficient parameterizations of microphysical processes in mixed clouds for modern atmospheric models. The relevance of the theoretical analysis is confirmed by numerical modeling for a wide range of possible atmospheric conditions, including cold polar regions. The main conclusion of the research is that, until the complete disappearance of cloud droplets, the growth of ice particles occurs at practically constant humidity corresponding to saturation over water, regardless of all other parameters of the cloud. This process can be described by a single first-order differential equation. Moreover, a dimensionless parameter has been proposed as a quantitative criterion of the transition from dominant depositional to intense
Data-driven RBE parameterization for helium ion beams
NASA Astrophysics Data System (ADS)
Mairani, A.; Magro, G.; Dokic, I.; Valle, S. M.; Tessonnier, T.; Galm, R.; Ciocca, M.; Parodi, K.; Ferrari, A.; Jäkel, O.; Haberer, T.; Pedroni, P.; Böhlen, T. T.
2016-01-01
Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue-specific parameter (alpha/beta)_ph of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the RBE_alpha = alpha_He/alpha_ph and R_beta = beta_He/beta_ph ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival (RBE10) are compared with the experimental ones. Pearson's correlation coefficients were, respectively, 0.85 and 0.84, confirming the soundness of the introduced approach. Moreover, due to the lack of experimental data at low LET, clonogenic experiments have been performed irradiating the A549 cell line with (alpha/beta)_ph = 5.4 Gy at the entrance of a 56.4 MeV/u helium beam at the Heidelberg Ion Beam Therapy Center. The proposed parameterization reproduces the measured cell survival within the experimental uncertainties. An RBE formula which depends only on dose, LET and (alpha/beta)_ph as input parameters is proposed, allowing a straightforward implementation in a TP system.
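As a minimal illustration of how an RBE at a fixed survival level is computed within the LQ framework (the standard textbook relation, not the paper's fitted LET-dependent parameterization), one solves S = exp(-(alpha*d + beta*d^2)) for the dose of each radiation and takes the dose ratio:

```python
import math

def lq_dose(alpha, beta, survival):
    """Dose (Gy) giving the target survival fraction under the LQ model
    S = exp(-(alpha*d + beta*d**2)); solves the quadratic for d."""
    e = math.log(1.0 / survival)
    if beta == 0:
        return e / alpha
    return (-alpha + math.sqrt(alpha**2 + 4 * beta * e)) / (2 * beta)

def rbe_at_survival(a_ion, b_ion, a_ph, b_ph, survival=0.10):
    """RBE at a fixed survival level: reference (photon) dose / ion dose."""
    return lq_dose(a_ph, b_ph, survival) / lq_dose(a_ion, b_ion, survival)
```

With survival=0.10 this is the RBE10 quantity compared against experiment in the abstract; the paper's contribution is supplying the LET-dependent alpha_He and beta_He via the fitted RBE_alpha and R_beta ratios.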
Analytical Parameterizations of Diffusion: The Convective Boundary Layer.
NASA Astrophysics Data System (ADS)
Briggs, Gary A.
1985-11-01
A brief review is made of data bases which have been used for developing diffusion parameterizations for the convective boundary layer (CBL). A variety of parameterizations for lateral and vertical dispersion, sigma_y and sigma_z, are surveyed; some of these include mechanical turbulence, source height, or buoyancy effects. Recommendations are made for choosing among these alternatives, depending on the type of source. Because observations of passive plumes indicate that the Gaussian model does a poor job of describing vertical diffusion in the CBL, alternative models for predicting the dimensionless crosswind-integrated ground concentration, Cy, are reviewed and compared. These include an analytical equation which closely approximates laboratory results; this equation can be applied to any source height > 0.04 zi, where zi is the mixing depth. An analysis of a limited amount of buoyant plume data indicates that a radically different approach is needed when the dimensionless buoyancy flux, F*, exceeds 0.1. Such plumes impinge on the 'lid' of the mixing layer before ground impact occurs, and residual plume buoyancy causes enhanced lateral spreading under the lid; the observations indicate that sigma_y approximates the x^(2/3) law that applies to buoyant plume rise when F* > 0.06. The residual buoyancy also causes a delay in downward mixing that is proportional to F*. The main consequence of these two effects is that the maximum ground concentration is reduced, compared to that from passive plumes, and is independent of wind speed. For smaller F*, the observations indicate that, with an assumed plume rise h = 3 zi F*, several different Cy parameterizations give satisfactory results, including a Gaussian model.
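For reference, the baseline Gaussian model against which the CBL alternatives are compared predicts a crosswind-integrated concentration from a reflected vertical Gaussian. This is the textbook form (per unit emission rate, perfect ground reflection); the function and symbol names are illustrative.

```python
import math

def cwic(z, h, sigma_z, u, q=1.0):
    """Crosswind-integrated concentration for a Gaussian plume with
    perfect ground reflection (image-source term).

    z: receptor height, h: effective source height, sigma_z: vertical
    dispersion at the downwind distance of interest, u: wind speed,
    q: emission rate.
    """
    coef = q / (math.sqrt(2.0 * math.pi) * sigma_z * u)
    return coef * (math.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2)) +
                   math.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))
```

The review's point is that in the CBL the vertical concentration distribution is strongly skewed, so this symmetric form misplaces the ground-level maximum for elevated sources.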
Mechanistic Parameterization of the Kinomic Signal in Peptide Arrays
Dussaq, Alex; Anderson, Joshua C; Willey, Christopher D; Almeida, Jonas S
2016-01-01
Kinases play a role in every cellular process involved in tumorigenesis, ranging from proliferation, migration, and protein synthesis to DNA repair. While genetic sequencing has identified most kinases in the human genome, it does not describe the 'kinome' at the level of activity of kinases against their substrate targets. An attempt to address that limitation and give researchers a more direct view of cellular kinase activity is found in the PamGene PamChip® system, which records and compares the phosphorylation of 144 tyrosine or serine/threonine peptides as they are phosphorylated by cellular kinases. Accordingly, the kinetics of this time-dependent kinomic signal needs to be well understood in order to transduce a parameter set into an accurate and meaningful mathematical model. Here we report the analysis and mathematical modeling of kinomic time series, which achieves a more accurate description of the accumulation of phosphorylated product than the current model, which assumes first-order enzyme-substrate kinetics. Reproducibility of the proposed solution received particular attention. Specifically, the non-linear parameterization procedure is delivered as a public open-source web application where kinomic time series can be accurately decomposed into the model's two parameter values measuring phosphorylation rate and capacity. The ability to deliver model parameterization entirely as a client-side web application is an important result on its own, given increasing scientific preoccupation with reproducibility. There is also no need for a potentially transitory and opaque server-side component maintained by the authors, nor for exchanging potentially sensitive data as part of the model parameterization process, since the code is transferred to the browser client where it can be inspected and executed. PMID:27601856
Model parameterization as method for data analysis in dendroecology
NASA Astrophysics Data System (ADS)
Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita
2017-04-01
There is no arguing the usefulness of process-based models in ecological studies; the only limitations are how well the model's algorithm is developed and how it is applied in research. Simulation of tree-ring growth based on climate provides valuable information on the tree-ring growth response to different environmental conditions, and also sheds light on the species-specifics of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin model allows estimation of the non-linear response of tree-ring growth based on daily climate data: daily temperature, estimated day light and soil moisture. Previous use of the VS-Oscilloscope (a software tool for the visual parameterization) shows a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain and Tunisia. But the use of such models is mostly one-sided, aimed at better understanding different tree growth processes, in contrast to statistical methods of analysis (e.g. Generalized Linear Models, Mixed Models, Structural Equation Models), which can be used for reconstruction and forecast. Usually the models are used either for checking new hypotheses or for quantitative assessment of physiological tree growth data to reveal growth process mechanisms, while statistical methods are used for data-mining assessment and as a study tool in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the value of climate factors limiting growth. Precise parameterization with the VS-Oscilloscope provides valuable information about the growth processes of trees and the conditions under which these processes occur (e.g. day of growth season onset, length of season, minimal/maximum temperature for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219).
Aerosol hygroscopic growth parameterization based on a solute specific coefficient
NASA Astrophysics Data System (ADS)
Metzger, S.; Steil, B.; Xu, L.; Penner, J. E.; Lelieveld, J.
2011-09-01
Water is a main component of atmospheric aerosols and its amount depends on the particle chemical composition. We introduce a new parameterization for the aerosol hygroscopic growth factor (HGF), based on an empirical relation between water activity (aw) and solute molality (μs) through a single solute-specific coefficient νi. Three main advantages are: (1) wide applicability, (2) simplicity and (3) analytical nature. (1) Our approach considers the Kelvin effect and covers ideal solutions at large relative humidity (RH), including CCN activation, as well as concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). (2) A single νi coefficient suffices to parameterize the HGF for a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. (3) In contrast to previous methods, our analytical aw parameterization does not depend only on a linear correction factor for the solute molality; instead, νi also appears in the exponent, in the form x · a^x. According to our findings, νi can be assumed constant for the entire aw range (0-1). Thus, the νi-based method is computationally efficient. In this work we focus on single-solute solutions, where νi is pre-determined with the bisection method from our analytical equations using RHD measurements and the saturation molality μs,sat. The computed aerosol HGF and supersaturation (Köhler theory) compare well with the results of the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4 relevant for CCN modeling and calibration studies. The equations introduced here provide the basis of our revised gas-liquid-solid partitioning model, i.e. version 4 of the EQuilibrium Simplified Aerosol Model (EQSAM4), described in a companion paper.
Parameterized hardware description as object oriented hardware model implementation
NASA Astrophysics Data System (ADS)
Drabik, Pawel K.
2010-09-01
The paper introduces a novel model for the design, visualization and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and software applications. It builds on parameterized hardware description research. The establishment of a stable link between hardware and software, the purpose of the work designed and realized here, is presented. A novel programming framework model for the environment, named Graphic-Functional-Components, is presented. The purpose of the paper is to present object-oriented hardware modeling with the mentioned features. A possible model implementation in FPGA chips and its management by object-oriented software in Java is described.
Parameterization of interatomic potential by genetic algorithms: A case study
Ghosh, Partha S.; Arya, A.; Dey, G. K.; Ranawat, Y. S.
2015-06-24
A Genetic Algorithm (GA) based framework is developed to systematically obtain and optimize parameters of interatomic force-field functions for MD simulations by fitting to a reference database. This methodology is applied to the fitting of ThO2 (CaF2 prototype), a representative ceramic-based potential fuel for nuclear applications. The resulting GA-optimized parameterization of ThO2 is able to capture basic structural, mechanical and thermo-physical properties, and also describes defect structures within the permissible range.
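A minimal real-coded GA of the kind described, fitting two parameters of a toy pair potential to reference energies, might look like the sketch below. The potential form (a Lennard-Jones-like a/r^12 - b/r^6), the operators, and the settings are illustrative assumptions, not the authors' actual ThO2 force field or fitting database.

```python
import random

def fitness(params, ref):
    """Negative squared error between the toy model E(r) = a/r**12 - b/r**6
    and reference (r, energy) pairs; higher is better."""
    a, b = params
    return -sum((a / r**12 - b / r**6 - e) ** 2 for r, e in ref)

def evolve(ref, pop_size=40, gens=60, sigma=0.1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation; returns the best parameter pair found."""
    pop = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = max(random.sample(pop, 3), key=lambda p: fitness(p, ref))
            p2 = max(random.sample(pop, 3), key=lambda p: fitness(p, ref))
            w = random.random()
            child = tuple(w * x + (1 - w) * y + random.gauss(0, sigma)
                          for x, y in zip(p1, p2))
            nxt.append(child)
        pop = nxt
    return max(pop, key=lambda p: fitness(p, ref))
```

In a real force-field fit, the reference database would also include lattice constants, elastic moduli, and defect energies, each entering the objective with its own weight.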
CCPP-ARM Parameterization Testbed Model Forecast Data
Klein, Stephen
2008-01-15
Dataset contains the NCAR CAM3 (Collins et al., 2004) and GFDL AM2 (GFDL GAMDT, 2004) forecast data at locations close to the ARM research sites. These data are generated from a series of multi-day forecasts in which both CAM3 and AM2 are initialized at 00Z every day with the ECMWF reanalysis data (ERA-40) for the years 1997 and 2000, and initialized with both the NASA DAO reanalyses and the NCEP GDAS data for the year 2004. The DOE CCPP-ARM Parameterization Testbed (CAPT) project assesses climate models using numerical weather prediction techniques in conjunction with high-quality field measurements (e.g. ARM data).
Longwave radiation parameterization for UCLA/GLAS GCM
NASA Technical Reports Server (NTRS)
HARSHVARDHAN; Corsetti, T.
1984-01-01
This document describes the parameterization of longwave radiation in the UCLA/GLAS general circulation model. Transmittances for water vapor and carbon dioxide have been computed from the work of Arking and Chou, and ozone absorptances are computed using a formula due to Rodgers. Cloudiness has been introduced into the code in a manner in which fractional cover and random or maximal overlap can be accommodated. The entire code has been written in a form that is amenable to vectorization on CYBER and CRAY computers. Sample clear-sky computations for five standard profiles using the 15- and 9-level versions of the model have been included.
Solar and terrestrial parameterizations for radiative-convective models
NASA Astrophysics Data System (ADS)
Vardavas, I. M.; Carver, J. H.
1984-10-01
A radiative-convective modelling technique with parameterizations, for both solar and terrestrial radiation transfer, is presented which allows the rapid computation of the mean vertical temperature profile from the ground to the thermosphere. This method has been specifically designed for modelling the evolution of the earth's mean vertical temperature structure due to changes in atmospheric composition, variations in the solar flux, surface albedo, cloud cover, water vapor, and lapse rate, and changes in the temperature of the thermosphere which is associated with solar activity.
Parameterization of Northern Hemisphere volcanic activity since 1500
NASA Astrophysics Data System (ADS)
Schoenwiese, C.-D.
1986-08-01
The effects of volcanic activity on climate are investigated using volcanism parameterizations. A new volcanic activity parameter, the Smithsonian volcanism index (SVI), was developed, based on the Smithsonian Institution historical volcano chronology of Simkin et al. (1981). The SVI parameter is compared with the dust veil index (DVI) of Lamb (1970, 1983) and the Crete ice core measurements of Hammer (1983); correlation analyses of these parameters reveal abrupt changes. The data also reveal that since 1700 AD there is a correlation between the ground-level air temperature in the Northern Hemisphere and the SVI and ice core data; however, the DVI possibly overestimates this correlation.
Parameterized local hybrid functionals from density-matrix similarity metrics.
Janesko, Benjamin G; Scuseria, Gustavo E
2008-02-28
We recently proposed a real-space similarity metric comparing the Kohn-Sham one-particle density matrix to the local spin-density approximation model density matrix [Janesko and Scuseria, J. Chem. Phys. 127, 164117 (2007)]. This metric provides a useful ingredient for constructing local hybrid density functionals that locally mix exact exchange and semilocal density functional theory exchange. Here we present two lines of inquiry: an approximate similarity metric comparing exact versus generalized gradient approximation (GGA) exchange, and parameterized mixing functions using these similarity metrics. This approach yields significantly improved thermochemistry, including GGA local hybrids whose thermochemical performance approaches that of GGA global hybrids.
Modeling and parameterization of horizontally inhomogeneous cloud radiative properties
NASA Technical Reports Server (NTRS)
Welch, R. M.
1995-01-01
One of the fundamental difficulties in modeling cloud fields is the large variability of cloud optical properties (liquid water content, reflectance, emissivity). The stratocumulus and cirrus clouds, under special consideration for FIRE, exhibit spatial variability on scales of 1 km or less. While it is impractical to model individual cloud elements, the research direction is to model statistical ensembles of cloud elements with mean cloud properties specified. The major areas of this investigation are: (1) analysis of cloud field properties; (2) intercomparison of cloud radiative model results with satellite observations; (3) radiative parameterization of cloud fields; and (4) development of improved cloud classification algorithms.
Stochastic parameterization of moist convection estimated from LES data
NASA Astrophysics Data System (ADS)
Dorrestijn, J.; Crommelin, D.; Biello, J. A.; Böing, S.; Siebesma, P.; Jonker, H. J.
2012-12-01
We report on the development of a methodology for stochastic parameterization of moist convection in General Circulation Models (GCMs), using the data-driven approach proposed by (1). We use data from convection-resolving Large-Eddy Simulation (LES) to estimate stochastic processes that represent convection. These stochastic processes take the form of Markov chains that are conditioned on the resolved-scale state of the atmosphere. They mimic, in a computationally inexpensive manner, the convective behaviour observed in the LES. We explore cases of shallow and deep convection. In the first case we use LES data of shallow cumulus convection (2). The Markov chains switch between different vertical flux profiles of turbulent heat and moisture. We show that our model is able to reproduce the correct variability of the fluxes, i.e. close to the variability that is observed in the LES data. In the second case we use LES data of the development of deep convection (3). Here the Markov chains switch between different cloud types, similar to the multicloud model of Khouider et al. 2010 (4). Each Markov chain represents the cloud type (convective state) on a small horizontal domain (in our case, 150 x 150 m^2). By grouping these small domains in large blocks that match the size of a GCM grid box, the parameterization can be employed for GCM grid boxes of different sizes. The fractions of the various cloud types in these large blocks determine the total convective transport in each block. We show that the evolution of the cloud fractions is well captured. We also demonstrate that nearest-neighbor coupling of the Markov chains improves the variability of the stochastically generated cloud fractions. Such spatially coupled Markov chains are equivalent to stochastic cellular automata. References: (1) Crommelin, D. & Vanden Eijnden, E. 2008 Subgrid-Scale Parametrization with Conditional Markov Chains. J. Atmos. Sci. 65, 2661--2675. (2) Dorrestijn, J., Crommelin, D
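The conditioned-Markov-chain idea can be sketched with toy transition matrices for three cloud types. The regimes, probabilities, and subdomain counts below are invented for illustration; the actual scheme estimates these transition probabilities from the LES data, conditioned on the resolved-scale state.

```python
import random

# Toy transition matrices for cloud types 0=clear, 1=shallow, 2=deep,
# conditioned on a discretized resolved-scale state (illustrative numbers;
# each row must sum to 1).
TRANS = {
    "dry":   [[0.9, 0.1, 0.0], [0.6, 0.4, 0.0], [0.5, 0.4, 0.1]],
    "moist": [[0.5, 0.4, 0.1], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6]],
}

def step_cell(state, regime, rng):
    """Advance one Markov chain (one small subdomain) a single step."""
    r, acc = rng.random(), 0.0
    for nxt, p in enumerate(TRANS[regime][state]):
        acc += p
        if r < acc:
            return nxt
    return state

def cloud_fractions(cells, regime, rng):
    """Advance all subdomains in a GCM grid box and return the fraction
    of each cloud type, which sets the total convective transport."""
    cells[:] = [step_cell(c, regime, rng) for c in cells]
    return [cells.count(t) / len(cells) for t in range(3)]
```

The nearest-neighbor coupling mentioned in the abstract would make each cell's transition probabilities depend on its neighbours' states as well, turning the independent chains into a stochastic cellular automaton.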
Data-driven parameterization of the generalized Langevin equation
Lei, Huan; Baker, Nathan A.; Li, Xiantao
2016-11-29
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
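The embedding idea can be illustrated with a minimal sketch (not the authors' construction) for the lowest-order rational approximation, where the memory kernel is a single exponential, K(t) = lam**2 * exp(-t/tau), and one auxiliary variable z makes the dynamics memoryless while the noise amplitude is chosen so the fluctuation-dissipation theorem holds exactly. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Markovian embedding of a GLE with a one-pole (exponential) kernel:
#   dv/dt = lam*z
#   dz/dt = -lam*v - z/tau + sig*xi(t)
# Eliminating z recovers the memory term with K(t) = lam**2 * exp(-t/tau),
# and sig = sqrt(2*kT/tau) makes the noise satisfy the FDT exactly.
kT, lam, tau = 1.0, 2.0, 0.5
dt, nsteps = 0.005, 200_000

sig = np.sqrt(2.0 * kT / tau)      # FDT-consistent noise strength
v, z = 0.0, 0.0
vs = np.empty(nsteps)
for n in range(nsteps):
    v += lam * z * dt
    z += (-lam * v - z / tau) * dt + sig * rng.normal(0.0, np.sqrt(dt))
    vs[n] = v

msv = np.mean(vs[nsteps // 10:] ** 2)
print(msv)   # equilibrium <v^2> should be close to kT = 1
```

Higher-order rational approximations of the kernel add further auxiliary variables in the same fashion, one per pole.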
Magic neutrino mass matrix and the Bjorken-Harrison-Scott parameterization
NASA Astrophysics Data System (ADS)
Lam, C. S.
2006-09-01
Observed neutrino mixing can be described by a tribimaximal MNS matrix. The resulting neutrino mass matrix in the basis of a diagonal charged lepton mass matrix is both 2-3 symmetric and magic. By a magic matrix, I mean one whose row sums and column sums are all identical. I study what happens if 2-3 symmetry is broken but the magic symmetry is kept intact. In that case, the mixing matrix is parameterized by a single complex parameter Ue3, in a form discussed recently by Bjorken, Harrison, and Scott.
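The magic property is easy to verify numerically: for a mass matrix built from the tribimaximal mixing matrix, M = U diag(m1, m2, m3) U^T, the vector (1,1,1)/sqrt(3) is the second column of U and hence an eigenvector of M, so every row and column sums to m2. The mass eigenvalues below are arbitrary illustrative numbers.

```python
import numpy as np

# Tribimaximal mixing matrix (one common sign convention)
U = np.array([
    [ np.sqrt(2.0/3.0), 1.0/np.sqrt(3.0),  0.0              ],
    [-1.0/np.sqrt(6.0), 1.0/np.sqrt(3.0),  1.0/np.sqrt(2.0) ],
    [-1.0/np.sqrt(6.0), 1.0/np.sqrt(3.0), -1.0/np.sqrt(2.0) ],
])
m1, m2, m3 = 0.1, 0.2, 0.5                 # illustrative masses (arbitrary units)
M = U @ np.diag([m1, m2, m3]) @ U.T        # mass matrix in the flavor basis

row_sums, col_sums = M.sum(axis=1), M.sum(axis=0)
print(row_sums, col_sums)                  # all six sums equal m2 = 0.2
```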
The presentation covered five topics: arsenic chemistry, best available technology (BAT), surface water technology, ground water technology, and case studies of arsenic removal. The discussion of arsenic chemistry focused on the need for and method of speciation of As(III) and As(V). BAT me...
Intercomparisons of land-surface parameterizations coupled to a limited area forecast model
NASA Astrophysics Data System (ADS)
Timbal, B.; Henderson-Sellers, A.
1998-12-01
The goal of the Project for Intercomparison of Land-surface Parameterization Schemes (PILPS) is to improve the understanding of the interactions between the atmosphere and the continental surface in climate and weather forecast models. In PILPS Phase 4(b), selected schemes are coupled to the Limited Area Prediction System (LAPS) developed by the Australian Bureau of Meteorology. To facilitate the comparison of PILPS schemes' behavior within LAPS, a single mode of coupling is selected: explicit coupling. This type of coupling is more flexible and avoids most of the problems raised when interchanging the surface schemes. Exploratory tests are conducted. Initially, experiments are run in which the land-surface schemes use the same parameters as in their original host models. Then, in other runs, the most important surface parameters are set constant in an attempt to reduce the scatter amongst the schemes' results. In order to understand the impact of the initialisation of soil moisture on the schemes' results, some extreme cases (wet and dry) are performed. The partitioning between surface fluxes is studied as well as the soil moisture budget. Both regional and local results are analysed. Sensitivity to the choice of land-surface scheme (LSS) is found in the precipitation field, with rainfall over the Australian continent altering by about 20%, but no significant change is found in the net radiation. The scatter in the surface energy fluxes amongst the schemes is large (up to 300 W m -2 locally, during the daytime peak) but is seldom affected by the choice of surface parameters. The dynamical range of flux partitioning between extremely dry and wet initialisation varies strongly amongst the schemes. Some major shortcomings of the BUCKET approach are seen in the re-evaporation of convective precipitation over dry land, in the very large evaporation from wet surfaces, and in the diurnal cycle of surface temperature.
Haedersdal, Merete; Haak, Christina S
2011-01-01
Hair removal with optical devices has become a popular mainstream treatment that today is considered the most efficient method for the reduction of unwanted hair. Photothermal destruction of hair follicles constitutes the fundamental concept of hair removal with red and near-infrared wavelengths suitable for targeting follicular and hair shaft melanin: normal mode ruby laser (694 nm), normal mode alexandrite laser (755 nm), pulsed diode lasers (800, 810 nm), long-pulse Nd:YAG laser (1,064 nm), and intense pulsed light (IPL) sources (590-1,200 nm). The ideal patient has thick dark terminal hair, white skin, and a normal hormonal status. Currently, no method of lifelong permanent hair eradication is available, and it is important that patients have realistic expectations. Substantial evidence has been found for short-term hair removal efficacy of up to 6 months after treatment with the available systems. Evidence has been found for long-term hair removal efficacy beyond 6 months after repetitive treatments with alexandrite, diode, and long-pulse Nd:YAG lasers, whereas the current long-term evidence is sparse for IPL devices. Treatment parameters must be adjusted to patient skin type and chromophore. Longer wavelengths and cooling are safer for patients with darker skin types. Hair removal with lasers and IPL sources are generally safe treatment procedures when performed by properly educated operators. However, safety issues must be addressed since burns and adverse events do occur. New treatment procedures are evolving. Consumer-based treatments with portable home devices are rapidly evolving, and presently include low-level diode lasers and IPL devices.
NASA Astrophysics Data System (ADS)
Yang, Z.
2011-12-01
Noah-MP, which improves over the standard Noah land surface model, is unique among all land surface models in that it has multi-parameterization options (hence Noah-MP), capable of producing thousands of parameterization schemes, in addition to its improved physical realism (multi-layer snowpack, groundwater dynamics, and vegetation dynamics). All these features are critical for ensemble hydrological simulations and climate predictions at intraseasonal to decadal timescales. This talk will focus on evaluation of the Noah-MP simulations of energy, water and carbon balances for different sub-basins in the Mississippi River in comparison with various observations. The analysis is performed on daily and monthly scales spanning from January 2000 to December 2009. We will show how different runoff schemes in Noah-MP affect the scatter patterns between runoff and water table depth and between gross primary productivity and total water storage change, a type of analysis that would help us identify the relationships between key water storage terms (groundwater, soil moisture, snow) and fluxes (GPP, sensible heat, evapotranspiration, runoff). Similarly, we want to see how other options affect the patterns, such as the beta parameter (i.e. the soil moisture parameter controlling transpiration of plants), the Ball-Berry and Jarvis options for stomatal resistance, and the dynamic vegetation options (on or off). We will compare the water storage simulations from Noah-MP, observations and other model estimates, which would help determine the strengths and limitations of the Noah-MP groundwater and hydrological schemes.
Improving microphysics in a convective parameterization: possibilities and limitations
NASA Astrophysics Data System (ADS)
Labbouz, Laurent; Heikenfeld, Max; Stier, Philip; Morrison, Hugh; Milbrandt, Jason; Protat, Alain; Kipling, Zak
2017-04-01
The convective cloud field model (CCFM) is a convective parameterization implemented in the climate model ECHAM6.1-HAM2.2. It represents a population of clouds within each ECHAM-HAM model column, simulating up to 10 different convective cloud types with individual radius, vertical velocities and microphysical properties. Comparisons between CCFM and radar data at Darwin, Australia, show that in order to reproduce both the convective cloud top height distribution and the vertical velocity profile, the effect of aerodynamic drag on the rising parcel has to be considered, along with a reduced entrainment parameter. A new double-moment microphysics (the Predicted Particle Properties scheme, P3) has been implemented in the latest version of CCFM and is compared to the standard single-moment microphysics and the radar retrievals at Darwin. The microphysical process rates (autoconversion, accretion, deposition, freezing, …) and their response to changes in CDNC are investigated and compared to high resolution CRM WRF simulations over the Amazon region. The results shed light on the possibilities and limitations of microphysics improvements in the framework of CCFM and in convective parameterizations in general.
An updated subgrid orographic parameterization for global atmospheric forecast models
NASA Astrophysics Data System (ADS)
Choi, Hyun-Joo; Hong, Song-You
2015-12-01
A subgrid orographic parameterization (SOP) is updated by including the effects of orographic anisotropy and flow-blocking drag (FBD). The impact of the updated SOP on short-range forecasts is investigated using a global atmospheric forecast model applied to a heavy snowfall event over Korea on 4 January 2010. When the SOP is updated, the orographic drag in the lower troposphere noticeably increases owing to the additional FBD over mountainous regions. The enhanced drag directly weakens the excessive wind speed in the low troposphere and indirectly improves the temperature and mass fields over East Asia. In addition, the snowfall overestimation over Korea is improved by the reduced heat fluxes from the surface. The forecast improvements are robust regardless of the horizontal resolution of the model between T126 and T510. The parameterization is statistically evaluated based on the skill of the medium-range forecasts for February 2014. For the medium-range forecasts, the skill improvements of the wind speed and temperature in the low troposphere are observed globally and for East Asia while both positive and negative effects appear indirectly in the middle-upper troposphere. The statistical skill for the precipitation is mostly improved due to the improvements in the synoptic fields. The improvements are also found for seasonal simulation throughout the troposphere and stratosphere during boreal winter.
Comparison of parameterizations for homogeneous and heterogeneous ice nucleation
NASA Astrophysics Data System (ADS)
Koop, T.; Zobrist, B.
2009-04-01
The formation of ice particles from liquid aqueous aerosols is of central importance for the physics and chemistry of high altitude clouds. In this paper, we present new laboratory data on ice nucleation and compare them with two different parameterizations for homogeneous as well as heterogeneous ice nucleation. In particular, we discuss and evaluate the effect of solutes and ice nuclei. One parameterization is the λ-approach, which correlates the depression of the freezing temperature of aqueous droplets in comparison to pure water droplets, ΔTf, with the corresponding depression, ΔTm, of the equilibrium ice melting point: ΔTf = λ × ΔTm. Here, λ is independent of concentration and a constant that is specific for a particular solute or solute/ice nucleus combination. The other approach is water-activity-based ice nucleation theory, which describes the effects of solutes on the freezing temperature Tf via their effect on water activity: aw(Tf) = aw,i(Tf) + Δaw. Here, aw,i is the water activity of ice and Δaw is a constant that depends on the ice nucleus but is independent of the type of solute. We present new data on both homogeneous and heterogeneous ice nucleation with varying types of solutes and ice nuclei. We evaluate and discuss the advantages and limitations of the two approaches for the prediction of ice nucleation in laboratory experiments and atmospheric cloud models.
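The λ-approach reduces to a one-line prediction: the freezing-point depression of a solution droplet is proportional to its equilibrium melting-point depression. The values of λ and of the pure-water freezing temperature below are illustrative placeholders, not numbers from the paper.

```python
# λ-approach sketch: ΔTf = λ · ΔTm, so the solution droplet freezes at
# Tf = Tf(pure water) - λ·ΔTm. λ is solute- (or solute/ice-nucleus-) specific.
def freezing_temperature(Tf_pure, dTm, lam):
    """Solution-droplet freezing temperature from the melting depression dTm."""
    return Tf_pure - lam * dTm

Tf_pure = 235.0      # K, approximate homogeneous freezing T of pure water droplets
Tf_solution = freezing_temperature(Tf_pure, dTm=5.0, lam=1.9)  # lam hypothetical
print(Tf_solution)   # 225.5 K
```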
Bulk Parameterization of the Snow Field in a Cloud Model.
NASA Astrophysics Data System (ADS)
Lin, Yuh-Lang; Farley, Richard D.; Orville, Harold D.
1983-06-01
A two-dimensional, time-dependent cloud model has been used to simulate a moderate intensity thunderstorm for the High Plains region. Six forms of water substance (water vapor, cloud water, cloud ice, rain, snow and hail, i.e., graupel) are simulated. The model utilizes the `bulk water' microphysical parameterization technique to represent the precipitation fields, which are all assumed to follow exponential size distribution functions. Autoconversion concepts are used to parameterize the collision-coalescence and collision-aggregation processes. Accretion processes involving the various forms of liquid and solid hydrometeors are simulated in this model. The transformation of cloud ice to snow through autoconversion (aggregation) and the Bergeron process, and subsequent accretional growth or aggregation to form hail, are simulated. Hail is also produced by various contact mechanisms and via probabilistic freezing of raindrops. Evaporation (sublimation) is considered for all precipitation particles outside the cloud. The melting of hail and snow is included in the model. Wet and dry growth of hail and shedding of rain from hail are simulated. The simulations show that the inclusion of snow has improved the realism of the results compared to a model without snow. The formation of virga from cloud anvils is now modeled. Addition of the snow field has resulted in the inclusion of more diverse and physically sound mechanisms for initiating the hail field, yielding greater potential for distinguishing dominant embryo types characteristic of warm- and cold-based clouds.
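A core step of such bulk-water schemes can be shown explicitly: with an exponential size distribution N(D) = N0·exp(-λD), integrating the particle mass over all sizes and equating it to the mixing ratio yields the slope λ analytically. The constants below are roughly snow-like placeholders, not the paper's exact values.

```python
import numpy as np

# Slope of an exponential (Marshall-Palmer type) size distribution:
# rho_air*q_x = ∫ (pi/6)*rho_x*D^3 * N0*exp(-lam*D) dD = pi*rho_x*N0/lam^4
#   =>  lam = (pi*rho_x*N0 / (rho_air*q_x))^(1/4)
def slope(N0, rho_x, rho_air, q_x):
    """Slope lam [1/m] of the exponential distribution for species x."""
    return (np.pi * rho_x * N0 / (rho_air * q_x)) ** 0.25

lam = slope(N0=3.0e6,      # intercept parameter (1/m^4), placeholder
            rho_x=100.0,   # bulk density of snow (kg/m^3), placeholder
            rho_air=1.0,   # air density (kg/m^3)
            q_x=1.0e-3)    # snow mixing ratio (kg/kg)
print(lam)   # ~1e3 1/m, i.e. a characteristic particle size of about 1 mm
```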
Evaluation of an Urban Canopy Parameterization in a Mesoscale Model
Chin, H S; Leach, M J; Sugiyama, G A; Leone, Jr., J M; Walker, H; Nasstrom, J; Brown, M J
2004-03-18
A modified urban canopy parameterization (UCP) is developed and evaluated in a three-dimensional mesoscale model to assess the urban impact on surface and lower atmospheric properties. This parameterization accounts for the effects of building drag, turbulent production, radiation balance, anthropogenic heating, and building rooftop heating/cooling. USGS land-use data are also utilized to derive urban infrastructure and urban surface properties needed for driving the UCP. An intensive observational period with clear-sky, strong ambient wind and drainage flow, and the absence of land-lake breeze over the Salt Lake Valley, occurring on 25-26 October 2000, is selected for this study. A series of sensitivity experiments are performed to gain understanding of the urban impact in the mesoscale model. Results indicate that within the selected urban environment, urban surface characteristics and anthropogenic heating play little role in the formation of the modeled nocturnal urban boundary layer. The rooftop effect appears to be the main contributor to this urban boundary layer. Sensitivity experiments also show that for this weak urban heat island case, the model horizontal grid resolution is important in simulating the elevated inversion layer. The root mean square errors of the predicted wind and temperature with respect to surface station measurements exhibit substantially larger discrepancies at the urban locations than at their rural counterparts. However, the close agreement of the modeled tracer concentrations with observations supports the modeled urban impact on the wind-direction shift and wind-drag effects.
LES of wind turbine wakes: Evaluation of turbine parameterizations
NASA Astrophysics Data System (ADS)
Porte-Agel, Fernando; Wu, Yu-Ting; Chamorro, Leonardo
2009-11-01
Large-eddy simulation (LES), coupled with a wind-turbine model, is used to investigate the characteristics of wind turbine wakes in turbulent boundary layers under different thermal stratification conditions. The subgrid-scale (SGS) stress and SGS heat flux are parameterized using scale-dependent Lagrangian dynamic models (Stoll and Porte-Agel, 2006). The turbine-induced lift and drag forces are parameterized using two models: an actuator disk model (ADM) that distributes the force loading on the rotor disk; and an actuator line model (ALM) that distributes the forces on lines that follow the position of the blades. Simulation results are compared to wind-tunnel measurements collected with hot-wire and cold-wire anemometry in the wake of a miniature 3-blade wind turbine at the St. Anthony Falls Laboratory atmospheric boundary layer wind tunnel. In general, the characteristics of the wakes simulated with the proposed LES framework are in good agreement with the measurements. The ALM is better able to capture vortical structures induced by the blades in the near-wake region. Our results also show that the scale-dependent Lagrangian dynamic SGS models are able to account, without tuning, for the effects of local shear and flow anisotropy on the distribution of the SGS model coefficients.
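The simpler of the two turbine parameterizations, the actuator disk model, represents the turbine only by its total thrust distributed uniformly over the rotor disk (no individual blades, unlike the ALM). The thrust coefficient, rotor diameter, and inflow speed below are illustrative wind-tunnel-scale numbers, not the study's measured values.

```python
import numpy as np

# Actuator-disk model (ADM) sketch: total thrust F = 0.5*rho*C_T*A*U_inf^2,
# spread uniformly over the rotor disk area A as a body force in the LES.
rho = 1.2                        # air density (kg/m^3)
C_T = 0.8                        # thrust coefficient (illustrative)
D = 0.15                         # rotor diameter of a miniature turbine (m)
U_inf = 2.5                      # unperturbed inflow speed (m/s)

A = np.pi * (D / 2.0) ** 2       # rotor disk area (m^2)
F_total = 0.5 * rho * C_T * A * U_inf ** 2   # total thrust (N)
f_area = F_total / A                         # uniform disk loading (N/m^2)
print(F_total, f_area)
```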
Parameterization of Vegetation Aerodynamic Roughness of Natural Regions Using Satellite Imagery
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Crago, Richard; Stewart, Pamela
1998-01-01
Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable, vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.
Transient Storage Parameterization of Wetland-dominated Stream Reaches
NASA Astrophysics Data System (ADS)
Wilderotter, S. M.; Lightbody, A.; Kalnejais, L. H.; Wollheim, W. M.
2014-12-01
Current understanding of the importance of transient storage in fluvial wetlands is limited. Wetlands that have higher connectivity to the main stream channel are important because they have the potential to retain more nitrogen within the river system than wetlands that receive little direct stream discharge. In this study, we investigated how stream water accesses adjacent fluvial wetlands in New England coastal watersheds to improve parameterization in network-scale models. Breakthrough curves of Rhodamine WT were collected for eight wetlands in the Ipswich and Parker (MA) and Lamprey River (NH) watersheds, USA. The curves were inverse modeled using STAMMT-L to optimize the connectivity and size parameters for each reach. Two approaches were tested: a single dominant storage zone and a range of storage zones represented using a power-law distribution of storage zone connectivity. Multiple linear regression analyses were conducted to relate transient storage parameters to stream discharge, area, length-to-width ratio, and reach slope. Resulting regressions will enable more accurate parameterization of surface water transient storage in network-scale models.
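The single-storage-zone approach can be sketched with the exchange terms alone (advection and dispersion omitted): the channel concentration C exchanges with a storage-zone concentration Cs at rate alpha, and the area ratio A/As sets the relative storage-zone size. All numbers below are invented for illustration, not fitted values.

```python
# Minimal one-storage-zone exchange sketch (Bencala-Walters form):
#   dC/dt  = alpha * (Cs - C)
#   dCs/dt = alpha * (A/As) * (C - Cs)
alpha = 1.0e-3          # exchange rate (1/s)
A, As = 2.0, 0.5        # channel and storage-zone cross-sections (m^2)
dt, nsteps = 1.0, 5000  # explicit Euler time stepping
C, Cs = 1.0, 0.0        # tracer pulse initially confined to the channel

for _ in range(nsteps):
    dC  = alpha * (Cs - C)
    dCs = alpha * (A / As) * (C - Cs)
    C, Cs = C + dt * dC, Cs + dt * dCs

# Mass A*C + As*Cs is conserved; both concentrations relax to
# A*C0 / (A + As) = 0.8 on a timescale 1 / (alpha*(1 + A/As)).
print(C, Cs)
```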
Evaluation of debris-flow model parameterization through laboratory investigations
NASA Astrophysics Data System (ADS)
Kaitna, Roland; Rickenmann, Dieter; Huebl, Johannes
2017-04-01
In engineering practice, simulation tools for predicting the flow and deposition behavior of debris flows are often based on simple rheologic equations describing bulk flow resistance. Model parameterization and validation are often subject to large uncertainties due to the lack of field data. Moreover, it has been shown that debris-flow simulation models are generally limited in representing the actual flow mechanics of most natural flows. In this contribution we test the possibility of parameterizing simple flow models through laboratory investigations at different scales. We estimate parameters for the Bingham model from a suite of laboratory experiments in different setups, including a standard viscometer, a tilt board, a conveyor belt, and a rotating drum. Material samples were taken from fresh deposits of a muddy debris flow and analyzed over a range of volumetric sediment concentrations and maximum grain sizes. Our results are relatively consistent across most setups. Estimated rheologic parameters show an exponential dependence on volumetric sediment concentration and a systematic variation for mixtures of different maximum grain sizes. Our data show that a rheologic interpretation of bulk flow behavior seems feasible at the laboratory scale, but the possibility of extrapolating rheologic parameters to the prototype flow for direct use in numerical simulation tools is expected to be limited.
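Estimating Bingham parameters from viscometer data is a linear fit: shear stress tau = tau_y + mu_B * gamma_dot above the yield stress, so ordinary least squares on (gamma_dot, tau) pairs recovers the yield stress tau_y and plastic viscosity mu_B. The data below are synthetic and exactly Bingham; real laboratory data would scatter around the line.

```python
import numpy as np

# Fit the Bingham model tau = tau_y + mu_B * gamma_dot by least squares.
gamma_dot = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # shear rates (1/s)
tau = 50.0 + 12.0 * gamma_dot                       # synthetic stresses (Pa)

A = np.vstack([np.ones_like(gamma_dot), gamma_dot]).T
tau_y, mu_B = np.linalg.lstsq(A, tau, rcond=None)[0]
print(tau_y, mu_B)   # recovers 50 Pa and 12 Pa s
```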
Evaluation of a New Parameterization for Fair-Weather Cumulus
Berg, Larry K.; Stull, Roland B.
2006-05-25
A new parameterization for boundary layer cumulus clouds, called the cumulus potential (CuP) scheme, is introduced. This scheme uses joint probability density functions (JPDFs) of virtual potential temperature and water-vapor mixing ratio, as well as the mean vertical profiles of virtual potential temperature, to predict the amount and size distribution of boundary layer cloud cover. This model considers the diversity of air parcels over a heterogeneous surface, and recognizes that some parcels rise above their lifting condensation level to become cumulus, while other parcels might rise as clear updrafts. This model has several unique features: 1) surface heterogeneity is represented using the boundary layer JPDF of virtual potential temperature versus water-vapor mixing ratio, 2) clear and cloudy thermals are allowed to coexist at the same altitude, and 3) a range of cloud-base heights, cloud-top heights, and cloud thicknesses are predicted within any one cloud field, as observed. Using data from Boundary Layer Experiment 1996 and a model intercomparison study using large eddy simulation (LES) based on the Barbados Oceanographic and Meteorological Experiment (BOMEX), it is shown that the CuP model does a good job predicting cloud-base height and cloud-top height. The model also shows promise in predicting cloud cover, and is found to give better cloud-cover estimates than three other cumulus parameterizations: one based on relative humidity, a statistical scheme based on the saturation deficit, and a slab model.
NASA Astrophysics Data System (ADS)
Ferrari, Erika; Corbari, Chiara; Mancini, Marco
2017-04-01
A correct evaluation of the aerodynamic resistance to heat transfer, rah, is fundamental in several fields of application, such as sustainable water management at the basin scale and irrigation planning at the field scale. This is because this variable has a significant impact on the estimation of the surface heat fluxes, sensible and latent heat (H and LE), and, consequently, of evapotranspiration (ET), which plays a key role in the hydrological cycle and in land-atmosphere interaction. Thus, the analysis focuses on the validation of several parameterizations of rah for different vegetation types and surface roughnesses. In particular, eight equations chosen from the literature (either in accordance with Monin-Obukhov theory or empirical, with different assumptions and levels of simplification) were compared with two estimates of aerodynamic resistance from eddy-covariance measurements (one for momentum, ram, and one for scalars, rah) in a maize canopy, low crops, and a forest. In order to assure data quality, observations have been selected considering only unstable conditions, for which the theoretical framework of the eddy-covariance technique is respected. The analysis has also been carried out distinguishing the different growing phases of the vegetation, from bare soil to the maximum vegetation height. In accordance with the results of the validation phase, the most reliable parameterizations have been implemented in the distributed hydrological model FEST-EWB, in order to evaluate the effect of rah on the estimation of H and ET over different vegetation covers.
The impact of forest architecture parameterization on GPP simulations
NASA Astrophysics Data System (ADS)
Firanj, Ana; Lalic, Branislava; Podrascanin, Zorica
2015-08-01
The presence of a forest strongly affects ecosystem fluxes by acting as a source or sink of mass and energy. The objective of this study was to investigate the influence of the vertical forest heterogeneity parameterization on gross primary production (GPP) simulations. To introduce a heterogeneity effect, a new method for the upscaling of the leaf level GPP is proposed. This upscaling method is based on the relationship between the leaf area index (LAI) and the leaf area density (LAD) profiles and the standard sun/shade leaf separation method. The effect of the crown shape and foliage distribution parameterization on the simulated GPP is confirmed in a comparison study between the proposed method and the standard sun/shade upscaling method. The observed values used in the comparison study are assimilated during the vegetation period on three distinguished forest eddy-covariance (EC) measurement sites chosen for the diversity of their morphological characteristics. The obtained results show (a) the sensitivity of the simulated GPP to the leaf area density profile, (b) the capability of the proposed scaling method to calculate the contribution of the different canopy layers to the entire canopy GPP, and (c) a better agreement with the observations of the simulated GPP with the proposed upscaling method compared with the standard sun/shade method.
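The LAI-LAD relationship behind such an upscaling can be sketched directly: LAI is the vertical integral of the leaf area density profile LAD(z), and each canopy layer's share of LAI can weight its contribution to whole-canopy GPP. The rectangular crown profile below is a toy example, not site data.

```python
import numpy as np

# LAI = ∫ LAD(z) dz; per-layer trapezoidal integrals give layer weights
# that can be used to sum leaf-level GPP contributions over the canopy.
z = np.linspace(0.0, 20.0, 201)                      # height above ground (m)
LAD = np.where((z >= 5.0) & (z <= 18.0), 0.3, 0.0)   # leaf area density (m^2/m^3)

layer_LAI = 0.5 * (LAD[1:] + LAD[:-1]) * np.diff(z)  # trapezoidal layer integrals
LAI = layer_LAI.sum()                                # total LAI (m^2/m^2)
weights = layer_LAI / LAI                            # layer weights for upscaling
print(LAI, weights.sum())                            # weights sum to 1
```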
Rapid parameterization of small molecules using the Force Field Toolkit
Mayne, Christopher G.; Saam, Jan; Schulten, Klaus; Tajkhorshid, Emad; Gumbart, James C.
2013-01-01
The inability to rapidly generate accurate and robust parameters for novel chemical matter continues to severely limit the application of molecular dynamics (MD) simulations to many biological systems of interest, especially in fields such as drug discovery. Although the release of generalized versions of common classical force fields, e.g., GAFF and CGenFF, have posited guidelines for parameterization of small molecules, many technical challenges remain that have hampered their wide-scale extension. The Force Field Toolkit (ffTK), described herein, minimizes common barriers to ligand parameterization through algorithm and method development, automation of tedious and error-prone tasks, and graphical user interface design. Distributed as a VMD plugin, ffTK facilitates the traversal of a clear and organized workflow resulting in a complete set of CHARMM-compatible parameters. A variety of tools are provided to generate quantum mechanical target data, set up multidimensional optimization routines, and analyze parameter performance. Parameters developed for a small test set of molecules using ffTK were comparable to existing CGenFF parameters in their ability to reproduce experimentally measured values for pure-solvent properties (<15% error from experiment) and free energy of solvation (±0.5 kcal/mol from experiment). PMID:24000174
A Parameterization for the Triggering of Landscape Generated Moist Convection
NASA Technical Reports Server (NTRS)
Lynn, Barry H.; Tao, Wei-Kuo; Abramopoulos, Frank
1998-01-01
A set of relatively high resolution three-dimensional (3D) simulations were produced to investigate the triggering of moist convection by landscape generated mesoscale circulations. The local accumulated rainfall varied monotonically (linearly) with the size of individual landscape patches, demonstrating the need to develop a trigger function that is sensitive to the size of individual patches. A new triggering function that includes the effect of landscapes generated mesoscale circulations over patches of different sizes consists of a parcel's perturbation in vertical velocity (nu(sub 0)), temperature (theta(sub 0)), and moisture (q(sub 0)). Each variable in the triggering function was also sensitive to soil moisture gradients, atmospheric initial conditions, and moist processes. The parcel's vertical velocity, temperature, and moisture perturbation were partitioned into mesoscale and turbulent components. Budget equations were derived for theta(sub 0) and q(sub 0). Of the many terms in this set of budget equations, the turbulent, vertical flux of the mesoscale temperature and moisture contributed most to the triggering of moist convection through the impact of these fluxes on the parcel's temperature and moisture profile. These fluxes needed to be parameterized to obtain theta(sub 0) and q(sub 0). The mesoscale vertical velocity also affected the profile of nu(sub 0). We used similarity theory to parameterize these fluxes as well as the parcel's mesoscale vertical velocity.
Sensitivity of liquid clouds to homogenous freezing parameterizations
Herbert, Ross J; Murray, Benjamin J; Dobbie, Steven J; Koop, Thomas
2015-01-01
Water droplets in some clouds can supercool to temperatures where homogeneous ice nucleation becomes the dominant freezing mechanism. In many cloud resolving and mesoscale models, it is assumed that homogeneous ice nucleation in water droplets only occurs below some threshold temperature typically set at −40°C. However, laboratory measurements show that there is a finite rate of nucleation at warmer temperatures. In this study we use a parcel model with detailed microphysics to show that cloud properties can be sensitive to homogeneous ice nucleation as warm as −30°C. Thus, homogeneous ice nucleation may be more important for cloud development, precipitation rates, and key cloud radiative parameters than is often assumed. Furthermore, we show that cloud development is particularly sensitive to the temperature dependence of the nucleation rate. In order to better constrain the parameterization of homogeneous ice nucleation, laboratory measurements are needed at both high (>−35°C) and low (<−38°C) temperatures. Key Points: homogeneous freezing may be significant as warm as −30°C; homogeneous freezing should not be represented by a threshold approximation; there is a need for an improved parameterization of homogeneous ice nucleation. PMID:26074652
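Why a finite nucleation rate matters can be seen from the stochastic freezing relation: for droplets of volume V held for time t at temperature T with homogeneous nucleation rate J(T), the frozen fraction is f = 1 − exp(−J·V·t). The J(T) below is a crude illustrative fit (a few orders of magnitude per kelvin near −36°C), NOT a laboratory-derived parameterization.

```python
import numpy as np

def J(T_c):
    """Hypothetical nucleation rate (cm^-3 s^-1) at temperature T_c (Celsius)."""
    return 10.0 ** (8.0 - 3.0 * (T_c + 36.0))

def frozen_fraction(T_c, radius_um, t_s):
    """Fraction of droplets of the given radius frozen after t_s seconds."""
    V = (4.0 / 3.0) * np.pi * (radius_um * 1.0e-4) ** 3   # droplet volume (cm^3)
    return 1.0 - np.exp(-J(T_c) * V * t_s)

for T in (-30.0, -34.0, -38.0):
    print(T, frozen_fraction(T, radius_um=10.0, t_s=60.0))
# The steep temperature dependence is why a fixed -40 C threshold can look
# adequate, even though a finite fraction already freezes a few degrees warmer.
```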
Locally isometric and conformal parameterization of image manifold
NASA Astrophysics Data System (ADS)
Bernstein, A. V.; Kuleshov, A. P.; Yanovich, Yu. A.
2015-12-01
Images can be represented as vectors in a high-dimensional Image space with components specifying light intensities at image pixels. To avoid the `curse of dimensionality', the original high-dimensional image data are transformed into their lower-dimensional features preserving certain subject-driven data properties. These properties can include `information-preserving' when using the constructed low-dimensional features instead of original high-dimensional vectors, as well as preserving the distances and angles between the original high-dimensional image vectors. Under the commonly used Manifold assumption that the high-dimensional image data lie on or near a certain unknown low-dimensional Image manifold embedded in an ambient high-dimensional `observation' space, the construction of the lower-dimensional features consists in constructing an Embedding mapping from the Image manifold to Feature space, which, in turn, determines a low-dimensional parameterization of the Image manifold. We propose a new geometrically motivated Embedding method which constructs a low-dimensional parameterization of the Image manifold and provides the information-preserving property as well as the locally isometric and conformal properties.
An improved ice cloud formation parameterization in the EMAC model
NASA Astrophysics Data System (ADS)
Bacer, Sara; Pozzer, Andrea; Karydis, Vlassis; Tsimpidi, Alexandra; Tost, Holger; Sullivan, Sylvia; Nenes, Athanasios; Barahona, Donifan; Lelieveld, Jos
2017-04-01
Cirrus clouds cover about 30% of the Earth's surface and are an important modulator of the radiative energy budget of the atmosphere. Despite their importance in the global climate system, large uncertainties remain in the understanding of their microphysical properties and interactions with aerosols. Ice crystal formation is quite complex, and a variety of mechanisms exist for ice nucleation, depending on aerosol characteristics and environmental conditions. Ice crystals can be formed via homogeneous nucleation or via heterogeneous nucleation on ice-nucleating particles in different ways (contact, immersion, condensation, deposition). We have implemented the computationally efficient cirrus cloud formation parameterization by Barahona and Nenes (2009) into the EMAC (ECHAM5/MESSy Atmospheric Chemistry) model in order to improve the representation of ice clouds and aerosol-cloud interactions. The parameterization computes the ice crystal number concentration from precursor aerosols and ice-nucleating particles, accounting for the competition between homogeneous and heterogeneous nucleation and among different freezing modes. Our work shows the differences and the improvements obtained after the implementation with respect to the previous version of EMAC.
A parameterized model for global insolation under partially cloudy skies
NASA Technical Reports Server (NTRS)
Choudhury, B.
1982-01-01
A simple and efficient parameterization of insolation under partially cloudy skies is discussed and compared with a set of exact radiative transfer results for clear skies, an empirical equation and observations. The parameterization is physically based and requires, as input variables, the ozone path length, precipitable water, Angstrom turbidity, surface air pressure and albedo, fractional cloud-cover and cloud thickness. Multiple reflection between the surface and the overlying atmosphere, and clouds are considered. The albedo of the earth-atmosphere system is also formulated and compared with a set of exact radiative transfer results. As compared to the exact radiative transfer results, the errors in the insolations are generally less than 1 percent, and in the albedo of the earth-atmosphere system less than 10 percent. The errors in the calculated insolations using climatological data are 2-3 percent when compared with many years averaged observations at Maudheim (Antarctica) and at Rockville (U.S.A.). A parametric equation for calculating directly the daily total insolation is also given.
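The structure of such a parameterization — cloud-fraction weighting of clear and cloudy transmittances plus a multiple-reflection enhancement — can be sketched as follows. All numeric values here are illustrative placeholders, not Choudhury's fitted coefficients:

```python
def insolation(S0=1361.0, mu0=0.6, tau_clear=0.75, tau_cloud=0.35,
               cloud_frac=0.4, albedo_surface=0.2, albedo_atmos=0.25):
    """Toy partially-cloudy insolation (W m^-2).

    Linear cloud-fraction weighting of clear-sky and overcast
    transmittances, plus a geometric-series factor for multiple
    reflection between the surface and the overlying atmosphere/clouds.
    """
    transmittance = (1.0 - cloud_frac) * tau_clear + cloud_frac * tau_cloud
    direct = S0 * mu0 * transmittance
    # Multiple reflections: sum of the geometric series
    # 1 + a_s*a_a + (a_s*a_a)^2 + ... = 1 / (1 - a_s*a_a)
    enhancement = 1.0 / (1.0 - albedo_surface * albedo_atmos)
    return direct * enhancement

print(round(insolation(), 1))
```

The real parameterization additionally folds in ozone path length, precipitable water, Angstrom turbidity, and surface pressure, which this sketch omits.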
Parameterizing moisture in glacier debris cover using a bucket scheme
NASA Astrophysics Data System (ADS)
Collier, Emily; Nicholson, Lindsey I.; Maussion, Fabien; Mölg, Thomas
2013-04-01
Due to the complexity of treating moisture in supraglacial debris cover, full surface energy balance models to date have neglected both moisture fluxes and phase changes in the debris layer. However, the presence of liquid and frozen water has an important influence on the thermal properties of the debris layer. In addition, large spikes in the latent heat flux over supraglacial debris have been measured, suggesting that neglecting this flux in a surface energy balance calculation may be an inaccurate assumption under certain meteorological conditions. Here, we explore the utility of a bucket scheme for parameterizing moisture fluxes and phase changes in a glacier debris layer. The bucket scheme simulates infiltration of liquid water into pore spaces in the debris cover. The thermal properties of the debris cover, which partially determine the energy flux to the underlying ice, are then computed as a function of the water content and phase. We employ the bucket parameterization in a high-resolution, physically-based, and integrated atmosphere-glacier mass balance model to quantify the importance of moisture on the surface energy and mass balance of debris-covered glaciers through an application over the Karakoram region of the northwestern Himalaya.
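The bucket scheme's bookkeeping can be sketched as below; the variable names, units, and the linear saturation-conductivity law are assumptions for illustration, not the authors' formulation:

```python
def bucket_step(water, rain, capacity, evap_potential, dt):
    """One time step of a toy bucket scheme for debris-layer moisture.

    water: liquid water stored in the debris pore space (kg m^-2)
    rain: rainfall rate reaching the debris (kg m^-2 s^-1)
    capacity: maximum storable water, i.e. the pore volume (kg m^-2)
    evap_potential: potential evaporation rate (kg m^-2 s^-1)
    Returns (new_water, runoff, evaporation) over the step.
    """
    water = water + rain * dt
    runoff = max(water - capacity, 0.0)      # overflow leaves the bucket
    water -= runoff
    evap = min(evap_potential * dt, water)   # cannot evaporate more than stored
    water -= evap
    return water, runoff, evap

def debris_conductivity(water, capacity, k_dry=0.9, k_wet=1.8):
    """Thermal conductivity (W m^-1 K^-1) rising linearly with saturation
    (illustrative: wet debris conducts heat to the ice more effectively)."""
    sat = water / capacity
    return k_dry + sat * (k_wet - k_dry)

w, runoff, evap = bucket_step(water=0.5, rain=2e-4, capacity=2.0,
                              evap_potential=1e-5, dt=3600.0)
print(round(w, 3), round(runoff, 3), round(evap, 3))
```

Water is conserved by construction: initial storage plus rainfall equals final storage plus runoff plus evaporation.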
NASA Astrophysics Data System (ADS)
Koch, D.; Bond, T.; Kinne, S.; Klimont, Z.; Sun, H.; van Aardenne, J.; van der Werf, G.
2006-12-01
Estimates of human influence on climate are especially hindered by poor constraint on the amount of anthropogenic carbonaceous aerosol absorption in the atmosphere. Coordinated observational and model analyses attempt to constrain the particle absorption amount; however, these are limited by uncertainties in aerosol emission estimates, model scavenging parameterization, aerosol size assumptions, contributions from organic aerosol absorption, air concentration observational techniques, and by sparsity of data coverage. We perform multiple simulations using GISS modelE and six present-day emission estimates for black carbon (BC) and organic carbon (OC) (Bond et al 2004 middle and upper estimates, IIASA, EDGAR, GFED v1 and v2); for one of these emissions we apply four different BC/OC scavenging parameterizations. The resulting concentrations will be compared with a new compilation of observed BC/OC concentrations. We then use these model concentrations, together with effective radius assumptions and estimates of OC absorption, to calculate a range of carbonaceous aerosol absorption. We constrain the wavelength-dependent model τ-absorption with AERONET sun-photometer observations. We will discuss regions, seasons and emission sectors with greatest uncertainty, including those where observational constraint is lacking. We calculate the range of model radiative forcing from our simulations and discuss the degree to which it is constrained by observations.
NASA Astrophysics Data System (ADS)
Zhu, J.; Fox-Kemper, B.; Bachman, S.; Van Roekel, L. P.; Hamlington, P.; Taylor, J. R.; Thomas, L. N.
2016-02-01
We test a variety of traditional and new boundary layer turbulence parameterizations in the MITgcm against a truth simulation of two nearby fronts under the effects of waves and winds in a 20km by 20km domain using the NCAR LES at 5m horizontal resolution (Hamlington et al., 2014). Because parameterization performance varies with grid resolution, the parameterization should be chosen in accordance with the grid configuration to represent boundary layer turbulence, Langmuir turbulence, symmetric instabilities, and mixed layer eddies. Using the same initial conditions and forcing as the resolved truth in the NCAR LES, we calculate the time series of turbulence contributions as parameterized in the MITgcm. A multi-physics, multi-resolution ensemble of MITgcm simulations is produced and quantified. Different resolutions are tested to understand how the grid configuration influences the parameterizations. We also examine different variables (passive and active tracers, velocity, and energy) to evaluate when the parameterizations are suitable.
Jacobian transformed and detailed balance approximations for photon induced scattering
NASA Astrophysics Data System (ADS)
Wienke, B. R.; Budge, K. G.; Chang, J. H.; Dahl, J. A.; Hungerford, A. L.
2012-01-01
Photon emission and scattering are enhanced by the number of photons in the final state, and the photon transport equation reflects this in scattering-emission kernels and source terms. This is often a complication in both theoretical and numerical analyses, requiring approximations and assumptions about background and material temperatures, incident and exiting photon energies, local thermodynamic equilibrium, plus other related aspects of photon scattering and emission. We review earlier schemes parameterizing photon scattering-emission processes, and suggest two alternative schemes. One links the product of photon and electron distributions in the final state to the product in the initial state by Jacobian transformation of kinematical variables (energy and angle), and the other links integrands of scattering kernels in a detailed balance requirement for overall (integrated) induced effects. Compton and inverse Compton differential scattering cross sections are detailed in appropriate limits, numerical integrations are performed over the induced scattering kernel, and for tabulation, induced scattering terms are incorporated into effective cross sections for comparisons and numerical estimates. Relativistic electron distributions are assumed for calculations. Both Wien and Planckian distributions are contrasted for impact on induced scattering as LTE limit points. We find that both transformed and balanced approximations suggest larger induced scattering effects at high photon energies and low electron temperatures, and smaller effects in the opposite limits, compared to previous analyses, with 10-20% increases in effective cross sections. We also note that both approximations can be simply implemented within existing transport modules or opacity processors as an additional term in the effective scattering cross section. Applications and comparisons include effective cross sections, kernel approximations, and impacts on radiative transport solutions in 1D
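Why the choice of Wien versus Planckian background matters can be illustrated with the (1 + n) induced-scattering enhancement factor, where n is the photon occupation number. This is only a numeric sketch of the occupation statistics, not the paper's Jacobian-transformed or detailed-balance kernels:

```python
import math

def occupation_planck(x):
    """Planck photon occupation number n = 1/(e^x - 1), x = h*nu/(k*T)."""
    return 1.0 / math.expm1(x)

def occupation_wien(x):
    """Wien-limit occupation n ~ e^-x (a good approximation for x >> 1)."""
    return math.exp(-x)

# Induced (stimulated) scattering and emission scale with the factor (1 + n):
for x in (0.1, 1.0, 5.0):
    factor_planck = 1.0 + occupation_planck(x)
    factor_wien = 1.0 + occupation_wien(x)
    print(x, round(factor_planck, 4), round(factor_wien, 4))
```

At low photon energy relative to temperature (small x) the two distributions diverge strongly, while for x >> 1 they agree — consistent with induced effects being most model-sensitive where occupation numbers are large.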
A Coordinated Effort to Improve Parameterization of High-Latitude Cloud and Radiation Processes
J. O. Pinto, A.H. Lynch
2005-12-14
The goal of this project is the development and evaluation of improved parameterization of arctic cloud and radiation processes and implementation of the parameterizations into a climate model. Our research focuses specifically on the following issues: (1) continued development and evaluation of cloud microphysical parameterizations, focusing on issues of particular relevance for mixed phase clouds; and (2) evaluation of the mesoscale simulation of arctic cloud system life cycles.
2015-08-01
Defense AT&L: July–August 2015. Removing Bureaucracy. Katharina G. McFarland. McFarland is Assistant Secretary of Defense for Acquisition. I once…managed a new start program to deliver a revolutionary warfighting capability in Battlefield Management/Command and Control. The Service sponsor was…involvement from all of the Service warfighting areas came together to scrub the program requirements due to concern over the “bureaucracy” and
NASA Astrophysics Data System (ADS)
Norbury, John W.
2009-05-01
Inclusive kaon, proton, and antiproton production from high-energy proton-proton collisions is studied. Various available parameterizations of Lorentz-invariant, differential cross sections, as a function of transverse momentum and rapidity, are compared with experimental data. This paper shows that the Badhwar parameterization provides the best fit for charged kaon production. For proton production, the Alper parameterization is best and for antiproton production the Carey parameterization works best. The formulae for these cross sections are suitable for use in high-energy cosmic ray transport codes.
Larson, Vincent; Gettelman, Andrew; Morrison, Hugh; Bacmeister, Julio; Feingold, Graham; Lee, Seoung-soo; Williams, Christopher
2016-09-14
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.
Noise suppression in scatter correction for cone-beam CT
Zhu, Lei; Wang, Jing; Xing, Lei
2009-01-01
Scatter correction is crucial to the quality of reconstructed images in x-ray cone-beam computed tomography (CBCT). Most existing scatter correction methods assume smooth scatter distributions. The high-frequency scatter noise remains in the projection images even after a perfect scatter correction. In this paper, using a clinical CBCT system and a measurement-based scatter correction, the authors show that a scatter correction alone does not provide satisfactory image quality and the loss of the contrast-to-noise ratio (CNR) of the scatter corrected image may outweigh the benefit of scatter removal. To circumvent the problem and truly gain from scatter correction, an effective scatter noise suppression method must be in place. They analyze the noise properties in the projections after scatter correction and propose to use a penalized weighted least-squares (PWLS) algorithm to reduce the noise in the reconstructed images. Experimental results on an evaluation phantom (Catphan©600) show that the proposed algorithm further reduces the reconstruction error in a scatter corrected image from 10.6% to 1.7% and increases the CNR by a factor of 3.6. Significant image quality improvement is also shown in the results on an anthropomorphic phantom, in which the global noise level is reduced and the local streaking artifacts around bones are suppressed. PMID:19378735
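A 1-D sketch of the PWLS idea: minimize a variance-weighted data-fidelity term plus a roughness penalty, which has a closed-form solution via the normal equations. The data are synthetic and the weights, penalty, and dimensionality are illustrative (the paper applies PWLS to 2-D CBCT projections):

```python
import numpy as np

def pwls_denoise(p, var, beta=50.0):
    """Penalized weighted least-squares smoothing of a 1-D projection line.

    Minimizes (q - p)^T W (q - p) + beta * ||D q||^2, where W = diag(1/var)
    weights each measurement by its inverse variance and D is the
    second-difference (roughness) operator.
    """
    n = len(p)
    W = np.diag(1.0 / np.asarray(var))
    D = np.zeros((n - 2, n))            # second-difference matrix
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = W + beta * D.T @ D              # normal equations of the objective
    return np.linalg.solve(A, W @ p)

rng = np.random.default_rng(1)
truth = np.linspace(0.0, 1.0, 64) ** 2      # smooth underlying signal
noisy = truth + rng.normal(scale=0.05, size=64)
smooth = pwls_denoise(noisy, var=np.full(64, 0.05 ** 2))
print(np.mean((smooth - truth) ** 2) < np.mean((noisy - truth) ** 2))
```

The inverse-variance weighting is the key PWLS feature: noisier measurements (e.g. heavily scatter-corrected rays) are trusted less.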
Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai
2015-01-01
In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmospheric Model version 5.3 (CAM5.3), the effects of preexisting ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of cirrus cloud rather than in the whole area of cirrus cloud. With these improvements, the two unphysical limiters used in the representation of ice nucleation are removed. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The preexisting ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL) are implemented in CAM5.3 for the comparison. In-cloud ice crystal number concentration, percentage contribution from heterogeneous ice nucleation to total ice crystal number, and preexisting ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial times) in global annual mean column ice number concentration from the KL parameterization (3.24×10^6 m^-2) is obviously less than that from the LP (8.46×10^6 m^-2) and BN (5.62×10^6 m^-2) parameterizations. As a result, the experiment using the KL parameterization predicts a much smaller anthropogenic aerosol longwave indirect forcing (0.24 W m^-2) than that using the LP (0.46 W m^-2
SU-E-T-597: Parameterization of the Photon Beam Dosimetry for a Commercial Linear Accelerator
Lebron, S; Lu, B; Yan, G; Kahler, D; Li, J; Barraclough, B; Liu, C
2015-06-15
Purpose: In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS modelled data, (3) a linear accelerator’s (linac) beam characteristics quality assurance process, and (4) establishing a standard data set for data comparison, etc. Parameterization of the photon beam dosimetry creates a portable data set that is easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon percentage depth doses (PDD), profiles, and total scatter output factors (Scp). Methods: Scp, PDDs and profiles for different field sizes (from 2×2 to 40×40 cm^2), depths and energies were measured in a linac using a three-dimensional water tank. All data were smoothed and profile data were also centered, symmetrized and geometrically scaled. The Scp and PDD data were analyzed using exponential functions. For modelling of open and wedge field profiles, each side was divided into three regions described by exponential, sigmoid and Gaussian equations. The model’s equations were chosen based on the physical principles described by these dosimetric quantities. The equations’ parameters were determined using a least square optimization method with the minimal amount of measured data necessary. The model’s accuracy was then evaluated via the calculation of absolute differences and distance-to-agreement analysis in low gradient and high gradient regions, respectively. Results: All differences in the PDDs’ buildup and the profiles’ penumbra regions were less than 2 mm and 0.5 mm, respectively. Differences in the low gradient regions were 0.20 ± 0.20% and 0.50 ± 0.35% for PDDs and profiles, respectively. For Scp data, all differences were less than 0.5%. Conclusion: This novel analytical model with minimum measurement requirements proved to accurately
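A hedged sketch of fitting one profile region by least squares, using a sigmoid penumbra model on synthetic data. The functional form, parameter names, and values below are illustrative stand-ins, not the paper's fitted equations or measured linac data:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, w, top):
    """Penumbra model: dose falling from `top` toward 0 around field edge x0,
    with `w` controlling the penumbra width."""
    return top / (1.0 + np.exp((x - x0) / w))

# Synthetic crossline profile edge (cm from the nominal field edge).
x = np.linspace(-2.0, 2.0, 81)
true = sigmoid(x, 0.1, 0.25, 100.0)
rng = np.random.default_rng(2)
meas = true + rng.normal(scale=0.3, size=x.size)

# Least-squares optimization of the model parameters, as in the abstract.
popt, _ = curve_fit(sigmoid, x, meas, p0=(0.0, 0.3, 95.0))
print(np.round(popt, 2))   # recovered (edge position, width, dose level)
```

The same pattern — choose a physically motivated functional form, then fit its parameters to a minimal set of measurements — applies to the exponential and Gaussian regions as well.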
Building depth images from scattered point cloud
NASA Astrophysics Data System (ADS)
Wei, Shuangfeng; Chen, Hong
2009-10-01
Plane and sphere datums are fitted with linear least squares. For cylinder datum fitting, the general quadric surface (GQS) equation of the cylinder is first reduced from seven parameters to five; initial values are then obtained by a local paraboloid construction based on coordinate translation, and the fit is refined with the Levenberg-Marquardt nonlinear least-squares algorithm. However, initial values from the local paraboloid construction are unstable, so to improve the precision of cylinder fitting a robust method is put forward: initial cylinder parameters are first obtained from the Gauss image, and the cylinder is then fitted by nonlinear least squares on a parameterized distance function. After the reference datums are obtained, this paper proposes methods for creating depth images from scattered point clouds, with specific steps for the different reference datums. Finally, we choose point cloud data of ancient building components from laser scanning data of the Forbidden City in China as experimental data. The results demonstrate the stability and high precision of the plane, cylinder and sphere fitting, as well as the validity of depth images for representing the point cloud of an object.
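The linear least-squares sphere fit mentioned above can be sketched as follows. The algebraic linearization is standard (expand ||p − c||² = r² into a form linear in the center and a radius term); the data are synthetic, not the Forbidden City scans:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit.

    ||p - c||^2 = r^2 expands to 2 c.p + (r^2 - ||c||^2) = p.p,
    which is linear in the unknowns (c, k) with k = r^2 - ||c||^2.
    """
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = np.sum(P * P, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Points sampled on a sphere of radius 2 centered at (1, -2, 3).
rng = np.random.default_rng(3)
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([1.0, -2.0, 3.0]) + 2.0 * d
c, r = fit_sphere(pts)
print(np.round(c, 3), round(float(r), 3))
```

Cylinders admit no such exact linearization, which is why the abstract resorts to a nonlinear (Levenberg-Marquardt) refinement with carefully chosen initial values.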
Xia, Xiangao
2015-01-01
Aerosols impact clear-sky surface irradiance through scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and surface irradiance have been established to describe the aerosol direct radiative effect (ADRE). However, considerable uncertainties remain in ADRE estimates due to incorrect estimation of the irradiance that would occur in the absence of aerosols. Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on surface irradiance are thoroughly considered, leading to an effective parameterization of the irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate the irradiance with a mean bias error of 0.32 W m−2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from irradiance, or vice versa, show root-mean-square errors of 0.08 and 10.0 W m−2, respectively. Therefore, this study establishes a straightforward method to derive surface irradiance from τa, or to estimate τa from irradiance measurements if water vapor measurements are available. PMID:26395310
NASA Astrophysics Data System (ADS)
Brown, Steven S.; Dubé, William P.; Fuchs, Hendrik; Ryerson, Thomas B.; Wollny, Adam G.; Brock, Charles A.; Bahreini, Roya; Middlebrook, Ann M.; Neuman, J. Andrew; Atlas, Elliot; Roberts, James M.; Osthoff, Hans D.; Trainer, Michael; Fehsenfeld, Frederick C.; Ravishankara, A. R.
2009-04-01
This paper presents determinations of reactive uptake coefficients for N2O5, γ(N2O5), on aerosols from nighttime aircraft measurements of ozone, nitrogen oxides, and aerosol surface area on the NOAA P-3 during the Second Texas Air Quality Study (TexAQS II). Determinations based on both the steady state approximation for NO3 and N2O5 and a plume modeling approach yielded γ(N2O5) values substantially smaller than current parameterizations used for atmospheric modeling, generally in the range 0.5-6 × 10-3. Dependence of γ(N2O5) on variables such as relative humidity and aerosol composition was not apparent in the determinations, although there was considerable scatter in the data. Determinations were also inconsistent with current parameterizations of the rate coefficient for homogeneous hydrolysis of N2O5 by water vapor, which may be as much as a factor of 10 too large. Nocturnal halogen activation via conversion of N2O5 to ClNO2 on chloride aerosol was not determinable from these data, although limits based on laboratory parameterizations and maximum nonrefractory aerosol chloride content showed that this chemistry could have been comparable to direct production of HNO3 in some cases.
A stratiform cloud parameterization for General Circulation Models
Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.
1994-05-01
The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species.
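A minimal example of how a bulk scheme converts non-precipitating cloud water to a precipitating species is a Kessler-type autoconversion rate. This is an illustrative textbook form with placeholder constants, not this project's scheme:

```python
def kessler_autoconversion(q_cloud, q_crit=5e-4, rate=1e-3):
    """Kessler-type autoconversion tendency (kg/kg per second).

    Cloud water converts to rain in proportion to its excess over a
    threshold mixing ratio q_crit; below the threshold nothing converts.
    q_cloud, q_crit in kg/kg; rate in s^-1.
    """
    return rate * max(q_cloud - q_crit, 0.0)

print(kessler_autoconversion(3e-4))   # below threshold: no conversion
print(kessler_autoconversion(1e-3))   # above threshold: finite tendency
```

More sophisticated two-moment schemes, like the one described here, additionally carry number concentrations, so the conversion rate can depend on particle size rather than mass alone.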
FSP (Full Space Parameterization), Version 2.0
Fries, G.A.; Hacker, C.J.; Pin, F.G.
1995-10-01
This paper describes the modifications made to FSPv1.0 for the Full Space Parameterization (FSP) method, a new analytical method used to resolve underspecified systems of algebraic equations. The optimized code recursively searches for the number of linearly independent vectors necessary to form the solution space. While doing this, it ensures that all possible combinations of solutions are checked, if needed, and handles complications which arise in particular cases. In addition, two particular cases which cause failure of the FSP algorithm were discovered during testing of this new code. These cases are described in the context of how they are recognized and how they are handled by the new code. Finally, testing was performed on the new code using both isolated movements and complex trajectories for various mobile manipulators.
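A numerical analogue of resolving an underspecified system is to combine a particular solution with a basis of linearly independent null-space vectors; every point of the solution space is then a particular solution plus a null-space combination. FSP itself constructs its vectors analytically and recursively, so this SVD-based version is only a generic sketch:

```python
import numpy as np

def solution_space(A, b, tol=1e-10):
    """General solution of an underspecified system A x = b.

    Returns a particular solution (via the pseudoinverse) and a matrix
    whose columns are linearly independent vectors spanning the null
    space of A, found from the SVD.
    """
    x_p = np.linalg.pinv(A) @ b
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    null_basis = Vt[rank:].T        # columns span the null space of A
    return x_p, null_basis

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])    # 2 equations, 3 unknowns
b = np.array([3.0, 2.0])
x_p, N = solution_space(A, b)

# Every x_p + N @ t solves the system, for any coefficient vector t:
t = np.array([1.7])
print(np.allclose(A @ (x_p + N @ t), b))
```

For a redundant manipulator, the analogous free coefficients are what the FSP method optimizes over when choosing among joint-space solutions.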
Source term parameterization of unresolved obstacles in wave modelling
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Pérez, Jorge; Besio, Giovanni; Mendez, Fernando; Menendez, Melisa
2015-04-01
In the present work we introduce two source terms for the parameterization of energy dissipation due to unresolved obstacles in spectral wave models. The proposed approach differs from the classical one based on spatial propagation schemes because it provides a local representation of phenomena such as unresolved wave energy dissipation. This source term-based approach presents the advantage of decoupling the parameterization of unresolved obstacles from the spatial propagation scheme. Furthermore it opens the way to parameterizations of other unresolved sheltering effects like rotation and frequency shift of spectral components. Energy dissipation due to unresolved obstacles is modeled locally through a Local Dissipation (LD) source term in order to provide a low resolution obstructed cell for the correct average energy. Furthermore a Shadow Effect (SE) source term has been introduced to model the correct energy flux towards downstream cells. The LD-SE source term scheme aims to reproduce in a low resolution grid the average conditions modeled by a high resolution model able to resolve obstacles in an exact way. The LD and SE source terms are expressed as functions of obstructed cell transparency coefficients relative to different spectral components. An interesting finding is that an overall transparency coefficient α for each cell/spectral component is not enough to model adequately the average conditions. A further coefficient β is needed to take into account the layout of the obstacles inside the cell. This coefficient is given by the average transparency of sections starting from the upstream side of the obstructed cell. The mono-dimensional LD and SE source terms are given by: ∂F/∂t |_LD = −(2 − α_l/β_l) (c_g/Δx) F and ∂F/∂t |_SE = −(1 − β_u/α_u) (c_g/Δx) F, where the "l" and "u" subscripts indicate that the α and β coefficients are relative to the local and upstream cells, respectively. Validation of the source terms has been carried
Sensitivity of liquid clouds to homogenous freezing parameterizations
NASA Astrophysics Data System (ADS)
Herbert, Ross J.; Murray, Benjamin J.; Dobbie, Steven J.; Koop, Thomas
2015-03-01
Water droplets in some clouds can supercool to temperatures where homogeneous ice nucleation becomes the dominant freezing mechanism. In many cloud resolving and mesoscale models, it is assumed that homogeneous ice nucleation in water droplets only occurs below some threshold temperature, typically set at -40°C. However, laboratory measurements show that there is a finite rate of nucleation at warmer temperatures. In this study we use a parcel model with detailed microphysics to show that cloud properties can be sensitive to homogeneous ice nucleation as warm as -30°C. Thus, homogeneous ice nucleation may be more important for cloud development, precipitation rates, and key cloud radiative parameters than is often assumed. Furthermore, we show that cloud development is particularly sensitive to the temperature dependence of the nucleation rate. In order to better constrain the parameterization of homogeneous ice nucleation, laboratory measurements are needed at both high (>-35°C) and low (<-38°C) temperatures.
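The difference between a threshold treatment and a finite nucleation rate can be sketched with Poisson freezing statistics. The rate law J_hom below is a deliberately crude placeholder with made-up coefficients, not a laboratory-derived parameterization; only the f = 1 − exp(−J·V·dt) form is standard:

```python
import math

def frozen_fraction(J, volume, dt):
    """Fraction of droplets freezing in time dt: Poisson statistics for a
    homogeneous nucleation rate J (events cm^-3 s^-1) in droplet volume
    V (cm^3), f = 1 - exp(-J * V * dt)."""
    return 1.0 - math.exp(-J * volume * dt)

def J_hom(T_celsius, a=8.0, b=3.0, T0=-40.0):
    """Hypothetical steep rate law J ~ 10^(a - b*(T - T0)); the
    coefficients a, b, T0 are illustrative placeholders only."""
    return 10.0 ** (a - b * (T_celsius - T0))

V = 4.0 / 3.0 * math.pi * (10e-4) ** 3    # 10-micron-radius droplet, in cm^3
for T in (-30.0, -35.0, -38.0):
    print(T, frozen_fraction(J_hom(T), V, dt=10.0))
```

Even with a steep rate law, the frozen fraction at -30°C is small but nonzero over a parcel lifetime, which is precisely why the abstract argues a hard -40°C threshold can misrepresent cloud development.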
A simple parameterization of aerosol emissions in RAMS
NASA Astrophysics Data System (ADS)
Letcher, Theodore
Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered a lot of attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was its computational expense, which was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it was fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable assumption was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA) and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. While SA makes up a substantial portion of the total aerosol burden (much of which is organic material), the representation of this process is highly complex and expensive within a numerical
Cirrus parameterization from the FIRE ER-2 observations
NASA Technical Reports Server (NTRS)
Spinhirne, James D.
1990-01-01
Primary goals for the FIRE field experiments were validation of satellite cloud retrievals and study of cloud radiation parameters. The radiometer and lidar observations acquired from the NASA ER-2 high-altitude aircraft during the FIRE cirrus field study may be applied to derive quantities suitable for comparison with satellite retrievals and for defining the cirrus radiative characteristics. The analysis involves parameterization of the vertical cloud distribution and relative radiance effects. An initial case study from the 28 Oct. 1986 cirrus experiment has been carried out, and results from additional experiment days are to be reported. The observations reported are for 1 day. Analysis of the many other cirrus observation cases from the FIRE study shows variability of results.
Applying Software Engineering Metrics to Land Surface Parameterization Schemes.
NASA Astrophysics Data System (ADS)
Henderson-Sellers, A.; Henderson-Sellers, B.; Pollard, D.; Verner, J. M.; Pitman, A. J.
1995-05-01
In addition to model validation techniques and intermodel comparison projects, the authors propose the use of software engineering metrics as an additional tool for the enhancement of `quality' in climate models. By discriminating between internal, directly measurable characteristics of structural complexity, and external characteristics, such as maintainability and comprehensibility, a way to benefit climate modeling by the use of easily derivable metrics is explored. As a small illustration, the results of a pilot project are presented. This is a subproject of the Project for Intercomparison of Landsurface Parameterization Schemes in which the authors use some typical structural complexity metrics, namely, for control flow, size, and coupling. Finally, and purely indicatively, the authors compare the results obtained from these metrics with scientists' subjective views of the psychological complexity of the programs.
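One of the simplest control-flow metrics of the kind the abstract discusses is McCabe's cyclomatic complexity, i.e. one plus the number of decision points. The sketch below counts decision nodes in Python source (the authors' targets were land-surface scheme codes, typically Fortran; the set of node types counted is a common but not canonical choice):

```python
import ast

# AST node types treated as decision points (an illustrative selection).
DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp,
             ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    """McCabe-style control-flow metric: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

src = """
def classify(t):
    if t < -38:
        return 'homogeneous'
    elif t < 0:
        return 'heterogeneous'
    return 'liquid'
"""
print(cyclomatic_complexity(src))   # if + elif = 2 decisions, so 3
```

Size and coupling metrics, the other two families the pilot project used, can be computed in a similarly mechanical way, which is what makes them attractive as "easily derivable" quality indicators.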
Parameterized Facial Expression Synthesis Based on MPEG-4
NASA Astrophysics Data System (ADS)
Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos
2002-12-01
Within the MPEG-4 framework, one can build applications in which virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become attuned to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human-computer interaction, focusing on the analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions compatible with the MPEG-4 standard.
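The rule-based handling of intermediate expressions can be pictured as scaling archetypal FAP amplitudes by an activation level; a minimal sketch, in which the FAP names and amplitude values are hypothetical rather than MPEG-4 tabulated values:

```python
# Hypothetical FAP amplitudes for an archetypal "joy" expression
joy_archetype = {
    "raise_l_cornerlip": 120,
    "raise_r_cornerlip": 120,
    "squeeze_l_eyebrow": -40,
}

def intermediate_expression(archetype, activation):
    """Scale archetypal FAP amplitudes by an activation level in [0, 1]
    to synthesize an intermediate expression of the same family."""
    return {fap: round(value * activation) for fap, value in archetype.items()}

mild_joy = intermediate_expression(joy_archetype, 0.5)
print(mild_joy)
```

An actual implementation would interpolate between neighboring archetypes as well, but linear scaling by the activation parameter conveys the core of the rule-based technique.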
New particle-dependent parameterizations of heterogeneous freezing processes.
NASA Astrophysics Data System (ADS)
Diehl, Karoline; Mitra, Subir K.
2014-05-01
For detailed investigations of cloud microphysical processes, an adiabatic air parcel model with entrainment is used. It is a spectral bin model which explicitly solves the microphysical equations. The initiation of the ice phase is parameterized and describes the effects of different types of ice nuclei (mineral dust, soot, biological particles) in the immersion, contact, and deposition modes. As part of the research group INUIT (Ice Nuclei research UnIT), existing parameterizations have been modified for the present studies and new parameterizations have been developed, mainly on the basis of the outcome of INUIT experiments. Deposition freezing in the model depends on the presence of dry particles and on ice supersaturation. The description of contact freezing combines the collision kernel of dry particles with the fraction of frozen drops as a function of temperature and particle size. A new parameterization of immersion freezing has been coupled to the mass of insoluble particles contained in the drops using measured numbers of ice-active sites per unit mass. Sensitivity studies have been performed with a convective temperature and dew point profile and with two dry aerosol particle number size distributions. Single and coupled freezing processes are studied with different types of ice nuclei (e.g., bacteria, illite, kaolinite, feldspar). The strength of convection is varied so that the simulated cloud reaches different temperature levels. As a parameter to evaluate the results, the ice water fraction is selected, defined as the ratio of the ice water content to the total water content. Ice water fractions between 0.1 and 0.9 represent mixed-phase clouds, and fractions larger than 0.9 represent ice clouds. The results indicate the sensitive parameters for the formation of mixed-phase and ice clouds are: 1. a broad particle number size distribution with a high number of small particles, 2. temperatures below -25°C, 3. specific mineral dust particles as ice nuclei such
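The immersion-freezing parameterization described, coupling the frozen fraction to the insoluble particle mass through a density of ice-active sites per unit mass, can be sketched as follows; the exponential site-density fit and its constants are illustrative placeholders, not the INUIT-derived values:

```python
import math

def frozen_fraction(T_celsius, particle_mass_g, a=-0.7, b=0.6):
    """Singular immersion-freezing description: fraction of drops frozen at
    temperature T when each drop contains insoluble mass m, using an
    ice-active-site density per unit mass n_m(T) = exp(a - b*T).
    The constants a and b are illustrative, not fitted INUIT values."""
    n_m = math.exp(a - b * T_celsius)        # active sites per gram
    return 1.0 - math.exp(-n_m * particle_mass_g)

# Colder temperatures freeze a larger fraction of the drop population
print(frozen_fraction(-15.0, 1e-9), frozen_fraction(-25.0, 1e-9))
```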
Criteria and algorithms for spectrum parameterization of MST radar signals
NASA Technical Reports Server (NTRS)
Rastogi, P. K.
1984-01-01
The power spectra S(f) of MST radar signals contain useful information about the variance of refractivity fluctuations, the mean radial velocity, and the radial velocity variance in the atmosphere. When noise and other contaminating signals are absent, these quantities can be obtained directly from the zeroth, first, and second order moments of the spectra. A step-by-step procedure is outlined that can be used effectively to reduce large amounts of MST radar data (periodograms averaged in range and time) to a parameterized form. The parameters to which a periodogram can be reduced are outlined, together with the steps in the procedure, which may be followed selectively, to arrive at the final set of reduced parameters. Examples of the performance of the procedure are given, and its use with other radars is commented on.
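The moment estimates at the heart of this reduction can be sketched directly; a minimal Python version, assuming the noise level has already been estimated and ignoring velocity folding and contaminant removal:

```python
import math

def spectral_moments(freqs, power, noise_level=0.0):
    """Reduce a Doppler spectrum to its first three moments:
    total signal power (zeroth moment), mean frequency (first moment,
    proportional to the mean radial velocity), and spectral width.
    The noise floor is subtracted before integration."""
    s = [max(p - noise_level, 0.0) for p in power]
    m0 = sum(s)                                        # zeroth moment: power
    m1 = sum(f * p for f, p in zip(freqs, s)) / m0     # first: mean frequency
    m2 = sum((f - m1) ** 2 * p for f, p in zip(freqs, s)) / m0
    return m0, m1, math.sqrt(m2)       # width = sqrt of 2nd central moment

# A symmetric spectrum centered at 1.0 Hz
freqs = [0.0, 0.5, 1.0, 1.5, 2.0]
power = [0.1, 1.0, 2.0, 1.0, 0.1]
print(spectral_moments(freqs, power))
```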
Use of Cloud Computing to Calibrate a Highly Parameterized Model
NASA Astrophysics Data System (ADS)
Hayley, K. H.; Schumacher, J.; MacMillan, G.; Boutin, L.
2012-12-01
We present a case study using cloud computing to facilitate the calibration of a complex and highly parameterized model of regional groundwater flow. The calibration dataset consisted of many (~1500) measurements or estimates of static hydraulic head, a high-resolution time series of groundwater extraction and disposal rates at 42 locations and pressure monitoring at 147 locations with a total of more than one million raw measurements collected over a ten-year pumping history, and base flow estimates at 5 surface water monitoring locations. This modeling project was undertaken to assess the sustainability of groundwater withdrawal and disposal plans for in situ heavy oil extraction in Northeast Alberta, Canada. The geological interpretations used for model construction were based on more than 5,000 wireline logs collected throughout the 30,865 km² regional study area (RSA), and resulted in a model with 28 slices and 28 hydrostratigraphic units (average model thickness of 700 m, with aquifers ranging from a depth of 50 to 500 m below ground surface). The finite element FEFLOW model constructed on this geological interpretation had 331,408 nodes and required 265 time steps to simulate the ten-year transient calibration period. This numerical model of groundwater flow required 3 hours to run on a server with two 2.8 GHz processors and 16 GB of RAM. Calibration was completed using PEST. Horizontal and vertical hydraulic conductivity as well as specific storage for each unit were independent parameters. For the recharge and the horizontal hydraulic conductivity in the three aquifers with the most transient groundwater use, a pilot point parameterization was adopted. A 7×7 grid of pilot points defined over the RSA described a spatially variable horizontal hydraulic conductivity or recharge field. A 7×7 grid of multiplier pilot points that perturbed the more regional field was then superimposed over the 3,600 km² local study area (LSA). The pilot point
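The pilot-point idea, estimating a coarse grid of values that is then interpolated to every model node, can be sketched with inverse-distance weighting standing in for the kriging typically used with PEST; the point locations and log-conductivity values below are illustrative:

```python
# Sketch of a pilot-point parameterization: a coarse grid of adjustable
# values (the parameters PEST would estimate) is interpolated to every
# model node to form a spatially variable conductivity field.

def idw(x, y, pilot_points, power=2.0):
    """Inverse-distance-weighted interpolation from pilot points to (x, y)."""
    num = den = 0.0
    for (px, py, value) in pilot_points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return value            # node coincides with a pilot point
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

# Four illustrative pilot points carrying log10(K) values
pilots = [(0, 0, -4.0), (10, 0, -5.0), (0, 10, -4.5), (10, 10, -6.0)]
print(idw(5.0, 5.0, pilots))        # interpolated value at the domain centre
```

PEST then adjusts the pilot-point values themselves, so the number of calibration parameters stays small while the field remains spatially variable.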
A New Parameterization Framework for Boundary-Layer Cumuli
Berg, Larry K.; Stull, Roland B.
2005-03-14
The cumulus parameterization framework is called the Cumulus Potential (CuP) scheme. The scheme uses joint probability density functions (JPDFs) of temperature and moisture, together with the mean temperature profile, to predict the amount and size distribution of fair-weather cumuli. It considers the diversity of air parcels over a heterogeneous surface and recognizes that some rising parcels become cumuli while other parcels remain clear updrafts. Once a parcel becomes cloudy, the thermodynamic properties and the exchange of mass between the cloud and the environment are calculated using a cloud model within the CuP framework. The primary advantages of the new scheme are the prediction of cloud-base mass flux, cloud cover, and a range of cloud-top heights.
Sensitivity of liquid clouds to homogeneous freezing parameterizations.
Herbert, Ross J; Murray, Benjamin J; Dobbie, Steven J; Koop, Thomas
2015-03-16
Water droplets in some clouds can supercool to temperatures where homogeneous ice nucleation becomes the dominant freezing mechanism. In many cloud-resolving and mesoscale models, it is assumed that homogeneous ice nucleation in water droplets only occurs below some threshold temperature, typically set at -40°C. However, laboratory measurements show that there is a finite rate of nucleation at warmer temperatures. In this study we use a parcel model with detailed microphysics to show that cloud properties can be sensitive to homogeneous ice nucleation at temperatures as warm as -30°C. Thus, homogeneous ice nucleation may be more important for cloud development, precipitation rates, and key cloud radiative parameters than is often assumed. Furthermore, we show that cloud development is particularly sensitive to the temperature dependence of the nucleation rate. In order to better constrain the parameterization of homogeneous ice nucleation, laboratory measurements are needed at both high (>-35°C) and low (<-38°C) temperatures.
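The finite nucleation rate above the usual -40°C threshold can be illustrated with the standard volume-dependent freezing expression f = 1 - exp(-J(T)·V·t); the log-linear fit for J below is an illustrative placeholder, not a laboratory parameterization:

```python
import math

def frozen_fraction(T_celsius, droplet_radius_m, seconds):
    """Fraction of supercooled droplets frozen homogeneously after a given
    time, using f = 1 - exp(-J(T) * V * t). The nucleation rate coefficient
    J here follows an illustrative log-linear fit,
    log10 J [m^-3 s^-1] = -3*(T + 40) + 16, chosen only to show the very
    steep temperature dependence."""
    log10_J = -3.0 * (T_celsius + 40.0) + 16.0
    J = 10.0 ** log10_J                           # nucleation events m^-3 s^-1
    V = 4.0 / 3.0 * math.pi * droplet_radius_m ** 3
    return 1.0 - math.exp(-J * V * seconds)

# Nucleation is not a sharp threshold: a finite fraction freezes above -40 C
for T in (-30.0, -35.0, -38.0):
    print(T, frozen_fraction(T, 10e-6, 60.0))
```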
Parameterization of Aerosol Sinks in Chemical Transport Models
NASA Technical Reports Server (NTRS)
Colarco, Peter
2012-01-01
The modeler's point of view is that the aerosol problem is one of sources, evolution, and sinks. Relative to evolution and sink processes, enormous attention is given to the problem of aerosol sources, whether inventory-based (e.g., fossil fuel emissions) or dynamic (e.g., dust, sea salt, biomass burning). On the other hand, aerosol losses in models are a major factor in controlling the aerosol distribution and lifetime. Here we shine some light on how aerosol sinks are treated in modern chemical transport models. We discuss the mechanisms of dry and wet loss processes and the parameterizations for those processes in a single model (GEOS-5). We survey the literature of other modeling studies. We additionally compare the budgets of aerosol losses in several of the ICAP models.
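Whatever the mechanism, dry and wet sinks typically enter a transport model as first-order loss rates, so the burden decays exponentially and the e-folding lifetime is the inverse of the total rate; a minimal sketch with assumed rate constants:

```python
import math

def aerosol_burden(initial_kg, k_dry, k_wet, days):
    """Column burden under first-order dry-deposition and wet-scavenging
    sinks: dB/dt = -(k_dry + k_wet) * B. Rates in day^-1 are illustrative."""
    k = k_dry + k_wet
    return initial_kg * math.exp(-k * days)

k_dry, k_wet = 0.05, 0.15           # day^-1, assumed values
lifetime = 1.0 / (k_dry + k_wet)    # e-folding lifetime in days
print(lifetime, aerosol_burden(1.0e9, k_dry, k_wet, lifetime))
```

Comparing such loss budgets (and the implied lifetimes) across models is essentially what an intercomparison like the ICAP survey does.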
Parameterization of ion channeling half-angles and minimum yields
NASA Astrophysics Data System (ADS)
Doyle, Barney L.
2016-03-01
An MS Excel program has been written that calculates ion channeling half-angles and minimum yields in cubic bcc, fcc, and diamond lattice crystals. All of the tables and graphs in the three Ion Beam Analysis Handbooks, which previously had to be looked up and read manually, were programmed into Excel as convenient lookup tables or, in the case of the graphs, parameterized using rather simple exponential functions with different power functions of the arguments. The program then offers an extremely convenient way to calculate axial and planar half-angles and minimum yields, as well as the effects of amorphous overlayers on half-angles and minimum yields. The program can calculate these half-angles and minimum yields for axes and [h k l] planes up to (5 5 5). The program is open source and available at
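The physics underlying such lookup tables starts from the Lindhard characteristic angle; a minimal sketch (the measured half-angle is of this order but additionally needs the tabulated or parameterized corrections a tool like this encodes):

```python
import math

E2 = 14.4  # e^2 in eV·angstrom (Gaussian units)

def lindhard_psi1_deg(Z1, Z2, E_eV, d_angstrom):
    """Lindhard characteristic axial channeling angle,
    psi_1 = sqrt(2 * Z1 * Z2 * e^2 / (E * d)), returned in degrees.
    Z1/Z2 are projectile/target atomic numbers, E the ion energy,
    d the atomic spacing along the axis."""
    psi1_rad = math.sqrt(2.0 * Z1 * Z2 * E2 / (E_eV * d_angstrom))
    return math.degrees(psi1_rad)

# 2 MeV He along the Si <110> axis (atom spacing d ~ 3.84 angstrom)
print(lindhard_psi1_deg(2, 14, 2.0e6, 3.84))
```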
Coupled radiative convective equilibrium simulations with explicit and parameterized convection
NASA Astrophysics Data System (ADS)
Hohenegger, Cathy; Stevens, Bjorn
2016-09-01
Radiative convective equilibrium has been applied in past studies to various models given its simplicity and analogy to the tropical climate. At convection-permitting resolution, the focus has been on the organization of convection that appears when using fixed sea surface temperature (SST). Here the SST is allowed to respond freely to the surface energy. The goals are to examine and understand the resulting transient behavior, equilibrium state, and perturbations thereof, as well as to compare these results to a simulation integrated with parameterized cloud and convection. Analysis shows that the coupling between the SST and the net surface energy acts to delay the onset of self-aggregation and may prevent it, in our case, for a slab ocean of less than 1 m. This is so because SST gradients tend to oppose the shallow low-level circulation that is associated with the self-aggregation of convection. Furthermore, the occurrence of self-aggregation is found to be necessary for reaching an equilibrium state and avoiding a greenhouse-like climate. In analogy to the present climate, the self-aggregation generates the dry and clear subtropics that allow the system to cool efficiently. In contrast, strong shortwave cloud radiative effects, much stronger than at convection-permitting resolution, prevent the simulation with parameterized cloud and convection from falling into a greenhouse state. The convection-permitting simulations also suggest that cloud feedbacks, as arising when perturbing the equilibrium state, may be very different, and in our case less negative, than what emerges from general circulation models.
Parameterization of wind turbine impacts on hydrodynamics and sediment transport
NASA Astrophysics Data System (ADS)
Rivier, Aurélie; Bennis, Anne-Claire; Pinon, Grégory; Magar, Vanesa; Gross, Markus
2016-10-01
Monopile foundations of offshore wind turbines modify the hydrodynamics and sediment transport at local and regional scales. The aim of this work is to assess these modifications and to parameterize them in a regional model. In the present study, this is achieved through a regional circulation model, coupled with a sediment transport module, using two approaches. One approach is to explicitly model the monopiles in the mesh as dry cells, and the other is to parameterize them by adding a drag force term to the momentum and turbulence equations. Idealised cases are run using hydrodynamic conditions and sediment grain sizes typical of the area off Courseulles-sur-Mer (Normandy, France), where an offshore wind farm is planned, to assess the capacity of the model to reproduce the effect of the monopile on the environment. Then, the model is applied to a real configuration of an area including the future offshore wind farm of Courseulles-sur-Mer. Four monopiles are represented in the model using both approaches, and modifications of the hydrodynamics and sediment transport are assessed over a tidal cycle. In relation to local hydrodynamic effects, it is observed that currents increase at the sides of the monopile and decrease in front of and downstream of it. In relation to sediment transport effects, the results show that resuspension and erosion occur around the monopile where the current speed increases due to the monopile presence, and sediment deposits downstream where the bed shear stress is lower. During the tidal cycle, wakes downstream of one monopile reach the following monopile and modify the velocity magnitude and suspended sediment concentration patterns around the second monopile.
Synthesizing 3D Surfaces from Parameterized Strip Charts
NASA Technical Reports Server (NTRS)
Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri
2004-01-01
We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods to allow for rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows for trends and events to be identified. Strip charts can be overlaid when multiple parameters need to be visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area to illustrate the differences between the parameters than the overlaid strip charts, and it can rapidly be scanned by humans to gain insight. The selection of the third dimension must be a parallel or parameterized homogeneous resource in the target domain, defined using a finite, ordered, enumerated type, and not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark and drilling down to source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.
Systematic Parameterization of Monovalent Ions Employing the Nonbonded Model.
Li, Pengfei; Song, Lin Frank; Merz, Kenneth M
2015-04-14
Monovalent ions play fundamental roles in many biological processes in organisms. Modeling these ions in molecular simulations continues to be a challenging problem. The 12-6 Lennard-Jones (LJ) nonbonded model is widely used to model monovalent ions in classical molecular dynamics simulations. Many parameterization efforts have been reported for these ions against a number of experimental end points. However, some reported parameter sets do not have a good balance between the two Lennard-Jones parameters (the van der Waals (VDW) radius and the potential well depth), which affects their transferability. In the present work, via the use of a noble gas curve fitted in our former work (J. Chem. Theory Comput. 2013, 9, 2733), we reoptimized the 12-6 LJ parameters for 15 monovalent ions (11 positive and 4 negative ions) for three extensively used water models (TIP3P, SPC/E, and TIP4P-Ew). Since the 12-6 LJ nonbonded model performs poorly in some instances for these ions, we have also parameterized the 12-6-4 LJ-type nonbonded model (J. Chem. Theory Comput. 2014, 10, 289) using the same three water models. The three derived parameter sets, which focus on reproducing the hydration free energies (the HFE set) and the ion-oxygen distance (the IOD set) using the 12-6 LJ nonbonded model, and the 12-6-4 LJ-type nonbonded model (the 12-6-4 set), overall give improved results. In particular, the final parameter sets show better agreement with quantum mechanically calculated VDW radii and improved transferability to ion-pair solutions when compared to previous parameter sets.
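The difference between the two nonbonded forms is a single added r^-4 term; a minimal sketch in the Rmin convention, with illustrative (not published) parameter values:

```python
def lj_12_6(r, epsilon, rmin):
    """Standard 12-6 Lennard-Jones in the Rmin convention:
    U(r) = eps * [(Rmin/r)^12 - 2*(Rmin/r)^6], minimum of depth eps at Rmin."""
    x = (rmin / r) ** 6
    return epsilon * (x * x - 2.0 * x)

def lj_12_6_4(r, epsilon, rmin, c4):
    """12-6-4 variant: an added -C4/r^4 term approximates the
    ion-induced-dipole interaction the plain 12-6 form misses."""
    return lj_12_6(r, epsilon, rmin) - c4 / r ** 4

# Illustrative (not published) parameters for a monovalent cation-water pair
eps, rmin, c4 = 0.05, 3.0, 10.0   # kcal/mol, angstrom, kcal*angstrom^4/mol
print(lj_12_6(3.0, eps, rmin), lj_12_6_4(3.0, eps, rmin, c4))
```

The extra attraction at the same Rmin is what lets the 12-6-4 form reproduce hydration free energy and ion-oxygen distance simultaneously, which the two-parameter 12-6 form often cannot.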
Parameterization of Infrared Absorption in Midlatitude Cirrus Clouds
Sassen, Kenneth; Wang, Zhien; Platt, C.M.R.; Comstock, Jennifer M.
2003-01-01
Employing a new approach based on combined Raman lidar and millimeter-wave radar measurements and a parameterization of the infrared absorption coefficient σ_a (km^-1) in terms of retrieved cloud microphysics, we derive a statistical relation between σ_a and cirrus cloud temperature. The relations σ_a = 0.3949 + 5.3886 × 10^-3 T + 1.526 × 10^-5 T^2 for ambient temperature T (°C), and σ_a = 0.2896 + 3.409 × 10^-3 T_m for midcloud temperature T_m (°C), are found using second-order polynomial and linear fits, respectively. Comparison with two σ_a versus T_m relations obtained primarily from midlatitude cirrus using the combined lidar/infrared radiometer (LIRAD) approach reveals significant differences. However, we show that this reflects both the convention previously used in curve fitting (i.e., σ_a → 0 at ≈ -80°C) and the types of clouds included in the datasets. Without such constraints, convergence is found in the three independent remote sensing datasets within the range of conditions considered valid for cirrus (i.e., cloud optical depth ≈ 3.0 and T_m < ≈ -20°C). Hence, for completeness, we also provide reanalyzed parameterizations of the visible extinction coefficient versus T_m for midlatitude cirrus, and for a data sample involving cirrus that evolved into midlevel altostratus clouds with higher optical depths.
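The two quoted fits are straightforward to evaluate; a small sketch that reproduces the abstract's coefficients:

```python
def sigma_a_ambient(T):
    """IR absorption coefficient (km^-1) vs ambient temperature (deg C):
    the second-order polynomial fit quoted in the abstract."""
    return 0.3949 + 5.3886e-3 * T + 1.526e-5 * T ** 2

def sigma_a_midcloud(Tm):
    """IR absorption coefficient (km^-1) vs midcloud temperature (deg C):
    the linear fit quoted in the abstract."""
    return 0.2896 + 3.409e-3 * Tm

# Both fits give smaller absorption for colder (higher) cirrus
print(sigma_a_ambient(-40.0), sigma_a_midcloud(-40.0))
```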
Precisely parameterized experimental and computational models of tissue organization.
Molitoris, Jared M; Paliwal, Saurabh; Sekar, Rajesh B; Blake, Robert; Park, JinSeok; Trayanova, Natalia A; Tung, Leslie; Levchenko, Andre
2016-02-01
Patterns of cellular organization in diverse tissues frequently display a complex geometry and topology tightly related to the tissue function. Progressive disorganization of tissue morphology can lead to pathologic remodeling, necessitating the development of experimental and theoretical methods of analysis of the tolerance of normal tissue function to structural alterations. A systematic way to investigate the relationship of diverse cell organization to tissue function is to engineer two-dimensional cell monolayers replicating key aspects of the in vivo tissue architecture. However, it is still not clear how this can be accomplished on a tissue level scale in a parameterized fashion, allowing for a mathematically precise definition of the model tissue organization and properties down to a cellular scale with a parameter dependent gradual change in model tissue organization. Here, we describe and use a method of designing precisely parameterized, geometrically complex patterns that are then used to control cell alignment and communication of model tissues. We demonstrate direct application of this method to guiding the growth of cardiac cell cultures and developing mathematical models of cell function that correspond to the underlying experimental patterns. Several anisotropic patterned cultures spanning a broad range of multicellular organization, mimicking the cardiac tissue organization of different regions of the heart, were found to be similar to each other and to isotropic cell monolayers in terms of local cell-cell interactions, reflected in similar confluency, morphology and connexin-43 expression. However, in agreement with the model predictions, different anisotropic patterns of cell organization, paralleling in vivo alterations of cardiac tissue morphology, resulted in variable and novel functional responses with important implications for the initiation and maintenance of cardiac arrhythmias. We conclude that variations of tissue geometry and topology
Benchmark analysis of parameterization for terrestrial carbon cycle model (Invited)
NASA Astrophysics Data System (ADS)
Luo, Y.; Zhou, X.; Verburg, P.; Arnone, J.
2010-12-01
Parameterization of terrestrial ecosystem models plays an important role in accurately predicting carbon-climate feedback. More and more studies have shown that a fixed set of parameters cannot adequately represent spatial and temporal variations of ecosystem functions over broad geographical regions and/or over long timescales. In this study, we conducted a benchmark analysis of a terrestrial ecosystem (TECO) model against a highly accurate data set from a mesocosm study in the Ecologically Controlled Enclosed Lysimeter Laboratories (EcoCELLs) at the Desert Research Institute, Reno, Nevada. The mesocosm study involved shoot and whole-plant harvests in fall, fallow during winter, and fertilization treatments in year 2. We used a Markov chain Monte Carlo (MCMC) technique to estimate parameters of the TECO model and measure the model performance with the estimated parameters. Our analysis showed that the model performance with one set of estimated parameters was poor over the two-year experimental duration. The model performance was slightly improved with root exudation as an additional mechanism of carbon transfer from plants to the rhizosphere. The performance was significantly improved when five sets of parameters were estimated for five respective periods, which spanned from seeding to shoot harvest in year 1, from shoot to whole-plant harvest in year 1, fallow, from seeding to plant harvest with fertilization in year 2, and from plant harvest to the end of the project in year 2. The five sets of parameter values are significantly different, indicating that experimental treatments caused discontinuous (or discrete) changes in ecosystem processes. The discontinuous changes in ecosystem processes pose significant challenges for carbon cycle model parameterization and generate uncertainties for model prediction.
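The MCMC parameter estimation can be sketched with a basic Metropolis sampler; the one-parameter linear forward model and synthetic data below stand in for the TECO model and the EcoCELLs observations:

```python
import math, random

random.seed(0)

def log_likelihood(theta, data):
    """Gaussian misfit between model prediction (here simply y = theta * x)
    and observations; a real application replaces this toy forward model."""
    return -0.5 * sum((y - theta * x) ** 2 for x, y in data)

def metropolis(data, steps=5000, step_size=0.1):
    """Metropolis random-walk sampler for the single parameter theta."""
    theta = 0.0
    ll = log_likelihood(theta, data)
    chain = []
    for _ in range(steps):
        proposal = theta + random.gauss(0.0, step_size)
        ll_new = log_likelihood(proposal, data)
        if math.log(random.random()) < ll_new - ll:   # accept/reject
            theta, ll = proposal, ll_new
        chain.append(theta)
    return chain

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]             # synthetic observations
chain = metropolis(data)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
print(posterior_mean)
```

Running separate chains on data from each experimental period, as the study effectively does with its five parameter sets, is how discontinuous changes in the underlying processes show up as shifted posteriors.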
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter-applied images, which causes unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
Parameterizing sequence alignment with an explicit evolutionary model.
Rivas, Elena; Eddy, Sean R
2015-12-10
Inference of sequence homology is inherently an evolutionary question, dependent upon evolutionary divergence. However, the insertion and deletion penalties in the most widely used methods for inferring homology by sequence alignment, including BLAST and profile hidden Markov models (profile HMMs), are not based on any explicitly time-dependent evolutionary model. Using one fixed score system (BLOSUM62 with some gap open/extend costs, for example) corresponds to the unrealistic assumption that all sequence relationships have diverged for the same amount of time. Adoption of explicit time-dependent evolutionary models for scoring insertions and deletions in sequence alignments has been hindered by algorithmic complexity and technical difficulty. We identify and implement several probabilistic evolutionary models compatible with the affine-cost insertion/deletion model used in standard pairwise sequence alignment. Assuming an affine gap cost imposes important restrictions on the realism of the evolutionary models compatible with it, as single insertion events with geometrically distributed lengths do not result in geometrically distributed insert lengths at finite times. Nevertheless, we identify one evolutionary model compatible with the symmetric pair HMMs that are the basis for Smith-Waterman pairwise alignment, and two evolutionary models compatible with standard profile-based alignment. We test different aspects of the performance of these "optimized branch length" models, including alignment accuracy and homology coverage (discrimination of residues in a homologous region from nonhomologous flanking residues). We test on benchmarks of both global homologies (full-length sequence homologs) and local homologies (homologous subsequences embedded in nonhomologous sequence). Contrary to our expectations, we find that for global homologies a single long branch parameterization suffices both for distant and close homologous relationships. In contrast, we do see an advantage in
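The contrast with a fixed score system can be made concrete with the simplest explicitly time-dependent model: under Jukes-Cantor, the log-odds reward for a matched pair depends on divergence time, which no single BLOSUM-style matrix can capture. A minimal sketch (nucleotides rather than the amino acids used in the paper):

```python
import math

def jc_score(t):
    """Log-odds score (bits) for an identical aligned pair under the
    Jukes-Cantor model at divergence time t (expected substitutions/site):
    P_match(t) = 1/4 + (3/4) * exp(-4t/3), background frequency pi = 1/4."""
    p_match = 0.25 + 0.75 * math.exp(-4.0 * t / 3.0)
    return math.log2(p_match / 0.25)

# The match reward decays toward 0 as divergence time grows,
# which a fixed matrix like BLOSUM62 cannot express.
for t in (0.1, 0.5, 2.0):
    print(t, jc_score(t))
```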
The Parameterization of All Robust Stabilizing Simple Repetitive Controllers
NASA Astrophysics Data System (ADS)
Yamada, Kou; Sakanushi, Tatsuya; Ando, Yoshinori; Hagiwara, Takaaki; Murakami, Iwanori; Takenaga, Hiroshi; Tanaka, Hiroshi; Matsuura, Shun
The modified repetitive control system is a type of servomechanism for a periodic reference input; that is, it follows the periodic reference input with small steady-state error, even if a periodic disturbance or an uncertainty exists in the plant. With previously proposed modified repetitive controllers, even if the plant does not include a time delay, the transfer functions from the periodic reference input to the output and from the disturbance to the output have infinite numbers of poles, which makes it difficult to specify the input-output characteristic and the disturbance attenuation characteristic. From the practical point of view, it is desirable that these characteristics be easily specified, which in turn requires the transfer functions from the periodic reference input to the output and from the disturbance to the output to have finite numbers of poles. From this viewpoint, Yamada et al. proposed the concept of simple repetitive control systems, in which the controller works as a modified repetitive controller while the transfer functions from the periodic reference input to the output and from the disturbance to the output have finite numbers of poles. In addition, Yamada et al. clarified the parameterization of all stabilizing simple repetitive controllers. However, the method by Yamada et al. cannot be applied to a plant with uncertainty. The purpose of this paper is to propose the parameterization of all robust stabilizing simple repetitive controllers for the plant with uncertainty.
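The core repetitive-control idea, updating the input once per period from the error one period earlier, can be sketched in discrete time for a trivial unit-gain plant; the gains, period, and disturbance below are illustrative:

```python
import math

N = 20          # samples per period of the reference
L = 0.5         # learning gain

def simulate(periods=10):
    """Discrete repetitive control of a unit-gain plant y = u + d:
    u[k] = u[k-N] + L * e[k-N]. Because the disturbance d repeats with
    period N, the tracking error contracts by (1 - L) every period."""
    r = [math.sin(2 * math.pi * k / N) for k in range(N)]        # reference
    d = [0.3 * math.cos(2 * math.pi * k / N) for k in range(N)]  # disturbance
    u = [0.0] * N
    max_err = []
    for _ in range(periods):
        e = [r[k] - (u[k] + d[k]) for k in range(N)]
        max_err.append(max(abs(v) for v in e))
        u = [u[k] + L * e[k] for k in range(N)]   # update once per period
    return max_err

errs = simulate()
print(errs[0], errs[-1])
```

A real modified repetitive controller filters this internal-model loop to trade tracking accuracy for robustness; the paper's contribution is characterizing every such controller that remains stabilizing under plant uncertainty.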
Ameriflux data used for verification of surface layer parameterizations
NASA Astrophysics Data System (ADS)
Tassone, Caterina; Ek, Mike
2015-04-01
The atmospheric surface-layer parameterization is an important component of a coupled model, as its outputs, the surface exchange coefficients for momentum, heat and humidity, are used to determine the fluxes of these quantities between the land surface and the atmosphere. An accurate prediction of these fluxes is therefore required in order to provide a correct forecast of the surface temperature, humidity and, ultimately, also the precipitation in a model. At the NOAA/NCEP Environmental Modeling Center, a one-dimensional Surface Layer Simulator (SLS) has been developed for simulating the surface layer and its interface. Two different configurations of the SLS exist, replicating in essence the way in which the surface layer is simulated in the GFS and the NAM, respectively. Input data for the SLS are the basic atmospheric quantities of wind, temperature, humidity and pressure evaluated at a specific height above the ground, surface values of temperature and humidity, and the momentum roughness length z0. The output values of the SLS are the surface exchange coefficients for heat and momentum. The exchange coefficients computed by the SLS are then compared with independent estimates derived from measured surface heat fluxes. The SLS is driven by a set of Ameriflux data acquired at 22 stations over a period of several years. This provides a large number of different vegetation characteristics and helps ensure statistical significance. Even though there are differences in the respective surface layer formulations between the GFS and the NAM, they are both based on similarity theory, and therefore lower boundary conditions, i.e. roughness lengths for momentum and heat, and profile functions are among the main components of the surface layer that need to be evaluated. The SLS is a very powerful tool for this type of evaluation. We present the results of the Ameriflux comparison and discuss the implications of our results for the surface layer parameterizations of the NAM
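In the neutral limit, similarity theory reduces to logarithmic wind and temperature profiles, and the exchange coefficients an SLS-like tool outputs take a simple bulk form. A minimal sketch (not the actual GFS/NAM code; stability corrections are omitted, and the roughness values are illustrative):

```python
import math

VON_KARMAN = 0.4  # von Karman constant

def neutral_exchange_coefficients(z, z0m, z0h):
    """Neutral-limit bulk transfer coefficients for momentum (Cd) and
    heat (Ch) from log-law profiles.

    z   : measurement height above the ground (m)
    z0m : momentum roughness length (m)
    z0h : thermal roughness length (m)
    """
    log_m = math.log(z / z0m)
    log_h = math.log(z / z0h)
    cd = (VON_KARMAN / log_m) ** 2          # momentum
    ch = VON_KARMAN ** 2 / (log_m * log_h)  # heat
    return cd, ch

# Illustrative grassland-like values: z = 10 m, z0m = 0.1 m, z0h = 0.01 m
cd, ch = neutral_exchange_coefficients(z=10.0, z0m=0.1, z0h=0.01)
```

Because z0h is typically smaller than z0m, the heat coefficient comes out smaller than the momentum coefficient, one of the behaviors a surface-layer evaluation against flux measurements would test.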
Modeling Jupiter's Quasi Quadrennial Oscillation (QQO) with Wave Drag Parameterizations
NASA Astrophysics Data System (ADS)
Cosentino, Rick; Morales-Juberias, Raul; Greathouse, Thomas K.; Orton, Glenn S.
2016-10-01
The QQO in Jupiter's atmosphere was first discovered after 7.8 micron infrared observations spanning the 1980s and 1990s detected a temperature oscillation near 10 hPa (Orton et al. 1991, Science 252, 537; Leovy et al. 1991, Nature 354, 380; Friedson 1999, Icarus 137, 34). New observations using the Texas Echelon cross-dispersed Echelle Spectrograph (TEXES), mounted on the NASA Infrared Telescope Facility (IRTF), have been used to characterize a complete cycle of the QQO between January 2012 and January 2016 (Greathouse et al. 2016, DPS). These new observations not only show the thermal oscillation at 10 hPa, but also show that the QQO extends upward in Jupiter's atmosphere to pressures as low as 0.4 hPa. We incorporated three different wave-drag parameterizations into the EPIC General Circulation Model (Dowling et al. 1998, Icarus 132, 221) to simulate the observed Jovian QQO temperature signatures as a function of latitude, pressure and time, using results from the TEXES datasets as new constraints. Each parameterization produces unique results and offers insight into the spectrum of waves that likely exists in Jupiter's atmosphere to force the QQO. High-frequency gravity waves produced by convection are extremely difficult to observe directly but likely contribute a significant portion of the QQO momentum budget. We use different models to simulate the effects of such waves and to explore their spectrum in Jupiter's atmosphere indirectly by varying their properties. The model temperature outputs correlate strongly with equatorial and mid-latitude temperature fields retrieved from the TEXES datasets at different epochs. Our results suggest the QQO phenomenon could be more than one alternating zonal jet that descends over time in response to Jovian atmospheric forcing (e.g. gravity waves from convection). Research funding provided by the NRAO Grote Reber Pre-Doctoral Fellowship. Computing resources include the NMT PELICAN cluster and the CISL
Parameterization of tree-ring growth in Siberia
NASA Astrophysics Data System (ADS)
Tychkov, Ivan; Popkova, Margarita; Shishov, Vladimir; Vaganov, Eugene
2016-04-01
Without doubt, the climate-tree growth relationship is one of the most useful and interesting subjects of study in dendrochronology. It provides information on the dependence of tree growth on the climatic environment, and also on growth conditions and the whole tree-ring growth process over long-term periods. A new parameterization approach for the Vaganov-Shashkin process-based model (VS-model) is developed to describe the critical processes linking climate variables with tree-ring formation. The approach (so-called VS-Oscilloscope) is presented as computer software with a graphical interface. As with most process-based tree-ring models, the initial purpose of the VS-model is to describe the variability of tree-ring radial growth due to variability of climatic factors, but also to determine the principal factors limiting tree-ring growth. The principal factors affecting the growth rate of cambial cells in the VS-model are temperature, daylight and soil moisture. Detailed testing of VS-Oscilloscope was done for the semi-arid area of southern Siberia (Khakassian region). Significant correlations between initial tree-ring chronologies and simulated tree-ring growth curves were obtained. Direct natural observations confirm the simulation results, including unique growth characteristics for semi-arid habitats. New results concerning the formation of wide and narrow rings under different climate conditions are considered. In itself, the new parameterization approach (VS-Oscilloscope) is a useful instrument for better understanding various processes in tree-ring formation. The work was supported by the Russian Science Foundation (RSF # 14-14-00219).
Stochastic sea ice parameterizations and impacts on polar predictability
NASA Astrophysics Data System (ADS)
Juricke, Stephan; Goessling, Helge; Jung, Thomas
2015-04-01
Stochastic sea ice parameterizations are implemented in a global coupled model to include first estimates of model uncertainty in the assessment of sea ice predictability. The impact of incorporating estimates of model uncertainty in the sea ice dynamics is compared to the impact of atmospheric initial condition uncertainty. In this context, a set of ensembles with stochastic sea ice strength perturbations and a set of ensembles with atmospheric initial condition perturbations are investigated. Seasonal integrations show that, especially during the first weeks, the incorporation of model uncertainty estimates in the sea ice dynamics leads to a significant increase in the ensemble spread of sea ice thickness in the central Arctic and along coastlines when compared to the ensembles with atmospheric initial perturbations. The latter, in contrast, produce significantly larger variability along the ice edge. During the first weeks of the integration, applying the combined perturbations leads to an accumulation of spread from both uncertainties, pointing to the importance of including estimates of model uncertainty for subseasonal sea ice predictions. After the first few weeks, however, the differences between ensemble spreads become mostly insignificant, so that estimates of seasonal potential sea ice predictability for the Arctic remain largely unaffected by uncertainty estimates in the sea ice dynamics. For Antarctic sea ice, differences in sea ice thickness spread between the different ensemble configurations are less pronounced throughout the year. Stochastic perturbations are also applied to the sea ice thermodynamics, namely the sea ice albedo parameterization, to investigate the diverse impacts of incorporating uncertainty estimates in different parts of the sea ice model, affecting different parts of the polar regions at different times during the annual cycle.
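A common way to realize such stochastic perturbations is to multiply an uncertain parameter, here the ice strength, by a random factor drawn per ensemble member, symmetric in log space so the perturbation stays positive and is centred on the unperturbed value. A minimal sketch (the amplitude sigma and the ice-strength value are assumptions, not the paper's settings):

```python
import random

def perturb_ice_strength(p_star, n_members, sigma=0.3, seed=0):
    """Generate an ensemble of multiplicatively perturbed ice-strength
    values. Factors are lognormal, exp(N(0, sigma^2)), so every member
    stays positive and the ensemble is centred on p_star in log space."""
    rng = random.Random(seed)  # seeded for reproducible ensembles
    return [p_star * rng.lognormvariate(0.0, sigma) for _ in range(n_members)]

# Illustrative ice strength P* ~ 27500 N m^-2, 8-member ensemble
ens = perturb_ice_strength(27500.0, n_members=8)
```

In an actual model the factor would typically also be correlated in space and time; here each member simply gets one independent draw.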
NASA Astrophysics Data System (ADS)
Scarpa, Riccardo; Thiene, Mara; Hensher, David A.
2012-01-01
Preferences for attributes of complex goods may differ substantially among members of households. Some of these goods, such as tap water, are jointly supplied at the household level. This issue of jointness poses a series of theoretical and empirical challenges to economists engaged in empirical nonmarket valuation studies. While a series of results have already been obtained in the literature, the issue of how to empirically measure these differences, and how sensitive the results are to choice of model specification from the same data, is yet to be clearly understood. In this paper we use data from a widely employed form of stated preference survey for multiattribute goods, namely choice experiments. The salient feature of the data collection is that the same choice experiment was applied to both partners of established couples. The analysis focuses on models that simultaneously handle scale as well as preference heterogeneity in marginal rates of substitution (MRS), thereby isolating true differences between members of couples in their MRS, by removing interpersonal variation in scale. The models employed are different parameterizations of the mixed logit model, including the willingness to pay (WTP)-space model and the generalized multinomial logit model. We find that in this sample there is some evidence of significant statistical differences in values between women and men, but these are of small magnitude and only apply to a few attributes.
ERIC Educational Resources Information Center
di Francia, Giuliano Toraldo
1973-01-01
The art of deriving information about an object from the radiation it scatters was once limited to visible light. Now due to new techniques, much of the modern physical science research utilizes radiation scattering. (DF)
NASA Technical Reports Server (NTRS)
Ricks, Douglas W.
1993-01-01
There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
Parameterization of spectral distributions for pion and kaon production in proton-proton collisions.
Schneider, J P; Norbury, J W; Cucinotta, F A
1995-04-01
Accurate semi-empirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations depend on the outgoing meson momentum and also the proton energy, and can be reduced to very simple analytical formulas suitable for cosmic-ray transport.
Parameterization of subgridscale mixing based on quasi-geostrophic turbulence theory
NASA Technical Reports Server (NTRS)
Tokioka, T.
1981-01-01
A parameterization of subgridscale mixing based on quasi-geostrophic turbulence theory is presented. In this parameterization, not only the horizontal diffusion coefficients for momentum and heat, but also the vertical diffusion coefficients for momentum and heat are uniquely determined. A form of the mixing is derived which simulates the subgridscale mixing process in the inertial subrange of enstrophy cascade.
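The abstract does not give the closed form of the mixing coefficients. As an illustration of a horizontal diffusivity built on enstrophy-cascade scaling, a Leith-type coefficient, proportional to the grid spacing cubed times the vorticity-gradient magnitude, can be sketched as follows (this is a generic enstrophy-cascade formulation, not necessarily Tokioka's scheme):

```python
def leith_diffusivity(grad_zeta, dx, c=1.0):
    """Leith-type horizontal diffusivity from enstrophy-cascade scaling:

        K = (c * dx)**3 * |grad zeta|

    grad_zeta : magnitude of the relative-vorticity gradient (1/(m*s))
    dx        : horizontal grid spacing (m)
    c         : dimensionless tuning constant (assumed O(1))
    Returns K in m^2/s.
    """
    return (c * dx) ** 3 * grad_zeta

# Illustrative mid-latitude values: |grad zeta| ~ 1e-10 1/(m*s), dx ~ 100 km
k = leith_diffusivity(grad_zeta=1.0e-10, dx=1.0e5)
```

The cubic dependence on dx makes the mixing vanish rapidly as resolution increases, which is the behavior one wants from a subgridscale closure tied to the enstrophy inertial subrange.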
Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil
USDA-ARS?s Scientific Manuscript database
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
Impact of Apex Model parameterization strategy on estimated benefit of conservation practices
USDA-ARS?s Scientific Manuscript database
Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on claypan soils were developed with two objectives: 1. evaluate model performance of the three parameterization strategies on a validation watershed; and 2. compare predictions of water quality benefi...
The CCPP-ARM Parameterization Testbed (CAPT): Where Climate Simulation Meets Weather Prediction
Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J
2003-11-21
To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands, in particular, that the GCM parameterizations of unresolved processes be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights into how these schemes might be improved, and modified parameterizations can then be similarly tested. To further this method for evaluating and analyzing parameterizations in climate GCMs, the USDOE is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM. Numerical weather prediction methods show promise for improving parameterizations in climate GCMs.
2016-04-12
Impact of Parameterized Internal Wave Drag on the Semidiurnal Energy Balance in a Global Ocean Circulation Model. Buijsman, Maarten C.; Joseph K. ... (University of Southern Mississippi, Stennis Space Center, Mississippi; University of Michigan, Ann Arbor, Michigan; Center for Ocean-Atmospheric Prediction ...). ... parameterized linear internal wave drag on the semidiurnal barotropic and baroclinic energetics of a realistically forced, three-dimensional global ocean
A shallow convection parameterization for the non-hydrostatic MM5 mesoscale model
Seaman, N.L.; Kain, J.S.; Deng, A.
1996-04-01
A shallow convection parameterization suitable for the Pennsylvania State University (PSU)/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) is being developed at PSU. The parameterization is based on parcel perturbation theory developed in conjunction with a 1-D Mellor-Yamada 1.5-order planetary boundary layer scheme and the Kain-Fritsch deep convection model.
Optimal Analysis-Aware Parameterization of Computational Domain in Isogeometric Analysis
NASA Astrophysics Data System (ADS)
Xu, Gang; Mourrain, Bernard; Duvigneau, Régis; Galligo, André
In the isogeometric analysis (IGA for short) framework, the computational domain is described exactly using the same representation as that employed in the CAD process. For a CAD object, we can construct various computational domains with the same shape but different parameterizations. One basic requirement is that the resulting parameterization should have no self-intersections. In this paper, a linear and easy-to-check sufficient condition for injectivity of a planar B-spline parameterization is proposed. Through an example of a 2D thermal conduction problem, we show that different parameterizations of the computational domain have different impacts on the simulation result and efficiency in IGA. For problems with exact solutions, we propose a shape optimization method to obtain an optimal parameterization of the computational domain. The proposed injectivity condition is used to check the injectivity of the initial parameterization constructed by the discrete Coons method. Several examples and comparisons are presented to show the effectiveness of the proposed method. Compared with the initial parameterization during refinement, the optimal parameterization can achieve the same accuracy but with fewer degrees of freedom.
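The paper's sufficient condition is a linear test on the B-spline control points. A simpler, purely numerical check of the same injectivity requirement, that the Jacobian determinant of the map (u,v) -> (x,y) keep one sign over the parameter square, can be sketched as below (a sampling check is only evidence of injectivity, not a proof):

```python
def is_injective_sample(x, y, n=40, h=1e-6):
    """Check by sampling whether det J = x_u*y_v - x_v*y_u keeps one
    sign over (0,1)^2 for a planar parameterization given as two
    callables x(u, v), y(u, v). A sign change means the map folds,
    i.e. it cannot be injective."""
    sign = None
    for i in range(n):
        for j in range(n):
            u, v = (i + 0.5) / n, (j + 0.5) / n
            # central finite differences for the partial derivatives
            xu = (x(u + h, v) - x(u - h, v)) / (2 * h)
            xv = (x(u, v + h) - x(u, v - h)) / (2 * h)
            yu = (y(u + h, v) - y(u - h, v)) / (2 * h)
            yv = (y(u, v + h) - y(u, v - h)) / (2 * h)
            s = (xu * yv - xv * yu) > 0
            if sign is None:
                sign = s
            elif s != sign:
                return False
    return True

# Identity map: injective. Map with x = (u - 0.5)^2: folds at u = 0.5.
ok = is_injective_sample(lambda u, v: u, lambda u, v: v)
folded = is_injective_sample(lambda u, v: (u - 0.5) ** 2, lambda u, v: v)
```

The control-point condition in the paper is stronger and cheaper: it certifies injectivity without sampling, which matters inside an optimization loop.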
Parameterization of spectral distributions for pion and kaon production in proton-proton collisions
NASA Technical Reports Server (NTRS)
Schneider, John P.; Norbury, John W.; Cucinotta, Frank A.
1995-01-01
Accurate semi-empirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations depend on the outgoing meson momentum and also the proton energy, and can be reduced to very simple analytical formulas suitable for cosmic-ray transport.
The application of depletion curves for parameterization of subgrid variability of snow
C. H. Luce; D. G. Tarboton
2004-01-01
Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snowcovered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...
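A depletion-curve parameterization is essentially a lookup: element-average snow water equivalent (normalized by its seasonal maximum) maps to fractional snow-covered area. A minimal sketch with an invented curve shape (the actual curve would be derived from observed subgrid snow distributions):

```python
def fractional_sca(swe, swe_max, curve=None):
    """Depletion curve: map element-average SWE to fractional
    snow-covered area (SCA) by linear interpolation through tabulated
    (normalized SWE, SCA) points. The default curve shape is purely
    illustrative."""
    if curve is None:
        # cover drops quickly as the last shallow snow melts out
        curve = [(0.0, 0.0), (0.1, 0.4), (0.3, 0.8), (1.0, 1.0)]
    w = max(0.0, min(1.0, swe / swe_max))  # normalized SWE, clamped
    for (w0, f0), (w1, f1) in zip(curve, curve[1:]):
        if w <= w1:
            return f0 + (f1 - f0) * (w - w0) / (w1 - w0)
    return 1.0

# 20 mm of SWE against a 100 mm seasonal maximum
sca = fractional_sca(swe=20.0, swe_max=100.0)
```

In a distributed snowmelt model, the returned fraction then weights the melt energy applied to the snow-covered part of each grid element.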
Improving Surface Flux Parameterizations in the Navy’s Coastal Ocean Atmosphere Prediction System
2006-09-30
... coefficient and dissipative heating parameterization for hurricane forecasts. A new drag coefficient (Donelan et al., 2004) and dissipative heating ... quantities. 3. Evaluation of the new drag coefficient and dissipative heating parameterization for hurricane forecasts. In general, the results are very ...
Sorg, T.J.
1991-01-01
The U.S. Environmental Protection Agency proposed new and revised regulations on radionuclide contaminants in drinking water in June 1991. During the 1980s, the Drinking Water Research Division of the USEPA conducted a research program to evaluate various technologies for removing radium, uranium and radon from drinking water. The research consisted of laboratory and field studies conducted by the USEPA, universities and consultants. This paper summarizes the results of the most significant completed projects. General information is also presented on the chemistry of the three radionuclides. The information presented indicates that the most practical treatment methods are ion exchange, lime-soda softening and reverse osmosis for radium; aeration and granular activated carbon for radon; and anion exchange and reverse osmosis for uranium.
Radiative flux and forcing parameterization error in aerosol-free clear skies
Pincus, Robert; Oreopoulos, Lazaros; Ackerman, Andrew S.; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A.; Cady-Pereira, Karen E.; Cole, Jason N. S.; Dufresne, Jean -Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J.; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M.
2015-07-03
This article reports on the accuracy, in aerosol- and cloud-free conditions, of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and for forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m^2, while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. Because parameterization error depends on atmospheric conditions, including integrated water vapor, global estimates of the parameterization error relevant to the radiative forcing of climate change will require much more ambitious calculations.
NASA Astrophysics Data System (ADS)
Sakradzija, Mirjana; Seifert, Axel; Dipankar, Anurag
2016-06-01
The parameterization of shallow cumuli across a range of kilometre-scale model grid resolutions faces at least three major difficulties: (1) closure assumptions of conventional parameterization schemes are no longer valid, (2) stochastic fluctuations become substantial and increase with grid resolution, and (3) convective circulations that emerge on the model grids are under-resolved and grid-scale dependent. Here we develop a stochastic parameterization of shallow cumulus clouds to address the first two points, and we study how this stochastic parameterization interacts with the under-resolved convective circulations in a convective case over the ocean. We couple a stochastic model based on a canonical ensemble of shallow cumuli to the Eddy-Diffusivity Mass-Flux parameterization in the icosahedral nonhydrostatic (ICON) model. The moist-convective area fraction is perturbed by subsampling the distribution of subgrid convective states. These stochastic perturbations represent scale-dependent fluctuations around the quasi-equilibrium state of a shallow cumulus ensemble. The stochastic parameterization reproduces the average and higher-order statistics of the shallow cumulus case adequately and converges to the reference statistics with increasing model resolution. The interaction of parameterizations with model dynamics, which is usually not considered when parameterizations are developed, has a significant influence on convection in the gray zone. The stochastic parameterization interacts strongly with the model dynamics, which changes the regime and energetics of the convective flows compared to the deterministic simulations. As a result of this interaction, the emergence of convective circulations in combination with the stochastic parameterization can even be beneficial on the high-resolution model grids.
Multiple scattering technique lidar
NASA Technical Reports Server (NTRS)
Bissonnette, Luc R.
1992-01-01
The Bernoulli-Riccati equation is based on the single-scattering description of the lidar backscatter return. In practice, especially in low-visibility conditions, the effects of multiple scattering can be significant. Instead of treating these multiple-scattering effects as a nuisance, we propose here to use them to help resolve the problems of having to assume a backscatter-to-extinction relation and of specifying a boundary value at a position far from the lidar station. To this end, we have built a four-field-of-view lidar receiver to measure the multiple-scattering contributions. The system has been described in a number of publications that also discuss preliminary results illustrating multiple-scattering effects under various environmental conditions. Reported here are recent advances in the development of a method of inverting the multiple-scattering data to determine the aerosol scattering coefficient.
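For reference, the single-scattering formulation being generalized is the standard Klett treatment (general lidar theory, not specific to this paper). The lidar equation and the range-corrected signal are

```latex
P(r) = \frac{C\,\beta(r)}{r^2}\exp\!\left(-2\int_0^r \sigma(r')\,dr'\right),
\qquad
S(r) \equiv \ln\!\left(r^2 P(r)\right),
```

and under an assumed power-law backscatter-to-extinction relation $\beta = k\,\sigma^{g}$ the signal obeys the Bernoulli-type equation

```latex
\frac{dS}{dr} = \frac{g}{\sigma}\frac{d\sigma}{dr} - 2\sigma,
```

whose stable backward solution, given a far-end boundary value $\sigma_m = \sigma(r_m)$, is

```latex
\sigma(r) = \frac{\exp\!\left[(S(r)-S_m)/g\right]}
{\sigma_m^{-1} + \dfrac{2}{g}\displaystyle\int_r^{r_m}\exp\!\left[(S(r')-S_m)/g\right]dr'}.
```

The two inputs this solution requires, the relation exponent $g$ and the boundary value $\sigma_m$, are exactly the assumptions the multiple-field-of-view measurement is meant to remove.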
Parameterization of Fire Injection Height in Large Scale Transport Model
NASA Astrophysics Data System (ADS)
Paugam, R.; Wooster, M.; Atherton, J.; Val Martin, M.; Freitas, S.; Kaiser, J. W.; Schultz, M. G.
2012-12-01
The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height to remote sensing products like the Fire Radiative Power (FRP), which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al. (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modeled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension over which the CHF is homogeneously distributed. Here the Freitas model is modified; in particular, we added (i) an equation for mass conservation, (ii) a scheme to parameterize horizontal entrainment/detrainment, and (iii) a new initialization module which estimates the sensible heat released by the fire on the basis of measured FRP rather than fuel cover type. The FRP and Active Fire (AF) area necessary for the initialization of the model are directly derived from a modified version of the Dozier algorithm applied to the MOD14 product. An optimization (using the simulated annealing method) of this new version of the PRM is then proposed, based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set covers the main fire regions (Africa, Siberia, Indonesia, and North and South America) and is set up to (i) retain fires where plume height and FRP can be easily linked (i.e. avoid large fire clusters where individual plumes might interact), and (ii) keep fires which show a decrease of FRP and AF area after the MISR overpass (i.e. to minimize the effect of the time period needed for the plume to
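The FRP-based initialization step described above amounts to converting a satellite-measured radiative power into a convective heat flux over the active fire area, from which a buoyancy flux can be formed to drive the plume model. A minimal sketch (the total-to-radiative energy ratio `sensible_ratio` and all numbers are illustrative assumptions, not the paper's calibrated values):

```python
def fire_convective_heat_flux(frp_mw, active_area_m2, sensible_ratio=5.0):
    """Estimate the convective (sensible) heat flux (W m^-2) used to
    initialize a plume-rise model from satellite FRP (MW). Only a small
    fraction of the total fire energy is radiated, so the sensible
    release is taken as an assumed multiple of FRP."""
    sensible_w = frp_mw * 1.0e6 * sensible_ratio
    return sensible_w / active_area_m2

def buoyancy_flux(chf, t_air=288.0, rho=1.2, cp=1004.0, g=9.81):
    """Kinematic buoyancy flux per unit area (m^2 s^-3):
    F = g * CHF / (rho * cp * T_air)."""
    return g * chf / (rho * cp * t_air)

# Illustrative fire: FRP = 100 MW over a 0.1 km^2 active area
chf = fire_convective_heat_flux(frp_mw=100.0, active_area_m2=1.0e5)
fb = buoyancy_flux(chf)
```

In the full PRM this buoyancy forcing enters a 1-D entraining plume integration whose terminal height gives the injection height for the transport model.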
Parameterization of Fire Injection Height in Large Scale Transport Model
NASA Astrophysics Data System (ADS)
Paugam, r.; Wooster, m.; Freitas, s.; Gonzi, s.; Palmer, p.
2012-04-01
The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height to remote sensing products like the Fire Radiative Power (FRP), which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al. (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modelled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension over which the CHF is homogeneously distributed. Here the Freitas model is modified. Major modifications are implemented in its initialisation module: (i) the CHF and the Active Fire area are directly forced from FRP data derived from a modified version of the Dozier algorithm applied to the MOD12 product, and (ii) a new module for the buoyancy flux calculation is implemented instead of the original module based on the Morton, Taylor and Turner equation. Furthermore, the dynamical core of the plume model is also modified with a new entrainment scheme inspired by the latest results from shallow convection parameterization. Optimization and validation of this new version of the Freitas PRM is based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set is (i) built up to keep only fires where plume height and FRP can be easily linked (i.e. avoid large fire clusters where individual plumes might interact), and (ii) split per fire land cover type to optimize the constant of the buoyancy flux module and the entrainment scheme for different fire regimes. Results show that the new PRM is
Accuracy of cuticular resistance parameterizations in ammonia dry deposition models
NASA Astrophysics Data System (ADS)
Schrader, Frederik; Brümmer, Christian; Richter, Undine; Fléchard, Chris; Wichink Kruit, Roy; Erisman, Jan Willem
2016-04-01
Accurate representation of total reactive nitrogen (Nr) exchange between ecosystems and the atmosphere is a crucial part of modern air quality models. However, bi-directional exchange of ammonia (NH3), the dominant Nr species in agricultural landscapes, still poses a major source of uncertainty in these models, where especially the treatment of non-stomatal pathways (e.g. exchange with wet leaf surfaces or the ground layer) can be challenging. While complex dynamic leaf surface chemistry models have been shown to successfully reproduce measured ammonia fluxes on the field scale, computational constraints and the lack of necessary input data have so far limited their application in larger-scale simulations. A variety of different approaches to modelling dry deposition to leaf surfaces with simplified steady-state parameterizations have therefore arisen in the recent literature. We present a performance assessment of selected cuticular resistance parameterizations by comparing them with ammonia deposition measurements by means of eddy covariance (EC) and the aerodynamic gradient method (AGM) at a number of semi-natural and grassland sites in Europe. First results indicate that using a state-of-the-art uni-directional approach tends to overestimate, and using a bi-directional cuticular compensation point approach tends to underestimate, cuticular resistance in some cases, consequently leading to systematic errors in the resulting flux estimates. Using the uni-directional model, situations with low ratios of total atmospheric acids to NH3 concentration lead to fairly high minimum cuticular resistances, limiting predicted downward fluxes in conditions usually favouring deposition. On the other hand, the bi-directional model used here features a seasonal cycle of external leaf surface emission potentials that can lead to comparably low effective resistance estimates under warm and wet conditions, when in practice an expected increase in the compensation point due to
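The parameterizations being compared share a canopy compensation point structure: atmospheric, stomatal and cuticular pathways in a resistance network, with the cuticular resistance Rw controlling the non-stomatal sink. A minimal steady-state sketch (a generic Nemitz-style network with illustrative numbers, not any specific scheme from the paper):

```python
def nh3_canopy_flux(chi_a, chi_s, ra, rb, rs, rw):
    """Bi-directional NH3 exchange with a stomatal compensation point.

    chi_a : air concentration (ug m^-3)
    chi_s : stomatal compensation point concentration (ug m^-3)
    ra+rb : aerodynamic + quasi-laminar resistance (s m^-1)
    rs    : stomatal resistance (s m^-1)
    rw    : cuticular (external leaf surface) resistance (s m^-1)

    Solves the steady-state balance of the three pathways for the
    canopy compensation point chi_c, then returns (chi_c, flux);
    a negative flux means deposition to the canopy.
    """
    g_atm = 1.0 / (ra + rb)
    g_sto = 1.0 / rs
    g_cut = 1.0 / rw  # cuticle is a pure sink (zero concentration)
    chi_c = (chi_a * g_atm + chi_s * g_sto) / (g_atm + g_sto + g_cut)
    flux = -(chi_a - chi_c) * g_atm
    return chi_c, flux

# Wet leaf surfaces (low rw) versus dry surfaces (high rw)
_, f_wet = nh3_canopy_flux(chi_a=5.0, chi_s=1.0, ra=30.0, rb=20.0, rs=100.0, rw=20.0)
_, f_dry = nh3_canopy_flux(chi_a=5.0, chi_s=1.0, ra=30.0, rb=20.0, rs=100.0, rw=2000.0)
```

The over- versus underestimation of Rw discussed in the abstract maps directly onto this structure: a too-large rw suppresses the cuticular sink and the predicted deposition flux, while a too-small rw does the opposite.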
Geometry parameterization and multidisciplinary constrained optimization of coronary stents.
Pant, Sanjay; Bressloff, Neil W; Limbert, Georges
2012-01-01
Coronary stents are tubular scaffolds that are deployed, using an inflatable balloon on a catheter, most commonly to recover the lumen size of narrowed (diseased) arterial segments. A common differentiating factor between the numerous stents used in clinical practice today is their geometric design. An ideal stent should have high radial strength to provide good arterial support post-expansion, have high flexibility for easy manoeuvrability during deployment, cause minimal injury to the artery when being expanded and, for drug-eluting stents, should provide adequate drug in the arterial tissue. Often, with any stent design, these objectives are in competition, such that improvement in one objective comes at the expense of others. This study proposes a technique to parameterize stent geometry, by varying the shape of the circumferential rings and the links, and to assess performance by modelling the processes of balloon expansion and drug diffusion. Finite element analysis is used to expand each stent (through balloon inflation) into contact with a representative diseased coronary artery model, followed by a drug release simulation. Also, a separate model is constructed to measure stent flexibility. Since the computational simulation time for each design is very high (approximately 24 h), a Gaussian process modelling approach is used to analyse the design space corresponding to the proposed parameterization. Four objectives to assess recoil, stress distribution, drug distribution and flexibility are set up to perform optimization studies. In particular, single-objective constrained optimization problems are set up to improve the design relative to the baseline geometry, i.e. to improve one objective without compromising the others. Improvements of 8, 6 and 15% are obtained individually for stress, drug and flexibility metrics, respectively. The relative influence of the design features on each objective is quantified in terms of main effects, thereby suggesting the
An Empirical Cumulus Parameterization Scheme for a Global Spectral Model
NASA Technical Reports Server (NTRS)
Rajendran, K.; Krishnamurti, T. N.; Misra, V.; Tao, W.-K.
2004-01-01
Realistic vertical heating and drying profiles in a cumulus scheme are important for obtaining accurate weather forecasts. A new empirical cumulus parameterization scheme, based on a procedure to improve the vertical distribution of heating and moistening over the tropics, is developed. The empirical cumulus parameterization scheme (ECPS) utilizes profiles of Tropical Rainfall Measuring Mission (TRMM) based heating and moistening derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) analysis. A dimension reduction technique through rotated principal component analysis (RPCA) is performed on the vertical profiles of heating (Q1) and drying (Q2) over the convective regions of the tropics, to obtain the dominant modes of variability. Analysis suggests that most of the variance associated with the observed profiles can be explained by retaining the first three modes. The ECPS then applies a statistical approach in which Q1 and Q2 are expressed as a linear combination of the first three dominant principal components, which distinctly explain variance in the troposphere as a function of the prevalent large-scale dynamics. The principal component (PC) score, which quantifies the contribution of each PC to the corresponding loading profile, is estimated through a multiple screening regression method which yields the PC score as a function of the large-scale variables. The profiles of Q1 and Q2 thus obtained are found to match well with the observed profiles. The impact of the ECPS is investigated in a series of short-range (1-3 day) prediction experiments using the Florida State University global spectral model (FSUGSM, T126L14). Comparisons between short-range ECPS forecasts and those with the modified Kuo scheme show a very marked improvement in the skill of ECPS forecasts. This improvement in forecast skill with ECPS emphasizes the importance of incorporating realistic vertical distributions of heating and drying in the model cumulus scheme. This
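The ECPS reconstruction idea (express Q1 as a mean profile plus three principal-component modes whose scores are regressed on large-scale predictors) can be sketched with synthetic data. The profile generator, the 5-variable predictor matrix, and all dimensions except the 14 model levels are illustrative assumptions; the real scheme uses TRMM/ECMWF-derived profiles and a multiple screening regression.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for heating profiles Q1: many convective samples at
# 14 vertical levels (matching the T126L14 model's level count).
n_samples, n_levels = 500, 14
basis = rng.normal(size=(3, n_levels))          # three hidden vertical modes
true_scores = rng.normal(size=(n_samples, 3))
Q1 = true_scores @ basis + 0.05 * rng.normal(size=(n_samples, n_levels))

# Dominant vertical modes via principal component analysis of anomalies.
mean = Q1.mean(axis=0)
U, s, Vt = np.linalg.svd(Q1 - mean, full_matrices=False)
modes = Vt[:3]                                  # first three loading profiles
scores = (Q1 - mean) @ modes.T                  # PC scores per sample

# ECPS-style step: regress each PC score on large-scale predictors (here a
# hypothetical 5-variable predictor matrix), then rebuild Q1 profiles.
predictors = rng.normal(size=(n_samples, 5))
coef, *_ = np.linalg.lstsq(predictors, scores, rcond=None)
Q1_hat = mean + (predictors @ coef) @ modes

# With data built from three modes, the retained PCs capture nearly all
# of the variance, mirroring the abstract's finding.
explained = (s[:3] ** 2).sum() / (s**2).sum()
```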
Scattering resonances in the extreme quantum limit
NASA Astrophysics Data System (ADS)
Hersch, Jesse Shines
This thesis addresses topics in low energy scattering in quantum mechanics, in particular, resonance phenomena. Hence the title: the phrase ``extreme quantum limit'' refers to the situation when the wavelengths of the particles in the system are larger than every other scale, so that the behavior is far into the quantum regime. A powerful tool in the problems of low energy scattering is the point scatterer model, and will be used extensively throughout the thesis. Therefore, we begin with a thorough introduction to this model in Chapter 2. As a first application of the point scatterer model, we will investigate the phenomenon of the proximity resonance, which is one example of strange quantum behavior appearing at low energy. Proximity resonances will be addressed theoretically in Chapter 3, and experimentally in Chapter 4. Threshold resonances, another type of low energy scattering resonance, are considered in Chapter 5, along with their connection to the Efimov and Thomas effects, and scattering in the presence of an external confining potential. Although the point scatterer model will serve us well in the work presented here, it does have its limitations. These limitations will be removed in Chapter 6, where we describe how to extend the model to include higher partial waves. In Chapter 7, we extend the model one step further, and illustrate how to treat vector wave scattering with the model. Finally, in Chapter 8 we will depart from the topic of low energy scattering and investigate the influence of diffraction on an open quantum mechanical system, again both experimentally and theoretically.
Kuo-Nan Liou
2003-12-29
OAK-B135 (a) We developed a 3D radiative transfer model to simulate the transfer of solar and thermal infrared radiation in inhomogeneous cirrus clouds. The model utilized a diffusion approximation approach (four-term expansion in the intensity) employing Cartesian coordinates. The required single-scattering parameters, including the extinction coefficient, single-scattering albedo, and asymmetry factor, for input to the model, were parameterized in terms of the ice water content and mean effective ice crystal size. The incorporation of gaseous absorption in multiple scattering atmospheres was accomplished by means of the correlated k-distribution approach. In addition, the strong forward diffraction nature in the phase function was accounted for in each predivided spatial grid based on a delta-function adjustment. The radiation parameterization developed herein is applied to potential cloud configurations generated from GCMs to investigate broken clouds and cloud-overlapping effects on the domain-averaged heating rate. Cloud inhomogeneity plays an important role in the determination of flux and heating rate distributions. Clouds with maximum overlap tend to produce less heating than those with random overlap. Broken clouds show more solar heating as well as more IR cooling as compared to a continuous cloud field (Gu and Liou, 2001). (b) We incorporated a contemporary radiation parameterization scheme in the UCLA atmospheric GCM in collaboration with the UCLA GCM group. In conjunction with the cloud/radiation process studies, we developed a physically-based cloud cover formation scheme in association with radiation calculations. The model clouds were first vertically grouped in terms of low, middle, and high types. Maximum overlap was then used for each cloud type, followed by random overlap among the three cloud types. Fu and Liou's 1D radiation code with modification was subsequently employed for pixel-by-pixel radiation calculations in the UCLA GCM. We showed
Layer filtering for seafloor scatterers imaging.
Pinson, S; Holland, C W
2015-05-01
The image source method in acoustics is well known as a way to simulate reverberation. It has also recently been used for characterization of seafloor sound-speed structure. The idea is to detect image sources with imaging techniques to obtain information about the environment. In this paper, the idea is to use the detection of image sources to remove reflections from plane interfaces in recorded signals and to perform imaging with this filtered signal. This imaging process highlights scatterers because their wave-front shapes differ from those produced by plane interfaces. Applications include seafloor buried-object detection and scattering analysis of interface roughness or volume heterogeneities.
Winter QPF Sensitivities to Snow Parameterizations and Comparisons to NASA CloudSat Observations
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Haynes, John M.; Jedlovec, Gary J.; Lapenta, William M.
2009-01-01
Steady increases in computing power have allowed numerical weather prediction models to be initialized and run at high spatial resolution, permitting a transition from larger-scale parameterizations of the effects of clouds and precipitation to the simulation of specific microphysical processes and hydrometeor size distributions. Although still relatively coarse in comparison to true cloud-resolving models, these high-resolution forecasts (on the order of 4 km or less) have demonstrated value in the prediction of severe storm mode and evolution and are being explored for use in winter weather events. Several single-moment bulk water microphysics schemes are available within the latest release of the Weather Research and Forecasting (WRF) model suite, including the NASA Goddard Cumulus Ensemble, which incorporate assumptions about the size distributions of a small number of hydrometeor classes in order to predict their evolution, advection and precipitation within the forecast domain. Although many of these schemes produce similar forecasts of events on the synoptic scale, there are often significant differences in the details of precipitation and cloud cover, as well as in the distribution of water mass among the constituent hydrometeor classes. Unfortunately, validating data for cloud-resolving model simulations are sparse. Field campaigns require in-cloud measurements of hydrometeors from aircraft in coordination with extensive and coincident ground-based measurements. Radar remote sensing is utilized to detect the spatial coverage and structure of precipitation. Here, two radar systems characterize the structure of winter precipitation for comparison to equivalent features within a forecast model: a 3 GHz Weather Surveillance Radar-1988 Doppler (WSR-88D) based in Omaha, Nebraska, and the 94 GHz NASA CloudSat Cloud Profiling Radar, a spaceborne instrument and member of the afternoon or "A-Train" of polar-orbiting satellites tasked with cataloguing global cloud
NASA Technical Reports Server (NTRS)
Crane, Robert K.
1988-01-01
The data from Japan and the U.S. (the Virginia Precipitation Scatter Experiment) show excellent agreement between the two-component rain scatter model predictions and bistatic scatter measurements. In employing the model, all the scattering geometries should be classified as backscattering as defined by Crane (1974). The forward scatter model should only be used for great circle paths with both antennas pointed at the horizon and at each other, as in a typical troposcatter communication system geometry. The forward scatter model can also be used for main-lobe, side-lobe coupling when one antenna is pointed toward the other along the great circle path. The forward scatter observations made over the Prospect Hill - Mt. Tug path show that the two-component model is incomplete. Much stronger signals were observed at Ku-band than expected based on simultaneous C-band measurements. The discrepancies may be due to: (1) scattering by ice/snow (possible in April) at the 1 km height of the scattering volume; (2) the coherent effects of turbulent fluctuations in the hydrometeor number densities; and (3) errors in the modeling of the statistical relationship between attenuation along the path and scattering in the common volume.
Specialized Knowledge Representation and the Parameterization of Context
Faber, Pamela
2016-01-01
Though instrumental in numerous disciplines, context has no universally accepted definition. In specialized knowledge resources it is timely and necessary to parameterize context with a view to more effectively facilitating knowledge representation, understanding, and acquisition, the main aims of terminological knowledge bases. This entails distinguishing different types of context as well as how they interact with each other. This is not a simple objective to achieve despite the fact that specialized discourse does not have as many contextual variables as those in general language (i.e., figurative meaning, irony, etc.). Even in specialized text, context is an extremely complex concept. In fact, contextual information can be specified in terms of scope or according to the type of information conveyed. It can be a textual excerpt or a whole document; a pragmatic convention or a whole culture; a concrete situation or a prototypical scenario. Although these versions of context are useful for the users of terminological resources, such resources rarely support context modeling. In this paper, we propose a taxonomy of context primarily based on scope (local and global) and further divided into syntactic, semantic, and pragmatic facets. These facets cover the specification of different types of terminological information, such as predicate-argument structure, collocations, semantic relations, term variants, grammatical and lexical cohesion, communicative situations, subject fields, and cultures. PMID:26941674
New layer thickness parameterization of diffusive convection in the ocean
NASA Astrophysics Data System (ADS)
Zhou, Sheng-Qi; Lu, Yuan-Zheng; Song, Xue-Long; Fer, Ilker
2016-03-01
In the present study, a new parameterization is proposed to describe the convecting layer thickness in diffusive convection. By using in situ observational data of diffusive convection in lakes and oceans, a wide range of stratification and buoyancy flux is obtained, where the buoyancy frequency N varies between 10^-4 and 0.1 s^-1 and the heat-related buoyancy flux qT varies between 10^-12 and 10^-7 m^2 s^-3. We construct an intrinsic thickness scale, H0 = [qT^3 / (κT N^8)]^(1/4), where κT is the thermal diffusivity. H0 is suggested to be the scale of an energy-containing eddy, and it can alternatively be represented as H0 = η Reb Pr^(1/4), where η is the dissipation length scale, Reb is the buoyancy Reynolds number, and Pr is the Prandtl number. It is found that the convective layer thickness H is directly linked to the stability ratio Rρ and H0 in the form H ∼ (Rρ − 1)^2 H0. The layer thickness can be explained by the convective instability mechanism. For each convective layer, its thickness H reaches a stable value when its thermal boundary layer develops into a new convecting layer.
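A minimal numerical sketch of the proposed scales, using the formulas quoted in the abstract. The order-one proportionality constant c and the example inputs (mid-range values of qT and N from the observed spans, a nominal seawater thermal diffusivity) are assumptions, since the abstract gives only the scaling, not the prefactor:

```python
def intrinsic_thickness(q_T, N, kappa_T=1.4e-7):
    """Intrinsic thickness scale H0 = [q_T^3 / (kappa_T * N^8)]^(1/4).

    q_T     : heat-related buoyancy flux (m^2 s^-3)
    N       : buoyancy frequency (s^-1)
    kappa_T : thermal diffusivity (m^2 s^-1); nominal seawater value assumed
    """
    return (q_T**3 / (kappa_T * N**8)) ** 0.25

def layer_thickness(R_rho, q_T, N, c=1.0, kappa_T=1.4e-7):
    """Convecting-layer thickness H ~ c * (R_rho - 1)^2 * H0.

    c is an assumed order-one constant; the abstract states only the
    proportionality H ~ (R_rho - 1)^2 H0.
    """
    return c * (R_rho - 1.0) ** 2 * intrinsic_thickness(q_T, N, kappa_T)

# Mid-range illustrative inputs from the observed ranges in the abstract:
H0 = intrinsic_thickness(q_T=1e-9, N=1e-2)   # of order 0.1 m
H = layer_thickness(R_rho=3.0, q_T=1e-9, N=1e-2)
```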
Parameterization of meandering phenomenon in a stable atmospheric boundary layer
NASA Astrophysics Data System (ADS)
Carvalho, Jonas da Costa; Degrazia, Gervásio Annes; de Vilhena, Marco Túlio; Magalhães, Sergio Garcia; Goulart, Antonio G.; Anfossi, Domenico; Acevedo, Otávio Costa; Moraes, Osvaldo L. L.
2006-08-01
Accounting for the current knowledge of the stable atmospheric boundary layer (ABL) turbulence structure and characteristics, a new formulation for the meandering parameters to be used in a Lagrangian stochastic particle turbulent diffusion model has been derived. That is, expressions for the parameters controlling the meandering oscillation frequency in low wind speed stable conditions are proposed. The classical expression for the meandering autocorrelation function, the turbulent statistical diffusion theory and ABL similarity theory are employed to estimate these parameters. In addition, this new parameterization was introduced into a particular Lagrangian stochastic particle model, called the Iterative Langevin solution for low wind, validated with data from the Idaho National Laboratory experiments, and compared with other diffusion models. The results of this new approach are shown to agree with the measurements of the Idaho experiments and also with those of the other atmospheric diffusion models. The major advance shown in this study is the formulation of the meandering parameters in terms of the characteristic velocity and length scales describing the physical structure of a turbulent stable boundary layer. These similarity formulas can be used to simulate meandering-enhanced diffusion of passive scalars in a low wind speed stable ABL.
Factors influencing the parameterization of tropical anvils within GCMs
Bradley, M.M.; Chin, H.N.S.
1994-03-01
The overall goal of this project is to improve the representation of anvil clouds and their effects in general circulation models (GCMs). We have concentrated on an important portion of the overall goal: the evolution of cumulus-generated anvil clouds and their effects on the large-scale environment. Because of the large range of spatial and temporal scales involved, we have been using a multi-scale approach. For the early-time generation and development of the cirrus anvil we are using a cloud-scale model with a horizontal resolution of 1-2 kilometers, while for the transport of anvils by the large-scale flow we are using a mesoscale model with a horizontal resolution of 10-40 kilometers. The eventual goal is to use the information obtained from these simulations, together with available observations, to develop an improved cloud parameterization for use in GCMs. The cloud-scale simulation of a midlatitude squall line case and the mesoscale study of a tropical anvil using an anvil generator were presented at the last ARM science team meeting. This paper concentrates on the cloud-scale study of a tropical squall line. Results are compared with those of its midlatitude counterparts to further our understanding of the formation mechanism of anvil clouds and the sensitivity of radiation to their optical properties.
Parameterizations of cloud feedback in a radiative-convective model
NASA Astrophysics Data System (ADS)
Jung, Hans-Josef; Bach, Wilfrid
1985-07-01
The effect of cloud feedback on the response of a radiative-convective model to a change in cloud model parameters, atmospheric CO2 concentration, and solar constant has been studied using two different parameterization schemes. The method for simulating the vertical distribution of both cloud cover and cloud optical thickness, which depends on the relative humidity and on the saturation mixing ratio of water vapor, respectively, is the same in both approaches, but the schemes differ with respect to modeling the water vapor profile. In scheme I atmospheric water vapor is coupled to surface parameters, while in scheme II an explicit balance equation for water vapor in the individual atmospheric layers is used. For both models the combined effect of feedbacks due to variations in lapse rate, cloud cover, and cloud optical thickness results in different relationships between changes in surface temperature, planetary temperature, and cloud cover. Specifically, for a CO2 doubling and a 2% increase in solar constant, in both models the surface warming is reduced by cloud feedback, in contrast to no feedback, with the greater reduction in scheme I as compared to that of scheme II.
Parameterization and classification of the protein universe via geometric techniques.
Tendulkar, Ashish V; Wangikar, Pramod P; Sohoni, Milind A; Samant, Vivekanand V; Mone, Chetan Y
2003-11-14
We present a scheme for the classification of 3487 non-redundant protein structures into 1207 non-hierarchical clusters by using recurring structural patterns of three to six amino acids as keys of classification. This results in several signature patterns, which seem to decide membership of a protein in a functional category. The patterns provide clues to the key residues involved in functional sites as well as in protein-protein interaction. The discovered patterns include a "glutamate double bridge" of superoxide dismutase, the functional interface of the serine protease and inhibitor, interface of homo/hetero dimers, and functional sites of several enzyme families. We use geometric invariants to decide superimposability of structural patterns. This allows the parameterization of patterns and discovery of recurring patterns via clustering. The geometric invariant-based approach eliminates the computationally explosive step of pair-wise comparison of structures. The results provide a vast resource for the biologists for experimental validation of the proposed functional sites, and for the design of synthetic enzymes, inhibitors and drugs.
Population models for passerine birds: structure, parameterization, and analysis
Noon, B.R.; Sauer, J.R.; McCullough, D.R.; Barrett, R.H.
1992-01-01
Population models have great potential as management tools, as they use information about the life history of a species to summarize estimates of fecundity and survival into a description of population change. Models provide a framework for projecting future populations, determining the effects of management decisions on future population dynamics, evaluating extinction probabilities, and addressing a variety of questions of ecological and evolutionary interest. Even when insufficient information exists to allow complete identification of the model, the modelling procedure is useful because it forces the investigator to consider the life history of the species when determining what parameters should be estimated from field studies and provides a context for evaluating the relative importance of demographic parameters. Models have been little used in the study of the population dynamics of passerine birds because of: (1) widespread misunderstandings of the model structures and parameterizations, (2) a lack of knowledge of the life histories of many species, (3) difficulties in obtaining statistically reliable estimates of demographic parameters for most passerine species, and (4) confusion about functional relationships among demographic parameters. As a result, studies of passerine demography are often designed inappropriately and fail to provide essential data. We review appropriate models for passerine bird populations and illustrate their possible uses in evaluating the effects of management or other environmental influences on population dynamics. We identify parameters that must be estimated from field data, briefly review existing statistical methods for obtaining valid estimates, and evaluate the present status of knowledge of these parameters.
Parameterization of mires in a numerical weather prediction model
NASA Astrophysics Data System (ADS)
Yurova, Alla; Tolstykh, Mikhail; Nilsson, Mats; Sirin, Andrey
2014-11-01
Mires (peat-accumulating wetlands) occupy 8.1% of Russian territory and are especially numerous in the western Siberian Lowlands, where they can significantly modify atmospheric heat and water balances. They also influence air temperatures and humidity in the boundary layers closest to the earth's surface. The purpose of our study was to incorporate the influence of mires into the SL-AV numerical weather prediction model, which is used operationally in the Hydrometeorological Center of Russia. This was done by adjusting the multilayer soil component (by modifying the peat thermal conductivity in the heat diffusion equation and reformulating the lower boundary condition for Richards' equation), and by reformulating both evapotranspiration and runoff from mires. When evaporation from mires was incorporated into the SL-AV model, the latent heat flux in the areas dominated by mires increased strongly, resulting in surface cooling and hence reductions in the sensible heat flux and outgoing terrestrial long-wave radiation. The results presented show that including mires significantly decreased the bias and RMSE of predictions of temperature and relative humidity 2 m above the ground for lead times of 12, 36, and 60 h from 00 h Coordinated Universal Time (evening conditions), but did not eliminate the bias in forecasts for lead times of 24, 48, and 72 h (morning conditions) in Siberia. Different parameterizations of mire evapotranspiration are also compared.
Comparison of parameterized cloud variability to ARM data.
Klein, Stephen A.; Norris, Joel R.
2003-06-23
Cloud parameterizations in large-scale models often try to predict the amount of sub-grid-scale variability in cloud properties to address the significant non-linear effects of radiation and precipitation. Statistical cloud schemes provide an attractive framework to self-consistently predict the variability in radiation and microphysics but require accurate predictions of the width and asymmetry of the distribution of cloud properties. Data from the Atmospheric Radiation Measurement (ARM) program are used to assess the variability in boundary layer cloud properties for a well-mixed stratocumulus observed at the Oklahoma ARM site during the March 2000 Intensive Observing Period. Cloud boundaries, liquid water content, and liquid water path are retrieved from the millimeter-wavelength cloud radar and the microwave radiometer. Balloon soundings, aircraft data, and satellite observations provide complementary views of the horizontal cloud inhomogeneity. It is shown that the width of the liquid water path probability distribution function is consistent with a model in which horizontal fluctuations in liquid water content are vertically coherent throughout the depth of the cloud. Variability in cloud base is overestimated by this model, however, perhaps because the additional assumption that the variance of total water is constant with altitude throughout the depth of the boundary layer is incorrect.
Algorithmic scatter correction in dual-energy digital mammography
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei
2013-11-15
Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique for improving the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast-tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of
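A toy illustration of the two assumptions driving the algorithmic correction (scatter varies slowly in space, and most pixels are background). The fixed scatter fraction and smoothing window here are invented for the sketch; in the paper the scatter fraction is estimated from the DE calculation itself rather than assumed:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def remove_scatter(image, scatter_fraction=0.4, kernel_px=101):
    """Illustrative scatter correction.

    scatter_fraction : assumed ratio of scattered to total signal
                       (fixed here purely for illustration)
    kernel_px        : wide smoothing window enforcing the slow spatial
                       variation of the scatter field
    """
    # Slowly varying scatter field ~ heavily smoothed image times fraction.
    scatter = scatter_fraction * uniform_filter(image.astype(float), kernel_px)
    return image - scatter

# Flat-field phantom: the correction should simply lower the uniform
# background level by the scatter fraction (100 -> 60 here).
phantom = np.full((256, 256), 100.0)
corrected = remove_scatter(phantom)
```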
Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J
2004-05-06
To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands that the GCM parameterizations of unresolved processes, in particular, should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations then can be tested in the same framework. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the U.S. Department of Energy is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM.
Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...
2015-06-30
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and an investigation of sensitivity to the number of subcolumns.
A trans-dimensional polynomial-spline parameterization for gradient-based geoacoustic inversion.
Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan
2014-10-01
This paper presents a polynomial-spline-based parameterization for trans-dimensional geoacoustic inversion. The parameterization is demonstrated for both simulated and measured data and shown to be an effective method of representing sediment geoacoustic profiles dominated by gradients, as typically occur, for example, in muddy seabeds. Specifically, the spline parameterization is compared, using the deviance information criterion (DIC), to the standard stack-of-homogeneous-layers parameterization for the inversion of bottom-loss data measured at a muddy seabed experiment site on the Malta Plateau. The DIC is an information criterion that is well suited to trans-dimensional Bayesian inversion and is introduced to geoacoustics in this paper. Inversion results for both parameterizations are in good agreement with measurements on a sediment core extracted at the site. However, the spline parameterization more accurately resolves the power-law-like structure of the core density profile and provides smaller overall uncertainties in geoacoustic parameters. In addition, the spline parameterization is found to be more parsimonious, and hence preferred, according to the DIC. The trans-dimensional polynomial-spline approach is general, and applicable to any inverse problem for gradient-based profiles. [Work supported by ONR.]
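The deviance information criterion used for this model comparison has a standard sample-based form, DIC = D̄ + pD with pD = D̄ − D(θ̄), where D = −2 log L and θ̄ is the posterior mean. A minimal sketch; the posterior log-likelihood samples below are placeholders, not output of a geoacoustic inversion:

```python
import numpy as np

def deviance_information_criterion(log_liks, deviance_at_mean):
    """DIC = Dbar + pD, with pD = Dbar - D(theta_bar) and D = -2 log L.

    log_liks         : log-likelihood evaluated at each posterior sample
    deviance_at_mean : deviance evaluated at the posterior-mean parameters
    Lower DIC indicates the better (more parsimonious) parameterization.
    """
    deviances = -2.0 * np.asarray(log_liks, dtype=float)
    d_bar = deviances.mean()            # posterior-mean deviance (misfit)
    p_d = d_bar - deviance_at_mean      # effective number of parameters
    return d_bar + p_d

# Placeholder posterior samples: Dbar = 4, D(theta_bar) = 3, so pD = 1
# and DIC = 5.
dic_spline = deviance_information_criterion([-1.0, -2.0, -3.0], 3.0)
```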
NASA Astrophysics Data System (ADS)
Khvorostyanov, V. I.; Curry, J. A.
2012-10-01
A new analytical parameterization of homogeneous ice nucleation is developed based on extended classical nucleation theory including new equations for the critical radii of the ice germs, free energies and nucleation rates as simultaneous functions of temperature and water saturation ratio. By representing these quantities as separable products of the analytical functions of temperature and supersaturation, analytical solutions are found for the integral-differential supersaturation equation and concentration of nucleated crystals. Parcel model simulations are used to illustrate the general behavior of various nucleation properties under various conditions, for justifications of the further key analytical simplifications, and for verification of the resulting parameterization. The final parameterization is based upon the values of the supersaturation that determines the current or maximum concentrations of the nucleated ice crystals. The crystal concentration is analytically expressed as a function of time and can be used for parameterization of homogeneous ice nucleation both in the models with small time steps and for substep parameterization in the models with large time steps. The crystal concentration is expressed analytically via the error functions or elementary functions and depends only on the fundamental atmospheric parameters and parameters of classical nucleation theory. The diffusion and kinetic limits of the new parameterization agree with previous semi-empirical parameterizations.
NASA Astrophysics Data System (ADS)
Khvorostyanov, V. I.; Curry, J. A.
2012-03-01
A new analytical parameterization of homogeneous ice nucleation is developed based on extended classical nucleation theory including new equations for the critical radii of the ice germs, free energies and nucleation rates as the functions of the temperature and water saturation ratio simultaneously. By representing these quantities as separable products of the analytical functions of the temperature and supersaturation, analytical solutions are found for the integral-differential supersaturation equation and concentration of nucleated crystals. Parcel model simulations are used to illustrate the general behavior of various nucleation properties under various conditions, for justifications of the further key analytical simplifications, and for verification of the resulting parameterization. The final parameterization is based upon the values of the supersaturation that determines the current or maximum concentrations of the nucleated ice crystals. The crystal concentration is analytically expressed as a function of time and can be used for parameterization of homogeneous ice nucleation both in the models with small time steps and for substep parameterization in the models with large time steps. The crystal concentration is expressed analytically via the error functions or elementary functions and depends only on the fundamental atmospheric parameters and parameters of classical nucleation theory. The diffusion and kinetic limits of the new parameterization agree with previous semi-empirical parameterizations.
Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; ...
2015-12-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.
A Unified EDMF Boundary Layer and Shallow Convection Parameterization in GEOS5
NASA Astrophysics Data System (ADS)
Suselj, K.; Teixeira, J.; Molod, A.
2016-12-01
A unified boundary layer and shallow convection Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, based on the work of Suselj et al. (2013), is implemented and validated in the GEOS5 model. The goal of this study is to improve the simulation of dry and cloudy boundary layers, and shallow moist convection, by developing and implementing a unified parameterization. The mass-flux part of the new parameterization represents non-local transport, which originates in the atmospheric surface layer and extends through the boundary layer and convective layer. The mass-flux part of the model is coupled to the existing eddy-diffusivity boundary layer parameterization and a new probability density function (PDF)-based cloud parameterization. In this work we show the impact of the new parameterization by focusing on boundary layer clouds. We investigate the performance of the single-column version of GEOS5, with and without the new EDMF parameterization, for the CFMIP-GCSS Intercomparison of Large-Eddy and Single-Column Models (CGILS) experiments. The results of the global simulations with EDMF are compared to the control GEOS5 and satellite observations.
A cumulus parameterization including mass fluxes, vertical momentum dynamics, and mesoscale effects
Donner, L. J.
1993-03-15
A formulation for parameterizing cumulus convection, which treats cumulus vertical momentum dynamics and mass fluxes consistently, is presented. This approach predicts the penetrative extent of cumulus updrafts on the basis of their vertical momentum and provides a basis for treating cumulus microphysics using formulations that depend on vertical velocity. Treatments for cumulus microphysics are essential if the water budgets of convective systems are to be evaluated for treating mesoscale stratiform processes associated with convection, which are important for radiative interactions influencing climate. The water budget of the cumulus updrafts is used to drive a semi-empirical parameterization for the large-scale effects of the mesoscale circulations associated with deep convection. The parameterization was applied to two tropical thermodynamic profiles whose diagnosed forcing by convective systems differed significantly. The deepest of the updrafts penetrated the upper troposphere, while the shallower updrafts penetrated into the region of the mesoscale anvil. The relative numbers of cumulus updrafts of characteristic vertical velocities comprising the parameterized ensemble corresponded well with available observations. The large-scale heating produced by the ensemble without mesoscale circulations was concentrated at lower heights than observed or was characterized by excessive peak magnitudes. An unobserved large-scale source of water vapor was produced in the middle troposphere. When the parameterization for mesoscale effects was added, the large-scale thermal and moisture forcing predicted by the parameterization agreed well with observations for both cases. The significance of mesoscale processes suggests that future cumulus parameterization development will need to treat some radiative processes.
Comparing in situ and satellite-based parameterizations of oceanic whitecaps
NASA Astrophysics Data System (ADS)
Paget, Aaron C.; Bourassa, Mark A.; Anguelova, Magdalena D.
2015-04-01
The majority of the parameterizations developed to estimate whitecap fraction uses a stability-dependent 10 m wind (U10) measured in situ, but recent efforts to use satellite-reported equivalent neutral winds (U10EN) to estimate whitecap fraction with the same parameterizations introduce additional error. This study identifies and quantifies the differences in whitecap parameterizations caused by U10 and U10EN for the active and total whitecap fractions. New power law coefficients are presented for both U10 and U10EN parameterizations based on available in situ whitecap observations. One-way analysis of variance (ANOVA) tests are performed on the residuals of the whitecap parameterizations and the whitecap observations and identify that parameterizations in terms of U10 and U10EN perform similarly. The parameterizations are also tested against the satellite-based WindSat Whitecap Database to assess differences. The improved understanding aids in estimating whitecap fraction globally using satellite products and in determining the global effects of whitecaps on air-sea processes and remote sensing of the surface.
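Power-law whitecap parameterizations of the form W = a·U10^b are conventionally fit by linear regression in log-log space; the sketch below demonstrates that fit on synthetic data. The coefficients and noise model are illustrative assumptions, not the new coefficients derived in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "observations": whitecap fraction W (percent) vs 10 m wind speed,
# generated from a known power law W = a * U10**b with multiplicative noise.
# Coefficients are illustrative (Monahan-style cubic-ish wind dependence).
a_true, b_true = 3.84e-4, 3.41
u10 = rng.uniform(4.0, 20.0, size=200)                       # wind speed, m/s
w_obs = a_true * u10**b_true * np.exp(rng.normal(0.0, 0.1, size=200))

# Power-law fit via linear regression in log-log space:
#   log W = log a + b * log U10
b_fit, log_a_fit = np.polyfit(np.log(u10), np.log(w_obs), 1)
a_fit = np.exp(log_a_fit)
```

Swapping U10 for satellite-reported U10EN in the same regression is exactly how parameterization-specific coefficients for each wind product would be derived.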
NASA Astrophysics Data System (ADS)
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; Weber, J. K.; Larson, V. E.; Wyant, M. C.; Wang, M.; Guo, Z.; Ghan, S. J.
2015-12-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Model computational expense is estimated, and sensitivity to the number of subcolumns is investigated. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in shortwave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation.
NASA Astrophysics Data System (ADS)
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; Weber, J. K.; Larson, V. E.; Wyant, M. C.; Wang, M.; Guo, Z.; Ghan, S. J.
2015-06-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.
Parameterization of temperature and spectral distortions in future CMB experiments
Pitrou, Cyril; Stebbins, Albert
2014-10-15
CMB spectral distortions are induced by Compton collisions with electrons. We review the various schemes to characterize the anisotropic CMB with a non-Planckian spectrum. We advocate using logarithmically averaged temperature moments as the preferred language to describe these spectral distortions, both for theoretical modeling and observations. Numerical modeling is simpler, the moments are frame-independent, and in terms of scattering the mode truncation is exact.
Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds
NASA Astrophysics Data System (ADS)
Yun, Yuxing; Penner, Joyce E.
2012-04-01
A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux at top of atmosphere and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.
Assessment of Noah land surface model with various runoff parameterizations over a Tibetan river
NASA Astrophysics Data System (ADS)
Zheng, Donghai; Van Der Velde, Rogier; Su, Zhongbo; Wen, Jun; Wang, Xin
2017-02-01
Runoff parameterizations currently adopted by the (i) Noah-MP model, (ii) Community Land Model (CLM), and (iii) CLM with variable infiltration capacity hydrology (CLM-VIC) are incorporated into the structure of Noah land surface model, and the impact of these parameterizations on the runoff simulations is investigated for a Tibetan river. Four numerical experiments are conducted with the default Noah and three aforementioned runoff parameterizations. Each experiment is forced with the same set of atmospheric forcing, vegetation, and soil parameters. In addition, the Community Earth System Model database provides the maximum surface saturated area parameter for the Noah-MP and CLM parameterizations. A single-year recurrent spin-up is adopted for the initialization of each model run to achieve equilibrium states. Comparison with discharge measurements shows that each runoff parameterization produces significant differences in the separation of total runoff into surface and subsurface components and that the soil water storage-based parameterizations (Noah and CLM-VIC) outperform the groundwater table-based parameterizations (Noah-MP and CLM) for the seasonally frozen and high-altitude Tibetan river. A parameter sensitivity experiment illustrates that this underperformance of the groundwater table-based parameterizations cannot be resolved through calibration. Further analyses demonstrate that the simulations of other surface water and energy budget components are insensitive to the selected runoff parameterizations, due to the strong control of the atmosphere on simulated land surface fluxes induced by the diurnal dependence of the roughness length for heat transfer and the large water retention capacity of the highly organic top soils over the plateau.
Parameterization of sea-salt optical properties and physics of the associated radiative forcing
NASA Astrophysics Data System (ADS)
Li, J.; Ma, X.; von Salzen, K.; Dobbie, S.
2008-08-01
The optical properties of sea-salt aerosol have been parameterized at shortwave and longwave wavelengths. The optical properties were parameterized in a simple functional form in terms of the ambient relative humidity based on Mie optical property calculations. The proposed parameterization is tested relative to Mie calculations and is found to be accurate to within a few percent. In the parameterization, the effects of the size distribution on the optical properties are accounted for in terms of effective radius of the sea-salt size distribution. This parameterization differs from previous works by being formulated directly with the wet sea-salt size distribution and, to our knowledge, this is the first published sea-salt parameterization to provide a parameterization for both shortwave and longwave wavelengths. We have used this parameterization in a set of idealized 1-D radiative transfer calculations to investigate the sensitivity of various attributes of sea-salt forcing, including the dependency on sea-salt column loading, effective variance, solar angle, and surface albedo. From these sensitivity tests, it is found that sea-salt forcings for both shortwave and longwave spectra are linearly related to the sea-salt loading for realistic values of loadings. The radiative forcing results illustrate that the shortwave forcing is an order of magnitude greater than the longwave forcing results and opposite in sign, for various loadings. Forcing sensitivity studies show that the influence of effective variance for sea-salt is minor; therefore, only one value of effective variance is used in the parameterization. The dependence of sea-salt forcing with solar zenith angle illustrates an interesting result that sea-salt can generate a positive top-of-the-atmosphere result (i.e. warming) when the solar zenith angle is relatively small (i.e. <30°). Finally, it is found that the surface albedo significantly affects the shortwave radiative forcing, with the forcing
Expressive Single Scattering for Light Shaft Stylization.
Kol, Timothy R; Klehm, Oliver; Seidel, Hans-Peter; Eisemann, Elmar
2016-04-14
Light scattering in participating media is a natural phenomenon that is increasingly featured in movies and games, as it is visually pleasing and lends realism to a scene. In art, it may further be used to express a certain mood or emphasize objects. Here, artists often rely on stylization when creating scattering effects, not only because of the complexity of physically correct scattering, but also to increase expressiveness. Little research, however, focuses on artistically influencing the simulation of the scattering process in a virtual 3D scene. We propose novel stylization techniques, enabling artists to change the appearance of single scattering effects such as light shafts. Users can add, remove, or enhance light shafts using occluder manipulation. The colors of the light shafts can be stylized and animated using easily modifiable transfer functions. Alternatively, our system can optimize a light map given a simple user input for a number of desired views in the 3D world. Finally, we enable artists to control the heterogeneity of the underlying medium. Our stylized scattering solution is easy to use and compatible with standard rendering pipelines. It works for animated scenes and can be executed in real time to provide the artist with quick feedback.
Improvement of the GEOS-5 AGCM upon Updating the Air-Sea Roughness Parameterization
NASA Technical Reports Server (NTRS)
Garfinkel, C. I.; Molod, A.; Oman, L. D.; Song, I.-S.
2011-01-01
The impact of an air-sea roughness parameterization over the ocean that more closely matches recent observations of air-sea exchange is examined in the NASA Goddard Earth Observing System, version 5 (GEOS-5) atmospheric general circulation model. Surface wind biases in the GEOS-5 AGCM are decreased by up to 1.2m/s. The new parameterization also has implications aloft as improvements extend into the stratosphere. Many other GCMs (both for operational weather forecasting and climate) use a similar class of parameterization for their air-sea roughness scheme. We therefore expect that results from GEOS-5 are relevant to other models as well.
A FLEXIBLE PARAMETERIZATION FOR BASELINE MEAN DEGREE IN MULTIPLE-NETWORK ERGMS
Butts, Carter T.; Almquist, Zack W.
2015-01-01
The conventional exponential family random graph model (ERGM) parameterization leads to a baseline density that is constant in graph order (i.e., number of nodes); this is potentially problematic when modeling multiple networks of varying order. Prior work has suggested a simple alternative that results in constant expected mean degree. Here, we extend this approach by suggesting another alternative parameterization that allows for flexible modeling of scenarios in which baseline expected degree scales as an arbitrary power of order. This parameterization is easily implemented by the inclusion of an edge count/log order statistic along with the traditional edge count statistic in the model specification. PMID:26366012
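A minimal sketch of the suggested specification: alongside the usual edge count, the model includes an edge-count-times-log-order statistic, so that for a sparse graph the baseline edge probability scales roughly as n^θ₂ and the expected mean degree as n^(θ₂+1); θ₂ = −1 recovers the constant-mean-degree alternative. Function names and coefficient values here are hypothetical.

```python
import math

def ergm_stats(n_nodes, n_edges):
    """Sufficient statistics for one graph: edge count and edges * log(order)."""
    return (n_edges, n_edges * math.log(n_nodes))

def baseline_edge_prob(theta1, theta2, n_nodes):
    """With these two statistics, each edge's log-odds are theta1 + theta2*log n."""
    logit = theta1 + theta2 * math.log(n_nodes)
    return 1.0 / (1.0 + math.exp(-logit))

# theta2 = -1: expected mean degree (n-1)*p stays roughly constant across orders.
deg_small = 99 * baseline_edge_prob(1.0, -1.0, 100)
deg_large = 999 * baseline_edge_prob(1.0, -1.0, 1000)
```

The small residual difference between `deg_small` and `deg_large` comes from the logistic link; in the sparse regime the power-of-order scaling dominates.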
A note on 'Toward a stochastic parameterization of ocean mesoscale eddies'
NASA Astrophysics Data System (ADS)
Grooms, Ian; Zanna, Laure
2017-05-01
Porta Mana and Zanna (2014) recently proposed a subgrid-scale parameterization for eddy-permitting quasigeostrophic models. In this model the large-scale fluid is represented as a non-Newtonian viscoelastic medium, with a subgrid-stress closure that involves the Lagrangian derivative of large-scale quantities. This note derives this parameterization, including the nondimensional proportionality coefficient, using only two statistical assumptions: that the subgrid-scale term is locally homogeneous and decorrelates rapidly in space. The parameterization is then verified by comparing against eddy-resolving quasigeostrophic simulations, independently reproducing the results of Porta Mana and Zanna in a simpler model.
Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.
2002-01-01
Ranft has provided parameterizations of Lorentz invariant differential cross sections for pion and nucleon production in pion-proton collisions that are compared to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formulas for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed are for charged pions in the initial state and both charged and neutral pions in the final state.
Elastic scattering phenomenology
NASA Astrophysics Data System (ADS)
Mackintosh, R. S.
2017-04-01
We argue that, in many situations, fits to elastic scattering data that were historically, and frequently still are, considered "good", are not justifiably so describable. Information about the dynamics of nucleon-nucleus and nucleus-nucleus scattering is lost when elastic scattering phenomenology is insufficiently ambitious. It is argued that in many situations, an alternative approach is appropriate for the phenomenology of nuclear elastic scattering of nucleons and other light nuclei. The approach affords an appropriate means of evaluating folding models, one that fully exploits available empirical data. It is particularly applicable for nucleons and other light ions.
Partially strong WW scattering
Cheung Kingman; Chiang Chengwei; Yuan Tzuchiang
2008-09-01
What if only a light Higgs boson is discovered at the CERN LHC? Conventional wisdom tells us that the scattering of longitudinal weak gauge bosons would not grow strong at high energies. However, this is generally not true. In some composite models or general two-Higgs-doublet models, the presence of a light Higgs boson does not guarantee complete unitarization of the WW scattering. After partial unitarization by the light Higgs boson, the WW scattering becomes strongly interacting until it hits one or more heavier Higgs bosons or other strong dynamics. We analyze how LHC experiments can reveal this interesting possibility of partially strong WW scattering.
Towards a parameterization of convective wind gusts in Sahel
NASA Astrophysics Data System (ADS)
Largeron, Yann; Guichard, Françoise; Bouniol, Dominique; Couvreux, Fleur; Birch, Cathryn; Beucher, Florent
2014-05-01
[…] who focused on the wet tropical Pacific region and linked wind gusts to convective precipitation rates alone; here, we also analyse the subgrid wind distribution during convective events, and quantify the statistical moments (variance, skewness, and kurtosis) in terms of mean wind speed and convective indexes such as DCAPE. The next step of the work will be to formulate a parameterization of the cold pool convective gust from those probability density functions and analytical formulae obtained from basic energy budget models. References: [Carslaw et al., 2010] A review of natural aerosol interactions and feedbacks within the Earth system. Atmospheric Chemistry and Physics, 10(4):1701-1737. [Engelstaedter et al., 2006] North African dust emissions and transport. Earth-Science Reviews, 79(1):73-100. [Knippertz and Todd, 2012] Mineral dust aerosols over the Sahara: Meteorological controls on emission and transport and implications for modeling. Reviews of Geophysics, 50(1). [Marsham et al., 2011] The importance of the representation of deep convection for modeled dust-generating winds over West Africa during summer. Geophysical Research Letters, 38(16). [Marticorena and Bergametti, 1995] Modeling the atmospheric dust cycle: 1. Design of a soil-derived dust emission scheme. Journal of Geophysical Research, 100(D8):16415-16430. [Menut, 2008] Sensitivity of hourly Saharan dust emissions to NCEP and ECMWF modeled wind speed. Journal of Geophysical Research: Atmospheres (1984-2012), 113(D16). [Pierre et al., 2012] Impact of vegetation and soil moisture seasonal dynamics on dust emissions over the Sahel. Journal of Geophysical Research: Atmospheres (1984-2012), 117(D6). [Redelsperger et al., 2000] A parameterization of mesoscale enhancement of surface fluxes for large-scale models. Journal of Climate, 13(2):402-421.
Advancing x-ray scattering metrology using inverse genetic algorithms
NASA Astrophysics Data System (ADS)
Hannon, Adam F.; Sunday, Daniel F.; Windover, Donald; Joseph Kline, R.
2016-07-01
We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real-space structure in periodic gratings measured using critical dimension small-angle x-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real-space structure of our nanogratings. The study shows that for x-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.
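The fitting loop can be sketched with a toy forward model: a one-parameter squared-sinc "grating" intensity stands in for the real CD-SAXS simulation, and SciPy's differential evolution minimizes the mean-absolute-error log objective named above. The forward model, parameter bounds, and noise level are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)

def simulate(q, w):
    """Toy stand-in for the scattering forward model: diffraction intensity
    from a rectangular line of width w (nm) is a squared sinc in q."""
    x = q * w / 2.0
    return np.sinc(x / np.pi) ** 2      # np.sinc(t) = sin(pi*t)/(pi*t)

q = np.linspace(0.05, 1.0, 120)         # scattering vector (1/nm), q=0 avoided
w_true = 22.0
i_obs = simulate(q, w_true) * np.exp(rng.normal(0.0, 0.02, size=q.size))

def objective(params):
    """Mean-absolute-error log objective between simulation and measurement."""
    w, = params
    return np.mean(np.abs(np.log(simulate(q, w)) - np.log(i_obs)))

result = differential_evolution(objective, bounds=[(5.0, 50.0)], seed=0)
w_fit = result.x[0]                     # recovered line width, near 22 nm
```

In the actual study the search space has many coupled shape parameters, which is exactly the regime where population-based global optimizers and MCMC sampling begin to differ in efficiency.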
Advancing X-ray scattering metrology using inverse genetic algorithms.
Hannon, Adam F; Sunday, Daniel F; Windover, Donald; Kline, R Joseph
2016-01-01
We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real space structure in periodic gratings measured using critical dimension small angle X-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real space structure of our nanogratings. The study shows that for X-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.
Advancing X-ray scattering metrology using inverse genetic algorithms
Hannon, Adam F.; Sunday, Daniel F.; Windover, Donald; Kline, R. Joseph
2016-01-01
We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real space structure in periodic gratings measured using critical dimension small angle X-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real space structure of our nanogratings. The study shows that for X-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting. PMID:27551326
New parameterized model for GPS water vapor tomography
NASA Astrophysics Data System (ADS)
Ding, Nan; Zhang, Shubi; Zhang, Qiuzhao
2017-02-01
Water vapor is a basic parameter used to describe atmospheric conditions. Although it makes up only a small fraction of the atmosphere, it is the most active element of the water cycle, varying rapidly in space and time, so measuring and monitoring its distribution and quantity is a necessary task. GPS tomography is a powerful means of providing high spatiotemporal resolution of water vapor density. In this paper, a spatial structure model of a humidity field is constructed using voxel nodes, and new parameterizations for acquiring data about water vapor in the troposphere via GPS are proposed based on inverse distance weighted (IDW) interpolation. Unlike schemes in which the density of water vapor is constant within a voxel, here the density at a certain point is determined by IDW interpolation. This algorithm avoids the use of horizontal constraints to smooth voxels that are not crossed by satellite rays. A prime number decomposition (PND) access order scheme is introduced to minimize correlation between slant wet delay (SWD) observations. Four experimental schemes for GPS tomography are carried out in dry weather from 2 to 8 August 2015 and on rainy days from 9 to 15 August 2015. Using 14 days of data from the Hong Kong Satellite Positioning Reference Station Network (SatRef), the results indicate that water vapor density derived from 4-node methods is more robust than that derived from 8 or 12 nodes, or from constant refractivity schemes, and that the new method performs better under stable weather conditions than under unstable weather (e.g., rainy days). The results also indicate that an excessive number of interpolations in each layer reduces accuracy. However, the accuracy of the tomography results is gradually reduced with increasing altitude below 7000 m. Moreover, at altitudes between 7000 m and the upper boundary layer, the accuracy can be improved by a boundary constraint.
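The core of the new parameterization, evaluating water vapor density at an arbitrary point along a ray as an inverse-distance-weighted mean of surrounding voxel-node values rather than a per-voxel constant, can be sketched as follows (the node layout and density values are made up for illustration).

```python
import numpy as np

def idw(point, nodes, values, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of node values at `point`."""
    d = np.linalg.norm(nodes - point, axis=1)
    if np.any(d < eps):                  # point coincides with a node
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

# Four hypothetical voxel nodes (x, y in km) with water vapor densities (g/m^3).
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rho = np.array([10.0, 12.0, 11.0, 13.0])

# Equidistant from all four nodes, the IDW value is the plain mean.
center = idw(np.array([0.5, 0.5]), nodes, rho)
print(center)  # -> 11.5
```

In the tomography setting, each slant wet delay observation is then the line integral of such interpolated densities along the satellite ray, so node values far from any ray still receive weight from nearby crossed regions without explicit horizontal smoothing constraints.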
Parameterization of small intestinal water volume using PBPK modeling.
Maharaj, Anil; Fotaki, Nikoletta; Edginton, Andrea
2015-01-25
To facilitate accurate predictions of oral drug disposition, mechanistic absorption models require optimal parameterization. Furthermore, parameters should maintain a biological basis to establish confidence in model predictions. This study serves to calculate an optimal parameter value for small intestinal water volume (SIWV) using a model-based approach. To evaluate physiologic fidelity, derived volume estimates are compared to experimentally based SIWV determinations. A compartmental absorption and transit (CAT) model, created in Matlab-Simulink®, was integrated with a whole-body PBPK model, developed in PK-SIM 5.2®, to provide predictions of systemic drug disposition. SIWV within the CAT model was varied between 52.5 mL and 420 mL. Simulations incorporating specific SIWV values were compared to pharmacokinetic data from compounds exhibiting solubility-induced non-proportional changes in absorption using absolute average fold error. Correspondingly, data pertaining to oral administration of acyclovir and chlorothiazide were utilized to derive estimates of SIWV. At 400 mg, a SIWV of 116 mL provided the best estimates of acyclovir plasma concentrations. A similar SIWV was found to best depict the urinary excretion pattern of chlorothiazide at a dose of 100 mg. In comparison, experimentally based estimates of SIWV within adults denote a central tendency between 86 and 167 mL. The derived SIWV (116 mL) represents the optimal parameter value within the context of the developed CAT model. This result demonstrates the biological basis of the widely utilized CAT model, as in vivo SIWV determinations correspond with model-based estimates.
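The comparison metric named above, absolute average fold error, has a standard definition that can be sketched directly; the numbers fed to it in the study (simulated vs. observed concentrations) are not reproduced here.

```python
import math

def aafe(predicted, observed):
    """Absolute average fold error: 10 ** mean(|log10(pred/obs)|).
    Equals 1.0 for a perfect match; 2.0 means predictions are off
    by a factor of two on average, in either direction."""
    logs = [abs(math.log10(p / o)) for p, o in zip(predicted, observed)]
    return 10 ** (sum(logs) / len(logs))
```

Selecting the SIWV value whose simulation minimizes this quantity against the acyclovir and chlorothiazide data is what yields the 116 mL optimum reported above.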
Carbon dioxide and climate: The impact of cloud parameterization
Senior, C.A.; Mitchell, J.F.B.
1993-03-01
The importance of the representation of cloud in a general circulation model is investigated by utilizing four different parameterization schemes for layer cloud in a low-resolution version of the general circulation model at the Hadley Centre for Climate Prediction and Research at the United Kingdom Meteorological Office. The performance of each version of the model in terms of cloud and radiation is assessed in relation to satellite data from the Earth Radiation Budget Experiment (ERBE). Schemes that include a prognostic cloud water variable show some improvement on those with relative humidity-dependent cloud, but all still show marked differences from the ERBE data. The sensitivity of each of the versions of the model to a doubling of atmospheric CO2 is investigated. Midlevel and lower-level clouds decrease when cloud is dependent on relative humidity, and this constitutes a strong positive feedback. When interactive cloud water is included, however, this effect is almost entirely compensated for by a negative feedback from the change of phase of cloud water from ice to water. Additional negative feedbacks are found when interactive radiative properties of cloud are included, and these lead to an overall negative cloud feedback. The global warming produced with the four models then ranges from 5.4 °C with a relative humidity scheme to 1.9 °C with interactive cloud water and radiative properties. Improving the treatment of ice cloud based on observations increases the model's sensitivity slightly to 2.1 °C. Using an energy balance model, it is estimated that the climate sensitivity using the relative humidity scheme along with the negative feedback from cloud radiative properties would be 2.8 °C. Thus, 2.1-2.8 °C appears to be a better estimate of the range of equilibrium response to a doubling of CO2.
Evapotranspiration Parameterizations at a Grass Site in Florida, USA
NASA Astrophysics Data System (ADS)
Rizou, M.; Sumner, D. M.; Nnadi, F.
2007-05-01
Although grasslands account for about 40% of the ice-free global terrestrial land cover, their contribution to the surface exchanges of energy and water on local and regional scales is so far uncertain. In this study, the sensitivity of evapotranspiration (ET) and other energy fluxes to wetness variables, namely the volumetric Soil Water Content (SWC) and the Antecedent Precipitation Index (API), was investigated over a non-irrigated grass site in Central Florida, USA (28.049 N, 81.400 W). Eddy correlation and soil water content measurements were taken by the USGS (U.S. Geological Survey) at the grass study site, within 100 m of a SFWMD (South Florida Water Management District) weather station. The soil is composed of fine sands and is mainly covered by Paspalum notatum (bahia grass). Variable soil wetness conditions, with API bounds of about 2 to 160 mm and water table levels of 0.03 to 1.22 m below ground surface, were observed throughout the year 2004. The Bowen ratio averaged about 1, with values larger than 2 during a few dry days. The daytime average ET was classified into two stages, a first (energy-limited) stage and a second (water-limited) stage, based on water availability. The critical values of API and SWC were found to be about 56 mm and 0.17, respectively, the latter being approximately 33% of the SWC at saturation. The ET values estimated by the simple Priestley-Taylor (PT) method were compared to the actual values. The PT coefficient varied from a low bound of approximately 0.4 to a peak of 1.21. Simple relationships for the PT empirical factor in terms of SWC and API were employed to improve the accuracy of the second-stage estimates. The results of the ET parameterizations closely match eddy-covariance flux values at daily and longer time steps.
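The Priestley-Taylor formula and the stage-dependent coefficient can be sketched as below. The PT equation itself is standard; the linear ramp of the coefficient below the critical SWC is a hypothetical form for illustration only (the abstract reports the endpoint values 0.4 and 1.21 and the critical SWC of 0.17, but not the functional shape of its SWC/API relationships).

```python
def priestley_taylor_le(rn, g, delta, gamma, alpha=1.26):
    """Latent heat flux (W/m^2) via Priestley-Taylor:
    LE = alpha * Delta/(Delta+gamma) * (Rn - G),
    with Delta the slope of the saturation vapor pressure curve
    and gamma the psychrometric constant (both kPa/degC)."""
    return alpha * (delta / (delta + gamma)) * (rn - g)

def pt_alpha(swc, swc_crit=0.17, alpha_max=1.21, alpha_min=0.4):
    """Hypothetical stage-two reduction of the PT coefficient:
    constant in the energy-limited stage, ramping down linearly
    with soil water content below the critical value."""
    if swc >= swc_crit:
        return alpha_max                     # first (energy-limited) stage
    return alpha_min + (alpha_max - alpha_min) * swc / swc_crit
```

With a measured SWC time series, the two functions together give a daytime ET estimate that degrades gracefully into the water-limited stage instead of using a fixed alpha.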
Observations and parameterization of the stratospheric electrical conductivity
NASA Astrophysics Data System (ADS)
Hu, Hua; Holzworth, Robert H.
1996-12-01
conductivity is parameterized based on the measurements, and a simple empirical model is presented in geographic coordinates.
Search for subgrid scale parameterization by projection pursuit regression
NASA Technical Reports Server (NTRS)
Meneveau, C.; Lund, T. S.; Moin, Parviz
1992-01-01
The dependence of subgrid-scale stresses on variables of the resolved field is studied using direct numerical simulations of isotropic turbulence, homogeneous shear flow, and channel flow. The projection pursuit algorithm, a promising new regression tool for high-dimensional data, is used to systematically search through a large collection of resolved variables, such as components of the strain rate, vorticity, and velocity gradients at neighboring grid points. For the case of isotropic turbulence, the search algorithm recovers the linear dependence on the rate of strain (which is necessary to transfer energy to subgrid scales) but is unable to determine any other more complex relationship. For shear flows, however, new systematic relations beyond eddy viscosity are found. For the homogeneous shear flow, the results suggest that products of the mean rotation rate tensor with both the fluctuating strain rate and fluctuating rotation rate tensors are important quantities in parameterizing the subgrid-scale stresses. A model incorporating these terms is proposed. When evaluated with direct numerical simulation data, this model significantly increases the correlation between the modeled and exact stresses, as compared with the Smagorinsky model. In the case of channel flow, the stresses are found to correlate with products of the fluctuating strain and rotation rate tensors. The mean rates of rotation or strain do not appear to be important in this case, and the model determined for homogeneous shear flow does not perform well when tested with channel flow data. Many questions remain about the physical mechanisms underlying these findings, about possible Reynolds number dependence, and, given the low level of correlations, about their impact on modeling. Nevertheless, demonstration of the existence of causal relations between SGS stresses and large-scale characteristics of turbulent shear flows, in addition to those necessary for energy transfer, provides important
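The tensor products identified by the regression can be sketched as a model form. This is only an illustration of the kind of term involved: the coefficients c1 and c2 are hypothetical placeholders, and the exact combination and normalization used in the paper's proposed model are not reproduced here.

```python
import numpy as np

def sym(a):
    """Symmetric part of a second-rank tensor (stresses are symmetric)."""
    return 0.5 * (a + a.T)

def sgs_stress_sketch(omega_mean, s_fluct, omega_fluct, c1=0.01, c2=0.01):
    """Sketch of an SGS stress contribution built from symmetrized
    products of the mean rotation rate tensor with the fluctuating
    strain rate and fluctuating rotation rate tensors."""
    return c1 * sym(omega_mean @ s_fluct) + c2 * sym(omega_mean @ omega_fluct)

# antisymmetric mean rotation, symmetric fluctuating strain,
# antisymmetric fluctuating rotation (illustrative 2x2 values)
omega_mean = np.array([[0.0, 1.0], [-1.0, 0.0]])
s_fl = np.array([[0.5, 0.2], [0.2, -0.5]])
omega_fl = np.array([[0.0, -0.3], [0.3, 0.0]])
tau = sgs_stress_sketch(omega_mean, s_fl, omega_fl)
```

Symmetrizing each product guarantees the modeled stress tensor is symmetric, as a physical stress must be, regardless of the antisymmetry of the rotation rate tensors.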
Parameterizations for shielding electron accelerators based on Monte Carlo studies
P. Degtyarenko; G. Stapleton
1996-10-01
Numerous recipes for designing lateral slab neutron shielding for electron accelerators are available, and each generally produces rather similar results for shield thicknesses of about 2 m of concrete and for electron beams with energy in the 1 to 10 GeV region. For thinner or much thicker shielding the results tend to diverge and the standard recipes require modification. Likewise, for geometries other than lateral to the beam direction further corrections are required, so that calculated results are less reliable and hence additional and costly conservatism is needed. With the adoption of Monte Carlo (MC) methods of transporting particles, a much more powerful way of calculating radiation dose rates outside shielding becomes available. This method is not constrained by geometry, although deep penetration problems need special statistical treatment, and is an excellent approach to solving any radiation transport problem provided the method has been properly checked against measurements and is free from the well-known errors common to such computer methods. The present paper utilizes the results of MC calculations based on a nuclear fragmentation model named DINREG using the MC transport code GEANT and models them with the normal two-parameter shielding expressions. Because the parameters can change with electron beam energy, angle to the electron beam direction, and target material, the parameters are expressed as functions of some of these variables to provide universal equations for shielding electron beams, which can be used rather simply for deep penetration problems in simple geometry without the time-consuming computations needed in the original MC programs. A particular problem with using simple parameterizations based on the uncollided flux is that approximations based on spherical geometry might not apply to the more common cylindrical cases used for accelerator shielding. This source of error has been discussed at length by Stevenson and others. To study
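The conventional two-parameter expression referred to above has the familiar form of a source term attenuated exponentially through the shield with inverse-square spreading. The sketch below assumes that form; the actual fitted dependence of the two parameters on energy, angle, and target material comes from the MC studies and is not reproduced here.

```python
import math

def dose_rate(h0, lam, thickness, distance):
    """Two-parameter shielding expression (illustrative form):
    H = h0 * exp(-d/lam) / r^2, where h0 is the fitted source term,
    lam the fitted attenuation length in the shield material,
    d the slab thickness along the ray, and r the total distance
    from source to dose point."""
    return h0 * math.exp(-thickness / lam) / distance**2
```

In the parameterized scheme, h0 and lam would themselves be functions h0(E, theta) and lam(E, theta, material) fitted to the DINREG/GEANT results, so a deep-penetration estimate reduces to one function evaluation instead of a full MC run.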
A statistically derived parameterization for the collagen triple-helix.
Rainey, Jan K; Goh, M Cynthia
2002-11-01
The triple-helix is a unique secondary structural motif found primarily within the collagens. In collagen, it is a homo- or heterotrimer with a repeating primary sequence of (Gly-X-Y)n, displaying characteristic peptide backbone dihedral angles. Studies of bulk collagen fibrils indicate that the triple-helix must be a highly repetitive secondary structure, with very specific constraints. Primary sequence analysis shows that most collagen molecules are primarily triple-helical; however, no high-resolution structure of any entire protein is yet available. Given the drastic morphological differences in self-assembled collagen structures with subtle changes in assembly conditions, a detailed knowledge of the relative locations of charged and sterically bulky residues in collagen is desirable. Its repetitive primary sequence and highly conserved secondary structure make collagen, and the triple-helix in general, an ideal candidate for a general parameterization for prediction of residue locations and for the use of a helical wheel in the prediction of residue orientation. Herein, a statistical analysis of the currently available high-resolution X-ray crystal structures of model triple-helical peptides is performed to produce an experimentally based parameter set for predicting peptide backbone and Cβ atom locations for the triple-helix. Unlike existing homology models, this allows easy prediction of an entire triple-helix structure based on all existing high-resolution triple-helix structures, rather than only on a single structure or on idealized parameters. Furthermore, regional differences based on the helical propensity of residues may be readily incorporated. The parameter set is validated in terms of the predicted bond lengths, backbone dihedral angles, and interchain hydrogen bonding.
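A helical wheel prediction of the kind mentioned above amounts to placing residue i at an azimuthal angle i times the per-residue twist, with an axial rise per residue. The sketch below is generic: the twist and rise are left as inputs, since the statistically derived values belong to the paper's parameter set and are not quoted in the abstract.

```python
import math

def helical_wheel(sequence, twist_deg, rise=2.9):
    """Project a sequence onto a helical wheel: residue i sits at
    angle i*twist_deg on the unit circle, with axial position i*rise.
    twist_deg and rise are caller-supplied helix parameters
    (the default rise here is a placeholder, in angstroms)."""
    positions = []
    for i, res in enumerate(sequence):
        theta = math.radians(i * twist_deg)
        positions.append((res, math.cos(theta), math.sin(theta), i * rise))
    return positions

# one (Gly-X-Y) triplet with a placeholder twist value
wheel = helical_wheel("GPO", 108.0)
```

Reading the wheel off for a full (Gly-X-Y)n sequence shows at a glance which face of the helix the charged and bulky Y-position residues occupy.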
Parameterization of a Geometric Flow Implicit Solvation Model
Thomas, Dennis G.; Chun, Jaehun; Chen, Zhan; Wei, Guowei; Baker, Nathan A.
2012-01-01
Implicit solvent models are popular for their high computational efficiency and simplicity over explicit solvent models and are extensively used for computing molecular solvation properties. The accuracy of implicit solvent models depends on the geometric description of the solute-solvent interface and the solvent dielectric profile that is defined near the surface of the solute molecule. Typically, it is assumed that the dielectric profile is spatially homogeneous in the bulk solvent medium and varies sharply across the solute-solvent interface. However, the specific form of this profile is often described by ad hoc geometric models rather than physical solute-solvent interactions. Hence, it is of significant interest to improve the accuracy of these implicit solvent models by more realistically defining the solute-solvent boundary within a continuum setting. Recently, a differential geometry-based geometric flow solvation model was developed, in which the polar and nonpolar free energies are coupled through a characteristic function that describes a smooth dielectric interface profile across the solvent–solute boundary in a thermodynamically self-consistent fashion. The main parameters of the model are the solute/solvent dielectric coefficients, solvent pressure on the solute, microscopic surface tension, solvent density, and molecular force-field parameters. In this work, we investigate how changes in the pressure, surface tension, solute dielectric coefficient, and choice of different force-field charge and radii parameters affect the prediction accuracy for hydration free energies of 17 small organic molecules based on the geometric flow solvation model. The results of our study provide insights on the parameterization, accuracy, and predictive power of this new implicit solvent model. PMID:23212974
Structural parameterization of the binding enthalpy of small ligands.
Luque, Irene; Freire, Ernesto
2002-11-01
A major goal in ligand and drug design is the optimization of the binding affinity of selected lead molecules. However, the binding affinity is defined by the free energy of binding, which, in turn, is determined by the enthalpy and entropy changes. Because the binding enthalpy is the term that predominantly reflects the strength of the interactions of the ligand with its target relative to those with the solvent, it is desirable to develop ways of predicting enthalpy changes from structural considerations. The application of structure/enthalpy correlations derived from protein stability data has yielded inconsistent results when applied to small ligands of pharmaceutical interest (MW < 800). Here we present a first attempt at an empirical parameterization of the binding enthalpy for small ligands in terms of structural information. We find that at least three terms need to be considered: (1) the intrinsic enthalpy change that reflects the nature of the interactions between ligand, target, and solvent; (2) the enthalpy associated with any possible conformational change in the protein or ligand upon binding; and (3) the enthalpy associated with protonation/deprotonation events, if present. As in the case of protein stability, the intrinsic binding enthalpy scales with changes in solvent accessible surface areas. However, an accurate estimation of the intrinsic binding enthalpy requires explicit consideration of long-lived water molecules at the binding interface. The best statistical structure/enthalpy correlation is obtained when buried water molecules within 5-7 A of the ligand are included in the calculations. For all seven protein systems considered (HIV-1 protease, dihydrodipicolinate reductase, Rnase T1, streptavidin, pp60c-Src SH2 domain, Hsp90 molecular chaperone, and bovine beta-trypsin) the binding enthalpy of 25 small molecular weight peptide and nonpeptide ligands can be accounted for with a standard error of ±0.3 kcal·mol⁻¹.
Singularity-consistent parameterization of robot motion and control
Nenchev, D.N.; Tsumaki, Yuichi; Uchiyama, Masaru
2000-02-01
The inverse kinematics problem is formulated as a parameterized autonomous dynamical system problem, and respective analysis is carried out. It is shown that a singular point of work space can be mapped either as a critical or a noncritical point of the autonomous system, depending on the direction of approach to the singular point. Making use of the noncritical mapping, a closed-loop kinematic controller with asymptotic stability and velocity limits along degenerate singular or near-singular paths is designed. The authors introduce a specific type of motion along the reference path, the so-called natural motion. This type of motion is obtained in a straightforward manner from the autonomous dynamical system and always satisfies the motion constraint at a singular point. In the vicinity of the singular point, natural motion slows down the end-effector speed and keeps the joint velocity bounded. Thus, no special trajectory replanning will be required. In addition, the singular manifold can be crossed, if necessary. Further on, it is shown that natural motion constitutes an integrable motion component. The remaining, nonintegrable motion component is shown to be helpful in solving a problem related to the critical point mapping of the autonomous system. The authors design a singularity-consistent resolved acceleration controller, which they then apply to singular or near-singular trajectory tracking under torque limits. Finally, the authors compare the main features of the singularity-consistent method and the damped-least-squares method. It is shown that both methods introduce a so-called algorithmic error in the vicinity of a singular point. The direction of this error is, however, different in each method. This is shown to play an important role for system stability.
Cirrus cloud model parameterizations: Incorporating realistic ice particle generation
NASA Technical Reports Server (NTRS)
Sassen, Kenneth; Dodd, G. C.; Starr, David OC.
1990-01-01
Recent cirrus cloud modeling studies have involved the application of a time-dependent, two-dimensional Eulerian model, with generalized cloud microphysical parameterizations drawn from experimental findings. For computing the ice versus vapor phase changes, the ice mass content is linked to the maintenance of a relative humidity with respect to ice (RHI) of 105 percent; ice growth occurs both with regard to the introduction of new particles and the growth of existing particles. In a simplified cloud model designed to investigate the basic role of various physical processes in the growth and maintenance of cirrus clouds, these parametric relations are justifiable. In comparison, the one-dimensional cloud microphysical model recently applied to evaluating the nucleation and growth of ice crystals in cirrus clouds explicitly treated populations of haze and cloud droplets, and ice crystals. Although these two modeling approaches are clearly incompatible, the goal of the present numerical study is to develop a parametric treatment of new ice particle generation, on the basis of detailed microphysical model findings, for incorporation into improved cirrus growth models. An example is the relation between temperature and the relative humidity required to generate ice crystals from ammonium sulfate haze droplets, whose probability of freezing through the homogeneous nucleation mode is a combined function of time and droplet molality, volume, and temperature. As an illustration of this approach, the results of cloud microphysical simulations are presented showing the rather narrow domain in the temperature/humidity field where new ice crystals can be generated. The microphysical simulations point out the need for detailed CCN studies at cirrus altitudes and haze droplet measurements within cirrus clouds, but also suggest that a relatively simple treatment of ice particle generation, which includes cloud chemistry, can be incorporated into cirrus cloud growth models.
Parameterizations of Dry Deposition for the Industrial Source Complex Model
NASA Astrophysics Data System (ADS)
Wesely, M. L.; Doskey, P. V.; Touma, J. S.
2002-05-01
Improved algorithms have been developed to simulate the dry deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex model system. The dry deposition velocities are described in conventional resistance schemes, for which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways to uptake of gases at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. Standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory study results and theoretical considerations has been developed to evaluate the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves. The dry deposition velocities of particulate HAPs are simulated with a resistance scheme in which deposition velocity is described for two size modes: a fine mode with particles less than about 2.5 microns in diameter and a coarse mode with larger particles, excluding very coarse particles larger than about 10 microns in diameter. For the fine mode, the deposition velocity is calculated with a parameterization based on observations of sulfate dry deposition. For the coarse mode, a representative settling velocity is assumed. The total deposition velocity is then estimated as the sum of the two deposition velocities weighted according to the amount of mass expected in the two modes.
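The two pieces of the particulate scheme described above, a series-resistance deposition velocity and a mass-weighted combination of the two size modes, can be sketched as follows; the numerical resistance and velocity values used in the model are not reproduced here.

```python
def gas_deposition_velocity(ra, rb, rc):
    """Resistance-scheme deposition velocity for a gas (m/s):
    the reciprocal of the series sum of the aerodynamic (ra),
    quasi-laminar boundary layer (rb), and surface (rc)
    resistances, each in s/m."""
    return 1.0 / (ra + rb + rc)

def particle_deposition_velocity(vd_fine, vs_coarse, mass_frac_fine):
    """Total particulate deposition velocity as the mass-weighted
    sum over the two size modes: a parameterized fine-mode velocity
    and a representative coarse-mode settling velocity."""
    return mass_frac_fine * vd_fine + (1.0 - mass_frac_fine) * vs_coarse
```

The weighting makes the split between the two modes explicit: shifting mass into the coarse mode moves the total toward the settling velocity, which is typically the larger of the two.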
Liou, Kuo-Nan
2016-02-09
Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) developed an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions; and (2) developed a stochastic parameterization of light absorption by internally mixed black carbon and dust particles in snow grains, providing understanding of and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and the coupled mountain-mountain flux. "Exact" 3D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer program readily available in climate models. Subsequently, parameterizations of the deviations of the 3D results from the PP results for the five flux components were carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for the flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme included in the WRF physics package. Incorporating this 3D parameterization program, we conducted simulations with WRF and CCSM4 to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transition and the interannual variability of snowmelt, cloud cover, and precipitation over the Western United States, as presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the
A parameterization of nuclear track profiles in CR-39 detector
NASA Astrophysics Data System (ADS)
Azooz, A. A.; Al-Nia'emi, S. H.; Al-Jubbori, M. A.
2012-11-01
In this work, the empirical parameterization describing the alpha particles' track depth in CR-39 detectors is extended to describe longitudinal track profiles against etching time for protons and alpha particles. MATLAB-based software is developed for this purpose. The software calculates and plots the depth, diameter, range, residual range, saturation time, and etch rate versus etching time. The software predictions are compared with other experimental data and with results of calculations using the original software, TRACK_TEST, developed for alpha track calculations. The software related to this work is freely downloadable and performs calculations for protons in addition to alpha particles. Program summary: Program title: CR39. Catalog identifier: AENA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Copyright (c) 2011, Aasim Azooz; a BSD-style license permitting redistribution and use in source and binary forms, with or without modification, subject to the standard attribution conditions and warranty disclaimer.
NASA Astrophysics Data System (ADS)
Stover, John C.
1991-12-01
Optical scatter is a bothersome source of optical noise that limits resolution and reduces system throughput. However, it is also an extremely sensitive metrology tool. It is employed in a wide variety of applications in the optics industry (where direct scatter measurement is of concern) and is becoming a popular indirect measurement in other industries, where its measurement in some form is an indicator of another component property, such as roughness, contamination, or position. This paper presents a brief review of the current state of this technology as it emerges from university and government laboratories into more general industry use. The bidirectional scatter distribution function (BSDF) has become the common format for expressing scatter data and is now used almost universally. Measurements made at dozens of laboratories around the country cover the spectrum from the UV to the mid-IR. Data analysis of optical component scatter has progressed to the point where a variety of analysis tools are becoming available for discriminating between the various sources of scatter. Work has progressed on the analysis of rough surface scatter and the application of these techniques to some challenging problems outside the optical industry. Scatter metrology is acquiring standards and formal test procedures. The available scatter database is rapidly expanding as the number and sophistication of measurement facilities increase. Scatter from contaminants continues to be a major area of work as scatterometers appear in vacuum chambers at various laboratories across the country. Another area of research, driven by space applications, is understanding the non-topographic sources of mid-IR scatter that are associated with beryllium and other materials. The current flurry of work in this growing area of metrology can be expected to continue for several more years and to further expand to applications in other industries.
Parameterizing Aggregation Rates: Results of cold temperature ice-ash hydrometeor experiments
NASA Astrophysics Data System (ADS)
Courtland, L. M.; Dufek, J.; Mendez, J. S.; McAdams, J.
2014-12-01
Recent advances in the study of tephra aggregation have indicated that (i) far-field effects of tephra sedimentation are not adequately resolved without accounting for aggregation processes that preferentially remove the fine ash fraction of volcanic ejecta from the atmosphere as constituent pieces of larger particles, and (ii) the environmental conditions (e.g., humidity, temperature) prevalent in volcanic plumes may significantly alter the types of aggregation processes at work in different regions of the plume. The current research extends these findings to explore the role of ice-ash hydrometeor aggregation in various plume environments. Laboratory experiments utilizing an ice nucleation chamber allow us to parameterize tephra aggregation rates under the cold (0 to −50 °C) conditions prevalent in the upper regions of volcanic plumes. We consider the interaction of ice-coated tephra of variable coating thickness grown in a controlled environment. The ice-ash hydrometeors interact collisionally, and the interaction is recorded by a number of instruments, including high-speed video to determine whether aggregation occurs. The electric charge on individual particles is examined before and after collision to examine the role of electrostatics in the aggregation process and to examine the charge exchange process. We are able to examine how sticking efficiency is related both to the relative abundance of ice on a particle and to the magnitude of the charge carried by the hydrometeor. We here present preliminary results of these experiments, the first to constrain the aggregation efficiency of ice-ash hydrometeors, a parameter that will allow tephra dispersion models to use near-real-time meteorological data to better forecast particle residence times in the atmosphere.
A Parameterization of Solar Energy Disposition in a Climate Model Using an EBM
NASA Astrophysics Data System (ADS)
Wang, Z.; Hu, R.; Mysak, L. A.
2002-12-01
During the past decade, a class of climate models of reduced complexity (also termed EMICs: Earth system Models of Intermediate Complexity), which employ an energy balance model as the atmospheric component, has been developed. However, the solar energy disposition in the subcomponents of these climate models has never been rigorously parameterized. In this study, an atmospheric radiative-convective model is used to parameterize the integrated reflectivity and transmissivity of an atmospheric column in terms of precipitable water, cloud properties, and aerosols. Then, for a prescribed surface albedo, the solar energy disposition can be calculated using the parameterized reflectivity and transmissivity. In this calculation, we use climatology data from ISCCP (International Satellite Cloud Climatology Project), ERA-15 (ECMWF 15-year Reanalysis), and PATMOS (Pathfinder Atmosphere). The solar energy disposition calculated using the parameterized reflectivity and transmissivity is tested against that observed in the Earth Radiation Budget Experiment.
Parameterized Cross Sections for Pion Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.
2000-01-01
An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and the output particle spectrum must be determined from a given input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper, much of the experimental data is reviewed and compared with a wide variety of cross section parameterizations, and parameterizations of neutral and charged pion cross sections are provided that give a very accurate description of the experimental data. Lorentz invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.
Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model
This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.
Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a pres...
Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...
NASA Technical Reports Server (NTRS)
Stephenson-Graves, D.
1982-01-01
An analysis is performed to qualitatively compare the seasonal variation in emitted longwave radiation over land and over water areas as determined from 12 months of Nimbus 6 satellite data with that defined from parameterizations of this radiation budget component. These variations are noted when land and water surface areas are mapped to corresponding areas at the 'top' of the atmosphere. Variations of a surface-temperature-dependent parameterization of emitted longwave radiation originally suggested by Budyko (1969) are considered. The longwave radiation parameterizations indicate small differences between land and water profiles of emitted longwave radiation at the top of an atmospheric column in low latitudes in comparison to large differences in this feature shown to exist in the satellite data. The small differences are noted in linear parameterizations of emitted flux when zonally-averaged satellite data are used to define equation coefficients.
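The Budyko-type parameterization discussed above fits emitted longwave flux as a linear function of surface temperature. A minimal sketch of that linear form follows; the coefficient values are illustrative of commonly quoted fits, not taken from this study:

```python
def olr_budyko(t_surf_c, a=203.3, b=2.09):
    """Linear Budyko-type parameterization of outgoing longwave
    radiation (W/m^2) as a function of surface temperature (deg C).
    The coefficients a and b are illustrative assumptions drawn from
    commonly quoted fits, not this paper's regression."""
    return a + b * t_surf_c

# Warmer surfaces emit more longwave flux under the linear fit:
flux_tropics = olr_budyko(25.0)   # ~255.6 W/m^2
flux_polar = olr_budyko(-20.0)    # ~161.5 W/m^2
```

The study's point is that such linear fits, with coefficients defined from zonally averaged data, understate the land-water contrast seen in the satellite record.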
Adaptive multi-scale parameterization for one-dimensional flow in unsaturated porous media
NASA Astrophysics Data System (ADS)
Hayek, Mohamed; Lehmann, François; Ackerer, Philippe
2008-01-01
In the analysis of the unsaturated zone, one of the most challenging problems is to use inverse theory in the search for an optimal parameterization of the porous media. Adaptive multi-scale parameterization consists of solving the problem through successive approximations, refining the parameters at the next finer scale over the whole domain and stopping when further refinement no longer induces a significant decrease in the objective function. In this context, the refinement indicators algorithm provides an adaptive parameterization technique that opens the degrees of freedom in an iterative way, driven at first order by the model, to locate the discontinuities of the sought parameters. We present a refinement indicators algorithm for adaptive multi-scale parameterization that is applicable to the estimation of multi-dimensional hydraulic parameters in unsaturated soil water flow. Numerical examples are presented which show the efficiency of the algorithm in cases of noisy and missing data.
Zhang, Guang J.
2016-11-07
The fundamental scientific objectives of our research are to use ARM observations and the NCAR CAM5 to understand the large-scale control on convection, and to develop improved convection and cloud parameterizations for use in GCMs.
The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...
Submoment expansion of neutron-scattering sources
Williams, M.L.
2000-02-01
The submoment method was originally introduced to compute spherical harmonic moments of the neutron elastic-scattering source for discrete ordinates calculations with pointwise nuclear data. This work extends the submoment method to include discrete-level inelastic, as well as elastic, S-wave reactions. New applications of the submoment expansion to compute spherical harmonic moments of the slowing-down density and the elastic removal rate are also presented. Numerical stability and computational considerations are discussed.
Wavevector and energy resolution of the polarized diffuse scattering spectrometer D7
NASA Astrophysics Data System (ADS)
Fennell, T.; Mangin-Thro, L.; Mutka, H.; Nilsen, G. J.; Wildes, A. R.
2017-06-01
The instrumental divergence parameters and resolution for the D7 neutron diffuse scattering spectrometer at the Institut Laue-Langevin, France, are presented. The resolution parameters are calibrated against measurements of powders, single crystals, and the incoherent scattering from vanadium. We find that the powder diffraction resolution is well described by the Cagliotti function, the single crystal resolution function can be parameterized using the Cooper-Nathans formalism, and that in time-of-flight mode the energy resolution is consistent with monochromatic focussing.
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional
NASA Astrophysics Data System (ADS)
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; Morton, Don; Hinzman, Larry; Nijssen, Bart
2017-09-01
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties - including the distribution of permafrost and vegetation cover heterogeneity - are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to
Hartemann, F V
2008-12-01
An overview of linear and nonlinear Compton scattering is presented, along with a comparison with Thomson scattering. Two distinct processes play important roles in the nonlinear regime: multi-photon interactions, leading to the generation of harmonics, and radiation pressure, yielding a downshift of the radiated spectral features. These mechanisms, their influence on the source brightness, and different modeling strategies are also briefly discussed.
Limitations in scatter propagation
NASA Astrophysics Data System (ADS)
Lampert, E. W.
1982-04-01
A short description of the main scatter propagation mechanisms is presented: troposcatter, meteor burst communication, and chaff scatter. For these propagation modes, and in particular for troposcatter, the important specific limitations discussed are: link budget and the resulting hardware consequences; diversity; mobility; information transfer, intermodulation, and intersymbol interference; frequency range and future extensions in frequency range for troposcatter; and compatibility with other services (EMC).
1979-09-01
…the mathematical description of the multiple scattering problem is given by the nonstationary radiative transport equation of Chandrasekhar [2]. Written in… …function, S0 is the source function, and X is the single-scatter albedo. Unfortunately, the nonstationary transport equation has not been solved in a…
A New Visibility Parameterization for Warm-Fog Applications in Numerical Weather Prediction Models
NASA Astrophysics Data System (ADS)
Gultepe, I.; Müller, M. D.; Boybeyi, Z.
2006-11-01
The objective of this work is to suggest a new warm-fog visibility parameterization scheme for numerical weather prediction (NWP) models. In situ observations collected during the Radiation and Aerosol Cloud Experiment, representing boundary layer low-level clouds, were used to develop a parameterization scheme between visibility and a combined parameter as a function of both droplet number concentration Nd and liquid water content (LWC). The current NWP models usually use relationships between extinction coefficient and LWC. A newly developed parameterization scheme for visibility, Vis = f(LWC, Nd), is applied to the NOAA Nonhydrostatic Mesoscale Model. In this model, the microphysics of fog was adapted from the 1D Parameterized Fog (PAFOG) model and then was used in the lower 1.5 km of the atmosphere. Simulations for testing the new parameterization scheme are performed in a 50-km innermost-nested simulation domain using a horizontal grid spacing of 1 km centered on Zurich Unique Airport in Switzerland. The simulations over a 10-h time period showed that visibility differences between old and new parameterization schemes can be more than 50%. It is concluded that accurate visibility estimates require skillful LWC as well as Nd estimates from forecasts. Therefore, the current models can significantly over-/underestimate Vis (with more than 50% uncertainty) depending on environmental conditions. Inclusion of Nd as a prognostic (or parameterized) variable in parameterizations would significantly improve the operational forecast models.
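The Vis = f(LWC, Nd) relationship described above combines liquid water content and droplet number concentration into a single predictor. A hedged sketch of such a power-law form follows; the coefficients are illustrative assumptions of the fitted shape, and the paper's actual constants should be taken from the original:

```python
def visibility_km(lwc, nd, a=1.002, b=-0.6473):
    """Warm-fog visibility (km) as a power law in the product LWC*Nd.
    lwc: liquid water content (g m^-3); nd: droplet number
    concentration (cm^-3). The coefficients a and b are illustrative
    assumptions, not the published fit."""
    return a * (lwc * nd) ** b

# Thicker fog (larger LWC or Nd) yields lower visibility, which is
# why LWC-only schemes can over-/underestimate Vis when Nd varies.
dense = visibility_km(0.2, 200.0)
thin = visibility_km(0.01, 50.0)
```

The design point of the scheme is that two fogs with identical LWC but different Nd produce different visibilities, which an extinction-LWC-only relationship cannot capture.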
New Melting Parameterization for Geodynamic Modelling: Preliminary Results Applied to Plume Setting
NASA Astrophysics Data System (ADS)
Manjón-Cabeza Córdoba, Antonio; Ballmer, Maxim D.
2017-04-01
Melting poses a challenge in geodynamic numerical modelling: thermodynamic models are computationally expensive and present serious restrictions as far as P-T conditions are concerned; on the other hand, simple parameterizations usually cannot address the major element contents of melts, and thus physical properties. Here, we present a new polynomial parameterization based on pMELTS [Ghiorso et al., 2002] to be used in geodynamic models, and we show a first application to a geodynamic model. Our parameterization is adapted for continuous melt fractionation under decompression. The input parameters are the initial pressure of melting, pressure, critical porosity, water content, and temperature. The parameterization can be further calibrated for different rock compositions. It yields the amount of melt retained in the rock and the total degree of melting, plus major element compositions in the form of wt% of oxides such as SiO2, MgO, FeO, CaO, Al2O3, and Na2O. The parameterization has the same limitations as the thermodynamic model on which it is based, and somewhat larger errors due to statistical fitting. In turn, it offers advantages in terms of computational speed and ease of implementation. Most importantly, extrapolation of the model along this parameterization can provide statistically meaningful results. To demonstrate this, we benchmark these results against high-pressure melting experiments. Finally, we show first applications of our parameterization coupled to simple thermomechanical plume models. In these models, different melt compositions are obtained when changing potential temperature, plume buoyancy flux, and plume temperature. Although the parameterization errors are probably too high for petrological purposes (where MELTS and pMELTS should be used instead), it presents an efficient and suitable option for geodynamic models.
Zero-D sensitivity studies with the NCAR CCM land surface parameterization scheme
NASA Astrophysics Data System (ADS)
Henderson-Sellers, A.; Wilson, M. F.; Dickinson, R. E.
1986-05-01
The boundary package of a version of the NCAR Community Climate Model was run as a stand-alone zero-dimensional model. Soil data and a soil parameterization scheme were added to the vegetation parameterization. Sensitivity experiments, including conditions representative of a low-latitude evergreen forest, a sand desert, a high-latitude coniferous forest, high-latitude tundra, and prairie grassland, were undertaken. The land surface scheme shows the greatest sensitivity to soil texture variation, particularly to changes in hydraulic conductivity and diffusivity.
2008-01-01
…develop fast, accurate parameterizations of stratospheric ozone photochemistry (McCormack et al., 2004, 2006). MacKenzie and Harwood (2004… …effect of the CHEM2D-H2O and ECMWF photochemistry parameterizations on stratospheric water vapor, where the relevant photochemical time scales are much… …conditions that lead to the formation of polar mesospheric clouds. The role of additional physical processes, such as molecular diffusion, which is not…
Turbulence Parameterizations for Convective Boundary Layers in High-Resolution Mesoscale Models
2003-12-01
…radars are especially dependent on clear weather conditions for effective operations. For example, dust storms and low cloud cover were weather events… Subject terms: Grid Resolution, Parameterizations, Boundary Layer, Mesoscale Modeling, COAMPS. …Parameterizations in COAMPS using aircraft measurements. This work was also supported in part by a grant of computer time from the DOD high…
Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations
Liu, Gang; Liu, Yangang; Endo, Satoshi
2013-02-01
Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
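Surface flux schemes of the kind evaluated above generally build on bulk aerodynamic formulas. A minimal sketch of the generic form follows; the transfer coefficient and air properties are illustrative values and do not correspond to any one of the six schemes in the study:

```python
def sensible_heat_flux(u, t_sfc, t_air, c_h=1.2e-3, rho=1.2, cp=1004.0):
    """Bulk aerodynamic sensible heat flux (W m^-2): the generic form
    underlying surface-layer parameterizations.
    u: wind speed (m/s); t_sfc, t_air: surface and air temperature (K).
    c_h (transfer coefficient), rho (air density, kg/m^3), and cp
    (specific heat, J/kg/K) are illustrative constants."""
    return rho * cp * c_h * u * (t_sfc - t_air)

def bowen_ratio(h, le):
    """Bowen ratio: sensible heat flux over latent heat flux,
    the quantity the EBBR method uses to partition available energy."""
    return h / le

# A surface warmer than the air drives an upward (positive) flux.
```

In real schemes c_h is not constant but depends on stability and roughness length, which is precisely where the study finds the largest discrepancies against the EC observations.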
Parameterized spectral distributions for meson production in proton-proton collisions
NASA Technical Reports Server (NTRS)
Schneider, John P.; Norbury, John W.; Cucinotta, Francis A.
1995-01-01
Accurate semiempirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations, which depend on both the outgoing meson parallel momentum and the incident proton kinetic energy, can be reduced to very simple analytical formulas suitable for cosmic ray transport through spacecraft walls, interstellar space, the atmosphere, and meteorites.
A parameterization for the absorption of solar radiation by water vapor in the earth's atmosphere
NASA Technical Reports Server (NTRS)
Wang, W.-C.
1976-01-01
A parameterization for the absorption of solar radiation as a function of the amount of water vapor in the earth's atmosphere is obtained. Absorption computations are based on the Goody band model and the near-infrared absorption band data of Ludwig et al. A two-parameter Curtis-Godson approximation is used to treat the inhomogeneous atmosphere. Heating rates based on a frequently used one-parameter pressure-scaling approximation are also discussed and compared with the present parameterization.
Purely bianisotropic scatterers
NASA Astrophysics Data System (ADS)
Albooyeh, M.; Asadchy, V. S.; Alaee, R.; Hashemi, S. M.; Yazdi, M.; Mirmoosa, M. S.; Rockstuhl, C.; Simovski, C. R.; Tretyakov, S. A.
2016-12-01
The polarization response of molecules or meta-atoms to external electric and magnetic fields, which defines the electromagnetic properties of materials, can either be direct (electric field induces electric moment and magnetic field induces magnetic moment) or indirect (magnetoelectric coupling in bianisotropic scatterers). Earlier studies suggest that there is a fundamental bound on the indirect response of all passive scatterers: It is believed to be always weaker than the direct one. In this paper, we prove that there exist scatterers which overcome this bound substantially. Moreover, we show that the amplitudes of electric and magnetic polarizabilities can be negligibly small as compared to the magnetoelectric coupling coefficients. However, we prove that if at least one of the direct-excitation coefficients vanishes, magnetoelectric coupling effects in passive scatterers cannot exist. Our findings open a way to a new class of electromagnetic scatterers and composite materials.
NASA Astrophysics Data System (ADS)
Berg, L. K.; Shrivastava, M.; Easter, R. C.; Fast, J. D.; Chapman, E. G.; Liu, Y.; Ferrare, R. A.
2015-02-01
A new treatment of cloud effects on aerosol and trace gases within parameterized shallow and deep convection, and aerosol effects on cloud droplet number, has been implemented in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) version 3.2.1 that can be used to better understand the aerosol life cycle over regional to synoptic scales. The modifications to the model include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain-Fritsch (KF) cumulus parameterization that has been modified to better represent shallow convective clouds. Testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS). The simulation results are used to investigate the impact of cloud-aerosol interactions on regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +40% for sulfate under non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem are found to account for changes in the cloud droplet number concentration (CDNC) and changes in the chemical composition of cloud droplet residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to the latest version of WRF-Chem, and it is anticipated
Using a resolution function to regulate parameterizations of oceanic mesoscale eddy effects
NASA Astrophysics Data System (ADS)
Hallberg, Robert
2013-12-01
Mesoscale eddies play a substantial role in the dynamics of the ocean, but the dominant length-scale of these eddies varies greatly with latitude, stratification and ocean depth. Global numerical ocean models with spatial resolutions ranging from 1° down to just a few kilometers include both regions where the dominant eddy scales are well resolved and regions where the model's resolution is too coarse for the eddies to form, and hence eddy effects need to be parameterized. However, common parameterizations of eddy effects via a Laplacian diffusion of the height of isopycnal surfaces (a Gent-McWilliams diffusivity) are much more effective at suppressing resolved eddies than in replicating their effects. A variant of the Phillips model of baroclinic instability illustrates how eddy effects might be represented in ocean models. The ratio of the first baroclinic deformation radius to the horizontal grid spacing indicates where an ocean model could explicitly simulate eddy effects; a function of this ratio can be used to specify where eddy effects are parameterized and where they are explicitly modeled. One viable approach is to abruptly disable all the eddy parameterizations once the deformation radius is adequately resolved; at the discontinuity where the parameterization is disabled, isopycnal heights are locally flattened on the one side while eddies grow rapidly off of the enhanced slopes on the other side, such that the total parameterized and eddy fluxes vary continuously at the discontinuity in the diffusivity. This approach should work well with various specifications for the magnitude of the eddy diffusivities.
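The resolution function described above can be caricatured as a multiplier on the Gent-McWilliams diffusivity that depends on how well the grid resolves the first baroclinic deformation radius. The smooth algebraic form below is an illustrative assumption, not the paper's formula (the abstract also describes abruptly disabling the parameterization once the radius is resolved):

```python
def resolution_function(ld, dx, power=2):
    """Fraction of the full eddy (GM) diffusivity retained, as a
    function of deformation radius ld and grid spacing dx (same units).
    Tends to 1 where eddies are unresolved (ld << dx) and to 0 where
    they are well resolved (ld >> dx). The smooth algebraic form is
    an illustrative assumption."""
    r = ld / dx
    return 1.0 / (1.0 + r ** power)

def effective_gm_diffusivity(kappa_gm, ld, dx):
    """Scale a prescribed GM diffusivity (m^2/s) by the resolution
    function, so the parameterization fades out as resolution improves."""
    return kappa_gm * resolution_function(ld, dx)

# Coarse grid (dx >> ld): parameterization near full strength.
# Fine grid (dx << ld): parameterization off; eddies act explicitly.
```

The key design constraint noted in the abstract is that, across the transition, the sum of parameterized and explicitly resolved eddy fluxes should vary continuously even if the diffusivity itself does not.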
Xie, S.; Cederwall, R.T.; Yio, J.J.; Xu, K.M.
2001-05-17
Parameterization of cumulus convection in general circulation models (GCMs) has been recognized as one of the most important and complex issues in model physical parameterizations. In earlier studies, most cumulus parameterizations were developed and evaluated using data observed over tropical oceans, such as the GATE (Global Atmospheric Research Program's Atlantic Tropical Experiment) data, partly because of inadequate field measurements in the midlatitudes. In this study, we compare and evaluate a total of eight types of state-of-the-art cumulus parameterizations used in fifteen Single-Column Models (SCMs) under summertime midlatitude continental conditions, using the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) summer 1997 Intensive Operational Period (IOP) data, which cover several continental convection events. The purpose is to systematically compare and evaluate the performance of these cumulus parameterizations under summertime midlatitude continental conditions. Through the study we hope to identify strengths and weaknesses of these cumulus parameterizations that will lead to further improvements. Here, we briefly present our most interesting results; a full description of this study can be found in Xie et al. (2001).
NASA Astrophysics Data System (ADS)
Liu, Yixiong; Yang, Ce; Song, Xiancheng
2015-04-01
A new airfoil shape parameterization method is developed, which extends the Bezier curve to a generalized form with adjustable shape parameters. The local control parameters at the airfoil leading- and trailing-edge regions, which have a significant effect on the aerodynamic performance of a wind turbine, are enhanced. The results show that this improved parameterization method has advantages in fitting both geometric shape and aerodynamic performance compared with three other common airfoil parameterization methods. The new parameterization method is then applied to airfoil shape optimization for wind turbines using a Genetic Algorithm (GA), and the dedicated wind turbine airfoil DU93-W-210 is optimized to achieve a favorable Cl/Cd at specified flow conditions. The aerodynamic characteristics of the optimum airfoil are obtained by solving the RANS equations with a computational fluid dynamics (CFD) method, and the optimization convergence curves show that the new parameterization method achieves a good convergence rate in fewer generations than the other methods. It is concluded that the new method not only has good controllability and completeness in airfoil shape representation, providing more flexibility in expressing the airfoil geometry, but is also capable of finding efficient and optimal wind turbine airfoils. Additionally, it is shown that a suitable parameterization method helps improve the convergence rate of the optimization algorithm.
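As a sketch of the underlying idea, a Bezier curve represents an airfoil surface through a small set of control points whose coordinates act as shape parameters; the generalized form in the paper adds further adjustable parameters on top of this basis. The control polygon below is hypothetical, chosen only to illustrate the mechanics:

```python
from math import comb

def bezier(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] from its
    control points [(x0, y0), ...] using the Bernstein basis."""
    n = len(control_points) - 1
    x = y = 0.0
    for i, (px, py) in enumerate(control_points):
        b = comb(n, i) * t**i * (1 - t)**(n - i)
        x += b * px
        y += b * py
    return x, y

# Hypothetical upper-surface control polygon for an airfoil-like arc;
# each control-point coordinate is one "shape parameter" an optimizer
# such as a GA could adjust.
upper = [(0.0, 0.0), (0.0, 0.08), (0.4, 0.12), (1.0, 0.0)]
leading_edge = bezier(upper, 0.0)   # (0.0, 0.0)
trailing_edge = bezier(upper, 1.0)  # (1.0, 0.0)
```

Because the curve interpolates its end control points, the leading and trailing edges stay fixed while the interior points reshape the surface, which is why local control at those regions matters for the design space.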
Neutron scattering and models : molybdenum.
Smith, A.B.
1999-05-26
A comprehensive interpretation of the fast-neutron interaction with elemental and isotopic molybdenum at energies of ≤30 MeV is given. New experimental elemental-scattering information over the incident energy range 4.5 to 10 MeV is presented. Spherical, vibrational and dispersive models are deduced and discussed, including isospin, energy-dependent and mass effects. The vibrational models are consistent with the "Lane potential". The importance of dispersion effects is noted. Dichotomies that exist in the literature are removed. The models are vehicles for fundamental physical investigations and for the provision of data for applied purposes. A "regional" molybdenum model is proposed. Finally, recommendations for future work are made.
NASA Astrophysics Data System (ADS)
Hütt, M.-Th.; L'vov, A. I.; Milstein, A. I.; Schumacher, M.
2000-01-01
The concept of Compton scattering by even-even nuclei from giant-resonance to nucleon-resonance energies and the status of experimental and theoretical research in this field are outlined. The description of Compton scattering by nuclei starts from different complementary approaches, namely from second-order S-matrix and from dispersion theories. Making use of these, it is possible to incorporate into the predicted nuclear scattering amplitudes all the information available from other channels, viz. photon-nucleon and photon-meson channels, and to make efficient use of models of the nucleon, the nucleus and the nucleon-nucleon interaction. The total photoabsorption cross section constrains the nuclear scattering amplitude in the forward direction. The specific information obtained from Compton scattering therefore stems from the angular dependence of the nuclear scattering amplitude, providing detailed insight into the dynamics of the nuclear and nucleon degrees of freedom and into the interplay between them. Nuclear Compton scattering in the giant-resonance energy region provides information on the dynamical properties of the in-medium mass of the nucleon. Most prominently, the electromagnetic polarizabilities of the nucleon in the nuclear medium can be extracted from nuclear Compton scattering data obtained in the quasi-deuteron energy region. In our description of this latter process, special emphasis is laid upon the exploration of many-body and two-body effects entering into the nuclear dynamics. Recent results are presented for two-body effects due to the mesonic seagull amplitude and due to the excitation of nucleon internal degrees of freedom accompanied by meson exchanges. Owing to these studies the in-medium electromagnetic polarizabilities are by now well understood, whereas the understanding of nuclear Compton scattering in the Δ-resonance range is only beginning. Furthermore, phenomenological methods for including retardation effects in the
Inelastic Light Scattering Processes
NASA Technical Reports Server (NTRS)
Fouche, Daniel G.; Chang, Richard K.
1973-01-01
Five different inelastic light scattering processes will be denoted by: ordinary Raman scattering (ORS), resonance Raman scattering (RRS), off-resonance fluorescence (ORF), resonance fluorescence (RF), and broad fluorescence (BF). A distinction between fluorescence (including ORF and RF) and Raman scattering (including ORS and RRS) will be made in terms of the number of intermediate molecular states which contribute significantly to the scattered amplitude, and not in terms of excited-state lifetimes or virtual versus real processes. The theory of these processes will be reviewed, including the effects of pressure, laser wavelength, and laser spectral distribution on the scattered intensity. The application of these processes to the remote sensing of atmospheric pollutants will be discussed briefly. It will be pointe