Grogan, Brandon Robert
2010-03-01
This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using
Grogan, Brandon R
2010-05-01
This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
Lobato, I; Van Dyck, D
2015-08-01
The steadily improving experimental possibilities in instrumental resolution as in sensitivity and quantization of the data recording put increasingly higher demands on the precision of the scattering factors, which are the key ingredients for electron diffraction or high-resolution imaging simulation. In the present study, we will systematically investigate the accuracy of fitting of the main parameterizations of the electron scattering factor for the calculation of electron diffraction intensities. It is shown that the main parameterizations of the electron scattering factor are consistent to calculate electron diffraction intensities for thin specimens and low angle scattering. Parameterizations of the electron scattering factor with the correct asymptotic behavior (Lobato and Dyck [5], Kirkland [4], and Weickenmeier and Kohl [2]) produce similar results for both the undisplaced lattice model and the frozen phonon model, except for certain thicknesses and reflections.
Cross section parameterizations for cosmic ray nuclei. 1: Single nucleon removal
NASA Technical Reports Server (NTRS)
Norbury, John W.; Townsend, Lawrence W.
1992-01-01
Parameterizations of single nucleon removal from electromagnetic and strong interactions of cosmic rays with nuclei are presented. These parameterizations are based upon the most accurate theoretical calculations available to date. They should be very suitable for use in cosmic ray propagation through interstellar space, the Earth's atmosphere, lunar samples, meteorites, spacecraft walls and lunar and martian habitats.
Parameterization of the scattering and absorption properties of individual ice crystals
Yang, Ping; Liou, K. N.; Wyser, Klaus; Mitchell, David
2000-02-27
We present parameterizations of the single-scattering properties for individual ice crystals of various habits based on the results computed from the accurate light scattering calculations. The projected area, volume, and single-scattering properties of ice crystals with various shapes and sizes are computed for 56 narrow spectral bands covering 0.2-5 {mu}m. The ice crystal habits considered in this study are hexagonal plates, solid and hollow columns, planar and spatial bullet rosette, and aggregates that are commonly observed in cirrus clouds. Using the observational relationships between the aspect ratios and the sizes of ice crystals, we can define the three-dimensional structure of these ice crystal habits with respect to their maximum dimensions for light scattering calculations. The volume and projected area of ice crystals, expressed in terms of the diameters of the corresponding equivalent spheres, are first parameterized by employing the ice crystal maximum dimensions. Further, various analytical expressions as functions of the effective dimensions of ice crystals have been developed to parameterize the extinction and absorption efficiencies, asymmetry factor, and the truncation of the forward peak energy in the phase function. The present parameterization scheme provides an efficient approach to obtain the basic scattering and absorption properties of nonspherical ice crystals. (c) 2000 American Geophysical Union.
NASA Astrophysics Data System (ADS)
Alvarado, Matthew J.; Lonsdale, Chantelle R.; Macintyre, Helen L.; Bian, Huisheng; Chin, Mian; Ridley, David A.; Heald, Colette L.; Thornhill, Kenneth L.; Anderson, Bruce E.; Cubison, Michael J.; Jimenez, Jose L.; Kondo, Yutaka; Sahu, Lokesh K.; Dibb, Jack E.; Wang, Chien
2016-07-01
Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosol and Clouds (OPAC v3.1) package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP v2.1) to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10-23 %, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass fraction
Mang, J.T.; Hjelm, R.P.; Skidmore, C.B.; Howe, P.M.
1996-07-01
High explosive materials used in the nuclear stockpile are composites of crystalline high explosives (HE) with binder materials, such as Estane. In such materials, there are naturally occurring density fluctuations (defects) due to cracks, internal (in the HE) and external (in the binder) voids and other artifacts of preparation. Changes in such defects due to material aging can affect the response of explosives due to shock, impact and thermal loading. Modeling efforts are attempting to provide quantitative descriptions of explosive response from the lowest ignition thresholds to the development of full blown detonations and explosions, however, adequate descriptions of these processes require accurate measurements of a number of structural parameters of the HE composite. Since different defects are believed to affect explosive sensitivity in different ways it is necessary to quantitatively differentiate between defect types. The authors report here preliminary results of SANS measurements on surrogates for HE materials. The objective of these measurements was to develop methodologies using SANS techniques to parameterize internal void size distributions in a surrogate material, sugar, to simulate an HE used in the stockpile, HMX. Sugar is a natural choice as a surrogate material, as it has the same crystal structure, has similar intragranular voids and has similar mechanical properties as HMX. It is used extensively as a mock material for explosives. Samples were used with two void size distributions: one with a sufficiently small mean particle size that only small occluded voids are present in significant concentrations, and one where the void sizes could be larger. By using methods in small-angle neutron scattering, they were able to isolate the scattering arising from particle-liquid interfaces and internal voids.
Laser scattering measurement for laser removal of graffiti
NASA Astrophysics Data System (ADS)
Tearasongsawat, Watcharawee; Kittiboonanan, Phumipat; Luengviriya, Chaiya; Ratanavis, Amarin
2015-07-01
In this contribution, a technical development of the laser scattering measurement for laser removal of graffiti is reported. This study concentrates on the removal of graffiti from metal surfaces. Four colored graffiti paints were applied to stainless steel samples. Cleaning efficiency was evaluated by the laser scattering system. In this study, an angular laser removal of graffiti was attempted to examine the removal process under practical conditions. A Q-switched Nd:YAG laser operating at 1.06 microns with the repetition rate of 1 Hz was used to remove graffiti from stainless steel samples. The laser fluence was investigated from 0.1 J/cm2 to 7 J/cm2. The laser parameters to achieve the removal effectiveness were determined by using the laser scattering system. This study strongly leads to further development of the potential online surface inspection for the removal of graffiti.
NASA Astrophysics Data System (ADS)
Räisänen, Petri
1999-02-01
The parameterization of cloud shortwave absorption poses a difficult problem in broadband radiation schemes that treat the near-IR region as a single interval. This problem arises because the spectral variation of the single-scattering co-albedo 1 of cloud droplets and ice crystals is enormous in the near-IR region, and because the cloud particle absorption is overlapped by sharply varying water vapor absorption. In this paper, several parameterization methods of cloud near-IR (0.68-4.00 m) 1 are intercompared using a large set of atmospheric columns generated by a GCM. The methods include 1) linear averaging of 1 , weighting with the TOA solar flux; 2) `thick averaging' by Edwards and Slingo; 3) Fouquart's formula, which presents water cloud near-IR 1 as a function of optical thickness; and 4) the `correlated ' technique by Espinoza and Harshvardhan. An extension of the correlated technique to ice clouds is suggested. In addition, a new `adaptive ' broadband parameterization technique is developed and tested. In this method, the near-IR 1 of a cloud layer is parameterized in terms of the cloud properties (phase, optical thickness, and effective particle size) and the properties of the overlying atmosphere (slant vapor path and clouds). Two slightly different versions of the method are considered.The results of the intercomparison indicate that the adaptive method yields higher accuracy than the other broadband techniques tested. Linear averaging is by far the least accurate method; in particular, it is shown that linear averaging of near-IR 1 can lead to substantially overestimated absorption in ice clouds also. However, when the near-IR region is subdivided into three bands, the combination of thick averaging for water clouds and linear averaging for ice clouds provides results superior to those of all the broadband methods.
Scattering removal for finger-vein image restoration.
Yang, Jinfeng; Zhang, Ben; Shi, Yihua
2012-01-01
Finger-vein recognition has received increased attention recently. However, the finger-vein images are always captured in poor quality. This certainly makes finger-vein feature representation unreliable, and further impairs the accuracy of finger-vein recognition. In this paper, we first give an analysis of the intrinsic factors causing finger-vein image degradation, and then propose a simple but effective image restoration method based on scattering removal. To give a proper description of finger-vein image degradation, a biological optical model (BOM) specific to finger-vein imaging is proposed according to the principle of light propagation in biological tissues. Based on BOM, the light scattering component is sensibly estimated and properly removed for finger-vein image restoration. Finally, experimental results demonstrate that the proposed method is powerful in enhancing the finger-vein image contrast and in improving the finger-vein image matching accuracy.
NASA Astrophysics Data System (ADS)
Pokhrel, Rudra P.; Wagner, Nick L.; Langridge, Justin M.; Lack, Daniel A.; Jayarathne, Thilina; Stone, Elizabeth A.; Stockwell, Chelsea E.; Yokelson, Robert J.; Murphy, Shane M.
2016-08-01
Single-scattering albedo (SSA) and absorption Ångström exponent (AAE) are two critical parameters in determining the impact of absorbing aerosol on the Earth's radiative balance. Aerosol emitted by biomass burning represent a significant fraction of absorbing aerosol globally, but it remains difficult to accurately predict SSA and AAE for biomass burning aerosol. Black carbon (BC), brown carbon (BrC), and non-absorbing coatings all make substantial contributions to the absorption coefficient of biomass burning aerosol. SSA and AAE cannot be directly predicted based on fuel type because they depend strongly on burn conditions. It has been suggested that SSA can be effectively parameterized via the modified combustion efficiency (MCE) of a biomass burning event and that this would be useful because emission factors for CO and CO2, from which MCE can be calculated, are available for a large number of fuels. Here we demonstrate, with data from the FLAME-4 experiment, that for a wide variety of globally relevant biomass fuels, over a range of combustion conditions, parameterizations of SSA and AAE based on the elemental carbon (EC) to organic carbon (OC) mass ratio are quantitatively superior to parameterizations based on MCE. We show that the EC / OC ratio and the ratio of EC / (EC + OC) both have significantly better correlations with SSA than MCE. Furthermore, the relationship of EC / (EC + OC) with SSA is linear. These improved parameterizations are significant because, similar to MCE, emission factors for EC (or black carbon) and OC are available for a wide range of biomass fuels. Fitting SSA with MCE yields correlation coefficients (Pearson's r) of ˜ 0.65 at the visible wavelengths of 405, 532, and 660 nm while fitting SSA with EC / OC or EC / (EC + OC) yields a Pearson's r of 0.94-0.97 at these same wavelengths. The strong correlation coefficient at 405 nm (r = 0.97) suggests that parameterizations based on EC / OC or EC / (EC + OC) have good predictive
NASA Astrophysics Data System (ADS)
Yu, Ting; Chaix, Jean-François; Komatitsch, Dimitri; Garnier, Vincent; Audibert, Lorenzo; Henault, Jean-Marie
2017-02-01
Multiple scattering is important when ultrasounds propagate in a heterogeneous medium such as concrete, the scatterer size of which is in the order of the wavelength. The aim of this work is to build a 2D numerical model of ultrasonic wave propagation integrating the multiple scattering phenomena in SPECFEM software. The coherent field of multiple scattering could be obtained by averaging numerical wave fields, and it is used to determine the effective phase velocity and attenuation corresponding to an equivalent homogeneous medium. After the creation of numerical model under several assumptions, its validation is completed in a case of scattering by one cylinder through the comparison with analytical solution. Two cases of multiple scattering by a set of cylinders at different concentrations are simulated to perform a parametric study (of frequency, scatterer concentration, scatterer size). The effective properties are compared with the predictions of Waterman-Truell model as well, to verify its validity.
Seed removal by scatter-hoarding rodents: the effects of tannin and nutrient concentration.
Wang, Bo; Yang, Xiaolan
2015-04-01
The mutualistic interaction between scatter-hoarding rodents and seed plants have a long co-evolutionary history. Plants are believed to have evolved traits that influence the foraging behavior of rodents, thus increasing the probability of seed removal and caching, which benefits the establishment of seedlings. Tannin and nutrient content in seeds are considered among the most essential factors in this plant-animal interaction. However, most previous studies used different species of plant seeds, rendering it difficult to tease apart the relative effect of each single nutrient on rodent foraging behavior due to confounding combinations of nutrient contents across seed species. Hence, to further explore how tannin and different nutritional traits of seed affect scatter-hoarding rodent foraging preferences, we manipulated tannin, fat, protein and starch content levels, and also seed size levels by using an artificial seed system. Our results showed that both tannin and various nutrients significantly affected rodent foraging preferences, but were also strongly affected by seed size. In general, rodents preferred to remove seeds with less tannin. Fat addition could counteract the negative effect of tannin on seed removal by rodents, while the effect of protein addition was weaker. Starch by itself had no effect, but it interacted with tannin in a complex way. Our findings shed light on the effects of tannin and nutrient content on seed removal by scatter-hoarding rodents. We therefore, believe that these and perhaps other seed traits should interactively influence this important plant-rodent interaction. However, how selection operates on seed traits to counterbalance these competing interests/factors merits further study.
Yoon, Yongsu; Morishita, Junji; Park, MinSeok; Kim, Hyunji; Kim, Kihyun; Kim, Jungmin
2016-01-01
The purpose of this study is to investigate the feasibility of a novel indirect flat panel detector (FPD) system for removing scatter radiation. The substrate layer of our FPD system has a Pb net-like structure that matches the ineffective area and blocks the scatter radiation such that only primary X-rays reach the effective area on a thin-film transistor. To evaluate the performance of the proposed system, we used Monte Carlo simulations to derive the scatter fraction and contrast. The scatter fraction of the proposed system is lower than that of a parallel grid system, and the contrast is superior to that of a system without a grid. If the structure of the proposed FPD system is optimized with respect to the specifications of a specific detector, the purpose of the examination, and the energy range used, the FPD can be useful in diagnostic radiology.
NASA Astrophysics Data System (ADS)
Rana, R.; Jain, A.; Shankar, A.; Bednarek, D. R.; Rudin, S.
2016-03-01
In radiography, one of the best methods to eliminate image-degrading scatter radiation is the use of anti-scatter grids. However, with high-resolution dynamic imaging detectors, stationary anti-scatter grids can leave grid-line shadows and moiré patterns on the image, depending upon the line density of the grid and the sampling frequency of the x-ray detector. Such artifacts degrade the image quality and may mask small but important details such as small vessels and interventional device features. Appearance of these artifacts becomes increasingly severe as the detector spatial resolution is improved. We have previously demonstrated that, to remove these artifacts by dividing out a reference grid image, one must first subtract the residual scatter that penetrates the grid; however, for objects with anatomic structure, scatter varies throughout the FOV and a spatially differing amount of scatter must be subtracted. In this study, a standard stationary Smit-Rontgen X-ray grid (line density - 70 lines/cm, grid ratio - 13:1) was used with a high-resolution CMOS detector, the Dexela 1207 (pixel size - 75 micron) to image anthropomorphic head phantoms. For a 15 x 15cm FOV, scatter profiles of the anthropomorphic head phantoms were estimated then iteratively modified to minimize the structured noise due to the varying grid-line artifacts across the FOV. Images of the anthropomorphic head phantoms taken with the grid, before and after the corrections, were compared demonstrating almost total elimination of the artifact over the full FOV. Hence, with proper computational tools, antiscatter grid artifacts can be corrected, even during dynamic sequences.
Radiation properties and emissivity parameterization of high level thin clouds
NASA Technical Reports Server (NTRS)
Wu, M.-L. C.
1984-01-01
To parameterize emissivity of clouds at 11 microns, a study has been made in an effort to understand the radiation field of thin clouds. The contributions to the intensity and flux from different sources and through different physical processes are calculated by using the method of successive orders of scattering. The effective emissivity of thin clouds is decomposed into the effective absorption emissivity, effective scattering emissivity, and effective reflection emissivity. The effective absorption emissivity depends on the absorption and emission of the cloud; it is parameterized in terms of optical thickness. The effective scattering emissivity depends on the scattering properties of the cloud; it is parameterized in terms of optical thickness and single scattering albedo. The effective reflection emissivity follows the similarity relation as in the near infrared cases. This is parameterized in terms of the similarity parameter and optical thickness, as well as the temperature difference between the cloud and ground.
Thomas, P J; Midgley, P A
2001-08-01
The increased spectral information obtained by acquiring an EFTEM image-series over several hundred eV allows plural scattering to be removed from loss images using standard deconvolution techniques developed for the quantification of EEL spectra. In this work, both Fourier-log and Fourier-ratio deconvolution techniques have been applied successfully to such image-series. Application of the Fourier-log technique over an energy-loss range of several hundred eV has been achieved by implementation of a novel method that extends the effective dynamic range of EFTEM image-series acquisition by over four orders of magnitude. Experimental results show that the removal of plural scattering from EFTEM image-series gives a significant improvement in quantification for thicker specimen regions. Further, the recovery of the single-scattering distribution using the Fourier-log technique over an extended energy-loss range is shown to result in an increase in both the ionisation-edge jump-ratio and the signal-to-noise ratio.
Parameterizing turbulence over abrupt topography
NASA Astrophysics Data System (ADS)
Klymak, Jody
2016-11-01
Stratified flow over abrupt topography generates a spectrum of propagating internal waves at large scales, and non-linear overturning breaking waves at small scales. For oscillating flows, the large scale waves propagate away as internal tides, for steady flows the large-scale waves propagate away as standing "columnar modes". At small-scales, the breaking waves appear to be similar for either oscillating or steady flows, so long as in the oscillating case the topography is significantly steeper than the internal tide angle of propagation. The size and energy lost to the breaking waves can be predicted relatively well from assuming that internal modes that propagate horizontally more slowly than the barotropic internal tide speed are arrested and their energy goes to turbulence. This leads to a recipe for dissipation of internal tides at abrupt topography that is quite robust for both the local internal tide generation problem (barotropic forcing) and for the scattering problem (internal tides incident on abrupt topography). Limitations arise when linear generation models break down, an example of which is interference between two ridges. A single "super-critical" ridge is well-modeled by a single knife-edge topography, regardless of its actual shape, but two supercritical ridges in close proximity demonstrate interference of the high modes that makes knife-edfe approximations invalid. Future direction of this research will be to use more complicated linear models to estimate the local dissipation. Of course, despite the large local dissipation, many ridges radiate most of their energy into the deep ocean, so tracking this low-mode radiated energy is very important, particularly as it means dissipation parameterizations in the open ocean due to these sinks from the surface tide cannot be parameterized locally to where they are lost from the surface tide, but instead lead to non-local parameterizations. US Office of Naval Research; Canadian National Science and
Satellite-Based Model Parameterization of Diabatic Heating
NASA Technical Reports Server (NTRS)
Pielke, Roger, Sr.; Stokowski, David; Wang, Jih-Wang; Vukicevic, Tomislava; Leoncini, Giovanni; Matsui, Toshihisa; Castro, Christopher L.; Niyogi, Dev; Kishtawal, Chandra M.; Biazar, Arastoo; Doty, Kevin; McNider, Richard T.; Nair, Udaysankar; Tao, Wei-Kuo
2007-01-01
Future meteorological satellites are expected to provide much needed fine-scale information that can improve the accuracy of weather and climate models. As one application of this improved capability, we introduce the concept of a generalized parameterization framework using satellite datasets that will increase the accuracy and the computational efficiency of weather and climate modeling. In an atmospheric model, several different parameterizations usually are used to reproduce the various physical processes. However, it is generally unrealistic to separate the processes in this artificial way since the observations and physics make no such artificial separation. Thus, we propose a new unified parameterization framework to remove the unrealistic separation between parameterizations.
The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)
2000-01-01
The microphysical parameterization of clouds and rain-cells plays a central role in atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density., shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that, enlarging the rain drop size or adding water to the partly Frozen hydrometeor mix warms brightness temperatures by up to .55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture LA Experiment (CAMEX 3) brightness temperatures shows that in general all but, two parameterizations produce calculated T(sub B)'s that fall within the observed clear-air minima and maxima. The exceptions are for parameterizations that, enhance the scattering characteristics of frozen hydrometeors.
Yoon, Y; Park, M; Kim, H; Kim, K; Kim, J; Morishita, J
2015-06-15
Purpose: This study aims to identify the feasibility of a novel cesium-iodine (CsI)-based flat-panel detector (FPD) for removing scatter radiation in diagnostic radiology. Methods: The indirect FPD comprises three layers: a substrate, scintillation, and thin-film-transistor (TFT) layer. The TFT layer has a matrix structure with pixels. There are ineffective dimensions on the TFT layer, such as the voltage and data lines; therefore, we devised a new FPD system having net-like lead in the substrate layer, matching the ineffective area, to block the scatter radiation so that only primary X-rays could reach the effective dimension.To evaluate the performance of this new FPD system, we conducted a Monte Carlo simulation using MCNPX 2.6.0 software. Scatter fractions (SFs) were acquired using no grid, a parallel grid (8:1 grid ratio), and the new system, and the performances were compared.Two systems having different thicknesses of lead in the substrate layer—10 and 20μm—were simulated. Additionally, we examined the effects of different pixel sizes (153×153 and 163×163μm) on the image quality, while keeping the effective area of pixels constant (143×143μm). Results: In case of 10μm lead, the SFs of the new system (∼11%) were lower than those of the other system (∼27% with no grid, ∼16% with parallel grid) at 40kV. However, as the tube voltage increased, the SF of new system (∼19%) was higher than that of parallel grid (∼18%) at 120kV. In the case of 20μm lead, the SFs of the new system were lower than those of the other systems at all ranges of the tube voltage (40–120kV). Conclusion: The novel CsI-based FPD system for removing scatter radiation is feasible for improving the image contrast but must be optimized with respect to the lead thickness, considering the system’s purposes and the ranges of the tube voltage in diagnostic radiology. This study was supported by a grant(K1422651) from Institute of Health Science, Korea University.
Advanced Surface Flux Parameterization
2001-09-30
within PE 0602435N are BE-35-2-18, for the Mesoscale Modeling of the Atmos- phere and Aerosols, BE-35-2-19, and for the Exploratory Data Assimilation ... Methods . Related project at NPS is N0001401WR20242 for Evaluating Surface Flux and Boundary Layer Parameterizations in Mesoscale Models Using
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry, radiation versus turbulence/convection stochastic approach, non-linearities, Monte Carlo, high resolutions, large-Eddy Simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-clod interaction, hurricane forecasts
Dynamic Parameterization of IPSEC
2001-12-01
EXPECTED BENFITS OF THE RESEARCH ............................................2 D. RESEARCH OBJECTIVES...182 3. Explore Proposal Caching Issues ...................................................182 4. Security Policy Editor...C. EXPECTED BENFITS OF THE RESEARCH By providing dynamic parameterization to IPsec, government and military 3 security systems will be able to
Lightweight Parameterized Suffix Array Construction
NASA Astrophysics Data System (ADS)
Tomohiro, I.; Deguchi, Satoshi; Bannai, Hideo; Inenaga, Shunsuke; Takeda, Masayuki
We present a first algorithm for direct construction of parameterized suffix arrays and parameterized longest common prefix arrays for non-binary strings. Experimental results show that our algorithm is much faster than naïve methods.
Determinants of seed removal distance by scatter-hoarding rodents in deciduous forests.
Moore, Jeffrey E; McEuen, Amy B; Swihart, Robert K; Contreras, Thomas A; Steele, Michael A
2007-10-01
Scatter-hoarding rodents should space food caches to maximize cache recovery rate (to minimize loss to pilferers) relative to the energetic cost of carrying food items greater distances. Optimization models of cache spacing make two predictions. First, spacing of caches should be greater for food items with greater energy content. Second, the mean distance between caches should increase with food abundance. However, the latter prediction fails to account for the effect of food abundance on the behavior of potential pilferers or on the ability of caching individuals to acquire food by means other than recovering their own caches. When considering these factors, shorter cache distances may be predicted in conditions of higher food abundance. We predicted that seed caching distances would be greater for food items of higher energy content and during lower ambient food abundance and that the effect of seed type on cache distance variation would be lower during higher food abundance. We recorded distances moved for 8636 seeds of five seed types at 15 locations in three forested sites in Pennsylvania, USA, and 29 forest fragments in Indiana, U.S.A., across five different years. Seed production was poor in three years and high in two years. Consistent with previous studies, seeds with greater energy content were moved farther than less profitable food items. Seeds were dispersed less far in seed-rich years than in seed-poor years, contrary to predictions of conventional models. Interactions were important, with seed type effects more evident in seed-poor years. These results suggest that, when food is superabundant, optimal cache distances are more strongly determined by minimizing energy cost of caching than by minimizing pilfering rates and that cache loss rates may be more strongly density-dependent in times of low seed abundance.
[Characteristics and Parameterization for Atmospheric Extinction Coefficient in Beijing].
Chen, Yi-na; Zhao, Pu-sheng; He, Di; Dong, Fan; Zhao, Xiu-juan; Zhang, Xiao-ling
2015-10-01
In order to study the characteristics of atmospheric extinction coefficient in Beijing, systematic measurements had been carried out for atmospheric visibility, PM2.5 concentration, scattering coefficient, black carbon, reactive gases, and meteorological parameters from 2013 to 2014. Based on these data, we compared some published fitting schemes of aerosol light scattering enhancement factor [ f(RH)], and discussed the characteristics and the key influence factors for atmospheric extinction coefficient. Then a set of parameterization models of atmospheric extinction coefficient for different seasons and different polluted levels had been established. The results showed that aerosol scattering accounted for more than 94% of total light extinction. In the summer and autumn, the aerosol hygroscopic growth caused by high relative humidity had increased the aerosol scattering coefficient by 70 to 80 percent. The parameterization models could reflect the influencing mechanism of aerosol and relative humidity upon ambient light extinction, and describe the seasonal variations of aerosol light extinction ability.
Parameterization of solar cells
NASA Technical Reports Server (NTRS)
Appelbaum, J.; Chait, A.; Thompson, D.
1992-01-01
The aggregation (sorting) of the individual solar cells into an array is commonly based on a single operating point on the current-voltage (I-V) characteristic curve. An alternative approach for cell performance prediction and cell screening is provided by modeling the cell using an equivalent electrical circuit, in which the parameters involved are related to the physical phenomena in the device. These analytical models may be represented by a double exponential I-V characteristic with seven parameters, by a double exponential model with five parameters, or by a single exponential equation with four or five parameters. In this article we address issues concerning methodologies for the determination of solar cell parameters based on measured data points of the I-V characteristic, and introduce a procedure for screening of solar cells for arrays. We show that common curve fitting techniques, e.g., least squares, may produce many combinations of parameter values while maintaining a good fit between the fitted and measured I-V characteristics of the cell. Therefore, techniques relying on curve fitting criteria alone cannot be directly used for cell parameterization. We propose a consistent procedure which takes into account the entire set of parameter values for a batch of cells. This procedure is based on a definition of a mean cell representing the batch, and takes into account the relative contribution of each parameter to the overall goodness of fit. The procedure is demonstrated on a batch of 50 silicon cells for Space Station Freedom.
A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals
NASA Technical Reports Server (NTRS)
VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.
2014-01-01
A parameterization is presented that provides extinction cross section sigma (sub e), single-scattering albedo omega, and asymmetry parameter (g) of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave. Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion levels. In the standard geometric optics implementation used here, sigma (sub e) is always twice the particle projected area. It is shown that omega is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of (g) with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and omega are investigated and factors are determined to scale the parameterized (g) to provide values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 micron, revealing absolute differences with reference calculations of omega and (g) that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences with reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.
NASA Technical Reports Server (NTRS)
Hong, Byungsik; Maung, Khin Maung; Wilson, John W.; Buck, Warren W.
1989-01-01
The derivations of the Lippmann-Schwinger equation and Watson multiple scattering are given. A simple optical potential is found to be the first term of that series. The number density distribution models of the nucleus, harmonic well, and Woods-Saxon are used without t-matrix taken from the scattering experiments. The parameterized two-body inputs, which are kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to the imaginary part of the forward elastic scattering amplitude, are presented. The eikonal approximation was chosen as our solution method to estimate the total and absorptive cross sections for the kaon-nucleus scattering.
Summary of Cumulus Parameterization Workshop
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Starr, David OC.; Hou, Arthur; Newman, Paul; Sud, Yogesh
2002-01-01
A workshop on cumulus parameterization took place at the NASA Goddard Space Flight Center from December 3-5, 2001. The major objectives of this workshop were (1) to review the problem of representation of moist processes in large-scale models (mesoscale models, Numerical Weather Prediction models and Atmospheric General Circulation Models), (2) to review the state-of-the-art in cumulus parameterization schemes, and (3) to discuss the need for future research and applications. There were a total of 31 presentations and about 100 participants from the United States, Japan, the United Kingdom, France and South Korea. The specific presentations and discussions during the workshop are summarized in this paper.
Parameterization of photon beam dosimetry for a linear accelerator
Lebron, Sharon; Barraclough, Brendan; Lu, Bo; Yan, Guanghua; Kahler, Darren; Li, Jonathan G.; Liu, Chihray
2016-02-15
Purpose: In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS modeled data, (3) the quality assurance process of a linear accelerator’s (Linac) beam characteristics, (4) the establishment of a standard data set for comparison with other data, etcetera. Parameterization of the photon beam dosimetry creates a data set that is portable and easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon beam dosimetric quantities, including percentage depth doses (PDDs), profiles, and total scatter output factors (S{sub cp}). Methods: S{sub cp}, PDDs, and profiles for different field sizes, depths, and energies were measured for a Linac using a cylindrical 3D water scanning system. All data were smoothed for the analysis and profile data were also centered, symmetrized, and geometrically scaled. The S{sub cp} data were analyzed using an exponential function. The inverse square factor was removed from the PDD data before modeling and the data were subsequently analyzed using exponential functions. For profile modeling, one halfside of the profile was divided into three regions described by exponential, sigmoid, and Gaussian equations. All of the analytical functions are field size, energy, depth, and, in the case of profiles, scan direction specific. The model’s parameters were determined using the minimal amount of measured data necessary. The model’s accuracy was evaluated via the calculation of absolute differences between the measured (processed) and calculated data in low gradient regions and distance-to-agreement analysis in high gradient regions. Finally, the results of dosimetric quantities obtained by the fitted models for a different machine were also assessed. Results: All of the differences in the PDDs’ buildup and the
NASA Astrophysics Data System (ADS)
Roh, Y. H.; Yoon, Y.; Kim, K.; Kim, J.; Kim, J.; Morishita, J.
2016-10-01
Scattered radiation is the main reason for the degradation of image quality and the increased patient exposure dose in diagnostic radiology. In an effort to reduce scattered radiation, a novel structure of an indirect flat panel detector has been proposed. In this study, a performance evaluation of the novel system in terms of image contrast as well as an estimation of the number of photons incident on the detector and the grid exposure factor were conducted using Monte Carlo simulations. The image contrast of the proposed system was superior to that of the no-grid system but slightly inferior to that of the parallel-grid system. The number of photons incident on the detector and the grid exposure factor of the novel system were higher than those of the parallel-grid system but lower than those of the no-grid system. The proposed system exhibited the potential for reduced exposure dose without image quality degradation; additionally, can be further improved by a structural optimization considering the manufacturer's specifications of its lead contents.
Haag's Theorem and Parameterized Quantum Field Theory
NASA Astrophysics Data System (ADS)
Seidewitz, Edwin
2017-01-01
``Haag's theorem is very inconvenient; it means that the interaction picture exists only if there is no interaction''. In traditional quantum field theory (QFT), Haag's theorem states that any field unitarily equivalent to a free field must itself be a free field. But the derivation of the Dyson series perturbation expansion relies on the use of the interaction picture, in which the interacting field is unitarily equivalent to the free field, but which must still account for interactions. So, the usual derivation of the scattering matrix in QFT is mathematically ill defined. Nevertheless, perturbative QFT is currently the only practical approach for addressing realistic scattering, and it has been very successful in making empirical predictions. This success can be understood through an alternative derivation of the Dyson series in a covariant formulation of QFT using an invariant, fifth path parameter in addition to the usual four position parameters. The parameterization provides an additional degree of freedom that allows Haag's Theorem to be avoided, permitting the consistent use of a form of interaction picture in deriving the Dyson expansion. The extra symmetry so introduced is then broken by the choice of an interacting vacuum.
Bayesian Inversion of Seabed Scattering Data
2014-09-30
Bayesian Inversion of Seabed Scattering Data (Special Research Award in Ocean Acoustics ) Gavin A.M.W. Steininger School of Earth & Ocean...Figure 1: Schematic diagram of the environmental parameterizations for the monostatic- scattering kernel and reflection- coefficient forward and inverse...frequencies. Left two columns: scattering data; right two columns: reflection- coefficient data. 3 layers, hence accounting for the uncertainty of
Parameterized post-Newtonian cosmology
NASA Astrophysics Data System (ADS)
Sanghai, Viraj A. A.; Clifton, Timothy
2017-03-01
Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).
A Thermal Infrared Radiation Parameterization for Atmospheric Studies
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)
2001-01-01
This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996-version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes the absorption due to major gaseous absorption (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches of computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed either using the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of the high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
NASA Technical Reports Server (NTRS)
Hong, Byungsik; Buck, Warren W.; Maung, Khin M.
1989-01-01
Two kinds of number density distributions of the nucleus, harmonic well and Woods-Saxon models, are used with the t-matrix that is taken from the scattering experiments to find a simple optical potential. The parameterized two body inputs, which are kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to imaginary part of the forward elastic scattering amplitude, are shown. The eikonal approximation was chosen as the solution method to estimate the total and absorptive cross sections for the kaon-nucleus scattering.
NASA Astrophysics Data System (ADS)
Smith, Helen R.; Baran, Anthony J.; Hesse, Evelyn; Hill, Peter G.; Connolly, Paul J.; Webb, Ann
2016-11-01
A single habit parameterization for the shortwave optical properties of cirrus is presented. The parameterization utilizes a hollow particle geometry, with stepped internal cavities as identified in laboratory and field studies. This particular habit was chosen as both experimental and theoretical results show that the particle exhibits lower asymmetry parameters when compared to solid crystals of the same aspect ratio. The aspect ratio of the particle was varied as a function of maximum dimension, D, in order to adhere to the same physical relationships assumed in the microphysical scheme in a configuration of the Met Office atmosphere-only global model, concerning particle mass, size and effective density. Single scattering properties were then computed using T-Matrix, Ray Tracing with Diffraction on Facets (RTDF) and Ray Tracing (RT) for small, medium, and large size parameters respectively. The scattering properties were integrated over 28 particle size distributions as used in the microphysical scheme. The fits were then parameterized as simple functions of Ice Water Content (IWC) for 6 shortwave bands. The parameterization was implemented into the GA6 configuration of the Met Office Unified Model along with the current operational long-wave parameterization. The GA6 configuration is used to simulate the annual twenty-year short-wave (SW) fluxes at top-of-atmosphere (TOA) and also the temperature and humidity structure of the atmosphere. The parameterization presented here is compared against the current operational model and a more recent habit mixture model.
A Solar Radiation Parameterization for Atmospheric Studies. Volume 15
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J. (Editor)
1999-01-01
The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models are described. It includes the absorption by water vapor, O3, O2, CO2, clouds, and aerosols and the scattering by clouds, aerosols, and gases. Depending upon the nature of absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.
Infrared radiation parameterizations in numerical climate models
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Kratz, David P.; Ridgway, William
1991-01-01
This study presents various approaches to parameterizing the broadband transmission functions for utilization in numerical climate models. One-parameter scaling is applied to approximate a nonhomogeneous path with an equivalent homogeneous path, and the diffuse transmittances are either interpolated from precomputed tables or fit by analytical functions. Two-parameter scaling is applied to parameterizing the carbon dioxide and ozone transmission functions in both the lower and middle atmosphere. Parameterizations are given for the nitrous oxide and methane diffuse transmission functions.
Parameterization of solar flare dose
Lamarche, A.H.; Poston, J.W.
1996-12-31
A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP).
Control of Shortwave Radiation Parameterization on Tropical Climate Simulation
NASA Astrophysics Data System (ADS)
Crétat, J.; Masson, S. G.; Berthet, S.; Samson, G.; Terray, P.; Dudhia, J.; Pinsard, F.; Hourdin, C.
2015-12-01
SST-forced tropical-channel simulations are used to quantify the control of shortwave (SW) parameterization on the mean tropical climate compared to other major model settings (convection, boundary layer turbulence, vertical and horizontal resolutions). The physical mechanisms whereby this control manifests are explored by the means of a large set of simulations with two widely used SW schemes. Analyses focus on the spatial distribution and magnitude of the net SW radiation budget at the surface (SWnet_SFC), latent heat fluxes, and rainfall at the annual timescale. The model skill and sensitivity to the settings tested are quantified relative to observations and reanalyses and using an ensemble approach. Model skill is mainly controlled by SW parameterization, especially the magnitude of SWnet_SFC and rainfall and both the spatial distribution and magnitude of latent heat fluxes over ocean. On the other hand, the spatial distribution of continental rainfall (SWnet_SFC) is mainly influenced by convection parameterization and horizontal resolution (boundary layer parameterization and orography). Physical understanding of both the control of SW parameterization and sensitivity to SW schemes is addressed by analyzing the thermal structure of the atmosphere and conducting sensitivity experiments to O3 absorption and SW scattering coefficient. SW parameterization shapes the stability of the atmosphere in two different ways according to whether surface is coupled to atmosphere or not, while O3 absorption has minor effects in our simulations. Over SST-prescribed regions, increasing the amount of SW absorption warms the atmosphere only because surface temperatures are fixed, resulting in increased atmospheric stability. Over surface-atmosphere coupled regions (i.e., land points in our simulations), increasing SW absorption warms both atmospheric and surface temperatures, leading to a shift towards a warmer state and a more intense hydrological cycle. This turns in reversal
Visibility Parameterization For Forecasting Model Applications
NASA Astrophysics Data System (ADS)
Gultepe, I.; Milbrandt, J.; Binbin, Z.
2010-07-01
In this study, the visibility parameterizations developed during Fog Remote Sensing And Modeling (FRAM) projects, conducted in central and eastern Canada, will be summarized and their use for forecasting/nowcasting applications will be discussed. Parameterizations developed for reductions in visibility due to 1) fog, 2) rain, 3) snow, and 4) relative humidity (RH) during FRAM will be given and uncertainties in the parameterizations will be discussed. Comparisons made between Canadian GEM NWP model (with 1 and 2.5 km horizontal grid spacing) and observations collected during the Science of Nowcasting Winter Weather for Vancouver 2010 (SNOW-V10) project and FRAM projects, using the new parameterizations, will be given Observations used in this study were obtained using a fog measuring device (FMD) for fog parameterization, a Vaisala all weather precipitation sensor called FD12P for rain and snow parameterizations and visibility measurements, and a total precipitation sensor (TPS), and distrometers called OTT ParSiVel and Laser Precipitation Measurement (LPM) for rain/snow particle spectra. The results from the three SNOW-V10 sites suggested that visibility values given by the GEM model using the new parameterizations were comparable with observed visibility values when model based input parameters such as liquid water content, RH, and precipitation rate for visibility parameterizations were predicted accurately.
An approach for parameterizing mesoscale precipitating systems
Weissbluth, M.J.; Cotton, W.R.
1991-12-31
A cumulus parameterization laboratory has been described which uses a reference numerical model to fabricate, calibrate and verify a cumulus parameterization scheme suitable for use in mesoscale models. Key features of this scheme include resolution independence and the ability to provide hydrometeor source functions to the host model. Thus far, only convective scale drafts have been parameterized, limiting the use of the scheme to those models which can resolve the mesoscale circulations. As it stands, the scheme could probably be incorporated into models having a grid resolution greater than 50 km with results comparable to the existing schemes for the large-scale models. We propose, however, to quantify the mesoscale circulations through the use of the cumulus parameterization laboratory. The inclusion of these mesoscale drafts in the existing scheme will hopefully allow the correct parameterization of the organized mesoscale precipitating systems.
An approach for parameterizing mesoscale precipitating systems
Weissbluth, M.J.; Cotton, W.R.
1991-01-01
A cumulus parameterization laboratory has been described which uses a reference numerical model to fabricate, calibrate and verify a cumulus parameterization scheme suitable for use in mesoscale models. Key features of this scheme include resolution independence and the ability to provide hydrometeor source functions to the host model. Thus far, only convective scale drafts have been parameterized, limiting the use of the scheme to those models which can resolve the mesoscale circulations. As it stands, the scheme could probably be incorporated into models having a grid resolution greater than 50 km with results comparable to the existing schemes for the large-scale models. We propose, however, to quantify the mesoscale circulations through the use of the cumulus parameterization laboratory. The inclusion of these mesoscale drafts in the existing scheme will hopefully allow the correct parameterization of the organized mesoscale precipitating systems.
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the effective particle size of a mixture of ice habits, the ice water amount, and spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits that the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Different from many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, but not the effective size of individual ice habits.
A Two-Habit Ice Cloud Optical Property Parameterization for GCM Application
NASA Technical Reports Server (NTRS)
Yi, Bingqi; Yang, Ping; Minnis, Patrick; Loeb, Norman; Kato, Seiji
2014-01-01
We present a novel ice cloud optical property parameterization based on a two-habit ice cloud model that has been proved to be optimal for remote sensing applications. The two-habit ice model is developed with state-of-the-art numerical methods for light scattering property calculations involving individual columns and column aggregates with the habit fractions constrained by in-situ measurements from various field campaigns. Band-averaged bulk ice cloud optical properties including the single-scattering albedo, the mass extinction/absorption coefficients, and the asymmetry factor are parameterized as functions of the effective particle diameter for the spectral bands involved in the broadband radiative transfer models. Compared with other parameterization schemes, the two-habit scheme generally has lower asymmetry factor values (around 0.75 at the visible wavelengths). The two-habit parameterization scheme was widely tested with the broadband radiative transfer models (i.e. Rapid Radiative Transfer Model, GCM version) and global circulation models (GCMs, i.e. Community Atmosphere Model, version 5). Global ice cloud radiative effects at the top of the atmosphere are also analyzed from the GCM simulation using the two-habit parameterization scheme in comparison with CERES satellite observations.
Parameterized Linear Longitudinal Airship Model
NASA Technical Reports Server (NTRS)
Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph
2010-01-01
A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on the single-scattering optical properties that are pre-computed using an improve geometric optics method, the bulk mass absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the mean effective particle size of a mixture of ice habits. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. Compared to the parameterization for a single habit of hexagonal column, the solar heating of clouds computed with the parameterization for a mixture of habits is smaller due to a smaller cosingle-scattering albedo. Whereas the net downward fluxes at the TOA and surface are larger due to a larger asymmetry factor. The maximum difference in the cloud heating rate is approx. 0.2 C per day, which occurs in clouds with an optical thickness greater than 3 and the solar zenith angle less than 45 degrees. Flux difference is less than 10 W per square meters for the optical thickness ranging from 0.6 to 10 and the entire range of the solar zenith angle. The maximum flux difference is approximately 3%, which occurs around an optical thickness of 1 and at high solar zenith angles.
Conformal Surface Parameterization for Texture Mapping
1999-03-25
Conformal Surface Parameterization for Texture Mapping Steven Haker Department of Electrical and Computer Engineering University of Minnesota...also like to thank Professor Victoria Interrante for some very helpful conversations on texture mappings. References [1] S. Angenent, S. Haker , A
Brain surface conformal parameterization with algebraic functions.
Wang, Yalin; Gu, Xianfeng; Chan, Tony F; Thompson, Paul M; Yau, Shing-Tung
2006-01-01
In medical imaging, parameterized 3D surface models are of great interest for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on algebraic functions. By solving the Yamabe equation with the Ricci flow method, we can conformally map a brain surface to a multi-hole disk. The resulting parameterizations do not have any singularities and are intrinsic and stable. To illustrate the technique, we computed parameterizations of several types of anatomical surfaces in MRI scans of the brain, including the hippocampi and the cerebral cortices with various landmark curves labeled. For the cerebral cortical surfaces, we show the parameterization results are consistent with selected landmark curves and can be matched to each other using constrained harmonic maps. Unlike previous planar conformal parameterization methods, our algorithm does not introduce any singularity points. It also offers a method to explicitly match landmark curves between anatomical surfaces such as the cortex, and to compute conformal invariants for statistical comparisons of anatomy.
Multiple parameterization for hydraulic conductivity identification.
Tsai, Frank T-C; Li, Xiaobao
2008-01-01
Hydraulic conductivity identification remains a challenging inverse problem in ground water modeling because of the inherent nonuniqueness and lack of flexibility in parameterization methods. This study introduces maximum weighted log-likelihood estimation (MWLLE) along with multiple generalized parameterization (GP) methods to identify hydraulic conductivity and to address nonuniqueness and inflexibility problems in parameterization. A scaling factor for information criteria is suggested to obtain reasonable weights of parameterization methods for the MWLLE and model averaging method. The scaling factor is a statistical parameter relating to a desired significance level in Occam's window and the variance of the chi-squares distribution of the fitting error. Through model averaging with multiple GP methods, the conditional estimate of hydraulic conductivity and its total conditional covariances are calculated. A numerical example illustrates the issue arising from Occam's window in estimating model weights and shows the usefulness of the scaling factor to obtain reasonable model weights. Moreover, the numerical example demonstrates the advantage of using multiple GP methods over the zonation and interpolation methods because GP provides better models in the model averaging method. The methodology is applied to the Alamitos Gap area, California, to identify the hydraulic conductivity field. The results show that the use of the scaling factor is necessary in order to incorporate good parameterization methods and to avoid a dominant parameterization method.
The parameterization of microchannel-plate-based detection systems
NASA Astrophysics Data System (ADS)
Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Giles, Barbara L.; Pollock, Craig J.
2016-10-01
The most common instrument for low-energy plasmas consists of a top-hat electrostatic analyzer (ESA) geometry coupled with a microchannel-plate-based (MCP-based) detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Here we develop a comprehensive mathematical description of particle detection systems. As a function of instrument azimuthal angle, we parameterize (1) particle scattering within the ESA and at the surface of the MCP, (2) the probability distribution of MCP gain for an incident particle, (3) electron charge cloud spreading between the MCP and anode board, and (4) capacitive coupling between adjacent discrete anodes. Using the Dual Electron Spectrometers on the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission as an example, we demonstrate a method for extracting these fundamental detection system parameters from laboratory calibration. We further show that parameters that will evolve in flight, namely, MCP gain, can be determined through application of this model to specifically tailored in-flight calibration activities. This methodology provides a robust characterization of sensor suite performance throughout mission lifetime. The model developed in this work is not only applicable to existing sensors but also can be used as an analytical design tool for future particle instrumentation.
Avoiding Haag's Theorem with Parameterized Quantum Field Theory
NASA Astrophysics Data System (ADS)
Seidewitz, Ed
2017-03-01
Under the normal assumptions of quantum field theory, Haag's theorem states that any field unitarily equivalent to a free field must itself be a free field. Unfortunately, the derivation of the Dyson series perturbation expansion relies on the use of the interaction picture, in which the interacting field is unitarily equivalent to the free field but must still account for interactions. Thus, the traditional perturbative derivation of the scattering matrix in quantum field theory is mathematically ill defined. Nevertheless, perturbative quantum field theory is currently the only practical approach for addressing scattering for realistic interactions, and it has been spectacularly successful in making empirical predictions. This paper explains this success by showing that Haag's Theorem can be avoided when quantum field theory is formulated using an invariant, fifth path parameter in addition to the usual four position parameters, such that the Dyson perturbation expansion for the scattering matrix can still be reproduced. As a result, the parameterized formalism provides a consistent foundation for the interpretation of quantum field theory as used in practice and, perhaps, for better dealing with other mathematical issues.
Parameterization of continental boundary layer clouds
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhao, Wei
2008-05-01
Large eddy simulations (LESs) of continental boundary layer clouds (BLCs) observed at the southern Great Plains (SGP) are used to study issues associated with the parameterization of sub-grid BLCs in large scale models. It is found that liquid water potential temperature θl and total specific humidity qt, which are often used as parameterization predictors in statistical cloud schemes, do not share the same probability distribution in the cloud layer with θl skewed to the left (negatively skewed) and qt skewed to the right (positively skewed). The skewness and kurtosis change substantially in time and space when the development of continental BLCs undergoes a distinct diurnal variation. The wide range of skewness and kurtosis of θl and qt can hardly be described by a single probability distribution function. To extend the application of the statistical cloud parameterization approach, this paper proposes an innovative cloud parameterization scheme that uses the boundary layer height and the lifting condensation level as the primary parameterization predictors. The LES results indicate that the probability distribution of these two quantities is relatively stable compared with that of θl and qt during the diurnal variation and nearly follows a Gaussian function. Verifications using LES output and the observations collected at the Atmospheric Radiation Measurement (ARM) Climate Research Facility (ARCF) SGP site indicate that the proposed scheme works well to represent continental BLCs.
Brain surface parameterization using Riemann surface structure.
Wang, Yalin; Gu, Xianfeng; Hayashi, Kiralee M; Chan, Tony F; Thompson, Paul M; Yau, Shing-Tung
2005-01-01
We develop a general approach that uses holomorphic 1-forms to parameterize anatomical surfaces with complex (possibly branching) topology. Rather than evolve the surface geometry to a plane or sphere, we instead use the fact that all orientable surfaces are Riemann surfaces and admit conformal structures, which induce special curvilinear coordinate systems on the surfaces. Based on Riemann surface structure, we can then canonically partition the surface into patches. Each of these patches can be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable. To illustrate the technique, we computed conformal structures for several types of anatomical surfaces in MRI scans of the brain, including the cortex, hippocampus, and lateral ventricles. We found that the resulting parameterizations were consistent across subjects, even for branching structures such as the ventricles, which are otherwise difficult to parameterize. Compared with other variational approaches based on surface inflation, our technique works on surfaces with arbitrary complexity while guaranteeing minimal distortion in the parameterization. It also offers a way to explicitly match landmark curves in anatomical surfaces such as the cortex, providing a surface-based framework to compare anatomy statistically and to generate grids on surfaces for PDE-based signal processing.
Optical closure of parameterized bio-optical relationships
NASA Astrophysics Data System (ADS)
He, Shuangyan; Fischer, Jürgen; Schaale, Michael; He, Ming-xia
2014-03-01
An optical closure study on bio-optical relationships was carried out using radiative transfer model matrix operator method developed by Freie Universität Berlin. As a case study, the optical closure of bio-optical relationships empirically parameterized with in situ data for the East China Sea was examined. Remote-sensing reflectance ( R rs) was computed from the inherent optical properties predicted by these biooptical relationships and compared with published in situ data. It was found that the simulated R rs was overestimated for turbid water. To achieve optical closure, bio-optical relationships for absorption and scattering coefficients for suspended particulate matter were adjusted. Furthermore, the results show that the Fournier and Forand phase functions obtained from the adjusted relationships perform better than the Petzold phase function. Therefore, before bio-optical relationships are used for a local sea area, the optical closure should be examined.
POET: Parameterized Optimization for Empirical Tuning
Yi, Q; Seymour, K; You, H; Vuduc, R; Quinlan, D
2007-01-29
The excessive complexity of both machine architectures and applications have made it difficult for compilers to statically model and predict application behavior. This observation motivates the recent interest in performance tuning using empirical techniques. We present a new embedded scripting language, POET (Parameterized Optimization for Empirical Tuning), for parameterizing complex code transformations so that they can be empirically tuned. The POET language aims to significantly improve the generality, flexibility, and efficiency of existing empirical tuning systems. We have used the language to parameterize and to empirically tune three loop optimizations-interchange, blocking, and unrolling-for two linear algebra kernels. We show experimentally that the time required to tune these optimizations using POET, which does not require any program analysis, is significantly shorter than that when using a full compiler-based source-code optimizer which performs sophisticated program analysis and optimizations.
A parameterization of cloud droplet nucleation
Ghan, S.J.; Chuang, C.C.; Penner, J.E.
1994-01-01
Droplet nucleation is a fundamental cloud process. The number of aerosols activated to form cloud droplets influences not only the number of aerosols scavenged by clouds but also the size of the cloud droplets. Cloud droplet size influences the cloud albedo and the conversion of cloud water to precipitation. Global aerosol models are presently being developed with the intention of coupling with global atmospheric circulation models to evaluate the influence of aerosols and aerosol-cloud interactions on climate. If these and other coupled models are to address issues of aerosol-interactions, the droplet nucleation process must be adequately represented. Ghan et al. have introduced a droplet nucleation parameterization for a single aerosol type that offers certain advantages over the popular Twomey parameterization. Here we describe the generalization of that parameterization to the case of multiple aerosol types, with estimation of aerosol mass as well as number activated.
Order-Sorted Parameterization and Induction
NASA Astrophysics Data System (ADS)
Meseguer, José
Parameterization is one of the most powerful features to make specifications and declarative programs modular and reusable, and our best hope for scaling up formal verification efforts. This paper studies order-sorted parameterization at three different levels: (i) its mathematical semantics; (ii) its operational semantics by term rewriting; and (iii) the inductive reasoning principles that can soundly be used to prove properties about such specifications. It shows that achieving the desired properties at each of these three levels is a considerably subtler matter than for many-sorted specifications, but that such properties can be attained under reasonable conditions.
Approaches for Subgrid Parameterization: Does Scaling Help?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-04-01
Arguably the scaling behavior is a well-established fact in many geophysical systems. There are already many theoretical studies elucidating this issue. However, the scaling law is slow to be introduced in "operational" geophysical modelling, notably for weather forecast as well as climate projection models. The main purpose of this presentation is to ask why, and try to answer this question. As a reference point, the presentation reviews the three major approaches for traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing the subgrid-scale turbulent flows both in the atmosphere and the oceans. The PDF approach is intuitively appealing as it directly deals with a distribution of variables in subgrid scale in a more direct manner. The third category, originally proposed by Aubry et al (1988) in context of the wall boundary-layer turbulence, is specifically designed to represent coherencies in compact manner by a low--dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as their mode-decomposition basis. However, the methodology can easily be generalized into any decomposition basis. The mass-flux formulation that is currently adopted in majority of atmospheric models for parameterizing convection can also be considered a special case of the mode decomposition, adopting the segmentally-constant modes for the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galarkin approach for numerically modelling the subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests us that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. The mode
An Infrared Radiative Transfer Parameterization For A Venus General Circulation Model
NASA Astrophysics Data System (ADS)
Eymet, Vincent; Fournier, R.; Lebonnois, S.; Bullock, M. A.; Dufresne, J.; Hourdin, F.
2006-09-01
A new 3-dimensional General Circulation Model (GCM) of Venus'atmosphere is curently under development at the Laboratoire de Meteorologie Dynamique, in the context of the Venus-Express mission. Special attention was devoted to the parameterization of infrared radiative transfer: this parameterization has to be both very fast and sufficiently accurate in order to provide valid results over extented periods of time. We have developped at the Laboratoire d'Energetique a Monte-Carlo code for computing reference radiative transfer results for optically thick inhomogeneous scattering planetary atmospheres over the IR spectrum. This code (named KARINE) is based on a Net-Exchange Rates formulation, and uses a k-distribution spectral model. The Venus spectral data, that was compiled at the Southwest Research Institute, accounts for gaseous absorption and scattering, typical clouds absorption and scattering, as well as CO2 and H2O absorption continuums. We will present the Net-Exchange Rates matrix that was computed using the Monte-Carlo approach. We will also show how this matrix has been used in order to produce a first-order radiative transfer parameterization that is used in the LMD Venus GCM. In addition, we will present how the proposed radiative transfer model was used in a simple convective-radiative equilibrium model in order to reproduce the main features of Venus' temperature profile.
Modified-Dewan Optical Turbulence Parameterizations
2007-11-02
Kea Observatories on the Island of Hawaii (Businger et al. 2002) by converting standard Numerical Weather Prediction (NWP) forecast model output into...describing optical turbulence. The Dewan parameterization is also being used to forecast optical seeing conditions for ground-based telescopes at the Mauna
Parameterization guidelines and considerations for hydrologic models
Technology Transfer Automated Retrieval System (TEKTRAN)
Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) is an important and difficult task. An exponential increase in literature has been devoted to the use and develo...
Parameterizing cloud condensation nuclei concentrations during HOPE
NASA Astrophysics Data System (ADS)
Hande, Luke B.; Engler, Christa; Hoose, Corinna; Tegen, Ina
2016-09-01
An aerosol model was used to simulate the generation and transport of aerosols over Germany during the HD(CP)2 Observational Prototype Experiment (HOPE) field campaign of 2013. The aerosol number concentrations and size distributions were evaluated against observations, which shows satisfactory agreement in the magnitude and temporal variability of the main aerosol contributors to cloud condensation nuclei (CCN) concentrations. From the modelled aerosol number concentrations, number concentrations of CCN were calculated as a function of vertical velocity using a comprehensive aerosol activation scheme which takes into account the influence of aerosol chemical and physical properties on CCN formation. There is a large amount of spatial variability in aerosol concentrations; however the resulting CCN concentrations vary significantly less over the domain. Temporal variability is large in both aerosols and CCN. A parameterization of the CCN number concentrations is developed for use in models. The technique involves defining a number of best fit functions to capture the dependence of CCN on vertical velocity at different pressure levels. In this way, aerosol chemical and physical properties as well as thermodynamic conditions are taken into account in the new CCN parameterization. A comparison between the parameterization and the CCN estimates from the model data shows excellent agreement. This parameterization may be used in other regions and time periods with a similar aerosol load; furthermore, the technique demonstrated here may be employed in regions dominated by different aerosol species.
Empirical parameterization of setup, swash, and runup
Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.
2006-01-01
Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes; time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a - 17 cm bias and an rms error of 38 cm: the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
Luchies, Adam C.; Ghoshal, Goutam; O’Brien, William D.; Oelze, Michael L.
2012-01-01
Quantitative ultrasound (QUS) techniques that parameterize the backscattered power spectrum have demonstrated significant promise for ultrasonic tissue characterization. Some QUS parameters, such as the effective scatterer diameter (ESD), require the assumption that the examined medium contains uniform diffuse scatterers. Structures that invalidate this assumption can significantly affect the estimated QUS parameters and decrease performance when classifying disease. In this work, a method was developed to reduce the effects of echoes that invalidate the assumption of diffuse scattering. To accomplish this task, backscattered signal sections containing non-diffuse echoes were identified and removed from the QUS analysis. Parameters estimated from the generalized spectrum (GS) and the Rayleigh SNR parameter were compared for detecting data blocks with non-diffuse echoes. Simulations and experiments were used to evaluate the effectiveness of the method. Experiments consisted of estimating QUS parameters from spontaneous fibroadenomas in rats and from beef liver samples. Results indicated that the method was able to significantly reduce or eliminate the effects of non-diffuse echoes that might exist in the backscattered signal. For example, the average reduction in the relative standard deviation of ESD estimates from simulation, rat fibroadenomas, and beef liver samples were 13%, 30%, and 51%, respectively. The Rayleigh SNR parameter performed best at detecting non-diffuse echoes for the purpose of removing and reducing ESD bias and variance. The method provides a means to improve the diagnostic capabilities of QUS techniques by allowing separate analysis of diffuse and non-diffuse scatterers. PMID:22622974
Control of shortwave radiation parameterization on tropical climate SST-forced simulation
NASA Astrophysics Data System (ADS)
Crétat, Julien; Masson, Sébastien; Berthet, Sarah; Samson, Guillaume; Terray, Pascal; Dudhia, Jimy; Pinsard, Françoise; Hourdin, Christophe
2016-09-01
SST-forced tropical-channel simulations are used to quantify the control of shortwave (SW) parameterization on the mean tropical climate compared to other major model settings (convection, boundary layer turbulence, vertical and horizontal resolutions), and to pinpoint the physical mechanisms whereby this control manifests. Analyses focus on the spatial distribution and magnitude of the net SW radiation budget at the surface (SWnet_SFC), latent heat fluxes, and rainfall at the annual timescale. The model skill and sensitivity to the tested settings are quantified relative to observations and using an ensemble approach. Persistent biases include overestimated SWnet_SFC and too intense hydrological cycle. However, model skill is mainly controlled by SW parameterization, especially the magnitude of SWnet_SFC and rainfall and both the spatial distribution and magnitude of latent heat fluxes over ocean. On the other hand, the spatial distribution of continental rainfall (SWnet_SFC) is mainly influenced by convection parameterization and horizontal resolution (boundary layer parameterization and orography). Physical understanding of the control of SW parameterization is addressed by analyzing the thermal structure of the atmosphere and conducting sensitivity experiments to O3 absorption and SW scattering coefficient. SW parameterization shapes the stability of the atmosphere in two different ways according to whether surface is coupled to atmosphere or not, while O3 absorption has minor effects in our simulations. Over SST-prescribed regions, increasing the amount of SW absorption warms the atmosphere only because surface temperatures are fixed, resulting in increased atmospheric stability. Over land-atmosphere coupled regions, increasing SW absorption warms both atmospheric and surface temperatures, leading to a shift towards a warmer state and a more intense hydrological cycle. This turns in reversal model behavior between land and sea points, with the SW scheme that
A Simple Parameterization of 3 x 3 Magic Squares
ERIC Educational Resources Information Center
Trenkler, Gotz; Schmidt, Karsten; Trenkler, Dietrich
2012-01-01
In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.
European upper mantle tomography: adaptively parameterized models
NASA Astrophysics Data System (ADS)
Schäfer, J.; Boschi, L.
2009-04-01
We have devised a new algorithm for upper-mantle surface-wave tomography based on adaptive parameterization: i.e. the size of each parameterization pixel depends on the local density of seismic data coverage. The advantage in using this kind of parameterization is that a high resolution can be achieved in regions with dense data coverage while a lower (and cheaper) resolution is kept in regions with low coverage. This way, parameterization is everywhere optimal, both in terms of its computational cost, and of model resolution. This is especially important for data sets with inhomogenous data coverage, as it is usually the case for global seismic databases. The data set we use has an especially good coverage around Switzerland and over central Europe. We focus on periods from 35s to 150s. The final goal of the project is to determine a new model of seismic velocities for the upper mantle underlying Europe and the Mediterranean Basin, of resolution higher than what is currently found in the literature. Our inversions involve regularization via norm and roughness minimization, and this in turn requires that discrete norm and roughness operators associated with our adaptive grid be precisely defined. The discretization of the roughness damping operator in the case of adaptive parameterizations is not as trivial as it is for the uniform ones; important complications arise from the significant lateral variations in the size of pixels. We chose to first define the roughness operator in a spherical harmonic framework, and subsequently translate it to discrete pixels via a linear transformation. Since the smallest pixels we allow in our parameterization have a size of 0.625 °, the spherical-harmonic roughness operator has to be defined up to harmonic degree 899, corresponding to 810.000 harmonic coefficients. This results in considerable computational costs: we conduct the harmonic-pixel transformations on a small Beowulf cluster. We validate our implementation of adaptive
Turbulent Mixing Parameterizations for Oceanic Flows and Student Support
2014-09-30
projects is to formulate robust turbulence parameterizations that are applicable for a wide range of oceanic flow conditions. OBJECTIVES The...primary objectives of these projects are to bridge the gap between parameterizations/models for small-scale turbulent mixing developed from fundamental...1 DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Turbulent Mixing Parameterizations for Oceanic Flows
Parameterization of cloud effects on the absorption of solar radiation
NASA Technical Reports Server (NTRS)
Davies, R.
1983-01-01
A radiation parameterization for the NASA Goddard climate model was developed, tested, and implemented. Interactive and off-hire experiments with the climate model to determine the limitations of the present parameterization scheme are summarized. The parameterization of Cloud absorption in terms of solar zeith angle, column water vapors about the cloud top, and cloud liquid water content is discussed.
Parameterization of Cumulus Convective Cloud Systems in Mesoscale Forecast Models
2013-09-30
1 DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Parameterization of Cumulus Convective Cloud Systems in...parameterization of cumulus convective clouds in mesoscale numerical weather prediction models OBJECTIVES Conduct detailed studies of cloud ...microphysical processes in order to develop a unified parameterization of boundary layer stratocumulus and trade wind cumulus convective clouds . Develop
Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.
Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot
2013-10-01
Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities which yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
Rapid Parameterization Schemes for Aircraft Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu
2012-01-01
A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
Invariant box-parameterization of neutrino oscillations
Weiler, Thomas J.; Wagner, DJ
1998-10-19
The model-independent 'box' parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing-matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP-violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n{>=}3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing matrix elements.
Numerical Archetypal Parameterization for Mesoscale Convective Systems
NASA Astrophysics Data System (ADS)
Yano, J. I.
2015-12-01
Vertical shear tends to organize atmospheric moist convection into multiscale coherent structures. Especially, the counter-gradient vertical transport of horizontal momentum by organized convection can enhance the wind shear and transport kinetic energy upscale. However, this process is not represented by traditional parameterizations. The present paper sets the archetypal dynamical models, originally formulated by the second author, into a parameterization context by utilizing a nonhydrostatic anelastic model with segmentally-constant approximation (NAM-SCA). Using a two-dimensional framework as a starting point, NAM-SCA spontaneously generates propagating tropical squall-lines in a sheared environment. A high numerical efficiency is achieved through a novel compression methodology. The numerically-generated archetypes produce vertical profiles of convective momentum transport that are consistent with the analytic archetype.
A Survey of Shape Parameterization Techniques
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.
Parameterizing surface wind speed over complex topography
NASA Astrophysics Data System (ADS)
Helbig, N.; Mott, R.; Herwijnen, A.; Winstral, A.; Jonas, T.
2017-01-01
Subgrid parameterizations are used in coarse-scale meteorological and land surface models to account for the impact of unresolved topography on wind speed. While various parameterizations have been suggested, these were generally validated on a limited number of measurements in specific geographical areas. We used high-resolution wind fields to investigate which terrain parameters most affect near-surface wind speed over complex topography under neutral conditions. Wind fields were simulated using the Advanced Regional Prediction System (ARPS) on Gaussian random fields as model topographies to cover a wide range of terrain characteristics. We computed coarse-scale wind speed, i.e., a spatial average over the large grid cell accounting for influence of unresolved topography, using a previously suggested subgrid parameterization for the sky view factor. We only require correlation length of subgrid topographic features and mean square slope in the coarse grid cell. Computed coarse-scale wind speed compared well with domain-averaged ARPS wind speed. To further statistically downscale coarse-scale wind speed, we use local, fine-scale topographic parameters, namely, the Laplacian of terrain elevations and mean square slope. Both parameters showed large correlations with fine-scale ARPS wind speed. Comparing downscaled numerical weather prediction wind speed with measurements from a large number of stations throughout Switzerland resulted in overall improved correlations and distribution statistics. Since we used a large number of model topographies to derive the subgrid parameterization and the downscaling framework, both are not scale dependent nor bound to a specific geographic region. Both can readily be implemented since they are based on easy to derive terrain parameters.
Aerosol water parameterization: a single parameter framework
NASA Astrophysics Data System (ADS)
Metzger, S.; Steil, B.; Abdelkader, M.; Klingmüller, K.; Xu, L.; Penner, J. E.; Fountoukis, C.; Nenes, A.; Lelieveld, J.
2015-11-01
We introduce a framework to efficiently parameterize the aerosol water uptake for mixtures of semi-volatile and non-volatile compounds, based on the coefficient, νi. This solute specific coefficient was introduced in Metzger et al. (2012) to accurately parameterize the single solution hygroscopic growth, considering the Kelvin effect - accounting for the water uptake of concentrated nanometer sized particles up to dilute solutions, i.e., from the compounds relative humidity of deliquescence (RHD) up to supersaturation (Köhler-theory). Here we extend the νi-parameterization from single to mixed solutions. We evaluate our framework at various levels of complexity, by considering the full gas-liquid-solid partitioning for a comprehensive comparison with reference calculations using the E-AIM, EQUISOLV II, ISORROPIA II models as well as textbook examples. We apply our parameterization in EQSAM4clim, the EQuilibrium Simplified Aerosol Model V4 for climate simulations, implemented in a box model and in the global chemistry-climate model EMAC. Our results show: (i) that the νi-approach enables to analytically solve the entire gas-liquid-solid partitioning and the mixed solution water uptake with sufficient accuracy, (ii) that, e.g., pure ammonium nitrate and mixed ammonium nitrate - ammonium sulfate mixtures can be solved with a simple method, and (iii) that the aerosol optical depth (AOD) simulations are in close agreement with remote sensing observations for the year 2005. Long-term evaluation of the EMAC results based on EQSAM4clim and ISORROPIA II will be presented separately.
Unified Parameterization of the Marine Boundary Layer
2010-09-30
information if it does not display a currently valid OMB control number. 1. REPORT DATE 2010 2 . REPORT TYPE 3. DATES COVERED 00-00-2010 to 00-00-2010...boundary layer closure for the convective boundary layer 2 . An EDMF approach to the vertical transport of TKE in convective boundary layers 3. EDMF in...4 implementation and extension to shallow cumulus parameterization is in progress. 2 An integrated TKE-based eddy-diffusivity/mass-flux
ERIC Educational Resources Information Center
Young, Andrew T.
1982-01-01
The correct usage of such terminology as "Rayleigh scattering,""Rayleigh lines,""Raman lines," and "Tyndall scattering" is resolved during an historical excursion through the physics of light-scattering by gas molecules. (Author/JN)
Fire parameterization on a global scale
NASA Astrophysics Data System (ADS)
Pechony, O.; Shindell, D. T.
2009-08-01
We present a convenient physically based global-scale fire parameterization algorithm for global climate models. We indicate environmental conditions favorable for fire occurrence based on calculation of the vapor pressure deficit as a function of location and time. Two ignition models are used. One assumes ubiquitous ignition, the other incorporates natural and anthropogenic sources, as well as anthropogenic fire suppression. Evaluation of the method using Global Precipitation Climatology Project precipitation, National Centers for Environmental Prediction/National Center for Atmospheric Research temperature and relative humidity, and Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf Area Index as a proxy for global vegetation density gives results in remarkable correspondence with global fire patterns observed from the MODIS and Visible and Infrared Scanner satellite instruments. The parameterized fires successfully reproduce the spatial distribution of global fires as well as the seasonal variability. The interannual variability of global fire activity derived from the 20-year advanced very high resolution radiometer record are well reproduced using Goddard Institute for Space Studies general circulation models climate simulations, as is the response to the climate changes following the eruptions of El Chichon and Mount Pinatubo. In conjunction with climate models and data sets on vegetation changes with time, the suggested fire parameterization offers the possibility to estimate relative variations of global fire activity for past and future climates.
Thermonuclear Reaction Rate Parameterization for Nuclear Astrophysics
NASA Astrophysics Data System (ADS)
Sharp, Jacob; Kozub, Raymond L.; Smith, Michael S.; Scott, Jason; Lingerfelt, Eric
2004-10-01
The knowledge of thermonuclear reaction rates is vital to simulate novae, supernovae, X-ray bursts, and other astrophysical events. To facilitate dissemination of this knowledge, a set of tools has been created for managing reaction rates, located at www.nucastrodata.org. One tool is a rate parameterizer, which provides a parameterization for nuclear reaction rate vs. temperature values in the most widely used functional form. Currently, the parameterizer uses the Levenberg-Marquardt method (LMM), which requires an initial estimate of the best-fit parameters. The initial estimate is currently provided randomly from a preselected pool. To improve the quality of fits, a new, active method of selecting parameters has been developed. The parameters of each set in the pool are altered for a few iterations to replicate the input data as closely as possible. Then, the set which most nearly matches the input data (based on chi squared) is used in the LMM as the initial estimate for the final fitting procedure. A description of the new, active algorithm and its performance will be presented. Supported by the U. S. Department of Energy.
Implicit Shape Parameterization for Kansei Design Methodology
NASA Astrophysics Data System (ADS)
Nordgren, Andreas Kjell; Aoyama, Hideki
Implicit shape parameterization for Kansei design is a procedure that use 3D-models, or concepts, to span a shape space for surfaces in the automotive field. A low-dimensional, yet accurate shape descriptor was found by Principal Component Analysis of an ensemble of point-clouds, which were extracted from mesh-based surfaces modeled in a CAD-program. A theoretical background of the procedure is given along with step-by-step instructions for the required data-processing. The results show that complex surfaces can be described very efficiently, and encode design features by an implicit approach that does not rely on error-prone explicit parameterizations. This provides a very intuitive way to explore shapes for a designer, because various design features can simply be introduced by adding new concepts to the ensemble. Complex shapes have been difficult to analyze with Kansei methods due to the large number of parameters involved, but implicit parameterization of design features provides a low-dimensional shape descriptor for efficient data collection, model-building and analysis of emotional content in 3D-surfaces.
A Parameterization Invariant Approach to the Statistical Estimation of the CKM Phase alpha
Morris, Robin D.; Cohen-Tanugi, Johann; /SLAC
2008-04-14
In contrast to previous analyses, we demonstrate a Bayesian approach to the estimation of the CKM phase {alpha} that is invariant to parameterization. We also show that in addition to computing the marginal posterior in a Bayesian manner, the distribution must also be interpreted from a subjective Bayesian viewpoint. Doing so gives a very natural interpretation to the distribution. We also comment on the effect of removing information about {beta}{sup 00}.
A Physically Based Fractional Cloudiness Parameterization
1990-07-27
34 just beiow the PBL top. and designed for use as a PBL parameterization in a large-scale an infinitesimal "ventilation layer" just above the Earth s...34convective mass flux" concept the observations of Caughey et al. (1982) and Nicholls and introduced by Arakawa (1969) and adopted in many Turton (1986), who...Nicholls and Turton (1986)]. We can interpret y, as the value of x associated with the downdraft air at level B. Since there is a sharp gradient of V
New Parameterization of Neutron Absorption Cross Sections
NASA Astrophysics Data System (ADS)
Tripathi, Ram K.; Wilson, John W.; Cucinotta, Francis A.
1997-06-01
Recent parameterization of absorption cross sections for any system of charged ion collisions, including proton-nucleus collisions, is extended for neutron-nucleus collisions valid from approx. 1 MeV to a few GeV, thus providing a comprehensive picture of absorption cross sections for any system of collision pairs (charged or uncharged). The parameters are associated with the physics of the problem. At lower energies, optical potential at the surface is important, and the Pauli operator plays an increasingly important role at intermediate energies. The agreement between the calculated and experimental data is better than earlier published results.
New Parameterization of Neutron Absorption Cross Sections
NASA Technical Reports Server (NTRS)
Tripathi, Ram K.; Wilson, John W.; Cucinotta, Francis A.
1997-01-01
Recent parameterization of absorption cross sections for any system of charged ion collisions, including proton-nucleus collisions, is extended for neutron-nucleus collisions valid from approx. 1 MeV to a few GeV, thus providing a comprehensive picture of absorption cross sections for any system of collision pairs (charged or uncharged). The parameters are associated with the physics of the problem. At lower energies, optical potential at the surface is important, and the Pauli operator plays an increasingly important role at intermediate energies. The agreement between the calculated and experimental data is better than earlier published results.
Lightning parameterization in a storm electrification model
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.; Farley, Richard D.; Wu, Gang
1988-01-01
The parameterization of an intracloud lightning discharge has been implemented in our Storm Electrification Model. The initiation, propagation direction, termination and charge redistribution of the discharge are approximated assuming overall charge neutrality. Various simulations involving differing amounts of charge transferred have been done. The effects of the lightning-produced ions on the hydrometeor charges, electric field components and electrical energy depend strongly on the charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge show favorable agreement.
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMP with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employee the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is implemented to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through the maximum likelihood estimation (MLE) where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
A new parameterization of spectral and broadband ocean surface albedo.
Jin, Zhonghai; Qiao, Yanli; Wang, Yingjian; Fang, Yonghua; Yi, Weining
2011-12-19
A simple yet accurate parameterization of spectral and broadband ocean surface albedo has been developed. To facilitate the parameterization and its applications, the albedo is parameterized for the direct and diffuse incident radiation separately, and then each of them is further divided into two components: the contributions from surface and water, respectively. The four albedo components are independent of each other, hence, altering one will not affect the others. Such a designed parameterization scheme is flexible for any future update. Users can simply replace any of the adopted empirical formulations (e.g., the relationship between foam reflectance and wind speed) as desired without a need to change the parameterization scheme. The parameterization is validated by in situ measurements and can be easily implemented into a climate or radiative transfer model.
Extensions and applications of a second-order landsurface parameterization
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1983-01-01
Extensions and applications of a second order land surface parameterization, proposed by Andreou and Eagleson are developed. Procedures for evaluating the near surface storage depth used in one cell land surface parameterizations are suggested and tested by using the model. Sensitivity analysis to the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.
Parameterization of Solar Global Uv Irradiation
NASA Astrophysics Data System (ADS)
Feister, U.; Jaekel, E.; Gericke, K.
Daily doses of solar global UV-B, UV-A, and erythemal irradiation have been param- eterized to be calculated from pyranometer data of global and diffuse irradiation as well as from atmospheric column ozone measured at Potsdam (52 N, 107 m asl). The method has been validated against independent data of measured UV irradiation. A gain of information is provided by use of the parameterization for the three UV compo- nents (UV-B, UV-A and erythemal) referring to average values of UV irradiation. Ap- plying the method to UV irradiation measured at the mountain site Hohenpeissenberg (48 N, 977 m asl) shows that the parameterization even holds under completely differ- ent climatic conditions. On a long-term average (1953 - 2000), parameterized annual UV irradiation values are by 15 % (UV-A) and 21 % (UV-B), respectively, higher at Hohenpeissenberg, than they are at Potsdam. Using measured input data from 27 Ger- man weather stations, the method has been also applied to estimate the spatial distribu- tion of UV irradiation across Germany. Daily global and diffuse irradiation measured at Potsdam (1937 -2000) as well as atmospheric column ozone measured at Potsdam between1964 - 2000 have been used to derive long-term estimates of daily and annual totals of UV irradiation that include the effects of changes in cloudiness, in aerosols and, at least for the period 1964 to 2000, also in atmospheric ozone. It is shown that the extremely low ozone values observed mainly after the volcanic eruptions of Mt. Pinatubo in 1991 have substantially enhanced UV-B irradiation in the first half of the 90ies of the last century. The non-linear long-term changes between 1968 and 2000 amount to +4% ...+5% for annual global and UV-A irradiation mainly due to changing cloudiness, and +14% ... +15% for UV-B and erythemal irradiation due to both chang- ing cloudiness and decreasing column ozone. Estimates of long-term changes in UV irradiation derived from data measured at other German sites are
A Genus Oblivious Approach to Cross Parameterization
Bennett, J C; Pascucci, V; Joy, K I
2008-06-16
In this paper we present a robust approach to construct a map between two triangulated meshes, M and M{prime} of arbitrary and possibly unequal genus. We introduce a novel initial alignment scheme that allows the user to identify 'landmark tunnels' and/or a 'constrained silhouette' in addition to the standard landmark vertices. To describe the evolution of non-landmark tunnels we automatically derive a continuous deformation from M to M{prime} using a variational implicit approach. Overall, we achieve a cross parameterization scheme that is provably robust in the sense that it can map M to M{prime} without constraints on their relative genus. We provide a number of examples to demonstrate the practical effectiveness of our scheme between meshes of different genus and shape.
Optika : a GUI framework for parameterized applications.
Nusbaum, Kurtis L.
2011-06-01
In the field of scientific computing there are many specialized programs designed for specific applications in areas such as biology, chemistry, and physics. These applications are often very powerful and extraordinarily useful in their respective domains. However, some suffer from a common problem: a non-intuitive, poorly-designed user interface. The purpose of Optika is to address this problem and provide a simple, viable solution. Using only a list of parameters passed to it, Optika can dynamically generate a GUI. This allows the user to specify parameters values in a fashion that is much more intuitive than the traditional 'input decks' used by some parameterized scientific applications. By leveraging the power of Optika, these scientific applications will become more accessible and thus allow their designers to reach a much wider audience while requiring minimal extra development effort.
Stellar Atmospheric Parameterization Based on Deep Learning
NASA Astrophysics Data System (ADS)
Pan, R. Y.; Li, X. R.
2016-07-01
Deep learning is a typical learning method widely studied in machine learning, pattern recognition, and artificial intelligence. This work investigates the stellar atmospheric parameterization problem by constructing a deep neural network with five layers. The proposed scheme is evaluated on both real spectra from Sloan Digital Sky Survey (SDSS) and the theoretic spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for the effective temperature (T_{eff}/K), 0.0058 for lg (T_{eff}/K), 0.1706 for surface gravity (lg (g/(cm\\cdot s^{-2}))), and 0.1294 dex for metallicity ([Fe/H]), respectively; On the theoretic spectra, the MAEs are 15.34 for T_{eff}/K, 0.0011 for lg (T_{eff}/K), 0.0214 for lg (g/(cm\\cdot s^{-2})), and 0.0121 dex for [Fe/H], respectively.
Universal Parameterization of Absorption Cross Sections
NASA Technical Reports Server (NTRS)
Tripathi, R. K.; Cucinotta, Francis A.; Wilson, John W.
1997-01-01
This paper presents a simple universal parameterization of total reaction cross sections for any system of colliding nuclei that is valid for the entire energy range from a few AMeV to a few AGeV. The universal picture presented here treats proton-nucleus collision as a special case of nucleus-nucleus collision, where the projectile has charge and mass number of one. The parameters are associated with the physics of the collision system. In general terms, Coulomb interaction modifies cross sections at lower energies, and the effects of Pauli blocking are important at higher energies. The agreement between the calculated and experimental data is better than all earlier published results.
Parameterized reduced order modeling of misaligned stacked disks rotor assemblies
NASA Astrophysics Data System (ADS)
Ganine, Vladislav; Laxalde, Denis; Michalska, Hannah; Pierre, Christophe
2011-01-01
Light and flexible rotating parts of modern turbine engines operating at supercritical speeds necessitate application of more accurate but rather computationally expensive 3D FE modeling techniques. Stacked disks misalignment due to manufacturing variability in the geometry of individual components constitutes a particularly important aspect to be included in the analysis because of its impact on system dynamics. A new parametric model order reduction algorithm is presented to achieve this goal at affordable computational costs. It is shown that the disks misalignment leads to significant changes in nominal system properties that manifest themselves as additional blocks coupling neighboring spatial harmonics in Fourier space. Consequently, the misalignment effects can no longer be accurately modeled as equivalent forces applied to a nominal unperturbed system. The fact that the mode shapes become heavily distorted by extra harmonic content renders the nominal modal projection-based methods inaccurate and thus numerically ineffective in the context of repeated analysis of multiple misalignment realizations. The significant numerical bottleneck is removed by employing an orthogonal projection onto the subspace spanned by first few Fourier harmonic basis vectors. The projected highly sparse systems are shown to accurately approximate the specific misalignment effects, to be inexpensive to solve using direct sparse methods and easy to parameterize with a small set of measurable eccentricity and tilt angle parameters. Selected numerical examples on an industrial scale model are presented to illustrate the accuracy and efficiency of the algorithm implementation.
NASA Technical Reports Server (NTRS)
Chao, Winston C.
2015-01-01
The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.
Parameterization of Cumulus Convective Cloud Systems in Mesoscale Forecast Models
2012-09-30
and the 6th moments. The development and testing of the parameterization was made using the CIMMS LES explicit warm rain microphysical model. The...implemented into the 3D dynamical framework of the CIMMS LES model where the errors of the parameterization were assessed in a realistic setting. The
Evaluation of a GCM cirrus parameterization using satellite observations
NASA Technical Reports Server (NTRS)
Soden, B. J.; Donner, L. J.
1994-01-01
This study applies a simple yet effective methodology to validate a general circulation model parameterization of cirrus ice water path. The methodology combines large-scale dynamic and thermodynamic fields from operational analyses with prescribed occurrence of cirrus clouds from satellite observations to simulate a global distribution of ice water path. The predicted cloud properties are then compared with the corresponding satellite measurements of visible optical depth and infrared cloud emissivity to evaluate the reliability of the parameterization. This methodology enables the validation to focus strictly on the water loading side of the parameterization by eliminating uncertainties involved in predicting the occurrence of cirrus internally within the parameterization. Overall the parameterization performs remarkably well in capturing the observed spatial patterns of cirrus optical properties. Spatial correlations between the observed and the predicted optical depths are typically greater than 0.7 for the tropics and northern hemisphere midlatitudes. The good spatial agreement largely stems from the strong dependence of the ice water path upon the temperature of the environment in which the clouds form. Poorer correlations (r approximately 0.3) are noted over the southern hemisphere midlatitudes, suggesting that additional processes not accounted for by the parameterization may be important there. Quantitative evaluation of the parameterization is hindered by the present uncertainty in the size distribution of cirrus ice particles. Consequently, it is difficult to determine if discrepancies between the observed and the predicted optical properties are attributable to errors in the parameterized ice water path or to geographic variations in effective radii.
Integrating the Nqueens Algorithm into a Parameterized Benchmark Suite
2016-02-01
ARL-TR-7585 ● FEB 2016 US Army Research Laboratory Integrating the Nqueens Algorithm into a Parameterized Benchmark Suite by...the Nqueens Algorithm into a Parameterized Benchmark Suite by Jamie K Infantolino and Mikayla Malley Computational and Information Sciences...
Parameterizing Size Distribution in Ice Clouds
DeSlover, Daniel; Mitchell, David L.
2009-09-25
PARAMETERIZING SIZE DISTRIBUTIONS IN ICE CLOUDS David L. Mitchell and Daniel H. DeSlover ABSTRACT An outstanding problem that contributes considerable uncertainty to Global Climate Model (GCM) predictions of future climate is the characterization of ice particle sizes in cirrus clouds. Recent parameterizations of ice cloud effective diameter differ by a factor of three, which, for overcast conditions, often translate to changes in outgoing longwave radiation (OLR) of 55 W m-2 or more. Much of this uncertainty in cirrus particle sizes is related to the problem of ice particle shattering during in situ sampling of the ice particle size distribution (PSD). Ice particles often shatter into many smaller ice fragments upon collision with the rim of the probe inlet tube. These small ice artifacts are counted as real ice crystals, resulting in anomalously high concentrations of small ice crystals (D < 100 µm) and underestimates of the mean and effective size of the PSD. Half of the cirrus cloud optical depth calculated from these in situ measurements can be due to this shattering phenomenon. Another challenge is the determination of ice and liquid water amounts in mixed phase clouds. Mixed phase clouds in the Arctic contain mostly liquid water, and the presence of ice is important for determining their lifecycle. Colder high clouds between -20 and -36 oC may also be mixed phase but in this case their condensate is mostly ice with low levels of liquid water. Rather than affecting their lifecycle, the presence of liquid dramatically affects the cloud optical properties, which affects cloud-climate feedback processes in GCMs. This project has made advancements in solving both of these problems. Regarding the first problem, PSD in ice clouds are uncertain due to the inability to reliably measure the concentrations of the smallest crystals (D < 100 µm), known as the “small mode”. Rather than using in situ probe measurements aboard aircraft, we employed a treatment of ice
Parameterization of Incident and Infragravity Swash Variance
NASA Astrophysics Data System (ADS)
Stockdon, H. F.; Holman, R. A.; Sallenger, A. H.
2002-12-01
By clearly defining the forcing and morphologic controls of swash variance in both the incident and infragravity frequency bands, we are able to derive a more complete parameterization for extreme runup that may be applicable to a wide range of beach and wave conditions. It is expected that the dynamics of the incident and infragravity bands will have different dependencies on offshore wave conditions and local beach slopes. For example, previous studies have shown that swash variance in the incident band depends on foreshore beach slope while the infragravity variance depends more on a weighted mean slope across the surf zone. Because the physics of each band is parameterized differently, the amount that each frequency band contributes to the total swash variance will vary from site to site and, often, at a single site as the profile configuration changes over time. Using water level time series (measured at the shoreline) collected during nine dynamically different field experiments, we test the expected behavior of both incident and infragravity swash and the contribution each makes to total variance. At the dissipative sites (Iribarren number, \\xi0, <0.3) located in Oregon and the Netherlands, the incident band swash is saturated with respect to offshore wave height. Conversely, on the intermediate and reflective beaches, the amplitudes of both incident and infragravity swash variance grow with increasing offshore wave height. While infragravity band swash at all sites appears to increase linearly with offshore wave height, the magnitudes of the response are somewhat greater on reflective beaches than on dissipative beaches. This means that for the same offshore wave conditions the swash on a steeper foreshore will be larger than that on a more gently sloping foreshore. The potential control of the surf zone slope on infragravity band swash is examined at Duck, North Carolina, (0.3 < \\xi0 < 4.0), where significant differences in the relationship between swash
Parameterizing loop fusion for automated empirical tuning
Zhao, Y; Yi, Q; Kennedy, K; Quinlan, D; Vuduc, R
2005-12-15
Traditional compilers are limited in their ability to optimize applications for different architectures because statically modeling the effect of specific optimizations on different hardware implementations is difficult. Recent research has been addressing this issue through the use of empirical tuning, which uses trial executions to determine the optimization parameters that are most effective on a particular hardware platform. In this paper, we investigate empirical tuning of loop fusion, an important transformation for optimizing a significant class of real-world applications. In spite of its usefulness, fusion has attracted little attention from previous empirical tuning research, partially because it is much harder to configure than transformations like loop blocking and unrolling. This paper presents novel compiler techniques that extend conventional fusion algorithms to parameterize their output when optimizing a computation, thus allowing the compiler to formulate the entire configuration space for loop fusion using a sequence of integer parameters. The compiler can then employ an external empirical search engine to find the optimal operating point within the space of legal fusion configurations and generate the final optimized code using a simple code transformation system. We have implemented our approach within our compiler infrastructure and conducted preliminary experiments using a simple empirical search strategy. Our results convey new insights on the interaction of loop fusion with limited hardware resources, such as available registers, while confirming conventional wisdom about the effectiveness of loop fusion in improving application performance.
Reaction Rate Parameterization for Nuclear Astrophysics Research
NASA Astrophysics Data System (ADS)
Scott, J. P.; Lingerfelt, E. J.; Smith, M. S.; Hix, W. R.; Bardayan, D. W.; Sharp, J. E.; Kozub, R. L.; Meyer, R. A.
2004-11-01
Libraries of thermonuclear reaction rates are used in element synthesis models of a wide variety of astrophysical phenomena, such as exploding stars and the inner workings of our sun. These computationally demanding models are more efficient when libraries, which may contain over 60000 rates and vary by 20 orders of magnitude, have a uniform parameterization for all rates. We have developed an on-line tool, hosted at www.nucastrodata.org, to obtain REACLIB parameters (F.-K. Thielemann et al., Adv. Nucl. Astrophysics 525, 1 (1987)) that represent reaction rates as a function of temperature. This helps to rapidly incorporate the latest nuclear physics results in astrophysics models. The tool uses numerous techniques and algorithms in a modular fashion to improve the quality of the fits to the rates. Features, modules, and additional applications of this tool will be discussed. * Managed by UT-Battelle, LLC, for the U.S. D.O.E. under contract DE-AC05-00OR22725 + Supported by U.S. D.O.E. under Grant No. DE-FG02-96ER40955
Dynamically consistent parameterization of mesoscale eddies. Part I: Simple model
NASA Astrophysics Data System (ADS)
Berloff, Pavel
2015-03-01
This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with explicitly resolved vigorous eddy field and in the non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization locally approximates transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced cumulative eddy forcing exerted on the large-scale flow. We find that spatial pattern and amplitude of the footprint strongly depend on the underlying large-scale and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.
Brain Surface Conformal Parameterization Using Riemann Surface Structure
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung
2011-01-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing ismore » weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.« less
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method.The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection.These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing ismore » weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.« less
Carbody structural lightweighting based on implicit parameterized model
NASA Astrophysics Data System (ADS)
Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen
2014-05-01
Most of recent research on carbody lightweighting has focused on substitute material and new processing technologies rather than structures. However, new materials and processing techniques inevitably lead to higher costs. Also, material substitution and processing lightweighting have to be realized through body structural profiles and locations. In the huge conventional workload of lightweight optimization, model modifications involve heavy manual work, and it always leads to a large number of iteration calculations. As a new technique in carbody lightweighting, the implicit parameterization is used to optimize the carbody structure to improve the materials utilization rate in this paper. The implicit parameterized structural modeling enables the use of automatic modification and rapid multidisciplinary design optimization (MDO) in carbody structure, which is impossible in the traditional structure finite element method (FEM) without parameterization. The structural SFE parameterized model is built in accordance with the car structural FE model in concept development stage, and it is validated by some structural performance data. The validated SFE structural parameterized model can be used to generate rapidly and automatically FE model and evaluate different design variables group in the integrated MDO loop. The lightweighting result of body-in-white (BIW) after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes the integrated method of implicit parameterized model and MDO, which has the obvious practical advantage and industrial significance in the carbody structural lightweighting design.
Brain surface conformal parameterization using Riemann surface structure.
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M; Chan, Tony F; Toga, Arthur W; Thompson, Paul M; Yau, Shing-Tung
2007-06-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks.
Parameterizing Stellar Spectra Using Deep Neural Networks
NASA Astrophysics Data System (ADS)
Li, Xiang-Ru; Pan, Ru-Yang; Duan, Fu-Qing
2017-03-01
Large-scale sky surveys are observing massive amounts of stellar spectra. The large number of stellar spectra makes it necessary to automatically parameterize spectral data, which in turn helps in statistically exploring properties related to the atmospheric parameters. This work focuses on designing an automatic scheme to estimate effective temperature ({T}{eff}), surface gravity ({log}g) and metallicity [Fe/H] from stellar spectra. A scheme based on three deep neural networks (DNNs) is proposed. This scheme consists of the following three procedures: first, the configuration of a DNN is initialized using a series of autoencoder neural networks; second, the DNN is fine-tuned using a gradient descent scheme; third, three atmospheric parameters {T}{eff}, {log}g and [Fe/H] are estimated using the computed DNNs. The constructed DNN is a neural network with six layers (one input layer, one output layer and four hidden layers), for which the number of nodes in the six layers are 3821, 1000, 500, 100, 30 and 1, respectively. This proposed scheme was tested on both real spectra and theoretical spectra from Kurucz’s new opacity distribution function models. Test errors are measured with mean absolute errors (MAEs). The errors on real spectra from the Sloan Digital Sky Survey (SDSS) are 0.1477, 0.0048 and 0.1129 dex for {log}g, {log}{T}{eff} and [Fe/H] (64.85 K for {T}{eff}), respectively. Regarding theoretical spectra from Kurucz’s new opacity distribution function models, the MAE of the test errors are 0.0182, 0.0011 and 0.0112 dex for {log}g, {log}{T}{eff} and [Fe/H] (14.90 K for {T}{eff}), respectively.
Parameterization of cloud glaciation by atmospheric dust
NASA Astrophysics Data System (ADS)
Nickovic, Slobodan; Cvetkovic, Bojan; Madonna, Fabio; Pejanovic, Goran; Petkovic, Slavko
2016-04-01
The exponential growth of research interest on ice nucleation (IN) is motivated, inter alias, by needs to improve generally unsatisfactory representation of cold cloud formation in atmospheric models, and therefore to increase the accuracy of weather and climate predictions, including better forecasting of precipitation. Research shows that mineral dust significantly contributes to cloud ice nucleation. Samples of residual particles in cloud ice crystals collected by aircraft measurements performed in the upper tropopause of regions distant from desert sources indicate that dust particles dominate over other known ice nuclei such as soot and biological particles. In the nucleation process, dust chemical aging had minor effects. The observational evidence on IN processes has substantially improved over the last decade and clearly shows that there is a significant correlation between IN concentrations and the concentrations of coarser aerosol at a given temperature and moisture. Most recently, due to recognition of the dominant role of dust as ice nuclei, parameterizations for immersion and deposition icing specifically due to dust have been developed. Based on these achievements, we have developed a real-time forecasting coupled atmosphere-dust modelling system capable to operationally predict occurrence of cold clouds generated by dust. We have been thoroughly validated model simulations against available remote sensing observations. We have used the CNR-IMAA Potenza lidar and cloud radar observations to explore the model capability to represent vertical features of the cloud and aerosol vertical profiles. We also utilized the MSG-SEVIRI and MODIS satellite data to examine the accuracy of the simulated horizontal distribution of cold clouds. Based on the obtained encouraging verification scores, operational experimental prediction of ice clouds nucleated by dust has been introduced in the Serbian Hydrometeorological Service as a public available product.
Brydegaard, Mikkel
2015-01-01
In recent years, the field of remote sensing of birds and insects in the atmosphere (the aerial fauna) has advanced considerably, and modern electro-optic methods now allow the assessment of the abundance and fluxes of pests and beneficials on a landscape scale. These techniques have the potential to significantly increase our understanding of, and ability to quantify and manage, the ecological environment. This paper presents a concept whereby laser radar observations of atmospheric fauna can be parameterized and table values for absolute cross sections can be catalogued to allow for the study of focal species such as disease vectors and pests. Wing-beat oscillations are parameterized with a discrete set of harmonics and the spherical scatter function is parameterized by a reduced set of symmetrical spherical harmonics. A first order spherical model for insect scatter is presented and supported experimentally, showing angular dependence of wing beat harmonic content. The presented method promises to give insights into the flight heading directions of species in the atmosphere and has the potential to shed light onto the km-range spread of pests and disease vectors. PMID:26295706
NASA Astrophysics Data System (ADS)
Ramaswamy, V.; Freidenreich, S. M.
1992-07-01
Reference radiative transfer solutions in the near-infrared spectrum, which account for the spectral absorption characteristics of the water vapor molecule and the absorbing-scattering features of water drops, are employed to investigate and develop broadband treatments of solar water vapor absorption and cloud radiative effects. The conceptually simple and widely used Lacis-Hansen parameterization for solar water vapor absorption is modified so as to yield excellent agreement in the clear sky heating rates. The problem of single cloud decks over a nonreflecting surface is used to highlight the factors involved in the development of broadband overcast sky parameterizations. Three factors warrant considerable attention: (1) the manner in which the spectrally dependent drop single-scattering values are used to obtain the broadband cloud radiative properties, (2) the effect of the spectral attenuation by the vapor above the cloud on the determination of the broadband drop reflection and transmission, and (3) the broadband treatment of the spectrally dependent absorption due to drops and vapor inside the cloud. The solar flux convergence in clouds is very sensitive to all these considerations. Ignoring effect 2 tends to overestimate the cloud heating, particularly for low clouds, while a poor treatment of effect 3 leads to an underestimate. A new parameterization that accounts for the aforementioned considerations is accurate to within ˜30% over a wide range of overcast sky conditions, including solar zenith angles and cloud characteristics (altitudes, drop models, optical depths, and geometrical thicknesses), with the largest inaccuracies occurring for geometrically thick, extended cloud systems containing large amounts of vapor. Broadband methods that treat improperly one or more of the above considerations can yield substantially higher errors (>35%) for some overcast sky conditions while having better agreements over limited portions of the parameter range. For
Parameterization of cirrus optical depth and cloud fraction
Soden, B.
1995-09-01
This research illustrates the utility of combining satellite observations and operational analysis for the evaluation of parameterizations. A parameterization based on ice water path (IWP) captures the observed spatial patterns of tropical cirrus optical depth. The strong temperature dependence of cirrus ice water path in both the observations and the parameterization is probably responsible for the good correlation where it exists. Poorer agreement is found in Southern Hemisphere mid-latitudes where the temperature dependence breaks down. Uncertainties in effective radius limit quantitative validation of the parameterization (and its inclusion into GCMs). Also, it is found that monthly mean cloud cover can be predicted within an RMS error of 10% using ECMWF relative humidity corrected by TOVS Upper Troposphere Humidity. 1 ref., 2 figs.
A simple lightning parameterization for calculating global lightning distributions
NASA Technical Reports Server (NTRS)
Price, Colin; Rind, David
1992-01-01
A simple parameterization has been developed to simulate global lightning distributions. Convective cloud top height is used as the variable in the parameterization, with different formulations for continental and marine thunderstorms. The parameterization has been validated using two lightning data sets: one global and one regional. In both cases the simulated lightning distributions and frequencies are in very good agreement with the observed lightning data. This parameterization could be used for global studies of lightning climatology; the earth's electric circuit; in general circulation models for modeling global lightning activity, atmospheric NO(x) concentrations, and perhaps forest fire distributions for both the present and future climate; and, possibly, even as a short-term forecasting aid.
... This does not cause problems most of the time. Alternative Names Adenoidectomy; Removal of adenoid glands Images Adenoid removal - series References Wetmore RF. Tonsils and adenoids. In: Kliegman ...
Parameterization of Frontal Symmetric Instabilities. I: Theory for Resolved Fronts
NASA Astrophysics Data System (ADS)
Bachman, S. D.; Fox-Kemper, B.; Taylor, J. R.; Thomas, L. N.
2017-01-01
A parameterization is proposed for the effects of symmetric instability (SI) on a resolved front. The parameterization is dependent on external forcing by surface buoyancy loss and/or down-front winds, which reduce potential vorticity (PV) and lead to conditions favorable for SI. The parameterization consists of three parts. The first part is a specification for the vertical eddy viscosity, which is derived from a specified ageostrophic circulation resulting from the balance of the Coriolis force and a Reynolds momentum flux (a turbulent Ekman balance), with a previously proposed vertical structure function for the geostrophic shear production. The vertical structure of the eddy viscosity is constructed to extract the mean kinetic energy of the front at a rate consistent with resolved SI. The second part of the parameterization represents a near-surface convective layer whose depth is determined by a previously proposed polynomial equation. The third part of the parameterization represents diffusive tracer mixing through small-scale shear instabilities and SI. The diabatic, vertical component of this diffusivity is set to be proportional to the eddy viscosity using a turbulent Prandtl number, and the along-isopycnal tracer mixing is represented by an anisotropic diffusivity tensor. Preliminary testing of the parameterization using a set of idealized models shows that the extraction of total energy of the front is consistent with that from SI-resolving LES, while yielding mixed layer stratification, momentum, and potential vorticity profiles that compare favorably to those from an extant boundary layer parameterization (Large et al., 1994). The new parameterization is also shown to improve the vertical mixing of a passive tracer in the LES.
Mcfast, a Parameterized Fast Monte Carlo for Detector Studies
NASA Astrophysics Data System (ADS)
Boehnlein, Amber S.
McFast is a modularized and parameterized fast Monte Carlo program which is designed to generate physics analysis information for different detector configurations and subdetector designs. McFast is based on simple geometrical object definitions and includes hit generation, parameterized track generation, vertexing, a muon system, electromagnetic calorimetry, and trigger framework for physics studies. Auxiliary tools include a geometry editor, visualization, and an i/o system.
Numerical Testing of Parameterization Schemes for Solving Parameter Estimation Problems
2008-12-01
1 NUMERICAL TESTING OF PARAMETERIZATION SCHEMES FOR SOLVING PARAMETER ESTIMATION PROBLEMS L. Velázquez*, M. Argáez and C. Quintero The...performance computing (HPC). 1. INTRODUCTION In this paper we present the numerical performance of three parameterization approaches, SVD...wavelets, and the combination of wavelet-SVD for solving automated parameter estimation problems based on the SPSA described in previous reports of this
A framework for parameterization of heterogeneous ocean convection
NASA Astrophysics Data System (ADS)
Ilıcak, Mehmet; Adcroft, Alistair J.; Legg, Sonya
2014-10-01
We propose a new framework for parameterization of ocean convection processes. The new framework is termed “patchy convection” since our aim is to represent the heterogeneity of mixing processes that take place within the horizontal scope of a grid cell. We focus on applying this new scheme to represent the effect of pre-conditioning for deep convection by subgrid scale eddy variability. The new parameterization separates the grid-cell into two regions of different stratification, applies convective mixing separately to each region, and then recombines the density profile to produce the grid-cell mean density profile. The scheme depends on two parameters: the areal fraction of the vertically-mixed region within the horizontal grid cell, and the density difference between the mean and the unstratified profiles at the surface. We parameterize this density difference in terms of an unresolved eddy kinetic energy. We illustrate the patchy parameterization using a 1D idealized convection case before evaluating the scheme in two different global ocean-ice simulations with prescribed atmospheric forcing; (i) diagnosed eddy velocity field applied only in the Labrador Sea (ii) diagnosed global eddy velocity field. The global simulation results indicate that the patchy convection scheme improves the warm biases in the deep Atlantic Ocean and Southern Ocean. This proof-of-concept study is a first step in developing the patchy parameterization scheme, which will be extended in future to use a prognostic eddy field as well as to parameterize convection due to under-ice brine rejection.
Partwise cross-parameterization via nonregular convex hull domains.
Wu, Huai-Yu; Pan, Chunhong; Zha, Hongbin; Yang, Qing; Ma, Songde
2011-10-01
In this paper, we propose a novel partwise framework for cross-parameterization between 3D mesh models. Unlike most existing methods that use regular parameterization domains, our framework uses nonregular approximation domains to build the cross-parameterization. Once the nonregular approximation domains are constructed for 3D models, different (and complex) input shapes are transformed into similar (and simple) shapes, thus facilitating the cross-parameterization process. Specifically, a novel nonregular domain, the convex hull, is adopted to build shape correspondence. We first construct convex hulls for each part of the segmented model, and then adopt our convex-hull cross-parameterization method to generate compatible meshes. Our method exploits properties of the convex hull, e.g., good approximation ability and linear convex representation for interior vertices. After building an initial cross-parameterization via convex-hull domains, we use compatible remeshing algorithms to achieve an accurate approximation of the target geometry and to ensure a complete surface matching. Experimental results show that the compatible meshes constructed are well suited for shape blending and other geometric applications.
Faster Parameterized Algorithms for Minor Containment
NASA Astrophysics Data System (ADS)
Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.
The theory of Graph Minors by Robertson and Seymour is one of the deepest and significant theories in modern Combinatorics. This theory has also a strong impact on the recent development of Algorithms, and several areas, like Parameterized Complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors Theory is mainly of theoretical importance. However, it appears that many deep results from Robertson and Seymour's theory can be also used in the design of practical algorithms. Minor containment testing is one of algorithmically most important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. In order to implement minor containment testing on graphs of bounded branchwidth, Hicks [NETWORKS 04] described an algorithm, that in time O(3^{k^2}\\cdot (h+k-1)!\\cdot m) decides if a graph G with m edges and branchwidth k, contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [J'CTSB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time O(2^{(2k +1 )\\cdot log k} \\cdot h^{2k} \\cdot 2^{2h^2} \\cdot m). Our approach is based on a combinatorial object called rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing. Namely, it runs in time 2^{O(k)} \\cdot h^{2k} \\cdot 2^{O(h)} \\cdot n, with n = |V(G)|. Finally, we show that slight modifications of our algorithm permit to solve some related problems within the same time bounds, like induced minor or contraction minor containment.
NASA Astrophysics Data System (ADS)
Croft, B.; Martin, R.; Lohmann, U.; Pierce, J. R.
2013-12-01
Wet scavenging processes strongly control aerosol three-dimensional distributions. In this study, we quantify the uncertainty in global simulations of aerosol vertical profiles and lifetimes, which may be attributed to uncertainties in both convective and stratiform wet scavenging parameterizations. For convective clouds, we show that different assumptions about the wet removal of aerosols entrained above convective cloud bases can yield differences of about one order of magnitude in middle and upper tropospheric aerosol concentrations. For stratiform clouds, we demonstrate the impact of size-dependent aerosol wet scavenging as compared to the use of fixed prescribed scavenging coefficients. We quantify the difference in simulated aerosol concentrations, particularly at high latitudes, yielded by different assumptions about scavenging in mixed phase and ice clouds. We also examine the sensitivity of simulated global mean aerosol lifetimes to parameterizations for wet scavenging. Global simulations of the scavenging of aerosol-bound radionuclides following the Fukushima Dai-Ichi nuclear power plant accident are also presented. The simulated radionuclide lifetimes are compared to measurements. We present an interpretation of these constraints on global mean aerosol lifetimes. The sensitivity of simulated aerosol-bound radionuclide lifetimes to altitude and location of the radionuclide injection is also examined with consideration to the interplay of aerosol transport, mixing, and removal processes.
2013-09-30
Seasonal Prediction: An LES/ SCM Parameterization Test-Bed Joao Teixeira Jet Propulsion Laboratory California Institute of Technology, MS 169-237...a Single Column Model ( SCM ) version of the latest operational NAVGEM that can be used to simulate GEWEX Cloud Systems Study (GCSS) case-studies; ii...use the NAVGEM SCM and the LES model as a parameterization test-bed. APPROACH It is well accepted that sub-grid physical processes such as
Paluszkiewicz, T.; Hibler, L.F.; Romea, R.D.
1995-01-01
The current generation of ocean general circulation models (OGCMS) uses a convective adjustment scheme to remove static instabilities and to parameterize shallow and deep convection. In simulations used to examine climate-related scenarios, investigators found that in the Arctic regions, the OGCM simulations did not produce a realistic vertical density structure, did not create the correct quantity of deep water, and did not use a time-scale of adjustment that is in agreement with tracer ages or observations. A possible weakness of the models is that the convective adjustment scheme does not represent the process of deep convection adequately. Consequently, a penetrative plume mixing scheme has been developed to parameterize the process of deep open-ocean convection in OGCMS. This new deep convection parameterization was incorporated into the Semtner and Chervin (1988) OGCM. The modified model (with the new parameterization) was run in a simplified Nordic Seas test basin: under a cyclonic wind stress and cooling, stratification of the basin-scale gyre is eroded and deep mixing occurs in the center of the gyre. In contrast, in the OGCM experiment that uses the standard convective adjustment algorithm, mixing is delayed and is wide-spread over the gyre.
Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.
NASA Astrophysics Data System (ADS)
Gubler, S.; Gruber, S.; Purves, R. S.
2012-06-01
As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best performing parameterizations is elected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity in the night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as a fix value, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between between -2 and 5%, and 7 and 13% within the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse SDR. Calibration of LDR parameterizations to local conditions
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.
2009-01-01
Increases in computational resources have allowed operational forecast centers to pursue experimental, high resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. Aircraft, surface and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecast (WRF) Model. Widespread snowfall developed across the region on January 22, 2007, forced by the passing of a midlatitude cyclone, and was observed by the dual-polarimetric, C-band radar King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present. Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in literature, and demonstrate an improved capability in simulating radar reflectivity at W-band versus Mie scattering
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.
2009-01-01
CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in literature, and demonstrate an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
Optimal Aerosol Parameterization for Remote Sensing Retrievals
NASA Technical Reports Server (NTRS)
Newchurch, Michael J.
2004-01-01
discrepancy in the lower stratosphere is attributable to natural variation, and is also seen in comparisons between lidar and ozonesonde measurements. NO2 profiles obtained with our algorithm were compared to those obtained through the SAGE III operational algorithm and exhibited differences of 20 - 40%. Our retrieved profiles agree with the HALOE NO2 measurements significantly better than those of the operational retrieval. In other work (described below), we are extending our aerosol retrievals into the infrared regime and plan to perform retrievals from combined uv-visible-infrared spectra. This work will allow us to use the spectra to derive the size and composition of aerosols, and we plan to employ our algorithms in the analysis of PSC spectra. We are presently also developing a limb-scattering algorithm to retrieve aerosol data from limb measurements of solar scattered radiation.
NASA Astrophysics Data System (ADS)
Gladish, James C.; Duncan, Donald D.
2016-05-01
Liquid crystal variable retarders (LCVRs) are computer-controlled birefringent devices that contain nanometer-sized birefringent liquid crystals (LCs). These devices impart retardance effects through a global, uniform orientation change of the LCs, which is based on a user-defined drive voltage input. In other words, the LC structural organization dictates the device functionality. The LC structural organization also produces a spectral scatter component which exhibits an inverse power law dependence. We investigate LC structural organization by measuring the voltage-dependent LC spectral scattering signature with an integrating sphere and then relate this observable to a fractal-Born model based on the Born approximation and a Von Kármán spectrum. We obtain LCVR light scattering spectra at various drive voltages (i.e., different LC orientations) and then parameterize LCVR structural organization with voltage-dependent correlation lengths. The results can aid in determining performance characteristics of systems using LCVRs and can provide insight into interpreting structural organization measurements.
Parameterization of 3D brain structures for statistical shape analysis
NASA Astrophysics Data System (ADS)
Zhu, Litao; Jiang, Tianzi
2004-05-01
Statistical Shape Analysis (SSA) is a powerful tool for noninvasive studies of pathophysiology and diagnosis of brain diseases. It also provides a shape constraint for the segmentation of brain structures. There are two key problems in SSA: the representation of shapes and their alignments. The widely used parameterized representations are obtained by preserving angles or areas and the alignments of shapes are achieved by rotating parameter net. However, representations preserving angles or areas do not really guarantee the anatomical correspondence of brain structures. In this paper, we incorporate shape-based landmarks into parameterization of banana-like 3D brain structures to address this problem. Firstly, we get the triangulated surface of the object and extract two landmarks from the mesh, i.e. the ends of the banana-like object. Then the surface is parameterized by creating a continuous and bijective mapping from the surface to a spherical surface based on a heat conduction model. The correspondence of shapes is achieved by mapping the two landmarks to the north and south poles of the sphere and using an extracted origin orientation to select the dateline during parameterization. We apply our approach to the parameterization of lateral ventricle and a multi-resolution shape representation is obtained by using the Discrete Fourier Transform.
Compositional space parameterization for general multi-component multiphase systems
NASA Astrophysics Data System (ADS)
Voskov, Denis; Tchelepi, Hamdi
2007-11-01
We present a general parameterization of the thermodynamic behavior of multiphase, multi-component systems. The phase behavior in the compositional space is represented using a low dimensional tie-simplex parameterization. This parameterization improves the robustness of the phase behavior representation as well as the efficiency of various types of compositional computations. We demonstrate this Compositional Space Parameterization (CSP) framework for large-scale compositional reservoir simulation. In the standard compositional simulation approach, an Equation of State (EoS) is used to detect the phase state and calculate the phase compositions, if needed. These EoS computations can dominate the overall simulation cost. We compare our adaptive CSP approach with standard EoS based simulation for several challenging problems of practical interest. The comparisons indicate that the CSP strategy is more robust, and computational efficient. Another type of applications is an equilibrium flash calculation of systems with a large number of phases. The complexity and strong nonlinear behaviors associated with such problems pose serious difficulties for standard techniques. Here, we describe an effective tie-simplex parameterization for such systems at a fixed pressure and temperature. The preprocessed data can be used in conventional EoS based calculations as an initial guess to accelerate convergence.
Meshless thin-shell simulation based on global conformal parameterization.
Guo, Xiaohu; Li, Xin; Bao, Yunfan; Gu, Xianfeng; Qin, Hong
2006-01-01
This paper presents a new approach to the physically-based thin-shell simulation of point-sampled geometry via explicit, global conformal point-surface parameterization and meshless dynamics. The point-based global parameterization is founded upon the rigorous mathematics of Riemann surface theory and Hodge theory. The parameterization is globally conformal everywhere except for a minimum number of zero points. Within our parameterization framework, any well-sampled point surface is functionally equivalent to a manifold, enabling popular and powerful surface-based modeling and physically-based simulation tools to be readily adapted for point geometry processing and animation. In addition, we propose a meshless surface computational paradigm in which the partial differential equations (for dynamic physical simulation) can be applied and solved directly over point samples via Moving Least Squares (MLS) shape functions defined on the global parametric domain without explicit connectivity information. The global conformal parameterization provides a common domain to facilitate accurate meshless simulation and efficient discontinuity modeling for complex branching cracks. Through our experiments on thin-shell elastic deformation and fracture simulation, we demonstrate that our integrative method is very natural, and that it has great potential to further broaden the application scope of point-sampled geometry in graphics and relevant fields.
Cloud-radiation interactions and their parameterization in climate models
1994-11-01
This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models met on 18--20 October 1993 in Camp Springs, Maryland, USA. It was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the. themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth`s surface, and to refine the process models which are used to develop advanced cloud parameterizations.
Development of a hybrid cloud parameterization for general circulation models
Kao, C.Y.J.; Kristjansson, J.E.; Langley, D.L.
1995-04-01
We have developed a cloud package with state-of-the-art physical schemes that can parameterize low-level stratus or stratocumulus, penetrative cumulus, and high-level cirrus. Such parameterizations will improve cloud simulations in general circulation models (GCMs). The principal tool in this development comprises the physically based Arakawa-Schubert scheme for convective clouds and the Sundqvist scheme for layered, nonconvective clouds. The term {open_quotes}hybrid{close_quotes} addresses the fact that the generation of high-attitude layered clouds can be associated with preexisting convective clouds. Overall, the cloud parameterization package developed should better determine cloud heating and drying effects in the thermodynamic budget, realistic precipitation patterns, cloud coverage and liquid/ice water content for radiation purposes, and the cloud-induced transport and turbulent diffusion for atmospheric trace gases.
Cloud-radiation interactions and their parameterization in climate models
NASA Technical Reports Server (NTRS)
1994-01-01
This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models met on 18-20 October 1993 in Camp Springs, Maryland, USA. It was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth's surface, and to refine the process models which are used to develop advanced cloud parameterizations.
Parameterized reduced-order models using hyper-dual numbers.
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
Parameterization of and Brine Storage in MOR Hydrothermal Systems
NASA Astrophysics Data System (ADS)
Hoover, J.; Lowell, R. P.; Cummings, K. B.
2009-12-01
Single-pass parameterized models of high-temperature hydrothermal systems at oceanic spreading centers use observational constraints such as vent temperature, heat output, vent field area, and the area of heat extraction from the sub-axial magma chamber to deduce fundamental hydrothermal parameters such as total mass flux Q, bulk permeability k, and the thickness of the conductive boundary layer at the base of the system, δ. Of the more than 300 known systems, constraining data are available for less than 10%. Here we use the single pass model to estimate Q, k, and δ for all the seafloor hydrothermal systems for which the constraining data are available. Mean values of Q, k, and δ are 170 kg/s, 5.0x10-13 m2, and 20 m, respectively; which is similar to results obtained from the generic model. There is no apparent correlation with spreading rate. Using observed vent field lifetimes, the rate of magma replenishment can also be calculated. Essentially all high-temperature hydrothermal systems at oceanic spreading centers undergo phase separation, yielding a low chlorinity vapor and a high salinity brine. Some systems such as the Main Endeavour Field on the Juan de Fuca Ridge and the 9°50’N sites on the East Pacific Rise vent low chlorinity vapor for many years, while the high density brine remains sequestered beneath the seafloor. In an attempt to further understand the brine storage at the EPR, we used the mass flux Q determined above, time series of vent salinity and temperature, and the depth of the magma chamber to determine the rate of brine production at depth. We found thicknesses ranging from 0.32 meters to ~57 meters over a 1 km2 area from 1994-2002. These calculations suggest that brine maybe being stored within the conductive boundary layer without a need for lateral transport or removal by other means. We plan to use the numerical code FISHES to further test this idea.
Evaluating gas transfer velocity parameterizations using upper ocean radon distributions
NASA Astrophysics Data System (ADS)
Bender, Michael L.; Kinter, Saul; Cassar, Nicolas; Wanninkhof, Rik
2011-02-01
Sea-air fluxes of gases are commonly calculated from the product of the gas transfer velocity (k) and the departure of the seawater concentration from atmospheric equilibrium. Gas transfer velocities, generally parameterized in terms of wind speed, continue to have considerable uncertainties, partly because of limited field data. Here we evaluate commonly used gas transfer parameterizations using a historical data set of 222Rn measurements at 105 stations occupied on Eltanin cruises and the Geosecs program. We make this evaluation with wind speed estimates from meteorological reanalysis products (from National Centers for Environmental Prediction and European Centre for Medium-Range Weather Forecasting) that were not available when the 22Rn data were originally published. We calculate gas transfer velocities from the parameterizations by taking into account winds in the period prior to the date that 222Rn profiles were sampled. Invoking prior wind speed histories leads to much better agreement than simply calculating parameterized gas transfer velocities from wind speeds on the day of sample collection. For individual samples from the Atlantic Ocean, where reanalyzed winds agree best with observations, three similar recent parameterizations give k values for individual stations with an rms difference of ˜40% from values calculated using 222Rn data. Agreement of basin averages is much better. For the global data set, the average difference between k constrained by 222Rn and calculated from the various parameterizations ranges from -0.2 to +0.9 m/d (average, 2.9 m/d). Averaging over large domains, and working with gas data collected in recent years when reanalyzed winds are more accurate, will further decrease the uncertainties in sea-air fluxes.
Waves and Instabilities for Model Tropical Convective Parameterizations.
NASA Astrophysics Data System (ADS)
Majda, Andrew J.; Shefter, Michael G.
2001-04-01
Models of the tropical atmosphere with crude vertical resolution are important as intermediate models for understanding convectively coupled wave hierarchies and also as simplified models for studying various strategies for parameterizing convection and convectively coupled waves. Simplified models are utilized in a detailed analytical study of the waves and instabilities for model convective parameterizations. Three convection schemes are analyzed: a strict quasi-equilibrium (QE) scheme and two schemes that attempt to model the departures from quasi equilibrium by including the shorter timescale effects of penetrative convection, the Lagrangian parcel adjustment (LPA) scheme and a new instantaneous convective available potential energy (CAPE) adjustment (ICAPE) scheme. Unlike the QE parameterization scheme, both the LPA and ICAPE schemes have scale-selective finite bands of unstable wavelengths centered around typical cluster and supercluster scales with virtually identical growth rates and wave structure. However, the LPA scheme has, in addition, two nonphysical superfast parasitic waves that are artifacts of this parameterization while such waves are completely absent in the new ICAPE parameterization.Another topic studied here is the fashion in which an imposed barotropic mean wind triggers a transition to instability in the Tropics through suitable convectively coupled waves; this is the simplest analytical problem for studying the influence of midlatitudes on convectively coupled waves. For an easterly barotropic mean flow with the effect of rotation included, both supercluster-scale moist Kelvin waves and cluster-scale moist mixed Rossby-gravity waves participate in the transition to instability. The wave and stability properties of the ICAPE parameterization with rotation are studied through a novel procedure involving complete zonal resolution but low-order meridional truncation. Besides moist Kelvin, mixed Rossby-gravity, and equatorial Rossby waves, this
NASA Astrophysics Data System (ADS)
Hayek, Mohamed; Ackerer, Philippe; Sonnendrücker, Éric
2009-02-01
We propose a new refinement indicator (NRI) for adaptive parameterization to determine the diffusion coefficient in an elliptic equation in two-dimensional space. The diffusion coefficient is assumed to be a piecewise constant space function. The unknowns are both the parameter values and the zonation. Refinement indicators are used to localize parameter discontinuities in order to construct iteratively the zonation (parameterization). The refinement indicator is obtained usually by using the first-order effect on the objective function of removing degrees of freedom for a current set of parameters. In this work, in order to reduce the computation costs, we propose a new refinement indicator based on the second-order effect on the objective function. This new refinement indicator depends on the objective function, and its first and second derivatives with respect to the parameter constraints. Numerical experiments show the high efficiency of the new refinement indicator compared to the standard one.
On-Line Construction of Parameterized Suffix Trees
NASA Astrophysics Data System (ADS)
Lee, Taehyung; Na, Joong Chae; Park, Kunsoo
We consider on-line construction of a suffix tree for a parameterized string, where we always have the suffix tree of the input string read so far. This situation often arises from source code management systems where, for example, a source code repository is gradually increasing in its size as users commit new codes into the repository day by day. We present an on-line algorithm which constructs a parameterized suffix tree in randomized O(n) time, where n is the length of the input string. Our algorithm is the first randomized linear time algorithm for the on-line construction problem.
Berg, Larry K.; Shrivastava, ManishKumar B.; Easter, Richard C.; Fast, Jerome D.; Chapman, Elaine G.; Liu, Ying
2015-01-01
A new treatment of cloud-aerosol interactions within parameterized shallow and deep convection has been implemented in WRF-Chem that can be used to better understand the aerosol lifecycle over regional to synoptic scales. The modifications to the model to represent cloud-aerosol interactions include treatment of the cloud dropletnumber mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. Thesechanges have been implemented in both the WRF-Chem chemistry packages as well as the Kain-Fritsch cumulus parameterization that has been modified to better represent shallow convective clouds. Preliminary testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS) as well as a high-resolution simulation that does not include parameterized convection. The simulation results are used to investigate the impact of cloud-aerosol interactions on the regional scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +35% for sulfate in non-precipitating conditions due to the sulfate production in the parameterized clouds. The modifications to WRF-Chem version 3.2.1 are found to account for changes in the cloud drop number concentration (CDNC) and changes in the chemical composition of cloud-drop residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to WRF-Chem version 3.5, and it is anticipated that they
Value of Bulk Heat Flux Parameterizations for Ocean SST Prediction
2008-03-01
Value of bulk heat flux parameterizations for ocean SST prediction Alan J. Wallcraft a,⁎, A. Birol Kara a, Harley E. Hurlburt a, Eric P. Chassignet b...G., Doney, S.C., McWilliams , J.C., 1997. Sensitivity to surface forcing and boundary layer mixing in a global ocean model: annual-mean climatology. J
Parameterization of HONO sources in Mega-Cities
NASA Astrophysics Data System (ADS)
Li, G.; Zhang, R.; Tie, X.; Molina, L. T.
2013-05-01
Nitrous acid (HONO) plays an important role in the photochemistry of the troposphere because the photolysis of HONO is a primary source of the hydroxyl radical (OH) in the early morning. However, the formation or sources of HONO are still poorly understood in the troposphere and thence the representation of the HONO sources in chemical transport models (CTMs) is lack of comprehensive consideration. In the present study, the observed HONO, NOx, and aerosols at an urban supersite T0 during the MCMA-2006 field campaign in Mexico City are used to interpret the HONO formation in association with the suggested HONO sources from literature. The HONO source parameterizations are proposed and incorporated into the WRF-CHEM model. Homogeneous sources of HONO include the reaction of NO with OH and excited NO2 with H2O. Four HONO heterogeneous sources are considered: NO2 reaction with semivolatile organics, NO2 reaction with freshly emitted soot, NO2 reaction on aerosol and ground surfaces. Four cases are used in the present study to evaluate the proposed HONO parameterizations during four field campaigns in which HONO measurements are available, including MCMA-2003 and MCMA-2006 (Mexico City Metropolitan Area, Mexico), MIRAGE-2009 (Shanghai, China), and SHARP (Houston, USA). The WRF-CHEM model with the proposed HONO parameterizations performs moderately well in reproducing the observed diurnal variation of HONO concentrations, showing that the HONO parameterizations in the study are reasonable and potentially useful in improving the HONO simulation in CTMs.
Parameterization of HONO sources in Mega-Cities
NASA Astrophysics Data System (ADS)
Li, Guohui; Zhang, Renyi; Tie, Xuxie; Molina, Luisa
2013-04-01
Nitrous acid (HONO) plays an important role in the photochemistry of the troposphere because the photolysis of HONO is a primary source of the hydroxyl radical (OH) in the early morning. However, the formation or sources of HONO are still poorly understood in the troposphere; hence the representation of the HONO sources in chemical transport models (CTMs) has lack comprehensive consideration. In the present study, the observed HONO, NOx, and aerosols at an urban supersite T0 during the MCMA-2006 field campaign in Mexico City are used to interpret the HONO formation in association with the suggested HONO sources from literature. The HONO source parameterizations are proposed and incorporated into the WRF-CHEM model. Homogeneous sources of HONO include the reaction of NO with OH and excited NO2 with H2O. Four HONO heterogeneous sources are considered: NO2 reaction with semivolatile organics, NO2 reaction with freshly emitted soot, NO2 reaction on aerosol and ground surfaces. Four cases are used in the present study to evaluate the proposed HONO parameterizations during four field campaigns in which HONO measurements are available, including MCMA-2003 and MCMA-2006 (Mexico City Metropolitan Area, Mexico), MIRAGE-2009 (Shanghai, China), and SHARP (Houston, USA). The WRF-CHEM model with the proposed HONO parameterizations performs moderately well in reproducing the observed diurnal variation of HONO concentrations, showing that the HONO parameterizations in the study are reasonable and potentially useful in improving the HONO simulation in CTMs.
Parameterization of Movement Execution in Children with Developmental Coordination Disorder
ERIC Educational Resources Information Center
Van Waelvelde, Hilde; De Weerdt, Willy; De Cock, Paul; Janssens, Luc; Feys, Hilde; Engelsman, Bouwien C. M. Smits
2006-01-01
The Rhythmic Movement Test (RMT) evaluates temporal and amplitude parameterization and fluency of movement execution in a series of rhythmic arm movements under different sensory conditions. The RMT was used in combination with a jumping and a drawing task, to evaluate 36 children with Developmental Coordination Disorder (DCD) and a matched…
The Project for Intercomparison of Land-surface Parameterization Schemes
NASA Technical Reports Server (NTRS)
Henderson-Sellers, A.; Yang, Z.-L.; Dickinson, R. E.
1993-01-01
The Project for Intercomparison of Land-surface Parameterization Schemes (PILPS) is described and the first stage science plan outlined. PILPS is a project designed to improve the parameterization of the continental surface, especially the hydrological, energy, momentum, and carbon exchanges with the atmosphere. The PILPS Science Plan incorporates enhanced documentation, comparison, and validation of continental surface parameterization schemes by community participation. Potential participants include code developers, code users, and those who can provide datasets for validation and who have expertise of value in this exercise. PILPS is an important activity because existing intercomparisons, although piecemeal, demonstrate that there are significant differences in the formulation of individual processes in the available land surface schemes. These differences are comparable to other recognized differences among current global climate models such as cloud and convection parameterizations. It is also clear that too few sensitivity studies have been undertaken with the result that there is not yet enough information to indicate which simplifications or omissions are important for the near-surface continental climate, hydrology, and biogeochemistry. PILPS emphasizes sensitivity studies with and intercomparisons of existing land surface codes and the development of areally extensive datasets for their testing and validation.
Validation of an Urban Parameterization in a Mesoscale Model
Leach, M.J.; Chin, H.
2001-07-19
The Atmospheric Science Division at Lawrence Livermore National Laboratory uses the Naval Research Laboratory's Couple Ocean-Atmosphere Mesoscale Prediction System (COAMPS) for both operations and research. COAMPS is a non-hydrostatic model, designed as a multi-scale simulation system ranging from synoptic down to meso, storm and local terrain scales. As model resolution increases, the forcing due to small-scale complex terrain features including urban structures and surfaces, intensifies. An urban parameterization has been added to the Naval Research Laboratory's mesoscale model, COAMPS. The parameterization attempts to incorporate the effects of buildings and urban surfaces without explicitly resolving them, and includes modeling the mean flow to turbulence energy exchange, radiative transfer, the surface energy budget, and the addition of anthropogenic heat. The Chemical and Biological National Security Program's (CBNP) URBAN field experiment was designed to collect data to validate numerical models over a range of length and time scales. The experiment was conducted in Salt Lake City in October 2000. The scales ranged from circulation around single buildings to flow in the entire Salt Lake basin. Data from the field experiment includes tracer data as well as observations of mean and turbulence atmospheric parameters. Wind and turbulence predictions from COAMPS are used to drive a Lagrangian particle model, the Livermore Operational Dispersion Integrator (LODI). Simulations with COAMPS and LODI are used to test the sensitivity to the urban parameterization. Data from the field experiment, including the tracer data and the atmospheric parameters, are also used to validate the urban parameterization.
A new framework for parameterization of heterogeneous ocean convection
NASA Astrophysics Data System (ADS)
Ilicak, M.; Adcroft, A.; Legg, S.
2014-12-01
We propose a new framework for parameterization of ocean convection processes. The new framework is termed ''patchy convection'' since our aim is to represent the heterogeneity of mixing processes that take place within the horizontal scope of a grid cell. We focus on applying this new scheme to represent the effect of pre-conditioning for deep convection by subgrid scale eddy variability. The new scheme relies on mesoscale eddy kinetic energy field. We illustrate the patchy parameterization using a 1D idealized convection case. Next, the scheme is compared against observations. We employed the 1D case using the summer time ARGO floats from the Labrador Sea as initial conditions. We used ECMWF reanalysis atmospheric forcing and compared our results to the winter time ARGO floats. Finally we evaluate the scheme in two different global ocean-ice simulations with prescribed atmospheric forcing (CORE-I); (i) diagnosed eddy velocity field applied only in the Labrador Sea (ii) diagnosed global eddy velocity field. The global simulation results indicate that the patchy convection scheme improves the warm biases in the deep Atlantic Ocean and Southern Ocean. This proof-of-concept study is a first step in developing the patchy parameterization scheme, which will be extended in future to use a prognostic eddy field as well as to parameterize convection due to under-ice brine rejection. This study is funded through the CPT 2: Ocean Mixing Processes Associated with High Spatial Heterogeneity in Sea Ice and the Implications for Climate Models.
A new for parameterization of heterogeneous ocean convection
NASA Astrophysics Data System (ADS)
Ilicak, Mehmet; Adcroft, Alistair; Legg, Sonya
2015-04-01
We propose a new framework for parameterization of ocean convection processes. The new framework is termed patchy convection. Our aim is to represent the heterogeneity of mixing processes that take place within the horizontal scope of a grid cell. This new scheme is to represent the effect of preconditioning for deep convection by sub-grid scale eddy variability. The new parameterization separates the grid-cell into two regions of different stratification, applies convective mixing separately to each region, and then recombines the density profile to produce the grid-cell mean density profile. The scheme depends on two parameters: the areal fraction of the vertically-mixed region within the horizontal grid cell, and the density difference between the mean and the unstratified profiles at the surface. We parameterize this density difference in terms of an unresolved eddy kinetic energy. We illustrate the patchy parameterization using a 1D idealized convection case before evaluating the scheme in two different global ocean-ice simulations with prescribed atmospheric forcing; i) diagnosed eddy velocity field applied only in the Labrador Sea ii) diagnosed global eddy velocity field. The global simulation results indicate that the patchy convection scheme improves the warm biases in the deep Atlantic Ocean and Southern Ocean.
Overview of an Urban Canopy Parameterization in COAMPS
Leach, M J; Chin, H S
2006-02-09
The Coupled Atmosphere/Ocean Mesoscale Prediction System (COAMPS) model (Hodur, 1997) was developed at the Naval Research Laboratory. COAMPS has been used at resolutions as small as 2 km to study the role of complex topography in generating mesoscale circulation (Doyle, 1997). The model has been adapted for use in the Atmospheric Science Division at LLNL for both research and operational use. The model is a fully, non-hydrostatic model with several options for turbulence parameterization, cloud processes and radiative transfer. We have recently modified the COAMPS code to include building and other urban surfaces effects in the mesoscale model by incorporating an urban canopy parameterization (UCP) (Chin et al., 2005). This UCP is a modification of the original parameterization of (Brown and Williams, 1998), based on Yamada's (1982) forest canopy parameterization and includes modification of the TKE and mean momentum equations, modification of radiative transfer, and an anthropogenic heat source. COAMPS is parallelized for both shared memory (OpenMP) and distributed memory (MPI) architecture.
CLOUD PARAMETERIZATIONS, CLOUD PHYSICS, AND THEIR CONNECTIONS: AN OVERVIEW.
LIU,Y.; DAUM,P.H.; CHAI,S.K.; LIU,F.
2002-02-12
This paper consists of three parts. The first part is concerned with the parameterization of cloud microphysics in climate models. We demonstrate the crucial importance of spectral dispersion of the cloud droplet size distribution in determining radiative properties of clouds (e.g., effective radius), and underline the necessity of specifying spectral dispersion in the parameterization of cloud microphysics. It is argued that the inclusion of spectral dispersion makes the issue of cloud parameterization essentially equivalent to that of the droplet size distribution function, bringing cloud parameterization to the forefront of cloud physics. The second part is concerned with theoretical investigations into the spectral shape of droplet size distributions in cloud physics. After briefly reviewing the mainstream theories (including entrainment and mixing theories, and stochastic theories), we discuss their deficiencies and the need for a paradigm shift from reductionist approaches to systems approaches. A systems theory that has recently been formulated by utilizing ideas from statistical physics and information theory is discussed, along with the major results derived from it. It is shown that the systems formalism not only easily explains many puzzles that have been frustrating the mainstream theories, but also reveals such new phenomena as scale-dependence of cloud droplet size distributions. The third part is concerned with the potential applications of the systems theory to the specification of spectral dispersion in terms of predictable variables and scale-dependence under different fluctuating environments.
Formulation structure of the mass-flux convection parameterization
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2014-09-01
Structure of the mass-flux convection parameterization formulation is re-examined. Many of the equations associated with this formulation are derived in systematic manner with various intermediate steps explicitly presented. The nonhydrostatic anelastic model (NAM) is taken as a starting point of all the derivations. Segmentally constant approximation (SCA) is a basic geometrical constraint imposed on a full system (e.g., NAM) as a first step for deriving the mass-flux formulation. The standard mass-flux convection parameterization, as originally formulated by Ooyama, Fraedrich, Arakawa and Schubert, is re-derived under the two additional hypotheses concerning entrainment-detrainment and environment, and an asymptotic limit of vanishing areas occupied by convection. A model derived at each step of the deduction constitutes a stand-alone subgrid-scale representation by itself, leading to a hierarchy of subgrid-scale schemes. A backward tracing of this deduction process provides paths for generalizing mass-flux convection parameterization. Issues of the high-resolution limit for parameterization are also understood as those of relaxing various traditional constraints. The generalization presented herein can include various other subgrid-scale processes under a mass-flux framework.
IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION IN MM5
The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (~1-km horizontal grid spacing). The UCP accounts for drag ...
Authalic parameterization of general surfaces using Lie advection.
Zou, Guangyu; Hu, Jiaxi; Gu, Xianfeng; Hua, Jing
2011-12-01
Parameterization of complex surfaces constitutes a major means of visualizing highly convoluted geometric structures as well as other properties associated with the surface. It also enables users with the ability to navigate, orient, and focus on regions of interest within a global view and overcome the occlusions to inner concavities. In this paper, we propose a novel area-preserving surface parameterization method which is rigorous in theory, moderate in computation, yet easily extendable to surfaces of non-disc and closed-boundary topologies. Starting from the distortion induced by an initial parameterization, an area restoring diffeomorphic flow is constructed as a Lie advection of differential 2-forms along the manifold, which yields equality of the area elements between the domain and the original surface at its final state. Existence and uniqueness of result are assured through an analytical derivation. Based upon a triangulated surface representation, we also present an efficient algorithm in line with discrete differential modeling. As an exemplar application, the utilization of this method for the effective visualization of brain cortical imaging modalities is presented. Compared with conformal methods, our method can reveal more subtle surface patterns in a quantitative manner. It, therefore, provides a competitive alternative to the existing parameterization techniques for better surface-based analysis in various scenarios.
Compositional Space Parameterization Approach for Reservoir Flow Simulation
NASA Astrophysics Data System (ADS)
Voskov, D.
2011-12-01
Phase equilibrium calculations are the most challenging part of a compositional flow simulation. For every gridblock and at every time step, the number of phases and their compositions must be computed for the given overall composition, temperature, and pressure conditions. The conventional approach used in petroleum industry is based on performing a phase-stability test, and solving the fugacity constraints together with the coupled nonlinear flow equations when the gridblock has more than one phase. The multi-phase compositional space can be parameterized in terms of tie-simplexes. For example, a tie-triangle can be used such that its interior encloses the three-phase region, and the edges represent the boundary with specific two-phase regions. The tie-simplex parameterization can be performed for pressure, temperature, and overall composition. The challenge is that all of these parameters can change considerably during the course of a simulation. It is possible to prove that the tie-simplexes change continuously with respect to pressure, temperature, and overall composition. The continuity of the tie-simplex parameterization allows for interpolation using discrete representations of the tie-simplex space. For variations of composition, a projection to the nearest tie-simplex is used, and if the tie-simplex is within a predefined tolerance, it can be used directly to identify the phase-state of this composition. In general, our parameterization approach can be seen as the generalization of negative flash idea for systems with two or more phases. Theory of dispersion-free compositional displacements, as well as computational experience of general-purpose compositional flow simulation indicates that the displacement path in compositional space is determined by a limited number of tie-simplexes. Therefore, only few tie-simplex tables are required to parameterize the entire displacement. The small number of tie-simplexes needed in a course of a simulation motivates
... removal; Basal cell cancer - removal; Actinic keratosis - removal; Wart - removal; Squamous cell - removal; Mole - removal; Nevus - removal; ... can remove: Benign or pre-malignant skin lesions Warts Moles Sunspots Hair Small blood vessels in the ...
NASA Astrophysics Data System (ADS)
Liu, J.; Chen, Z.; Horowitz, L. W.; Carlton, A. M. G.; Fan, S.; Cheng, Y.; Ervens, B.; Fu, T. M.; He, C.; Tao, S.
2014-12-01
Secondary organic aerosols (SOA) have a profound influence on air quality and climate, but large uncertainties exist in modeling SOA on the global scale. In this study, five SOA parameterization schemes, including a two-product model (TPM), volatility basis-set (VBS) and three cloud SOA schemes (Ervens et al. (2008, 2014), Fu et al. (2008) , and He et al. (2013)), are implemented into the global chemical transport model (MOZART-4). For each scheme, model simulations are conducted with identical boundary and initial conditions. The VBS scheme produces the highest global annual SOA production (close to 35 Tg·y-1), followed by three cloud schemes (26-30 Tg·y-1) and TPM (23 Tg·y-1). Though sharing a similar partitioning theory to the TPM scheme, the VBS approach simulates the chemical aging of multiple generations of VOCs oxidation products, resulting in a much larger SOA source, particularly from aromatic species, over Europe, the Middle East and Eastern America. The formation of SOA in VBS, which represents the net partitioning of semi-volatile organic compounds from vapor to condensed phase, is highly sensitivity to the aging and wet removal processes of vapor-phase organic compounds. The production of SOA from cloud processes (SOAcld) is constrained by the coincidence of liquid cloud water and water-soluble organic compounds. Therefore, all cloud schemes resolve a fairly similar spatial pattern over the tropical and the mid-latitude continents. The spatiotemporal diversity among SOA parameterizations is largely driven by differences in precursor inputs. Therefore, a deeper understanding of the evolution, wet removal, and phase partitioning of semi-volatile organic compounds, particularly above remote land and oceanic areas, is critical to better constrain the global-scale distribution and related climate forcing of secondary organic aerosols.
Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.
2009-01-01
In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. PMID:22399956
Parameterizing Coefficients of a POD-Based Dynamical System
NASA Technical Reports Server (NTRS)
Kalb, Virginia L.
2010-01-01
A method of parameterizing the coefficients of a dynamical system based of a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter
A parameterization of effective soil temperature for microwave emission
NASA Technical Reports Server (NTRS)
Choudhury, B. J.; Schmugge, T. J.; Mo, T. (Principal Investigator)
1981-01-01
A parameterization of effective soil temperature is discussed, which when multiplied by the emissivity gives the brightness temperature in terms of surface (T sub o) and deep (T sub infinity) soil temperatures as T = T sub infinity + C (T sub o - T sub infinity). A coherent radiative transfer model and a large data base of observed soil moisture and temperature profiles are used to calculate the best-fit value of the parameter C. For 2.8, 6.0, 11.0, 21.0 and 49.0 cm wavelengths. The C values are respectively 0.802 + or - 0.006, 0.667 + or - 0.008, 0.480 + or - 0.010, 0.246 + or - 0.009, and 0,084 + or - 0.005. The parameterized equation gives results which are generally within one or two percent of the exact values.
Development and Evaluation of a Stochastic Cloud-radiation Parameterization
NASA Astrophysics Data System (ADS)
Veron, D. E.; Secora, J.; Foster, M.
2004-12-01
Previous studies have shown that a stochastic cloud-radiation model accurately represents the domain-averaged shortwave fluxes when compared to observations. Using continuously sampled cloud property observations from the three Atmospheric Radiation Measurement (ARM) Program's Clouds and Radiation Testbed (CART) sites, we run a multiple-layer stochastic model and compare the results to that of the single-layer version of the model used in previous studies. In addition, we compare both to plane parallel model output and independent observations. We will use these results to develop a shortwave cloud-radiation parameterization that will incorporate the influence of the stochastic approach on the calculated radiative fluxes. Initial results using this resulting parameterization in a single-column model will be shown.
On Parameterization of the Global Electric Circuit Generators
NASA Astrophysics Data System (ADS)
Slyunyaev, N. N.; Zhidkov, A. A.
2016-08-01
We consider the problem of generator parameterization in the global electric circuit (GEC) models. The relationship between the charge density and external current density distributions inside a thundercloud is studied using a one-dimensional description and a three-dimensional GEC model. It is shown that drastic conductivity variations in the vicinity of the cloud boundaries have a significant impact on the structure of the charge distribution inside the cloud. Certain restrictions on the charge density distribution in a realistic thunderstorm are found. The possibility to allow for conductivity inhomogeneities in the thunderstorm regions by introducing an effective external current density is demonstrated. Replacement of realistic thunderstorms with equivalent current dipoles in the GEC models is substantiated, an equation for the equivalent current is obtained, and the applicability range of this equation is analyzed. Relationships between the main GEC characteristics under variable parameterization of GEC generators are discussed.
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
IR OPTICS MEASUREMENT WITH LINEAR COUPLING'S ACTION-ANGLE PARAMETERIZATION.
LUO, Y.; BAI, M.; PILAT, R.; SATOGATA, T.; TRBOJEVIC, D.
2005-05-16
A parameterization of linear coupling in action-angle coordinates is convenient for analytical calculations and interpretation of turn-by-turn (TBT) beam position monitor (BPM) data. We demonstrate how to use this parameterization to extract the twiss and coupling parameters in interaction regions (IRs), using BPMs on each side of the long IR drift region. The example of TBT BPM analysis was acquired at the Relativistic Heavy Ion Collider (RHIC), using an AC dipole to excite a single eigenmode. Besides the full treatment, a fast estimate of beta*, the beta function at the interaction point (IP), is provided, along with the phase advance between these BPMs. We also calculate and measure the waist of the beta function and the local optics.
Parameterized Complexity of k-Anonymity: Hardness and Tractability
NASA Astrophysics Data System (ADS)
Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri
The problem of publishing personal data without giving up privacy is becoming increasingly important. A precise formalization that has been recently proposed is the k-anonymity, where the rows of a table are partitioned in clusters of size at least k and all rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is hard even when the stored values are over a binary alphabet or the table consists of a bounded number of columns. In this paper we study how the complexity of the problem is influenced by different parameters. First we show that the problem is W[1]-hard when parameterized by the value of the solution (and k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the number of columns and the number of different values in any column.
Invariant box[endash]parameterization of neutrino oscillations
Weiler, T.J. ); Wagner, D. )
1998-10-01
The model-independent [open quotes]box[close quotes] parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing[endash]matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP-violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n[ge]3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing matrix elements. [copyright] [ital 1998 American Institute of Physics.
Invariant box{endash}parameterization of neutrino oscillations
Weiler, T.J.; Wagner, D.
1998-10-01
The model-independent {open_quotes}box{close_quotes} parameterization of neutrino oscillations is examined. The invariant boxes are the classical amplitudes of the individual oscillating terms. Being observables, the boxes are independent of the choice of parameterization of the mixing matrix. Emphasis is placed on the relations among the box parameters due to mixing{endash}matrix unitarity, and on the reduction of the number of boxes to the minimum basis set. Using the box algebra, we show that CP-violation may be inferred from measurements of neutrino flavor mixing even when the oscillatory factors have averaged. General analyses of neutrino oscillations among n{ge}3 flavors can readily determine the boxes, which can then be manipulated to yield magnitudes of mixing matrix elements. {copyright} {ital 1998 American Institute of Physics.}
Parameterized neural networks for high-energy physics
NASA Astrophysics Data System (ADS)
Baldi, Pierre; Cranmer, Kyle; Faucett, Taylor; Sadowski, Peter; Whiteson, Daniel
2016-05-01
We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results.
An intracloud lightning parameterization scheme for a storm electrification model
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.; Wu, Gang; Farley, Richard D.
1992-01-01
The parameterization of an intracloud lightning discharge has been implemented in the present storm electrification model. The initiation, propagation direction, and termination of the discharge are computed using the magnitude and direction of the electric field vector as the determining criteria. The charge redistribution due to the lightning is approximated assuming the channel to be an isolated conductor with zero net charge over its entire length. Various simulations involving differing amounts of charge transferred and distribution of charges have been done. Values of charge transfer, dipole moment change, and electrical energy dissipation computed in the model are consistent with observations. The effects of the lightning-produced ions on the hydrometeor charges and electric field components depend strongly on the amount of charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge shows favorable agreement. Limitations of the parameterization scheme are discussed.
NASA Astrophysics Data System (ADS)
Mitchell, D. L.
2006-12-01
Sometimes deep physical insights can be gained through the comparison of two theories of light scattering. Comparing van de Hulst's anomalous diffraction approximation (ADA) with Mie theory yielded insights on the behavior of the photon tunneling process that resulted in the modified anomalous diffraction approximation (MADA). (Tunneling is the process by which radiation just beyond a particle's physical cross-section may undergo large angle diffraction or absorption, contributing up to 40% of the absorption when wavelength and particle size are comparable.) Although this provided a means of parameterizing the tunneling process in terms of the real index of refraction and size parameter, it did not predict the efficiency of the tunneling process, where an efficiency of 100% is predicted for spheres by Mie theory. This tunneling efficiency, Tf, depends on particle shape and ranges from 0 to 1.0, with 1.0 corresponding to spheres. Similarly, by comparing absorption efficiencies predicted by the Finite Difference Time Domain Method (FDTD) with efficiencies predicted by MADA, Tf was determined for nine different ice particle shapes, including aggregates. This comparison confirmed that Tf is a strong function of ice crystal shape, including the aspect ratio when applicable. Tf was lowest (< 0.36) for aggregates and plates, and largest (> 0.9) for quasi- spherical shapes. A parameterization of Tf was developed in terms of (1) ice particle shape and (2) mean particle size regarding the large mode (D > 70 mm) of the ice particle size distribution. For the small mode, Tf is only a function of ice particle shape. When this Tf parameterization is used in MADA, absorption and extinction efficiency differences between MADA and FDTD are within 14% over the terrestrial wavelength range 3-100 mm for all size distributions and most crystal shapes likely to be found in cirrus clouds. Using hyperspectral radiances, it is demonstrated that Tf can be retrieved from ice clouds. Since Tf
Parameterization of Outgoing Infrared Radiation Derived from Detailed Radiative Calculations.
NASA Astrophysics Data System (ADS)
Thompson, Starley L.; Warren, Stephen G.
1982-12-01
State-of-the-art radiative transfer models can calculate outgoing infrared (IR) irradiance at the top of the atmosphere (F) to an accuracy suitable for climate modeling given the proper atmospheric profiles of temperature and absorbing gases and aerosols. However, such sophisticated methods are computationally time consuming and ill-suited for simple vertically-averaged models or diagnostic studies. The alternative of empirical expressions for F is plagued by observational uncertainty which forces the functional forms to be very simple. We develop, a parameterization of climatological F by curve-fitting the results of a detailed radiative transfer model. The parameterization comprises clear-sky and cloudy-sky terms. Only two parameters are used to predict clear-sky outgoing IR irradiance: surface air temperature (Ts) and 0-12 km height-mean relative humidity (RH). With this choice of parameters (in particular, the use of RH instead of precipitable water) the outgoing IR irradiance can be estimated without knowledge of the detailed temperature profile or average lapse rate. Comparisons between the clear-sky parameterization and detailed model show maximum errors of 10 W m2 with average errors of only a few watts per square meter. Single-layer `black' clouds are found to reduce the outgoing IR irradiance (relative to clear-sky values) as a function of Ts Tc, Tc and RH, where Tc is the cloud-top temperature. Errors in the parameterization of the cloudy-sky term are comparable to those of the clear-sky term.
A framework for understanding drag parameterizations for coral reefs
NASA Astrophysics Data System (ADS)
Rosman, Johanna H.; Hench, James L.
2011-08-01
In a hydrodynamic sense, a coral reef is a complex array of obstacles that exerts a net drag force on water moving over the reef. This drag is typically parameterized in ocean circulation models using drag coefficients (CD) or roughness length scales (z0); however, published CD for coral reefs span two orders of magnitude, posing a challenge to predictive modeling. Here we examine the reasons for the large range in reported CD and assess the limitations of using CD and z0 to parameterize drag on reefs. Using a formal framework based on the 3-D spatially averaged momentum equations, we show that CD and z0 are functions of canopy geometry and velocity profile shape. Using an idealized two-layer model, we illustrate that CD can vary by more than an order of magnitude for the same geometry and flow depending on the reference velocity selected and that differences in definition account for much of the range in reported CD values. Roughness length scales z0 are typically used in 3-D circulation models to adjust CD for reference height, but this relies on spatially averaged near-bottom velocity profiles being logarithmic. Measurements from a shallow backreef indicate that z0 determined from fits to point measurements of velocity profiles can be very different from z0 required to parameterize spatially averaged drag. More sophisticated parameterizations for drag and shear stresses are required to simulate 3-D velocity fields over shallow reefs; in the meantime, we urge caution when using published CD and z0 values for coral reefs.
Aircraft Observations for Improved Physical Parameterization for Seasonal Prediction
2013-09-30
goals of our research are to understand and parameterize the physics of air- sea interaction and the Marine Atmospheric Boundary Layer (MABL) over a...wide spectrum of wind speeds, sea state and cloud coverage. OBJECTIVES The objective of this effort is to obtain extensive measurements in the...aircraft measurements in the MABL off Monterey Bay under various cloud fraction conditions in the summer of 2012. We used the CIRPAS Twin Otter (TO
Aircraft Observations for Improved Physical Parameterization for Seasonal Prediction
2013-09-30
the CIRPAS Twin Otter , which occurred in August/September 2012, will be referred to as Unified Physical Parameterization for Extended Forecast 2012...identical pairs of customized pyranometers and pyrgeometers were mounted on the top and bottom of the CIRPAS Twin Otter aircraft to directly measure...irradiance measured on each flight of the CIRPAS Twin Otter aircraft during UPPEF were submitted to the NPS UPPEF data archive. RESULTS UPPEF
Improved CART Data Products and 6cmm Parameterization for Clouds
Kenneth Sassen
2004-08-23
Reviewed here is the history of the participation in the Atmospheric Radiation Measurement (ARM) Program, with particular emphasis on research performed between 1999 and 2002, before the PI moved from the University of Utah to the University of Alaska, Fairbanks. The research results are divided into the following areas: IOP research, remote sensing algorithm development using datasets and models, cirrus cloud and SCM/GCM parameterizations, student training, and publications.
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2016-04-01
In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
Electron scattering and mobility in a quantum well heterolayer
NASA Astrophysics Data System (ADS)
Arora, Vijay K.; Naeem, Athar
1984-11-01
The theory of electron-lattice scattering is analyzed for a quantum-well heterolayer under the conditions that the de Broglie wavelength of an electron is comparable to or larger than the width of the layer, and donor impurities are removed in an adjacent nonconducting layer. The mobility due to isotropic scattering by acoustic phonons, point defects, and alloy scattering is found to increase whereas that due to polar-optic phon scattering is found to decrease with increasing thickness.
Optimizing EDMF parameterization for stratocumulus-topped boundary layer
NASA Astrophysics Data System (ADS)
Jones, C. R.; Bretherton, C. S.; Witek, M. L.; Suselj, K.
2014-12-01
We present progress in the development of an Eddy Diffusion / Mass Flux (EDMF) turbulence parameterization, with the goal of improving the representation of the cloudy boundary layer in NCEP's Global Forecast System (GFS), as part of a multi-institution Climate Process Team (CPT). Current GFS versions substantially under-predict cloud amount and cloud radiative impact over much of the globe, leading to large biases in the surface and top of atmosphere energy budgets. As part of the effort to correct these biases, the CPT is developing a new EDMF turbulence scheme for GFS, in which local turbulent mixing is represented by an eddy diffusion term while nonlocal shallow convection is represented by a mass flux term. The sum of both contributions provides the total turbulent flux. Our goal is for this scheme to more skillfully simulate cloud radiative properties without negatively impacting other measures of weather forecast skill. One particular challenge faced by an EDMF parameterization is to be able to handle stratocumulus regimes as well as shallow cumulus regimes. In order to isolate the behavior of the proposed EDMF parameterization and aid in its further development, we have implemented the scheme in a portable MATLAB single column model (SCM). We use this SCM framework to optimize the simulation of stratocumulus cloud top entrainment and boundary layer decoupling.
Data-driven RBE parameterization for helium ion beams.
Mairani, A; Magro, G; Dokic, I; Valle, S M; Tessonnier, T; Galm, R; Ciocca, M; Parodi, K; Ferrari, A; Jäkel, O; Haberer, T; Pedroni, P; Böhlen, T T
2016-01-21
Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue specific parameter (α/β)ph of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the RBEα = αHe/αph and Rβ = βHe/βph ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival (RBE10) are compared with the experimental ones. Pearson's correlation coefficients were, respectively, 0.85 and 0.84 confirming the soundness of the introduced approach. Moreover, due to the lack of experimental data at low LET, clonogenic experiments have been performed irradiating A549 cell line with (α/β)ph = 5.4 Gy at the entrance of a 56.4 MeV u(-1)He beam at the Heidelberg Ion Beam Therapy Center. The proposed parameterization reproduces the measured cell survival within the experimental uncertainties. A RBE formula, which depends only on dose, LET and (α/β)ph as input parameters is proposed, allowing a straightforward implementation in a TP system.
Synthesis of Entrainment and Detrainment formulations for Convection Parameterizations
NASA Astrophysics Data System (ADS)
Siebesma, P.
2015-12-01
Mixing between convective clouds and its environment, usually parameterized in terms of entrainment and detrainment, are among the most important processes that determine the strength of the climate model sensitivity. This notion has led to a renaissance of research in exploring the mechanisms of these mixing processes and, as a result, to a wide range of seemingly different parameterized formulations. In this study we are aiming to synthesize these results as to offer a solid framework for use in parameterized formulations of convection. Detailed LES analyses in which clouds are subsampled according to their size show that entrainment rates are inversely proportional to the typical cloud radius, in accordance with original entraining plume models. These results can be shown analytically to be consistent with entrainment rate formulations of cloud ensembles that decrease inversely proportional with height, by making only mild assumptions on the shape of the associated cloud size distribution. In addition there are additional dependencies of the entrainment rates on the environmental thermodynamics such as the relative humidity and stability but these are of second order. In contrast detrainment rates do depend to first order on the environmental thermodynamics such as relative humidity and stability. This can be understood by realizing that i) the details of the cloud size distribution do depend on these environmental factors and ii) that detrainment rates have a much stronger dependency on the shape of the cloud size distribution than entrainment rates.
Does convective aggregation need to be represented in cumulus parameterizations?
NASA Astrophysics Data System (ADS)
Tobin, Isabelle; Bony, Sandrine; Holloway, Chris E.; Grandpeix, Jean-Yves; Sèze, Geneviève; Coppin, David; Woolnough, Steve J.; Roca, Rémy
2013-12-01
Tropical deep convection exhibits a variety of levels of aggregation over a wide range of scales. Based on a multisatellite analysis, the present study shows at mesoscale that different levels of aggregation are statistically associated with differing large-scale atmospheric states, despite similar convective intensity and large-scale forcings. The more aggregated the convection, the dryer and less cloudy the atmosphere, the stronger the outgoing longwave radiation, and the lower the planetary albedo. This suggests that mesoscale convective aggregation has the potential to affect couplings between moisture and convection and between convection, radiation, and large-scale ascent. In so doing, aggregation may play a role in phenomena such as "hot spots" or the Madden-Julian Oscillation. These findings support the need for the representation of mesoscale organization in cumulus parameterizations; most parameterizations used in current climate models lack any such representation. The ability of a cloud system-resolving model to reproduce observed relationships suggests that such models may be useful to guide attempts at parameterizations of convective aggregation.
Inverse groundwater modeling with emphasis on model parameterization
NASA Astrophysics Data System (ADS)
Kourakos, George; Mantoglou, Aristotelis
2012-05-01
This study develops an inverse method aiming to circumvent the subjective decision regarding model parameterization and complexity in inverse groundwater modeling. The number of parameters is included as a decision variable along with parameter values. A parameterization based on B-spline surfaces (BSS) is selected to approximate transmissivity, and genetic algorithms were selected to perform error minimization. A transform based on linear least squares (LLS) is developed, so that different parameterizations may be combined by standard genetic algorithm operators. First, three applications, with isotropic, anisotropic, and zoned aquifer parameters, are examined in a single objective optimization problem and the estimated transmissivity is found to be near the true one. Interestingly, in the anisotropic case, the algorithm converged to a solution with an anisotropic distribution of control points. Next, a single objective optimization with regularization, penalizing complex models, is considered, and last, the problem is expressed in a multiobjective optimization framework (MOO), where the goals are simultaneous minimization of calibration error and model complexity. The result of MOO is a Pareto set of potential solutions where the user can examine the tradeoffs between calibration error and model complexity and select the most suitable model. By comparing calibration with prediction errors, it appears, that the most promising models are the ones near a region where the rate of decrease of calibration error as model complexity increases drops (bend of error curve). This is a useful result of practical interest in real inverse modeling applications.
UQ-Guided Selection of Physical Parameterizations in Climate Models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Debusschere, B.; Ghan, S.; Rosa, D.; Bulaevskaya, V.; Anderson, G. J.; Chowdhary, K.; Qian, Y.; Lin, G.; Larson, V. E.; Zhang, G. J.; Randall, D. A.
2015-12-01
Given two or more parameterizations that represent the same physical process in a climate model, scientists are sometimes faced with difficult decisions about which scheme to choose for their simulations and analysis. These decisions are often based on subjective criteria, such as "which scheme is easier to use, is computationally less expensive, or produces results that look better?" Uncertainty quantification (UQ) and model selection methods can be used to objectively rank the performance of different physical parameterizations by increasing the preference for schemes that fit observational data better, while at the same time penalizing schemes that are overly complex or have excessive degrees-of-freedom. Following these principles, we are developing a perturbed-parameter UQ framework to assist in the selection of parameterizations for a climate model. Preliminary results will be presented on the application of the framework to assess the performance of two alternate schemes for simulating tropical deep convection (CLUBB-SILHS and ZM-trigmem) in the U.S. Dept. of Energy's ACME climate model. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, is supported by the DOE Office of Science through the Scientific Discovery Through Advanced Computing (SciDAC), and is released as LLNL-ABS-675799.
New parameterizations and sensitivities for simple climate models
NASA Technical Reports Server (NTRS)
Graves, Charles E.; Lee, Wan-Ho; North, Gerald R.
1993-01-01
This paper presents a reexamination of the earth radiation budget parameterization of energy balance climate models in light of data collected over the last 12 years. The study consists of three parts: (1) an examination of the infrared terrestrial radiation to space and its relationship to the surface temperature field on time scales from 1 month to 10 years; (2) an examination of the albedo of the earth with special attention to the seasonal cycle of snow and clouds; (3) solutions for the seasonal cycle using the new parameterizations with special attention to changes in sensitivity. While the infrared parameterization is not dramatically different from that used in the past, the albedo in the new data suggest that a stronger latitude dependence be employed. After retuning the diffusion coefficient the simulation results for the present climate generally show only a slight dependence on the new parameters. Also, the sensitivity parameter for the model is still about the same (1.25 C for a 1 percent increase of solar constant) for the linear models and for the nonlinear models that include a seasonal snow line albedo feedback (1.34 C). One interesting feature is that a clear-sky planet with a snow line albedo feedback has a significantly higher sensitivity (2.57 C) due to the absence of smoothing normally occurring in the presence of average cloud cover.
A satellite observation test bed for cloud parameterization development
NASA Astrophysics Data System (ADS)
Lebsock, M. D.; Suselj, K.
2015-12-01
We present an observational test-bed of cloud and precipitation properties derived from CloudSat, CALIPSO, and the the A-Train. The focus of the test-bed is on marine boundary layer clouds including stratocumulus and cumulus and the transition between these cloud regimes. Test-bed properties include the cloud cover and three dimensional cloud fraction along with the cloud water path and precipitation water content, and associated radiative fluxes. We also include the subgrid scale distribution of cloud and precipitation, and radiaitive quantities, which must be diagnosed by a model parameterization. The test-bed further includes meterological variables from the Modern Era Retrospective-analysis for Research and Applications (MERRA). MERRA variables provide the initialization and forcing datasets to run a parameterization in Single Column Model (SCM) mode. We show comparisons of an Eddy-Diffusivity/Mass-FLux (EDMF) parameterization coupled to micorphsycis and macrophysics packages run in SCM mode with observed clouds. Comparsions are performed regionally in areas of climatological subsidence as well stratified by dynamical and thermodynamical variables. Comparisons demonstrate the ability of the EDMF model to capture the observed transitions between subtropical stratocumulus and cumulus cloud regimes.
... infection such as redness, swelling, pain, or drainage. Wire cutting method: First, wash your hands with soap ... cut it off just behind the barb with wire cutters. Remove the rest of the hook by ...
... and medical centers are doing this surgery using robots . Why the Procedure is Performed Kidney removal may ... M. is also a founding member of Hi-Ethics and subscribes to the principles of the Health ...
... are small, insect-like creatures that live in woods and fields. They attach to you as you ... your clothes and skin often while in the woods. After returning home: Remove your clothes. Look closely ...
... incision. Extracapsular extraction: The doctor uses a small tool to remove the cataract in mostly one piece. This procedure uses a larger incision. Laser surgery: The doctor guides a machine that uses laser energy to make the incisions ...
... There are several tick removal devices on the market, but a plain set of fine-tipped tweezers ... for Disease Control and Prevention National Center for Emerging and Zoonotic Infectious Diseases (NCEZID) Division of Vector- ...
Burris, Katy; Kim, Karen
2007-01-01
Tattoos have been a part of costume, expression, and identification in various cultures for centuries. Although tattoos have become more popular in western culture, many people regret their tattoos in later years. In this situation, it is important to be aware of the mechanisms of tattoo removal methods available, as well as their potential short- and long-term effects. Among the myriad of options available, laser tattoo removal is the current treatment of choice, given its safety and efficacy.
Adatto, Maurice A; Halachmi, Shlomit; Lapidoth, Moshe
2011-01-01
Over 50,000 new tattoos are placed each year in the United States. Studies estimate that 24% of American college students have tattoos and 10% of male American adults have a tattoo. The rising popularity of tattoos has spurred a corresponding increase in tattoo removal. Not all tattoos are placed intentionally or for aesthetic reasons though. Traumatic tattoos due to unintentional penetration of exogenous pigments can also occur, as well as the placement of medical tattoos to mark treatment boundaries, for example in radiation therapy. Protocols for tattoo removal have evolved over history. The first evidence of tattoo removal attempts was found in Egyptian mummies, dated to have lived 4,000 years BC. Ancient Greek writings describe tattoo removal with salt abrasion or with a paste containing cloves of white garlic mixed with Alexandrian cantharidin. With the advent of Q-switched lasers in the late 1960s, the outcomes of tattoo removal changed radically. In addition to their selective absorption by the pigment, the extremely short pulse duration of Q-switched lasers has made them the gold standard for tattoo removal.
A parameterization method and application in breast tomosynthesis dosimetry
Li, Xinhua; Zhang, Da; Liu, Bob
2013-09-15
Purpose: To present a parameterization method based on singular value decomposition (SVD), and to provide analytical parameterization of the mean glandular dose (MGD) conversion factors from eight references for evaluating breast tomosynthesis dose in the Mammography Quality Standards Act (MQSA) protocol and in the UK, European, and IAEA dosimetry protocols.Methods: MGD conversion factor is usually listed in lookup tables for the factors such as beam quality, breast thickness, breast glandularity, and projection angle. The authors analyzed multiple sets of MGD conversion factors from the Hologic Selenia Dimensions quality control manual and seven previous papers. Each data set was parameterized using a one- to three-dimensional polynomial function of 2–16 terms. Variable substitution was used to improve accuracy. A least-squares fit was conducted using the SVD.Results: The differences between the originally tabulated MGD conversion factors and the results computed using the parameterization algorithms were (a) 0.08%–0.18% on average and 1.31% maximum for the Selenia Dimensions quality control manual, (b) 0.09%–0.66% on average and 2.97% maximum for the published data by Dance et al. [Phys. Med. Biol. 35, 1211–1219 (1990); ibid. 45, 3225–3240 (2000); ibid. 54, 4361–4372 (2009); ibid. 56, 453–471 (2011)], (c) 0.74%–0.99% on average and 3.94% maximum for the published data by Sechopoulos et al. [Med. Phys. 34, 221–232 (2007); J. Appl. Clin. Med. Phys. 9, 161–171 (2008)], and (d) 0.66%–1.33% on average and 2.72% maximum for the published data by Feng and Sechopoulos [Radiology 263, 35–42 (2012)], excluding one sample in (d) that does not follow the trends in the published data table.Conclusions: A flexible parameterization method is presented in this paper, and was applied to breast tomosynthesis dosimetry. The resultant data offer easy and accurate computations of MGD conversion factors for evaluating mean glandular breast dose in the MQSA
A stochastic parameterization for deep convection using cellular automata
NASA Astrophysics Data System (ADS)
Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.
2012-12-01
Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept which took form in the early 1970's. In such schemes it is assumed that a unique relationship exists between the ensemble-average of the sub-grid convection, and the instantaneous state of the atmosphere in a vertical grid box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements to the parameterizations (for instance: Plant and Craig, 2008, Khouider et al. 2010, Frenkel et al. 2011, Bentsson et al. 2011, but the list is far from exhaustive). There are undoubtedly many ways in which stochastisity can enter new developments. In this study we use a two-way interacting cellular automata (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allows for lateral communication between adjacent NWP model grid-boxes, and temporal memory. Thus the CA scheme used in this study contain three interesting components for representation of cumulus convection, which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochastisity. The scheme is implemented in the high resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall-lines. Probabilistic evaluation demonstrate an enhanced spread in
Liou, K. N.; Takano, Y.; He, Cenlin; Yang, P.; Leung, Lai-Yung R.; Gu, Y.; Lee, W- L.
2014-06-27
A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2 – 5 um) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 um, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo reduces more in the case of multiple inclusion of BC/dust compared to that of an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountains/snow topography.
NASA Technical Reports Server (NTRS)
Qiu, Jinhuan; Huang, Qirong
1992-01-01
The study of the inversion algorithm for the single scatter lidar equation, for quantitative determination of cloud (or aerosol) optical properties, has received much attention over the last thirty years. Some of the difficulties associated with the solution of this equation are not yet solved. One problem is that a single scatter lidar equation has two unknowns. Because of this, the determination of the far-end boundary value, in the case of Klett's algorithm, is a problem if the atmosphere is optically inhomogeneous. Another difficulty concerns multiple scattering. There is a large error in the extinction distribution solution, in many cases, if only the single scattering component is considered, while neglecting the multiple scattering component. However, the use of multiple scattering in the remote sensing of aerosol or cloud optical properties is promising. In our early study, an inversion method for simultaneous determination of the cloud (or aerosol) Extinction Coefficient Distribution (ECD) and its Forward Scattering Phase Function (FSPF) was proposed according to multiply scattered lidar returns with two fields of view for the receiver. The method is based on a parameterized multiple scatter lidar equation. This paper is devoted to further numerical tests and an experimental study of lidar measurements of cloud ECD and FSPF using this method.
Strauss, Keith J; Racadio, John M; Abruzzo, Todd A; Johnson, Neil D; Patel, Manish N; Kukreja, Kamlesh U; den Hartog, Mark J H; Hoonaert, Bart P A; Nachabe, Rami A
2015-09-08
The purpose of this study was to reduce pediatric doses while maintaining or improv-ing image quality scores without removing the grid from X-ray beam. This study was approved by the Institutional Animal Care and Use Committee. Three piglets (5, 14, and 20 kg) were imaged using six different selectable detector air kerma (Kair) per frame values (100%, 70%, 50%, 35%, 25%, 17.5%) with and without the grid. Number of distal branches visualized with diagnostic confidence relative to the injected vessel defined image quality score. Five pediatric interventional radiologists evaluated all images. Image quality score and piglet Kair were statistically compared using analysis of variance and receiver operating curve analysis to define the preferred dose setting and use of grid for a visibility of 2nd and 3rd order vessel branches. Grid removal reduced both dose to subject and imaging quality by 26%. Third order branches could only be visualized with the grid present; 100% detector Kair was required for smallest pig, while 70% detector Kair was adequate for the two larger pigs. Second order branches could be visualized with grid at 17.5% detector Kair for all three pig sizes. Without the grid, 50%, 35%, and 35% detector Kair were required for smallest to largest pig, respectively. Grid removal reduces both dose and image quality score. Image quality scores can be maintained with less dose to subject with the grid in the beam as opposed to removed. Smaller anatomy requires more dose to the detector to achieve the same image quality score.
High-precision positioning of radar scatterers
NASA Astrophysics Data System (ADS)
Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.
2016-05-01
Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ˜ 66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
Investigation of scattering in lunar seismic coda
NASA Astrophysics Data System (ADS)
Blanchette-Guertin, J.-F.; Johnson, C. L.; Lawrence, J. F.
2012-06-01
We investigate the intrinsic attenuation and scattering properties of the Moon by parameterizing the coda decay of 369 higher-quality lunar seismograms from 72 events via their characteristic rise and decay times. We investigate any dependence of the decay times on source type, frequency, and epicentral distance. Intrinsic attenuation, scattering, and possible focusing of energy in a near-surface, low-velocity layer all contribute to the coda decay. Although it is not possible to quantify the exact contribution of each of these effects in the seismograms, results suggest that scattering in a near-surface global layer dominates the records of shallow events (˜0-200 km depth), particularly at frequencies above 2 Hz, and for increasing epicentral distance. We propose that the scattering layer is the megaregolith and that energy from shallow sources encounters more scatterers as it travels longer distances in the layer, increasing the coda decay times. A size distribution of ejecta blocks that has more small-scale than large-scale scatterers intensifies this effect for increasing frequencies. Deep moonquakes (700-1100 km depth) exhibit no dependence of the decay time on epicentral distance. We suggest that because of their large depths and small amplitudes, deep moonquakes from any distance sample a similar region near a given receiver. Near-station structure and geology may also control the decay times of local events, as evidenced by two natural impact records. This study provides constraints and testable hypotheses for waveform modeling of the lunar interior that includes the effects of intense scattering and shallow, low-velocity layers.
Rayleigh scattering. [molecular scattering terminology redefined
NASA Technical Reports Server (NTRS)
Young, A. T.
1981-01-01
The physical phenomena of molecular scattering are examined with the objective of redefining the confusing terminology currently used. The following definitions are proposed: molecular scattering consists of Rayleigh and vibrational Raman scattering; the Rayleigh scattering consists of rotational Raman lines and the central Cabannes line; the Cabannes line is composed of the Brillouin doublet and the central Gross or Landau-Placzek line. The term 'Rayleigh line' should never be used.
Data-driven RBE parameterization for helium ion beams
NASA Astrophysics Data System (ADS)
Mairani, A.; Magro, G.; Dokic, I.; Valle, S. M.; Tessonnier, T.; Galm, R.; Ciocca, M.; Parodi, K.; Ferrari, A.; Jäkel, O.; Haberer, T.; Pedroni, P.; Böhlen, T. T.
2016-01-01
Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue specific parameter {{(α /β )}\\text{ph}} of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the \\text{RB}{{\\text{E}}α}={α\\text{He}}/{α\\text{ph}} and {{\\text{R}}β}={β\\text{He}}/{β\\text{ph}} ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival (\\text{RB}{{\\text{E}}10} ) are compared with the experimental ones. Pearson’s correlation coefficients were, respectively, 0.85 and 0.84 confirming the soundness of the introduced approach. Moreover, due to the lack of experimental data at low LET, clonogenic experiments have been performed irradiating A549 cell line with {{(α /β )}\\text{ph}}=5.4 Gy at the entrance of a 56.4 MeV u-1He beam at the Heidelberg Ion Beam Therapy Center. The proposed parameterization reproduces the measured cell survival within the experimental uncertainties. A RBE formula, which depends only on dose, LET and {{(α /β )}\\text{ph}} as input parameters is proposed, allowing a straightforward implementation in a TP system.
Mechanistic Parameterization of the Kinomic Signal in Peptide Arrays.
Dussaq, Alex; Anderson, Joshua C; Willey, Christopher D; Almeida, Jonas S
2016-05-01
Kinases play a role in every cellular process involved in tumorigenesis ranging from proliferation, migration, and protein synthesis to DNA repair. While genetic sequencing has identified most kinases in the human genome, it does not describe the 'kinome' at the level of activity of kinases against their substrate targets. An attempt to address that limitation and give researchers a more direct view of cellular kinase activity is found in the PamGene PamChip® system, which records and compares the phosphorylation of 144 tyrosine or serine/threonine peptides as they are phosphorylated by cellular kinases. Accordingly, the kinetics of this time dependent kinomic signal needs to be well understood in order to transduce a parameter set into an accurate and meaningful mathematical model. Here we report the analysis and mathematical modeling of kinomic time series, which achieves a more accurate description of the accumulation of phosphorylated product than the current model, which assumes first order enzyme-substrate kinetics. Reproducibility of the proposed solution was of particular attention. Specifically, the non-linear parameterization procedure is delivered as a public open source web application where kinomic time series can be accurately decomposed into the model's two parameter values measuring phosphorylation rate and capacity. The ability to deliver model parameterization entirely as a client side web application is an important result on its own given increasing scientific preoccupation with reproducibility. There is also no need for a potentially transitory and opaque server-side component maintained by the authors, nor of exchanging potentially sensitive data as part of the model parameterization process since the code is transferred to the browser client where it can be inspected and executed.
Mechanistic Parameterization of the Kinomic Signal in Peptide Arrays
Dussaq, Alex; Anderson, Joshua C; Willey, Christopher D; Almeida, Jonas S
2016-01-01
Kinases play a role in every cellular process involved in tumorigenesis ranging from proliferation, migration, and protein synthesis to DNA repair. While genetic sequencing has identified most kinases in the human genome, it does not describe the ‘kinome’ at the level of activity of kinases against their substrate targets. An attempt to address that limitation and give researchers a more direct view of cellular kinase activity is found in the PamGene PamChip® system, which records and compares the phosphorylation of 144 tyrosine or serine/threonine peptides as they are phosphorylated by cellular kinases. Accordingly, the kinetics of this time dependent kinomic signal needs to be well understood in order to transduce a parameter set into an accurate and meaningful mathematical model. Here we report the analysis and mathematical modeling of kinomic time series, which achieves a more accurate description of the accumulation of phosphorylated product than the current model, which assumes first order enzyme-substrate kinetics. Reproducibility of the proposed solution was of particular attention. Specifically, the non-linear parameterization procedure is delivered as a public open source web application where kinomic time series can be accurately decomposed into the model’s two parameter values measuring phosphorylation rate and capacity. The ability to deliver model parameterization entirely as a client side web application is an important result on its own given increasing scientific preoccupation with reproducibility. There is also no need for a potentially transitory and opaque server-side component maintained by the authors, nor of exchanging potentially sensitive data as part of the model parameterization process since the code is transferred to the browser client where it can be inspected and executed. PMID:27601856
Aerosol hygroscopic growth parameterization based on a solute specific coefficient
NASA Astrophysics Data System (ADS)
Metzger, S.; Steil, B.; Xu, L.; Penner, J. E.; Lelieveld, J.
2011-09-01
Water is a main component of atmospheric aerosols and its amount depends on the particle chemical composition. We introduce a new parameterization for the aerosol hygroscopic growth factor (HGF), based on an empirical relation between water activity (aw) and solute molality (μs) through a single solute specific coefficient νi. Three main advantages are: (1) wide applicability, (2) simplicity and (3) analytical nature. (1) Our approach considers the Kelvin effect and covers ideal solutions at large relative humidity (RH), including CCN activation, as well as concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). (2) A single νi coefficient suffices to parameterize the HGF for a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. (3) In contrast to previous methods, our analytical aw parameterization depends not only on a linear correction factor for the solute molality, instead νi also appears in the exponent in form x · ax. According to our findings, νi can be assumed constant for the entire aw range (0-1). Thus, the νi based method is computationally efficient. In this work we focus on single solute solutions, where νi is pre-determined with the bisection method from our analytical equations using RHD measurements and the saturation molality μssat. The computed aerosol HGF and supersaturation (Köhler-theory) compare well with the results of the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4 relevant for CCN modeling and calibration studies. The equations introduced here provide the basis of our revised gas-liquid-solid partitioning model, i.e. version 4 of the EQuilibrium Simplified Aerosol Model (EQSAM4), described in a companion paper.
Survey of background scattering from materials found in small-angle neutron scattering
Barker, J. G.; Mildner, D. F. R.
2015-01-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088
Survey of background scattering from materials found in small-angle neutron scattering.
Barker, J G; Mildner, D F R
2015-08-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a (3)He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the (3)He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.
Longwave radiation parameterization for UCLA/GLAS GCM
NASA Technical Reports Server (NTRS)
HARSHVARDHAN; Corsetti, T.
1984-01-01
This document describes the parameterization of longwave radiation in the UCLA/GLAS general circulation model. Transmittances have been computed from the work of Arking and Chou for water vapor and carbon dioxide and ozone absorptances are computed using a formula due to Rodgers. Cloudiness has been introduced into the code in a manner in which fractional cover and random or maximal overlap can be accommodated. The entire code has been written in a form that is amenable to vectorization on CYBER and CRAY computers. Sample clear sky computations for five standard profiles using the 15- and 9-level versions of the model have been included.
Modeling and parameterization of horizontally inhomogeneous cloud radiative properties
NASA Technical Reports Server (NTRS)
Welch, R. M.
1995-01-01
One of the fundamental difficulties in modeling cloud fields is the large variability of cloud optical properties (liquid water content, reflectance, emissivity). The stratocumulus and cirrus clouds, under special consideration for FIRE, exhibit spatial variability on scales of 1 km or less. While it is impractical to model individual cloud elements, the research direction is to model a statistical ensembles of cloud elements with mean-cloud properties specified. The major areas of this investigation are: (1) analysis of cloud field properties; (2) intercomparison of cloud radiative model results with satellite observations; (3) radiative parameterization of cloud fields; and (4) development of improved cloud classification algorithms.
Parameterization of interatomic potential by genetic algorithms: A case study
Ghosh, Partha S. Arya, A.; Dey, G. K.; Ranawat, Y. S.
2015-06-24
A framework for Genetic Algorithm based methodology is developed to systematically obtain and optimize parameters for interatomic force field functions for MD simulations by fitting to a reference data base. This methodology is applied to the fitting of ThO{sub 2} (CaF{sub 2} prototype) – a representative of ceramic based potential fuel for nuclear applications. The resulting GA optimized parameterization of ThO{sub 2} is able to capture basic structural, mechanical, thermo-physical properties and also describes defect structures within the permissible range.
CCPP-ARM Parameterization Testbed Model Forecast Data
Klein, Stephen
2008-01-15
Dataset contains the NCAR CAM3 (Collins et al., 2004) and GFDL AM2 (GFDL GAMDT, 2004) forecast data at locations close to the ARM research sites. These data are generated from a series of multi-day forecasts in which both CAM3 and AM2 are initialized at 00Z every day with the ECMWF reanalysis data (ERA-40), for the year 1997 and 2000 and initialized with both the NASA DAO Reanalyses and the NCEP GDAS data for the year 2004. The DOE CCPP-ARM Parameterization Testbed (CAPT) project assesses climate models using numerical weather prediction techniques in conjunction with high quality field measurements (e.g. ARM data).
Parameterized hardware description as object oriented hardware model implementation
NASA Astrophysics Data System (ADS)
Drabik, Pawel K.
2010-09-01
The paper introduces novel model for design, visualization and management of complex, highly adaptive hardware systems. The model settles component oriented environment for both hardware modules and software application. It is developed on parameterized hardware description research. Establishment of stable link between hardware and software, as a purpose of designed and realized work, is presented. Novel programming framework model for the environment, named Graphic-Functional-Components is presented. The purpose of the paper is to present object oriented hardware modeling with mentioned features. Possible model implementation in FPGA chips and its management by object oriented software in Java is described.
Data-driven parameterization of the generalized Langevin equation
Lei, Huan; Baker, Nathan A.; Li, Xiantao
2016-11-29
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
Magic neutrino mass matrix and the Bjorken Harrison Scott parameterization
NASA Astrophysics Data System (ADS)
Lam, C. S.
2006-09-01
Observed neutrino mixing can be described by a tribimaximal MNS matrix. The resulting neutrino mass matrix in the basis of a diagonal charged lepton mass matrix is both 2-3 symmetric and magic. By a magic matrix, I mean one whose row sums and column sums are all identical. I study what happens if 2-3 symmetry is broken but the magic symmetry is kept intact. In that case, the mixing matrix is parameterized by a single complex parameter Ue 3, in a form discussed recently by Bjorken, Harrison, and Scott.
Haedersdal, Merete; Haak, Christina S
2011-01-01
Hair removal with optical devices has become a popular mainstream treatment that today is considered the most efficient method for the reduction of unwanted hair. Photothermal destruction of hair follicles constitutes the fundamental concept of hair removal with red and near-infrared wavelengths suitable for targeting follicular and hair shaft melanin: normal mode ruby laser (694 nm), normal mode alexandrite laser (755 nm), pulsed diode lasers (800, 810 nm), long-pulse Nd:YAG laser (1,064 nm), and intense pulsed light (IPL) sources (590-1,200 nm). The ideal patient has thick dark terminal hair, white skin, and a normal hormonal status. Currently, no method of lifelong permanent hair eradication is available, and it is important that patients have realistic expectations. Substantial evidence has been found for short-term hair removal efficacy of up to 6 months after treatment with the available systems. Evidence has been found for long-term hair removal efficacy beyond 6 months after repetitive treatments with alexandrite, diode, and long-pulse Nd:YAG lasers, whereas the current long-term evidence is sparse for IPL devices. Treatment parameters must be adjusted to patient skin type and chromophore. Longer wavelengths and cooling are safer for patients with darker skin types. Hair removal with lasers and IPL sources are generally safe treatment procedures when performed by properly educated operators. However, safety issues must be addressed since burns and adverse events do occur. New treatment procedures are evolving. Consumer-based treatments with portable home devices are rapidly evolving, and presently include low-level diode lasers and IPL devices.
NASA Astrophysics Data System (ADS)
Yang, Z.
2011-12-01
Noah-MP, which improves over the standard Noah land surface model, is unique among all land surface models in that it has multi-parameterization options (hence Noah-MP), capable of producing thousands of parameterization schemes, in addition to its improved physical realism (multi-layer snowpack, groundwater dynamics, and vegetation dynamics). All these features are critical for ensemble hydrological simulations and climate predictions at intraseasonal to decadal timescales. This talk will focus on evaluation of the Noah-MP simulations of energy, water and carbon balances for different sub-basins in the Mississippi River in comparison with various observations. The analysis is performed on daily and monthly scales spanning from January 2000 to December 2009. We will show how different runoff schemes in Noah-MP affect the scatter patterns between runoff and water table depth and between gross primary productivity and total water storage change, a type of analysis that would help us identify the relationships between key water storage terms (groundwater, soil moisture, snow) and fluxes (GPP, sensible heat, evapotranspiration, runoff). Similarly, we want to see how other options affect the patterns, such as the beta parameter (i.e. the soil moisture parameter controlling transpiration of plants), the Ball-Berry and Jarvis options for stomatal resistance, and the dynamic vegetation options (on or off). We will compare the water storage simulations from Noah-MP, observations and other model estimates, which would help determine the strengths and limitations of the Noah-MP groundwater and hydrological schemes.
A Parameterization for the Triggering of Landscape Generated Moist Convection
NASA Technical Reports Server (NTRS)
Lynn, Barry H.; Tao, Wei-Kuo; Abramopoulos, Frank
1998-01-01
A set of relatively high resolution three-dimensional (3D) simulations were produced to investigate the triggering of moist convection by landscape generated mesoscale circulations. The local accumulated rainfall varied monotonically (linearly) with the size of individual landscape patches, demonstrating the need to develop a trigger function that is sensitive to the size of individual patches. A new triggering function that includes the effect of landscapes generated mesoscale circulations over patches of different sizes consists of a parcel's perturbation in vertical velocity (nu(sub 0)), temperature (theta(sub 0)), and moisture (q(sub 0)). Each variable in the triggering function was also sensitive to soil moisture gradients, atmospheric initial conditions, and moist processes. The parcel's vertical velocity, temperature, and moisture perturbation were partitioned into mesoscale and turbulent components. Budget equations were derived for theta(sub 0) and q(sub 0). Of the many terms in this set of budget equations, the turbulent, vertical flux of the mesoscale temperature and moisture contributed most to the triggering of moist convection through the impact of these fluxes on the parcel's temperature and moisture profile. These fluxes needed to be parameterized to obtain theta(sub 0) and q(sub 0). The mesoscale vertical velocity also affected the profile of nu(sub 0). We used similarity theory to parameterize these fluxes as well as the parcel's mesoscale vertical velocity.
Evaluation of a New Parameterization for Fair-Weather Cumulus
Berg, Larry K.; Stull, Roland B.
2006-05-25
A new parameterization for boundary layer cumulus clouds, called the cumulus potential (CuP) scheme, is introduced. This scheme uses joint probability density functions (JPDFs) of virtual potential temperature and water-vapor mixing ratio, as well as the mean vertical profiles of virtual potential temperature, to predict the amount and size distribution of boundary layer cloud cover. This model considers the diversity of air parcels over a heterogeneous surface, and recognizes that some parcels rise above their lifting condensation level to become cumulus, while other parcels might rise as clear updrafts. This model has several unique features: 1) surface heterogeneity is represented using the boundary layer JPDF of virtual potential temperature versus water-vapor mixing ratio, 2) clear and cloudy thermals are allowed to coexist at the same altitude, and 3) a range of cloud-base heights, cloud-top heights, and cloud thicknesses are predicted within any one cloud field, as observed. Using data from Boundary Layer Experiment 1996 and a model intercomparsion study using large eddy simulation (LES) based on Barbados Oceanographic and Meteorological Experiment (BOMEX), it is shown that the CuP model does a good job predicting cloud-base height and cloud-top height. The model also shows promise in predicting cloud cover, and is found to give better cloud-cover estimates than three other cumulus parameterizations: one based on relative humidity, a statistical scheme based on the saturation deficit, and a slab model.
A bulk cloud parameterization in a Venus General Circulation Model
NASA Astrophysics Data System (ADS)
Lee, Christopher; Lewis, Stephen R.; Read, Peter L.
2010-04-01
A condensing cloud parameterization is included in a super-rotating Venus General Circulation Model. A parameterization including condensation, evaporation and sedimentation of mono-modal sulfuric acid cloud particles is described. Saturation vapor pressure of sulfuric acid vapor is used to determine cloud formation through instantaneous condensation and destruction through evaporation, while pressure dependent viscosity of a carbon dioxide atmosphere is used to determine sedimentation rates assuming particles fall at their terminal Stokes velocity. Modifications are described to account for the large range of the Reynolds number seen in the Venus atmosphere. Two GCM experiments initialized with 10 ppm-equivalent of sulfuric acid are integrated for 30 Earth years and the results are discussed with reference to "Y" shaped cloud structures observed on Venus. The GCM is able to produce an analog of the "Y" shaped cloud structure through dynamical processes alone, with contributions from the mean westward wind, the equatorial Kelvin wave, and the mid-latitude/polar Mixed Rossby/Gravity waves. The cloud top height in the GCM decreases from equator to pole and latitudinal gradients of cloud top height are comparable to those observed by Pioneer Venus and Venus Express, and those produced in more complex microphysical models of the sulfur cycle on Venus. Differences between the modeled cloud structures and observations are described and dynamical explanations are suggested for the most prominent differences.
Rapid parameterization of small molecules using the Force Field Toolkit
Mayne, Christopher G.; Saam, Jan; Schulten, Klaus; Tajkhorshid, Emad; Gumbart, James C.
2013-01-01
The inability to rapidly generate accurate and robust parameters for novel chemical matter continues to severely limit the application of molecular dynamics (MD) simulations to many biological systems of interest, especially in fields such as drug discovery. Although the release of generalized versions of common classical force fields, e.g., GAFF and CGenFF, have posited guidelines for parameterization of small molecules, many technical challenges remain that have hampered their wide-scale extension. The Force Field Toolkit (ffTK), described herein, minimizes common barriers to ligand parameterization through algorithm and method development, automation of tedious and error-prone tasks, and graphical user interface design. Distributed as a VMD plugin, ffTK facilitates the traversal of a clear and organized workflow resulting in a complete set of CHARMM-compatible parameters. A variety of tools are provided to generate quantum mechanical target data, set up multidimensional optimization routines, and analyze parameter performance. Parameters developed for a small test set of molecules using ffTK were comparable to existing CGenFF parameters in their ability to reproduce experimentally measured values for pure-solvent properties (<15% error from experiment) and free energy of solvation (±0.5 kcal/mol from experiment). PMID:24000174
The impact of forest architecture parameterization on GPP simulations
NASA Astrophysics Data System (ADS)
Firanj, Ana; Lalic, Branislava; Podrascanin, Zorica
2015-08-01
The presence of a forest strongly affects ecosystem fluxes by acting as a source or sink of mass and energy. The objective of this study was to investigate the influence of the vertical forest heterogeneity parameterization on gross primary production (GPP) simulations. To introduce a heterogeneity effect, a new method for the upscaling of the leaf level GPP is proposed. This upscaling method is based on the relationship between the leaf area index ( LAI) and the leaf area density ( LAD) profiles and the standard sun/shade leaf separation method. The effect of the crown shape and foliage distribution parameterization on the simulated GPP is confirmed in a comparison study between the proposed method and the standard sun/shade upscaling method. The observed values used in the comparison study are assimilated during the vegetation period on three distinguished forest eddy-covariance (EC) measurement sites chosen for the diversity of their morphological characteristics. The obtained results show (a) the sensitivity of the simulated GPP to the leaf area density profile, (b) the capability of the proposed scaling method to calculate the contribution of the different canopy layers to the entire canopy GPP, and (c) a better agreement with the observations of the simulated GPP with the proposed upscaling method compared with the standard sun/shade method.
Transient Storage Parameterization of Wetland-dominated Stream Reaches
NASA Astrophysics Data System (ADS)
Wilderotter, S. M.; Lightbody, A.; Kalnejais, L. H.; Wollheim, W. M.
2014-12-01
Current understanding of the importance of transient storage in fluvial wetlands is limited. Wetlands that have higher connectivity to the main stream channel are important because they have the potential to retain more nitrogen within the river system than wetlands that receive little direct stream discharge. In this study, we investigated how stream water accesses adjacent fluvial wetlands in New England coastal watersheds to improve parameterization in network-scale models. Break through curves of Rhodamine WT were collected for eight wetlands in the Ipswich and Parker (MA) and Lamprey River (NH) watersheds, USA. The curves were inverse modeled using STAMMT-L to optimize the connectivity and size parameters for each reach. Two approaches were tested, a single dominant storage zone and a range of storage zones represented using a power-law distribution of storage zone connectivity. Multiple linear regression analyses were conducted to relate transient storage parameters to stream discharge, area, length-to-width ratio, and reach slope. Resulting regressions will enable more accurate parameterization of surface water transient storage in network-scale models.
LES of wind turbine wakes: Evaluation of turbine parameterizations
NASA Astrophysics Data System (ADS)
Porte-Agel, Fernando; Wu, Yu-Ting; Chamorro, Leonardo
2009-11-01
Large-eddy simulation (LES), coupled with a wind-turbine model, is used to investigate the characteristics of wind turbine wakes in turbulent boundary layers under different thermal stratification conditions. The subgrid-scale (SGS) stress and SGS heat flux are parameterized using scale-dependent Lagrangian dynamic models (Stoll and Porte-Agel, 2006). The turbine-induced lift and drag forces are parameterized using two models: an actuator disk model (ADM) that distributes the force loading on the rotor disk; and an actuator line model (ALM) that distributes the forces on lines that follow the position of the blades. Simulation results are compared to wind-tunnel measurements collected with hot-wire and cold-wire anemometry in the wake of a miniature 3-blade wind turbine at the St. Anthony Falls Laboratory atmospheric boundary layer wind tunnel. In general, the characteristics of the wakes simulated with the proposed LES framework are in good agreement with the measurements. The ALM is better able to capture vortical structures induced by the blades in the near-wake region. Our results also show that the scale-dependent Lagrangian dynamic SGS models are able to account, without tuning, for the effects of local shear and flow anisotropy on the distribution of the SGS model coefficients.
An updated subgrid orographic parameterization for global atmospheric forecast models
NASA Astrophysics Data System (ADS)
Choi, Hyun-Joo; Hong, Song-You
2015-12-01
A subgrid orographic parameterization (SOP) is updated by including the effects of orographic anisotropy and flow-blocking drag (FBD). The impact of the updated SOP on short-range forecasts is investigated using a global atmospheric forecast model applied to a heavy snowfall event over Korea on 4 January 2010. When the SOP is updated, the orographic drag in the lower troposphere noticeably increases owing to the additional FBD over mountainous regions. The enhanced drag directly weakens the excessive wind speed in the low troposphere and indirectly improves the temperature and mass fields over East Asia. In addition, the snowfall overestimation over Korea is improved by the reduced heat fluxes from the surface. The forecast improvements are robust regardless of the horizontal resolution of the model between T126 and T510. The parameterization is statistically evaluated based on the skill of the medium-range forecasts for February 2014. For the medium-range forecasts, the skill improvements of the wind speed and temperature in the low troposphere are observed globally and for East Asia while both positive and negative effects appear indirectly in the middle-upper troposphere. The statistical skill for the precipitation is mostly improved due to the improvements in the synoptic fields. The improvements are also found for seasonal simulation throughout the troposphere and stratosphere during boreal winter.
Bulk Parameterization of the Snow Field in a Cloud Model.
NASA Astrophysics Data System (ADS)
Lin, Yuh-Lang; Farley, Richard D.; Orville, Harold D.
1983-06-01
A two-dimensional, time-dependent cloud model has been used to simulate a moderate intensity thunderstorm for the High Plains region. Six forms of water substance (water vapor, cloud water, cloud ice, rain, snow and hail, i.e., graupel) are simulated. The model utilizes the `bulk water' microphysical parameterization technique to represent the precipitation fields which are all assumed to follow exponential size distribution functions. Autoconversion concepts are used to parameterize the collision-coalescence and collision-aggregation processes. Accretion processes involving the various forms of liquid and solid hydrometeors are simulated in this model. The transformation of cloud ice to snow through autoconversion (aggregation) and Bergeron process and subsequent accretional growth or aggregation to form hail are simulated. Hail is also produced by various contact mechanisms and via probabilistic freezing of raindrops. Evaporation (sublimation) is considered for all precipitation particles outside the cloud. The melting of hail and snow are included in the model. Wet and dry growth of hail and shedding of rain from hail are simulated.The simulations show that the inclusion of snow has improved the realism of the results compared to a model without snow. The formation of virga from cloud anvils is now modeled. Addition of the snow field has resulted in the inclusion of more diverse and physically sound mechanisms for initiating the hail field, yielding greater potential for distinguishing dominant embryo types characteristically different from warm- and cold-based clouds.
A parameterized model for global insolation under partially cloudy skies
NASA Technical Reports Server (NTRS)
Choudhury, B.
1982-01-01
A simple and efficient parameterization of insolation under partially cloudy skies is discussed and compared with a set of exact radiative transfer results for clear skies, an empirical equation and observations. The parameterization is physically based and requires, as input variables, the ozone path length, precipitable water, Angstrom turbidity, surface air pressure and albedo, fractional cloud-cover and cloud thickness. Multiple reflection between the surface and the overlying atmosphere, and clouds are considered. The albedo of the earth-atmosphere system is also formulated and compared with a set of exact radiative transfer results. As compared to the exact radiative transfer results, the errors in the insolations are generally less than 1 percent, and in the albedo of the earth-atmosphere system less than 10 percent. The errors in the calculated insolations using climatological data are 2-3 percent when compared with many years averaged observations at Maudheim (Antarctica) and at Rockville (U.S.A.). A parametric equation for calculating directly the daily total insolation is also given.
Parameterizing moisture in glacier debris cover using a bucket scheme
NASA Astrophysics Data System (ADS)
Collier, Emily; Nicholson, Lindsey I.; Maussion, Fabien; Mölg, Thomas
2013-04-01
Due to the complexity of treating moisture in supraglacial debris cover, full surface energy balance models to date have neglected both moisture fluxes and phase changes in the debris layer. However, the presence of liquid and frozen water has an important influence on the thermal properties of the debris layer. In addition, large spikes in the latent heat flux over supraglacial debris have been measured, suggesting that neglecting this flux in a surface energy balance calculation may be an inaccurate assumption under certain meteorological conditions. Here, we explore the utility of a bucket scheme for parameterizing moisture fluxes and phase changes in a glacier debris layer. The bucket scheme simulates infiltration of liquid water into pore spaces in the debris cover. The thermal properties of the debris cover, which partially determine the energy flux to the underlying ice, are then computed as a function of the water content and phase. We employ the bucket parameterization in a high-resolution, physically-based, and integrated atmosphere-glacier mass balance model to quantify the importance of moisture on the surface energy and mass balance of debris-covered glaciers through an application over the Karakoram region of the northwestern Himalaya.
Comparison of parameterizations for homogeneous and heterogeneous ice nucleation
NASA Astrophysics Data System (ADS)
Koop, T.; Zobrist, B.
2009-04-01
The formation of ice particles from liquid aqueous aerosols is of central importance for the physics and chemistry of high altitude clouds. In this paper, we present new laboratory data on ice nucleation and compare them with two different parameterizations for homogeneous as well as heterogeneous ice nucleation. In particular, we discuss and evaluate the effect of solutes and ice nuclei. One parameterization is the Î»-approach which correlates the depression of the freezing temperature of aqueous droplets in comparison to pure water droplets, Tf, with the corresponding depression, Tm, of the equilibrium ice melting point: Tf = Î» × Tm. Here, Î» is independent of concentration and a constant that is specific for a particular solute or solute/ice nucleus combination. The other approach is water-activity-based ice nucleation theory which describes the effects of solutes on the freezing temperature Tf via their effect on water activity: aw(Tf) = awi(Tf) + aw. Here, awi is the water activity of ice and aw is a constant that depends on the ice nucleus but is independent of the type of solute. We present new data on both homogeneous and heterogeneous ice nucleation with varying types of solutes and ice nuclei. We evaluate and discuss the advantages and limitations of the two approaches for the prediction of ice nucleation in laboratory experiments and atmospheric cloud models.
NASA Astrophysics Data System (ADS)
Koch, D.; Bond, T.; Kinne, S.; Klimont, Z.; Sun, H.; van Aardenne, J.; van der Werf, G.
2006-12-01
Estimates of human influence on climate are especially hindered by poor constraint on the amount of anthropogenic carbonaceous aerosol absorption in the atmosphere. Coordination of observation and model analyses attempt to constrain particle absorption amount, however these are limited by uncertainties in aerosol emission estimates, model scavenging parameterization, aerosol size assumption, contributions from organic aerosol absorption, air concentration observational techniques and by sparsity of data coverage. We perform multiple simulations using GISS modelE and six present-day emission estimates for black carbon (BC) and organic carbon (OC) (Bond et al 2004 middle and upper estimates, IIASA, EDGAR, GFED v1 and v2); for one of these emissions we apply 4 different BC/OC scavenging parameterizations. The resulting concentrations will be compared with a new compilation of observed BC/OC concentrations. We then use these model concentrations, together with effective radius assumptions and estimates of OC absorption to calculate a range of carbonaceous aerosol absorption. We constrain the wavelength-dependent model τ- absorption with AERONET sun-photometer observations. We will discuss regions, seasons and emission sectors with greatest uncertainty, including those where observational constraint is lacking. We calculate the range of model radiative forcing from our simulations and discuss the degree to which it is constrained by observations.
Evaluation of an Urban Canopy Parameterization in a Mesoscale Model
Chin, H S; Leach, M J; Sugiyama, G A; Leone, Jr., J M; Walker, H; Nasstrom, J; Brown, M J
2004-03-18
A modified urban canopy parameterization (UCP) is developed and evaluated in a three-dimensional mesoscale model to assess the urban impact on surface and lower atmospheric properties. This parameterization accounts for the effects of building drag, turbulent production, radiation balance, anthropogenic heating, and building rooftop heating/cooling. USGS land-use data are also utilized to derive urban infrastructure and urban surface properties needed for driving the UCP. An intensive observational period with clear-sky, strong ambient wind and drainage flow, and the absence of land-lake breeze over the Salt Lake Valley, occurring on 25-26 October 2000, is selected for this study. A series of sensitivity experiments are performed to gain understanding of the urban impact in the mesoscale model. Results indicate that within the selected urban environment, urban surface characteristics and anthropogenic heating play little role in the formation of the modeled nocturnal urban boundary layer. The rooftop effect appears to be the main contributor to this urban boundary layer. Sensitivity experiments also show that for this weak urban heat island case, the model horizontal grid resolution is important in simulating the elevated inversion layer. The root mean square errors of the predicted wind and temperature with respect to surface station measurements exhibit substantially larger discrepancies at the urban locations than the rural counterparts. However, the close agreement of modeled tracer concentration with observations fairly justifies the modeled urban impact on the wind direction shift and wind drag effects.
A Coordinated Effort to Improve Parameterization of High-Latitude Cloud and Radiation Processes
J. O. Pinto, A.H. Lynch
2005-12-14
The goal of this project is the development and evaluation of improved parameterization of arctic cloud and radiation processes and implementation of the parameterizations into a climate model. Our research focuses specifically on the following issues: (1) continued development and evaluation of cloud microphysical parameterizations, focusing on issues of particular relevance for mixed phase clouds; and (2) evaluation of the mesoscale simulation of arctic cloud system life cycles.
Total Cross Section Parameterizations for Pion Production in Nucleon-Nucleon Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.
2008-01-01
Total cross section parameterizations for neutral and charged pion production in nucleon-nuelcon collisions are compared to an extensive set of experimental data over the projectile momentum range from threshold to 300 GeV. Both proton-proton and proton-neutron reactions are considered. Good agreement between parameterizations and experiment is found, and therefore the parameterizations will be useful for applications, such as transport codes.
Larson, Vincent; Gettelman, Andrew; Morrison, Hugh; Bacmeister, Julio; Feingold, Graham; Lee, Seoung-soo; Williams, Christopher
2016-09-14
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.
Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai
2015-01-01
In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmospheric Model version 5.3 (CAM5.3), the effects of preexisting ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of cirrus cloud rather than in the whole area of cirrus cloud. With these improvements, the two unphysical limiters used in the representation of ice nucleation are removed. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The preexisting ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably.Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL) are implemented in CAM5.3 for the comparison. In-cloud ice crystal number concentration, percentage contribution from heterogeneous ice nucleation to total ice crystal number, and preexisting ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial times) in global annual mean column ice number concentration from the KL parameterization (3.24×106 m-2) is obviously less than that from the LP (8.46×106 m-2) and BN (5.62×106 m-2) parameterizations. As a result, experiment using the KL parameterization predicts a much smaller anthropogenic aerosol longwave indirect forcing (0.24 W m-2) than that using the LP (0.46 W m-2
Jacobian transformed and detailed balance approximations for photon induced scattering
NASA Astrophysics Data System (ADS)
Wienke, B. R.; Budge, K. G.; Chang, J. H.; Dahl, J. A.; Hungerford, A. L.
2012-01-01
Photon emission and scattering are enhanced by the number of photons in the final state, and the photon transport equation reflects this in scattering-emission kernels and source terms. This is often a complication in both theoretical and numerical analyzes, requiring approximations and assumptions about background and material temperatures, incident and exiting photon energies, local thermodynamic equilibrium, plus other related aspects of photon scattering and emission. We review earlier schemes parameterizing photon scattering-emission processes, and suggest two alternative schemes. One links the product of photon and electron distributions in the final state to the product in the initial state by Jacobian transformation of kinematical variables (energy and angle), and the other links integrands of scattering kernels in a detailed balance requirement for overall (integrated) induced effects. Compton and inverse Compton differential scattering cross sections are detailed in appropriate limits, numerical integrations are performed over the induced scattering kernel, and for tabulation induced scattering terms are incorporated into effective cross sections for comparisons and numerical estimates. Relativistic electron distributions are assumed for calculations. Both Wien and Planckian distributions are contrasted for impact on induced scattering as LTE limit points. We find that both transformed and balanced approximations suggest larger induced scattering effects at high photon energies and low electron temperatures, and smaller effects in the opposite limits, compared to previous analyzes, with 10-20% increases in effective cross sections. We also note that both approximations can be simply implemented within existing transport modules or opacity processors as an additional term in the effective scattering cross section. Applications and comparisons include effective cross sections, kernel approximations, and impacts on radiative transport solutions in 1D
SU-E-T-597: Parameterization of the Photon Beam Dosimetry for a Commercial Linear Accelerator
Lebron, S; Lu, B; Yan, G; Kahler, D; Li, J; Barraclough, B; Liu, C
2015-06-15
Purpose: In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS modelled data, (3) a linear accelerator’s (linac) beam characteristics quality assurance process, and (4) establishing a standard data set for data comparison, etcetera. Parameterization of the photon beam dosimetry creates a portable data set that is easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon percentage depth doses(PDD), profiles, and total scatter output factors(Scp). Methods: Scp, PDDs and profiles for different field sizes (from 2×2 to 40×40cm{sup 2}), depths and energies were measured in a linac using a three-dimensional water tank. All data were smoothed and profile data were also centered, symmetrized and geometrically scaled. The Scp and PDD data were analyzed using exponential functions. For modelling of open and wedge field profiles, each side was divided into three regions described by exponential, sigmoid and Gaussian equations. The model’s equations were chosen based on the physical principles described by these dosimetric quantities. The equations’ parameters were determined using a least square optimization method with the minimal amount of measured data necessary. The model’s accuracy was then evaluated via the calculation of absolute differences and distance–to–agreement analysis in low gradient and high gradient regions, respectively. Results: All differences in the PDDs’ buildup and the profiles’ penumbra regions were less than 2 mm and 0.5 mm, respectively. Differences in the low gradient regions were 0.20 ± 0.20% and 0.50 ± 0.35% for PDDs and profiles, respectively. For Scp data, all differences were less than 0.5%. Conclusion: This novel analytical model with minimum measurement requirements proved to accurately
Criteria and algorithms for spectrum parameterization of MST radar signals
NASA Technical Reports Server (NTRS)
Rastogi, P. K.
1984-01-01
The power spectra S(f) of MST radar signals contain useful information about the variance of refractivity fluctuations, the mean radial velocity, and the radial velocity variance in the atmosphere. When noise and other contaminating signals are absent, these quantities can be obtained directly from the zeroth, first and second order moments of the spectra. A step-by-step procedure is outlined that can be used effectively to reduce large amounts of MST radar data-averaged periodograms measured in range and time to a parameterized form. The parameters to which a periodogram can be reduced are outlined and the steps in the procedure, that may be followed selectively, to arrive at the final set of reduced parameters are given. Examples of the performance of the procedure are given and its use with other radars are commented on.
New particle dependant parameterizations of heterogeneous freezing processes.
NASA Astrophysics Data System (ADS)
Diehl, Karoline; Mitra, Subir K.
2014-05-01
For detailed investigations of cloud microphysical processes an adiabatic air parcel model with entrainment is used. It represents a spectral bin model which explicitly solves the microphysical equations. The initiation of the ice phase is parameterized and describes the effects of different types of ice nuclei (mineral dust, soot, biological particles) in immersion, contact, and deposition modes. As part of the research group INUIT (Ice Nuclei research UnIT), existing parameterizations have been modified for the present studies and new parameterizations have been developed mainly on the basis of the outcome of INUIT experiments. Deposition freezing in the model is dependant on the presence of dry particles and on ice supersaturation. The description of contact freezing combines the collision kernel of dry particles with the fraction of frozen drops as function of temperature and particle size. A new parameterization of immersion freezing has been coupled to the mass of insoluble particles contained in the drops using measured numbers of ice active sites per unit mass. Sensitivity studies have been performed with a convective temperature and dew point profile and with two dry aerosol particle number size distributions. Single and coupled freezing processes are studied with different types of ice nuclei (e.g., bacteria, illite, kaolinite, feldspar). The strength of convection is varied so that the simulated cloud reaches different levels of temperature. As a parameter to evaluate the results the ice water fraction is selected which is defined as the relation of the ice water content to the total water content. Ice water fractions between 0.1 and 0.9 represent mixed-phase clouds, larger than 0.9 ice clouds. The results indicate the sensitive parameters for the formation of mixed-phase and ice clouds are: 1. broad particle number size distribution with high number of small particles, 2. temperatures below -25°C, 3. specific mineral dust particles as ice nuclei such
FSP (Full Space Parameterization), Version 2.0
Fries, G.A.; Hacker, C.J.; Pin, F.G.
1995-10-01
This paper describes the modifications made to FSPv1.0 for the Full Space Parameterization (FSP) method, a new analytical method used to resolve underspecified systems of algebraic equations. The optimized code recursively searches for the necessary number of linearly independent vectors that are necessary to form the solution space. While doing this, it ensures that all possible combinations of solutions are checked, if needed, and handles complications which arise due to particular cases. In addition, two particular cases which cause failure of the FSP algorithm were discovered during testing of this new code. These cases are described in the context of how they are recognized and how they are handled by the new code. Finally, testing was performed on the new code using both isolated movements and complex trajectories for various mobile manipulators.
Use of Cloud Computing to Calibrate a Highly Parameterized Model
NASA Astrophysics Data System (ADS)
Hayley, K. H.; Schumacher, J.; MacMillan, G.; Boutin, L.
2012-12-01
We present a case study using cloud computing to facilitate the calibration of a complex and highly parameterized model of regional groundwater flow. The calibration dataset consisted of many (~1500) measurements or estimates of static hydraulic head, a high resolution time series of groundwater extraction and disposal rates at 42 locations and pressure monitoring at 147 locations with a total of more than one million raw measurements collected over a ten year pumping history, and base flow estimates at 5 surface water monitoring locations. This modeling project was undertaken to assess the sustainability of groundwater withdrawal and disposal plans for insitu heavy oil extraction in Northeast Alberta, Canada. The geological interpretations used for model construction were based on more than 5,000 wireline logs collected throughout the 30,865 km2 regional study area (RSA), and resulted in a model with 28 slices, and 28 hydro stratigraphic units (average model thickness of 700 m, with aquifers ranging from a depth of 50 to 500 m below ground surface). The finite element FEFLOW model constructed on this geological interpretation had 331,408 nodes and required 265 time steps to simulate the ten year transient calibration period. This numerical model of groundwater flow required 3 hours to run on a on a server with two, 2.8 GHz processers and 16 Gb. RAM. Calibration was completed using PEST. Horizontal and vertical hydraulic conductivity as well as specific storage for each unit were independent parameters. For the recharge and the horizontal hydraulic conductivity in the three aquifers with the most transient groundwater use, a pilot point parameterization was adopted. A 7*7 grid of pilot points was defined over the RSA that defined a spatially variable horizontal hydraulic conductivity or recharge field. A 7*7 grid of multiplier pilot points that perturbed the more regional field was then superimposed over the 3,600 km2 local study area (LSA). The pilot point
Sensitivity of liquid clouds to homogenous freezing parameterizations.
Herbert, Ross J; Murray, Benjamin J; Dobbie, Steven J; Koop, Thomas
2015-03-16
Water droplets in some clouds can supercool to temperatures where homogeneous ice nucleation becomes the dominant freezing mechanism. In many cloud resolving and mesoscale models, it is assumed that homogeneous ice nucleation in water droplets only occurs below some threshold temperature typically set at -40°C. However, laboratory measurements show that there is a finite rate of nucleation at warmer temperatures. In this study we use a parcel model with detailed microphysics to show that cloud properties can be sensitive to homogeneous ice nucleation as warm as -30°C. Thus, homogeneous ice nucleation may be more important for cloud development, precipitation rates, and key cloud radiative parameters than is often assumed. Furthermore, we show that cloud development is particularly sensitive to the temperature dependence of the nucleation rate. In order to better constrain the parameterization of homogeneous ice nucleation laboratory measurements are needed at both high (>-35°C) and low (<-38°C) temperatures.
Parameterized Facial Expression Synthesis Based on MPEG-4
NASA Astrophysics Data System (ADS)
Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos
2002-12-01
In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech help systems become accustomed to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework; with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human computer interaction, focusing on analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen (1978)). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions, compatible with the MPEG-4 standard.
A simple parameterization of aerosol emissions in RAMS
NASA Astrophysics Data System (ADS)
Letcher, Theodore
Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered a lot of attention due to the implications this effect may have on local water resource distribution within the Region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to representation the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was the computational expense. In fact, the computational expense associated with this scheme was so high, that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it was fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable assumption was the decision to neglect the chemical processes in formed in the formation of Secondary Aerosol (SA), and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. While, SA makes up a substantial portion of the total aerosol burden (much of which is made up of organic material), the representation of this process is highly complex and highly expensive within a numerical
Cirrus parameterization from the FIRE ER-2 observations
NASA Technical Reports Server (NTRS)
Spinhirne, James D.
1990-01-01
Primary goals for the FIRE field experiments were validation of satellite cloud retrievals and study of cloud radiation parameters. The radiometers and lidar observations which were acquired from the NASA ER-2 high altitude aircraft during the FIRE cirrus field study may be applied to derive quantities which would be applicable for comparison to satellite retrievals and to define the cirrus radiative characteristics. The analysis involves parameterization of the vertical cloud distribution and relative radiance effects. An initial case study from the 28 Oct. 1986 cirrus experiment has been carried out, and results from additional experiment days are to be reported. The observations reported are for 1 day. Analysis of the many other cirrus observation cases from the FIRE study show variability of results.
Applying Software Engineering Metrics to Land Surface Parameterization Schemes.
NASA Astrophysics Data System (ADS)
Henderson-Sellers, A.; Henderson-Sellers, B.; Pollard, D.; Verner, J. M.; Pitman, A. J.
1995-05-01
In addition to model validation techniques and intermodel comparison projects, the authors propose the use of software engineering metrics as an additional tool for the enhancement of `quality' in climate models. By discriminating between internal, directly measurable characteristics of structural complexity, and external characteristics, such as maintainability and comprehensibility, a way to benefit climate modeling by the use of easily derivable metrics is explored. As a small illustration, the results of a pilot project are presented. This is a subproject of the Project for Intercomparison of Landsurface Parameterization Schemes in which the authors use some typical structural complexity metrics, namely, for control flow, size, and coupling. Finally, and purely indicatively, the authors compare the results obtained from these metrics with scientists' subjective views of the psychological complexity of the programs.
A New Parameterization Framework for Boundary-Layer Cumuli
Berg, Larry K.; Stull, Roland B.
2005-03-14
The cumulus parameterization framework is called the Cumulus Potential (CuP) scheme. Within this framework, the scheme uses Joint Probability Density Functions (JPDFs) of temperature and moisture and the mean temperature profile to predict the amount and size distribution of fair-weather cumuli. This scheme considers the diversity of air parcels over a heterogeneous surface, and recognizes that some rising parcels become cumuli, while other parcels remain clear updrafts. Once a parcel becomes cloudy, the thermodynamic properties and the exchange of mass between the cloud and environment are calculated using a cloud model within the CuP framework. The primary advantages of the new scheme are the prediction of cloud-base mass flux, cloud cover, and a range of cloud-top heights.
Parameterization of Aerosol Sinks in Chemical Transport Models
NASA Technical Reports Server (NTRS)
Colarco, Peter
2012-01-01
The modelers point of view is that the aerosol problem is one of sources, evolution, and sinks. Relative to evolution and sink processes, enormous attention is given to the problem of aerosols sources, whether inventory based (e.g., fossil fuel emissions) or dynamic (e.g., dust, sea salt, biomass burning). On the other hand, aerosol losses in models are a major factor in controlling the aerosol distribution and lifetime. Here we shine some light on how aerosol sinks are treated in modern chemical transport models. We discuss the mechanisms of dry and wet loss processes and the parameterizations for those processes in a single model (GEOS-5). We survey the literature of other modeling studies. We additionally compare the budgets of aerosol losses in several of the ICAP models.
Parameterization of ion channeling half-angles and minimum yields
NASA Astrophysics Data System (ADS)
Doyle, Barney L.
2016-03-01
A MS Excel program has been written that calculates ion channeling half-angles and minimum yields in cubic bcc, fcc and diamond lattice crystals. All of the tables and graphs in the three Ion Beam Analysis Handbooks that previously had to be manually looked up and read from were programed into Excel in handy lookup tables, or parameterized, for the case of the graphs, using rather simple exponential functions with different power functions of the arguments. The program then offers an extremely convenient way to calculate axial and planar half-angles, minimum yields, effects on half-angles and minimum yields of amorphous overlayers. The program can calculate these half-angles and minimum yields for axes and [h k l] planes up to (5 5 5). The program is open source and available at
Precisely parameterized experimental and computational models of tissue organization†
Sekar, Rajesh B.; Blake, Robert; Park, JinSeok; Trayanova, Natalia A.; Tung, Leslie; Levchenko, Andre
2016-01-01
Patterns of cellular organization in diverse tissues frequently display a complex geometry and topology tightly related to the tissue function. Progressive disorganization of tissue morphology can lead to pathologic remodeling, necessitating the development of experimental and theoretical methods of analysis of the tolerance of normal tissue function to structural alterations. A systematic way to investigate the relationship of diverse cell organization to tissue function is to engineer two-dimensional cell monolayers replicating key aspects of the in vivo tissue architecture. However, it is still not clear how this can be accomplished on a tissue level scale in a parameterized fashion, allowing for a mathematically precise definition of the model tissue organization and properties down to a cellular scale with a parameter dependent gradual change in model tissue organization. Here, we describe and use a method of designing precisely parameterized, geometrically complex patterns that are then used to control cell alignment and communication of model tissues. We demonstrate direct application of this method to guiding the growth of cardiac cell cultures and developing mathematical models of cell function that correspond to the underlying experimental patterns. Several anisotropic patterned cultures spanning a broad range of multicellular organization, mimicking the cardiac tissue organization of different regions of the heart, were found to be similar to each other and to isotropic cell monolayers in terms of local cell–cell interactions, reflected in similar confluency, morphology and connexin-43 expression. However, in agreement with the model predictions, different anisotropic patterns of cell organization, paralleling in vivo alterations of cardiac tissue morphology, resulted in variable and novel functional responses with important implications for the initiation and maintenance of cardiac arrhythmias. We conclude that variations of tissue geometry and
The Parameterization of All Robust Stabilizing Simple Repetitive Controllers
NASA Astrophysics Data System (ADS)
Yamada, Kou; Sakanushi, Tatsuya; Ando, Yoshinori; Hagiwara, Takaaki; Murakami, Iwanori; Takenaga, Hiroshi; Tanaka, Hiroshi; Matsuura, Shun
The modified repetitive control system is a type of servomechanism for the periodic reference input. That is, the modified repetitive control system follows the periodic reference input with small steady state error, even if a periodic disturbance or an uncertainty exists in the plant. Using previously proposed modified repetitive controllers, even if the plant does not include time-delay, transfer functions from the periodic reference input to the output and from the disturbance to the output have infinite numbers of poles. When transfer functions from the periodic reference input to the output and from the disturbance to the output have infinite numbers of poles, it is difficult to specify the input-output characteristic and the disturbance attenuation characteristic. From the practical point of view, it is desirable that the input-output characteristic and the disturbance attenuation characteristic are easily specified. In order to specify the input-output characteristic and the disturbance attenuation characteristic easily, transfer functions from the periodic reference input to the output and from the disturbance to the output are desirable to have finite numbers of poles. From this viewpoint, Yamada et al. proposed the concept of simple repetitive control systems such that the controller works as a modified repetitive controller and transfer functions from the periodic reference input to the output and from the disturbance to the output have finite numbers of poles. In addition, Yamada et al. clarified the parameterization of all stabilizing simple repetitive controllers. However, the method by Yamada et al. cannot be applied for the plant with uncertainty. The purpose of this paper is to propose the parameterization of all robust stabilizing simple repetitive controllers for the plant with uncertainty.
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter applied images, which cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
Systematic Parameterization of Monovalent Ions Employing the Nonbonded Model.
Li, Pengfei; Song, Lin Frank; Merz, Kenneth M
2015-04-14
Monovalent ions play fundamental roles in many biological processes in organisms. Modeling these ions in molecular simulations continues to be a challenging problem. The 12-6 Lennard-Jones (LJ) nonbonded model is widely used to model monovalent ions in classical molecular dynamics simulations. A lot of parameterization efforts have been reported for these ions with a number of experimental end points. However, some reported parameter sets do not have a good balance between the two Lennard-Jones parameters (the van der Waals (VDW) radius and potential well depth), which affects their transferability. In the present work, via the use of a noble gas curve we fitted in former work (J. Chem. Theory Comput. 2013, 9, 2733), we reoptimized the 12-6 LJ parameters for 15 monovalent ions (11 positive and 4 negative ions) for three extensively used water models (TIP3P, SPC/E, and TIP4P(EW)). Since the 12-6 LJ nonbonded model performs poorly in some instances for these ions, we have also parameterized the 12-6-4 LJ-type nonbonded model (J. Chem. Theory Comput. 2014, 10, 289) using the same three water models. The three derived parameter sets focused on reproducing the hydration free energies (the HFE set) and the ion-oxygen distance (the IOD set) using the 12-6 LJ nonbonded model and the 12-6-4 LJ-type nonbonded model (the 12-6-4 set) overall give improved results. In particular, the final parameter sets showed better agreement with quantum mechanically calculated VDW radii and improved transferability to ion-pair solutions when compared to previous parameter sets.
Synthesizing 3D Surfaces from Parameterized Strip Charts
NASA Technical Reports Server (NTRS)
Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri
2004-01-01
We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods to allow for rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows for trends and events to be identified. Strip charts can be overlayed when multiple parameters need to visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area to illustrate the differences in the parameters and the overlayed strip charts, and it can rapidly be scanned by humans to gain insight. The selection of the third dimension must be a parallel or parameterized homogenous resource in the target domain, defined using a finite, ordered, enumerated type, and not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark and drilling down to source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.
Coupled radiative convective equilibrium simulations with explicit and parameterized convection
NASA Astrophysics Data System (ADS)
Hohenegger, Cathy; Stevens, Bjorn
2016-09-01
Radiative convective equilibrium has been applied in past studies to various models given its simplicity and analogy to the tropical climate. At convection-permitting resolution, the focus has been on the organization of convection that appears when using fixed sea surface temperature (SST). Here the SST is allowed to freely respond to the surface energy. The goals are to examine and understand the resulting transient behavior, equilibrium state, and perturbations thereof, as well as to compare these results to a simulation integrated with parameterized cloud and convection. Analysis shows that the coupling between the SST and the net surface energy acts to delay the onset of self-aggregation and may prevent it, in our case, for a slab ocean of less than 1 m. This is so because SST gradients tend to oppose the shallow low-level circulation that is associated with the self-aggregation of convection. Furthermore, the occurrence of self-aggregation is found to be necessary for reaching an equilibrium state and avoiding a greenhouse-like climate. In analogy to the present climate, the self-aggregation generates the dry and clear subtropics that allow the system to efficiently cool. In contrast, strong shortwave cloud radiative effects, much stronger than at convection-permitting resolution, prevent the simulation with parameterized cloud and convection to fall into a greenhouse state. The convection-permitting simulations also suggest that cloud feedbacks, as arising when perturbing the equilibrium state, may be very different, and in our case less negative, than what emerges from general circulation models.
Parameterization of Infrared Absorption in Midlatitude Cirrus Clouds
Sassen, Kenneth; Wang, Zhien; Platt, C.M.R.; Comstock, Jennifer M.
2003-01-01
Employing a new approach based on combined Raman lidar and millimeter-wave radar measurements and a parameterization of the infrared absorption coefficient {sigma}{sub a}(km{sup -1}) in terms of retrieved cloud microphysics, we derive a statistical relation between {sigma}{sub a} and cirrus cloud temperature. The relations {sigma}{sub a} = 0.3949 + 5.3886 x 10{sup -3} T + 1.526 x 10{sup -5} T{sup 2} for ambient temperature (T,{sup o}C), and {sigma}{sub a} = 0.2896 + 3.409 x 10{sup -3} T{sub m} for midcloud temperature (T{sub m}, {sup o}C), are found using a second order polynomial fit. Comparison with two {sigma}{sub a} versus T{sub m} relations obtained primarily from midlatitude cirrus using the combined lidar/infrared radiometer (LIRAD) approach reveals significant differences. However, we show that this reflects both the previous convention used in curve fitting (i. e., {sigma}{sub a} {yields} 0 at {approx} 80 C), and the types of clouds included in the datasets. Without such constraints, convergence is found in the three independent remote sensing datasets within the range of conditions considered valid for cirrus (i.e., cloud optical depth {approx} 3.0 and T{sub m} < {approx}20 C). Hence for completeness we also provide reanalyzed parameterizations for a visible extinction coefficient {sigma}{sub a} versus T{sub m} relation for midlatitude cirrus, and a data sample involving cirrus that evolved into midlevel altostratus clouds with higher optical depths.
Benchmark analysis of parameterization for terrestrial carbon cycle model (Invited)
NASA Astrophysics Data System (ADS)
Luo, Y.; Zhou, X.; Verburg, P.; Arnone, J.
2010-12-01
Parameterization of terrestrial ecosystem models plays an important role in accurately predicting carbon-climate feedback. More and more studies have shown that a fixed set of parameters cannot adequately represent spatial and temporal variations of ecosystem functions over broad geographical locations and/or over long time. In this study, we conducted benchmark analysis of a terrestrial ecosystem (TECO) model against a highly accurate data set from mesocosm study in Ecologically Controlled Enclosed Lysimeter Laboratories (EcoCELLs) at Desert Research Institute, Reno, Nevada. The mesocosm study involved shoot and whole plant harvests in fall, fallow during winter, and fertilization treatments in year 2. We used a Markov chain Monte Carlo (MCMC) technique to estimate parameters of the TECO model and measure the model performance with estimated parameters. Our analysis showed that the model performance with one set of estimated parameters was poor over a two-year experimental duration. The model performance was slightly improved with root exudation as an additional mechanism of carbon transfer from plants to rhizosphere. The performance was significantly improved when five sets of parameters were estimated for five respective periods, which spanned from seeding to shoot harvest in year 1, from shoot to whole plant harvest in year 1, fallow, from seeding to plant harvest with fertilization in year 2, and from plant harvest to the end of the project in year 2. The five sets of parameter values are significantly different, indicating that experimental treatments caused discontinuous (or discrete) changes in ecosystem processes. The discontinuous changes in ecosystem processes pose significant challenges for carbon cycle model parameterization and generate uncertainties for model prediction.
Stochastic sea ice parameterizations and impacts on polar predictability
NASA Astrophysics Data System (ADS)
Juricke, Stephan; Goessling, Helge; Jung, Thomas
2015-04-01
Stochastic sea ice parameterizations are implemented in a global coupled model to include first estimates of model uncertainty in the assessment of sea ice predictability. The impact of incorporating estimates of model uncertainty in the sea ice dynamics is compared to the impact of atmospheric initial condition uncertainty. In this context a set of ensembles with stochastic sea ice strength perturbations and a set of ensembles with atmospheric initial condition perturbations are investigated. Seasonal integrations show that especially during the first weeks the incorporation of model uncertainty estimates in the sea ice dynamics leads to a significant increase in ensemble spread of sea ice thickness in the central Arctic and along coastlines when compared to the ensembles with atmospheric initial perturbations. The latter, in contrast, produce significantly larger variability along the ice edge. During the first weeks of the integration, applying the combined perturbations leads to an accumulation of spread from both uncertainties pointing at the importance of including estimates of model uncertainty for subseasonal sea ice predictions. After the first few weeks, however, the differences between ensemble spreads become mostly insignificant so that estimates of seasonal potential sea ice predictability for the Arctic remain largely unaffected by uncertainty estimates in the sea ice dynamics. For the Antarctic sea ice, differences in sea ice thickness spread between the different ensemble configurations are less pronounced throughout the year. Stochastic perturbations are also applied to the sea ice thermodynamics, namely the sea ice albedo parameterization, to investigate the diverse impacts of the incorporation of uncertainty estimates in different parts of the sea ice model, affecting different regions of the polar regions and at different times during the annual cycle.
Parameterization of tree-ring growth in Siberia
NASA Astrophysics Data System (ADS)
Tychkov, Ivan; Popkova, Margarita; Shishov, Vladimir; Vaganov, Eugene
2016-04-01
No doubt, climate-tree growth relationship is an one of the useful and interesting subject of studying in dendrochronology. It provides an information of tree growth dependency on climatic environment, but also, gives information about growth conditions and whole tree-ring growth process for long-term periods. New parameterization approach of the Vaganov-Shashkin process-based model (VS-model) is developed to described critical process linking climate variables with tree-ring formation. The approach (co-called VS-Oscilloscope) is presented as a computer software with graphical interface. As most process-based tree-ring models, VS-model's initial purpose is to describe variability of tree-ring radial growth due to variability of climatic factors, but also to determinate principal factors limiting tree-ring growth. The principal factors affecting on the growth rate of cambial cells in the VS-model are temperature, day light and soil moisture. Detailed testing of VS-Oscilloscope was done for semi-arid area of southern Siberia (Khakassian region). Significant correlations between initial tree-ring chronologies and simulated tree-ring growth curves were obtained. Direct natural observations confirm obtained simulation results including unique growth characteristic for semi-arid habitats. New results concerning formation of wide and narrow rings under different climate conditions are considered. By itself the new parameterization approach (VS-oscilloscope) is an useful instrument for better understanding of various processes in tree-ring formation. The work was supported by the Russian Science Foundation (RSF # 14-14-00219).
Parameterization of wind turbine impacts on hydrodynamics and sediment transport
NASA Astrophysics Data System (ADS)
Rivier, Aurélie; Bennis, Anne-Claire; Pinon, Grégory; Magar, Vanesa; Gross, Markus
2016-10-01
Monopile foundations of offshore wind turbines modify the hydrodynamics and sediment transport at local and regional scales. The aim of this work is to assess these modifications and to parameterize them in a regional model. In the present study, this is achieved through a regional circulation model, coupled with a sediment transport module, using two approaches. One approach is to explicitly model the monopiles in the mesh as dry cells, and the other is to parameterize them by adding a drag force term to the momentum and turbulence equations. Idealised cases are run using hydrodynamical conditions and sediment grain sizes typical from the area located off Courseulles-sur-Mer (Normandy, France), where an offshore windfarm is under planning, to assess the capacity of the model to reproduce the effect of the monopile on the environment. Then, the model is applied to a real configuration on an area including the future offshore windfarm of Courseulles-sur-Mer. Four monopiles are represented in the model using both approaches, and modifications of the hydrodynamics and sediment transport are assessed over a tidal cycle. In relation to local hydrodynamic effects, it is observed that currents increase at the side of the monopile and decrease in front of and downstream of the monopile. In relation to sediment transport effect, the results show that resuspension and erosion occur around the monopile in locations where the current speed increases due to the monopile presence, and sediments deposit downstream where the bed shear stress is lower. During the tidal cycle, wakes downstream of the monopile reach the following monopile and modify the velocity magnitude and suspended sediment concentration patterns around the second monopile.
Ameriflux data used for verification of surface layer parameterizations
NASA Astrophysics Data System (ADS)
Tassone, Caterina; Ek, Mike
2015-04-01
The atmospheric surface-layer parameterization is an important component in a coupled model, as its output, the surface exchange coefficients for momentum, heat and humidity, are used to determine the fluxes of these quantities between the land-surface and the atmosphere. An accurate prediction of these fluxes is therefore required in order to provide a correct forecast of the surface temperature, humidity and ultimately also the precipitation in a model. At the NOAA/NCEP Environmental Modeling Center, a one-dimensional Surface Layer Simulator (SLS) has been developed for simulating the surface layer and its interface. Two different configurations of the SLS exist, replicating in essence the way in which the surface layer is simulated in the GFS and the NAM, respectively. Input data for the SLS are the basic atmospheric quantities of winds, temperature, humidity and pressure evaluated at a specific height above the ground, surface values of temperature and humidity, and the momentum roughness length z0. The output values of the SLS are the surface exchange coefficients for heat and momentum. The exchange coefficients computed by the SLS are then compared with independent estimates derived from measured surface heat fluxes. The SLS is driven by a set of Ameriflux data acquired at 22 stations over a period of several years. This provides a large number of different vegetation characteristics and helps ensure statistical significance. Even though there are differences in the respective surface layer formulations between the GFS and the NAM, they are both based on similarity theory, and therefore lower boundary conditions, i.e. roughness lengths for momentum and heat, and profile functions are among the main components of the surface layer that need to be evaluated. The SLS is a very powerful tool for this type of evaluation. We present the results of the Ameriflux comparison and discuss the implications of our results for the surface layer parameterizations of the NAM
Modeling Jupiter's Quasi Quadrennial Oscillation (QQO) with Wave Drag Parameterizations
NASA Astrophysics Data System (ADS)
Cosentino, Rick; Morales-Juberias, Raul; Greathouse, Thomas K.; Orton, Glenn S.
2016-10-01
The QQO in Jupiter's atmosphere was first discovered after 7.8 micron infrared observations spanning the 1980's and 1990's detected a temperature oscillation near 10 hPa (Orton et al. 1991, Science 252, 537, Leovy et. al. 1991, Nature 354, 380, Friedson 1999, Icarus 137, 34). New observations using the Texas Echelon cross-dispersed Echelle Spectrograph (TEXES), mounted on the NASA Infrared Telescope facility (IRTF), have been used to characterize a complete cycle of the QQO between January 2012 and January 2016 (Greathouse et al. 2016, DPS) . These new observations not only show the thermal oscillation at 10 hPa, but they also show that the QQO extends upwards in Jupiter's atmosphere to pressures as high as 0.4 hPa. We incorporated three different wave-drag parameterizations into the EPIC General Circulation Model (Dowling et al. 1998, Icarus 132, 221) to simulate the observed Jovian QQO temperature signatures as a function of latitude, pressure and time using results from the TEXES datasets as new constraints. Each parameterization produces unique results and offers insight into the spectra of waves that likely exist in Jupiter's atmosphere to force the QQO. High-frequency gravity waves produced from convection are extremely difficult to directly observe but likely contribute a significant portion to the QQO momentum budget. We use different models to simulate the effects of waves such as these, to indirectly explore their spectrum in Jupiter's atmosphere by varying their properties. The model temperature outputs show strong correlations to equatorial and mid-latitude temperature fields retrieved from the TEXES datasets at different epochs. Our results suggest the QQO phenomenon could be more than one alternating zonal jet that descends over time in response to Jovian atmospheric forcing (e.g. gravity waves from convection).Research funding provided by the NRAO Grote Reber Pre-Doctoral Fellowship. Computing resources include the NMT PELICAN cluster and the CISL
NASA Astrophysics Data System (ADS)
Brown, Steven S.; Dubé, William P.; Fuchs, Hendrik; Ryerson, Thomas B.; Wollny, Adam G.; Brock, Charles A.; Bahreini, Roya; Middlebrook, Ann M.; Neuman, J. Andrew; Atlas, Elliot; Roberts, James M.; Osthoff, Hans D.; Trainer, Michael; Fehsenfeld, Frederick C.; Ravishankara, A. R.
2009-04-01
This paper presents determinations of reactive uptake coefficients for N2O5, γ(N2O5), on aerosols from nighttime aircraft measurements of ozone, nitrogen oxides, and aerosol surface area on the NOAA P-3 during Second Texas Air Quality Study (TexAQS II). Determinations based on both the steady state approximation for NO3 and N2O5 and a plume modeling approach yielded γ(N2O5) substantially smaller than current parameterizations used for atmospheric modeling and generally in the range 0.5-6 × 10-3. Dependence of γ(N2O5) on variables such as relative humidity and aerosol composition was not apparent in the determinations, although there was considerable scatter in the data. Determinations were also inconsistent with current parameterizations of the rate coefficient for homogenous hydrolysis of N2O5 by water vapor, which may be as much as a factor of 10 too large. Nocturnal halogen activation via conversion of N2O5 to ClNO2 on chloride aerosol was not determinable from these data, although limits based on laboratory parameterizations and maximum nonrefractory aerosol chloride content showed that this chemistry could have been comparable to direct production of HNO3 in some cases.
Xia, Xiangao
2015-01-01
Aerosols impact clear-sky surface irradiance () through the effects of scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and have been established to describe the aerosol direct radiative effect on (ADRE). However, considerable uncertainties remain associated with ADRE due to the incorrect estimation of (τa in the absence of aerosols). Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on are thoroughly considered, leading to an effective parameterization of as a nonlinear function of these three quantities. The parameterization is proven able to estimate with a mean bias error of 0.32 W m−2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from , or vice versa, show that the root-mean-square errors were 0.08 and 10.0 Wm−2, respectively. Therefore, this study establishes a straightforward method to derive from τa or estimate τa from measurements if water vapor measurements are available. PMID:26395310
Noise suppression in scatter correction for cone-beam CT
Zhu, Lei; Wang, Jing; Xing, Lei
2009-01-01
Scatter correction is crucial to the quality of reconstructed images in x-ray cone-beam computed tomography (CBCT). Most of existing scatter correction methods assume smooth scatter distributions. The high-frequency scatter noise remains in the projection images even after a perfect scatter correction. In this paper, using a clinical CBCT system and a measurement-based scatter correction, the authors show that a scatter correction alone does not provide satisfactory image quality and the loss of the contrast-to-noise ratio (CNR) of the scatter corrected image may overwrite the benefit of scatter removal. To circumvent the problem and truly gain from scatter correction, an effective scatter noise suppression method must be in place. They analyze the noise properties in the projections after scatter correction and propose to use a penalized weighted least-squares (PWLS) algorithm to reduce the noise in the reconstructed images. Experimental results on an evaluation phantom (Catphan©600) show that the proposed algorithm further reduces the reconstruction error in a scatter corrected image from 10.6% to 1.7% and increases the CNR by a factor of 3.6. Significant image quality improvement is also shown in the results on an anthropomorphic phantom, in which the global noise level is reduced and the local streaking artifacts around bones are suppressed. PMID:19378735
Sorg, T.J.
1991-01-01
The U.S. Environmental Protection Agency proposed new and revised regulations on radionuclide contaminants in drinking water in June 1991. During the 1980's, the Drinking Water Research Division, USEPA conducted a research program to evaluate various technologies to remove radium, uranium and radon from drinking water. The research consisted of laboratory and field studies conducted by USEPA, universities and consultants. The paper summarizes the results of the most significant projects completed. General information is also presented on the general chemistry of the three radionuclides. The information presented indicates that the most practical treatment methods for radium are ion exchange and lime-soda softening and reverse osmosis. The methods tested for radon are aeration and granular activated carbon and the methods for uranium are anion exchange and reverse osmosis.
Parameterization of spectral distributions for pion and kaon production in proton-proton collisions.
Schneider, J P; Norbury, J W; Cucinotta, F A
1995-04-01
Accurate semi-empirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations depend on the outgoing meson momentum and also the proton energy, and are able to be reduced to very simple analytical formulas suitable for cosmic-ray transport.
A shallow convection parameterization for the non-hydrostatic MM5 mesoscale model
Seaman, N.L.; Kain, J.S.; Deng, A.
1996-04-01
A shallow convection parameterization suitable for the Pennsylvannia State University (PSU)/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) is being developed at PSU. The parameterization is based on parcel perturbation theory developed in conjunction with a 1-D Mellor Yamada 1.5-order planetary boundary layer scheme and the Kain-Fritsch deep convection model.
The CCPP-ARM Parameterization Testbed (CAPT): Where Climate Simulation Meets Weather Prediction
Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J
2003-11-21
To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands, in particular, that the GCM parameterizations of unresolved processes should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provied that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations then can be similarly tested. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the USDOE is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM. Numerical weather prediction methods show promise for improving parameterizations in climate GCMs.
Impact of Apex Model parameterization strategy on estimated benefit of conservation practices
Technology Transfer Automated Retrieval System (TEKTRAN)
Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on clay pan soils were developed with the objectives, 1. Evaluate model performance of three parameterization strategies on a validation watershed; and 2. Compare predictions of water quality benefi...
Parameterization of spectral distributions for pion and kaon production in proton-proton collisions
NASA Technical Reports Server (NTRS)
Schneider, John P.; Norbury, John W.; Cucinotta, Frank A.
1995-01-01
Accurate semi-empirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations depend on the outgoing meson momentum and also the proton energy, and are able to be reduced to very simple analytical formulas suitable for cosmic-ray transport.
Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil
Technology Transfer Automated Retrieval System (TEKTRAN)
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
Radiative flux and forcing parameterization error in aerosol-free clear skies
Pincus, Robert; Oreopoulos, Lazaros; Ackerman, Andrew S.; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A.; Cady-Pereira, Karen E.; Cole, Jason N. S.; Dufresne, Jean -Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J.; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M.
2015-07-03
This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m^{2}, while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.
NASA Astrophysics Data System (ADS)
Sakradzija, Mirjana; Seifert, Axel; Dipankar, Anurag
2016-06-01
The parameterization of shallow cumuli across a range of model grid resolutions of kilometre-scales faces at least three major difficulties: (1) closure assumptions of conventional parameterization schemes are no longer valid, (2) stochastic fluctuations become substantial and increase with grid resolution, and (3) convective circulations that emerge on the model grids are under-resolved and grid-scale dependent. Here we develop a stochastic parameterization of shallow cumulus clouds to address the first two points, and we study how this stochastic parameterization interacts with the under-resolved convective circulations in a convective case over the ocean. We couple a stochastic model based on a canonical ensemble of shallow cumuli to the Eddy-Diffusivity Mass-Flux parameterization in the icosahedral nonhydrostatic (ICON) model. The moist-convective area fraction is perturbed by subsampling the distribution of subgrid convective states. These stochastic perturbations represent scale-dependent fluctuations around the quasi-equilibrium state of a shallow cumulus ensemble. The stochastic parameterization reproduces the average and higher order statistics of the shallow cumulus case adequately and converges to the reference statistics with increasing model resolution. The interaction of parameterizations with model dynamics, which is usually not considered when parameterizations are developed, causes a significant influence on convection in the gray zone. The stochastic parameterization interacts strongly with the model dynamics, which changes the regime and energetics of the convective flows compared to the deterministic simulations. As a result of this interaction, the emergence of convective circulations in combination with the stochastic parameterization can even be beneficial on the high-resolution model grids.
ERIC Educational Resources Information Center
di Francia, Giuliano Toraldo
1973-01-01
The art of deriving information about an object from the radiation it scatters was once limited to visible light. Now due to new techniques, much of the modern physical science research utilizes radiation scattering. (DF)
NASA Technical Reports Server (NTRS)
Ricks, Douglas W.
1993-01-01
There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
Geometry parameterization and multidisciplinary constrained optimization of coronary stents.
Pant, Sanjay; Bressloff, Neil W; Limbert, Georges
2012-01-01
Coronary stents are tubular type scaffolds that are deployed, using an inflatable balloon on a catheter, most commonly to recover the lumen size of narrowed (diseased) arterial segments. A common differentiating factor between the numerous stents used in clinical practice today is their geometric design. An ideal stent should have high radial strength to provide good arterial support post-expansion, have high flexibility for easy manoeuvrability during deployment, cause minimal injury to the artery when being expanded and, for drug eluting stents, should provide adequate drug in the arterial tissue. Often, with any stent design, these objectives are in competition such that improvement in one objective is a result of trade-off in others. This study proposes a technique to parameterize stent geometry, by varying the shape of circumferential rings and the links, and assess performance by modelling the processes of balloon expansion and drug diffusion. Finite element analysis is used to expand each stent (through balloon inflation) into contact with a representative diseased coronary artery model, followed by a drug release simulation. Also, a separate model is constructed to measure stent flexibility. Since the computational simulation time for each design is very high (approximately 24 h), a Gaussian process modelling approach is used to analyse the design space corresponding to the proposed parameterization. Four objectives to assess recoil, stress distribution, drug distribution and flexibility are set up to perform optimization studies. In particular, single objective constrained optimization problems are set up to improve the design relative to the baseline geometry-i.e. to improve one objective without compromising the others. Improvements of 8, 6 and 15% are obtained individually for stress, drug and flexibility metrics, respectively. The relative influence of the design features on each objective is quantified in terms of main effects, thereby suggesting the
An Empirical Cumulus Parameterization Scheme for a Global Spectral Model
NASA Technical Reports Server (NTRS)
Rajendran, K.; Krishnamurti, T. N.; Misra, V.; Tao, W.-K.
2004-01-01
Realistic vertical heating and drying profiles in a cumulus scheme is important for obtaining accurate weather forecasts. A new empirical cumulus parameterization scheme based on a procedure to improve the vertical distribution of heating and moistening over the tropics is developed. The empirical cumulus parameterization scheme (ECPS) utilizes profiles of Tropical Rainfall Measuring Mission (TRMM) based heating and moistening derived from the European Centre for Medium- Range Weather Forecasts (ECMWF) analysis. A dimension reduction technique through rotated principal component analysis (RPCA) is performed on the vertical profiles of heating (Q1) and drying (Q2) over the convective regions of the tropics, to obtain the dominant modes of variability. Analysis suggests that most of the variance associated with the observed profiles can be explained by retaining the first three modes. The ECPS then applies a statistical approach in which Q1 and Q2 are expressed as a linear combination of the first three dominant principal components which distinctly explain variance in the troposphere as a function of the prevalent large-scale dynamics. The principal component (PC) score which quantifies the contribution of each PC to the corresponding loading profile is estimated through a multiple screening regression method which yields the PC score as a function of the large-scale variables. The profiles of Q1 and Q2 thus obtained are found to match well with the observed profiles. The impact of the ECPS is investigated in a series of short range (1-3 day) prediction experiments using the Florida State University global spectral model (FSUGSM, T126L14). Comparisons between short range ECPS forecasts and those with the modified Kuo scheme show a very marked improvement in the skill in ECPS forecasts. This improvement in the forecast skill with ECPS emphasizes the importance of incorporating realistic vertical distributions of heating and drying in the model cumulus scheme. This
Accuracy of cuticular resistance parameterizations in ammonia dry deposition models
NASA Astrophysics Data System (ADS)
Schrader, Frederik; Brümmer, Christian; Richter, Undine; Fléchard, Chris; Wichink Kruit, Roy; Erisman, Jan Willem
2016-04-01
Accurate representation of total reactive nitrogen (Nr) exchange between ecosystems and the atmosphere is a crucial part of modern air quality models. However, bi-directional exchange of ammonia (NH3), the dominant Nr species in agricultural landscapes, still poses a major source of uncertainty in these models, where especially the treatment of non-stomatal pathways (e.g. exchange with wet leaf surfaces or the ground layer) can be challenging. While complex dynamic leaf surface chemistry models have been shown to successfully reproduce measured ammonia fluxes on the field scale, computational restraints and the lack of necessary input data have so far limited their application in larger scale simulations. A variety of different approaches to modelling dry deposition to leaf surfaces with simplified steady-state parameterizations have therefore arisen in the recent literature. We present a performance assessment of selected cuticular resistance parameterizations by comparing them with ammonia deposition measurements by means of eddy covariance (EC) and the aerodynamic gradient method (AGM) at a number of semi-natural and grassland sites in Europe. First results indicate that using a state-of-the-art uni-directional approach tends to overestimate and using a bi-directional cuticular compensation point approach tends to underestimate cuticular resistance in some cases, consequently leading to systematic errors in the resulting flux estimates. Using the uni-directional model, situations where low ratios of total atmospheric acids to NH3 concentration occur lead to fairly high minimum cuticular resistances, limiting predicted downward fluxes in conditions usually favouring deposition. On the other hand, the bi-directional model used here features a seasonal cycle of external leaf surface emission potentials that can lead to comparably low effective resistance estimates under warm and wet conditions, when in practice an expected increase in the compensation point due to
Parameterization of Fire Injection Height in Large Scale Transport Model
NASA Astrophysics Data System (ADS)
Paugam, R.; Wooster, M.; Atherton, J.; Val Martin, M.; Freitas, S.; Kaiser, J. W.; Schultz, M. G.
2012-12-01
The parameterization of fire injection height in global chemistry transport model is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP) which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modeled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here in our approach the Freitas model is modified, in particular we added (i) an equation for mass conservation, (ii) a scheme to parameterize horizontal entrainment/detrainment, and (iii) a new initialization module which estimates the sensible heat released by the fire on the basis of measured FRP rather than fuel cover type. FRP and Active Fire (AF) area necessary for the initialization of the model are directly derived from a modified version of the Dozier algorithm applied to the MOD14 product. An optimization (using the simulating annealing method) of this new version of the PRM is then proposed based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set covers the main fire region (Africa, Siberia, Indonesia, and North and South America) and is set up to (i) retain fires where plume height and FRP can be easily linked (i.e. avoid large fire cluster where individual plume might interact), (ii) keep fire which show decrease of FRP and AF area after MISR overpass (i.e. to minimize effect of the time period needed for the plume to
Parameterization of Fire Injection Height in Large Scale Transport Model
NASA Astrophysics Data System (ADS)
Paugam, r.; Wooster, m.; Freitas, s.; Gonzi, s.; Palmer, p.
2012-04-01
The parameterization of fire injection height in global chemistry transport model is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP) which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modelled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here in our approach the Freitas model is modified. Major modifications are implemented in its initialisation module: (i) CHF and the Active Fire area are directly force from FRP data derived from a modified version of the Dozier algorithm applied to the MOD12 product, (ii) and a new module of the buoyancy flux calculation is implemented instead of the original module based on the Morton Taylor and Turner equation. Furthermore the dynamical core of the plume model is also modified with a new entrainment scheme inspired from latest results from shallow convection parameterization. Optimization and validation of this new version of the Freitas PRM is based on fire plume characteristics derived from the official MISR plume height project and atmospheric profile extracted from the ECMWF analysis. The data set is (i) build up to only keep fires where plume height and FRP can be easily linked (i.e. avoid large fire cluster where individual plume might interact) and (ii) split per fire land cover type to optimize the constant of the buoyancy flux module and the entrainment scheme to different fire regime. Result shows that the new PRM is
Kuo-Nan Liou
2003-12-29
OAK-B135 (a) We developed a 3D radiative transfer model to simulate the transfer of solar and thermal infrared radiation in inhomogeneous cirrus clouds. The model utilized a diffusion approximation approach (four-term expansion in the intensity) employing Cartesian coordinates. The required single-scattering parameters, including the extinction coefficient, single-scattering albedo, and asymmetry factor, for input to the model, were parameterized in terms of the ice water content and mean effective ice crystal size. The incorporation of gaseous absorption in multiple scattering atmospheres was accomplished by means of the correlated k-distribution approach. In addition, the strong forward diffraction nature in the phase function was accounted for in each predivided spatial grid based on a delta-function adjustment. The radiation parameterization developed herein is applied to potential cloud configurations generated from GCMs to investigate broken clouds and cloud-overlapping effects on the domain-averaged heating rate. Cloud inhomogeneity plays an important role in the determination of flux and heating rate distributions. Clouds with maximum overlap tend to produce less heating than those with random overlap. Broken clouds show more solar heating as well as more IR cooling as compared to a continuous cloud field (Gu and Liou, 2001). (b) We incorporated a contemporary radiation parameterization scheme in the UCLA atmospheric GCM in collaboration with the UCLA GCM group. In conjunction with the cloud/radiation process studies, we developed a physically-based cloud cover formation scheme in association with radiation calculations. The model clouds were first vertically grouped in terms of low, middle, and high types. Maximum overlap was then used for each cloud type, followed by random overlap among the three cloud types. Fu and Liou's 1D radiation code with modification was subsequently employed for pixel-by-pixel radiation calculations in the UCLA GCM. We showed
Winter QPF Sensitivities to Snow Parameterizations and Comparisons to NASA CloudSat Observations
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Haynes, John M.; Jedlovec, Gary J.; Lapenta, William M.
2009-01-01
Steady increases in computing power have allowed for numerical weather prediction models to be initialized and run at high spatial resolution, permitting a transition from larger scale parameterizations of the effects of clouds and precipitation to the simulation of specific microphysical processes and hydrometeor size distributions. Although still relatively coarse in comparison to true cloud resolving models, these high resolution forecasts (on the order of 4 km or less) have demonstrated value in the prediction of severe storm mode and evolution and are being explored for use in winter weather events . Several single-moment bulk water microphysics schemes are available within the latest release of the Weather Research and Forecast (WRF) model suite, including the NASA Goddard Cumulus Ensemble, which incorporate some assumptions in the size distribution of a small number of hydrometeor classes in order to predict their evolution, advection and precipitation within the forecast domain. Although many of these schemes produce similar forecasts of events on the synoptic scale, there are often significant details regarding precipitation and cloud cover, as well as the distribution of water mass among the constituent hydrometeor classes. Unfortunately, validating data for cloud resolving model simulations are sparse. Field campaigns require in-cloud measurements of hydrometeors from aircraft in coordination with extensive and coincident ground based measurements. Radar remote sensing is utilized to detect the spatial coverage and structure of precipitation. Here, two radar systems characterize the structure of winter precipitation for comparison to equivalent features within a forecast model: a 3 GHz, Weather Surveillance Radar-1988 Doppler (WSR-88D) based in Omaha, Nebraska, and the 94 GHz NASA CloudSat Cloud Profiling Radar, a spaceborne instrument and member of the afternoon or "A-Train" of polar orbiting satellites tasked with cataloguing global cloud
Specialized Knowledge Representation and the Parameterization of Context
Faber, Pamela
2016-01-01
Though instrumental in numerous disciplines, context has no universally accepted definition. In specialized knowledge resources it is timely and necessary to parameterize context with a view to more effectively facilitating knowledge representation, understanding, and acquisition, the main aims of terminological knowledge bases. This entails distinguishing different types of context as well as how they interact with each other. This is not a simple objective to achieve despite the fact that specialized discourse does not have as many contextual variables as those in general language (i.e., figurative meaning, irony, etc.). Even in specialized text, context is an extremely complex concept. In fact, contextual information can be specified in terms of scope or according to the type of information conveyed. It can be a textual excerpt or a whole document; a pragmatic convention or a whole culture; a concrete situation or a prototypical scenario. Although these versions of context are useful for the users of terminological resources, such resources rarely support context modeling. In this paper, we propose a taxonomy of context primarily based on scope (local and global) and further divided into syntactic, semantic, and pragmatic facets. These facets cover the specification of different types of terminological information, such as predicate-argument structure, collocations, semantic relations, term variants, grammatical and lexical cohesion, communicative situations, subject fields, and cultures. PMID:26941674
Parameterization and classification of the protein universe via geometric techniques.
Tendulkar, Ashish V; Wangikar, Pramod P; Sohoni, Milind A; Samant, Vivekanand V; Mone, Chetan Y
2003-11-14
We present a scheme for the classification of 3487 non-redundant protein structures into 1207 non-hierarchical clusters by using recurring structural patterns of three to six amino acids as keys of classification. This results in several signature patterns, which seem to decide membership of a protein in a functional category. The patterns provide clues to the key residues involved in functional sites as well as in protein-protein interaction. The discovered patterns include a "glutamate double bridge" of superoxide dismutase, the functional interface of the serine protease and inhibitor, interface of homo/hetero dimers, and functional sites of several enzyme families. We use geometric invariants to decide superimposability of structural patterns. This allows the parameterization of patterns and discovery of recurring patterns via clustering. The geometric invariant-based approach eliminates the computationally explosive step of pair-wise comparison of structures. The results provide a vast resource for the biologists for experimental validation of the proposed functional sites, and for the design of synthetic enzymes, inhibitors and drugs.
Population models for passerine birds: structure, parameterization, and analysis
Noon, B.R.; Sauer, J.R.; McCullough, D.R.; Barrett, R.H.
1992-01-01
Population models have great potential as management tools, as they use infonnation about the life history of a species to summarize estimates of fecundity and survival into a description of population change. Models provide a framework for projecting future populations, determining the effects of management decisions on future population dynamics, evaluating extinction probabilities, and addressing a variety of questions of ecological and evolutionary interest. Even when insufficient information exists to allow complete identification of the model, the modelling procedure is useful because it forces the investigator to consider the life history of the species when determining what parameters should be estimated from field studies and provides a context for evaluating the relative importance of demographic parameters. Models have been little used in the study of the population dynamics of passerine birds because of: (1) widespread misunderstandings of the model structures and parameterizations, (2) a lack of knowledge of life histories of many species, (3) difficulties in obtaining statistically reliable estimates of demographic parameters for most passerine species, and (4) confusion about functional relationships among demographic parameters. As a result, studies of passerine demography are often designed inappropriately and fail to provide essential data. We review appropriate models for passerine bird populations and illustrate their possible uses in evaluating the effects of management or other environmental influences on population dynamics. We identify environmental influences on population dynamics. We identify parameters that must be estimated from field data, briefly review existing statistical methods for obtaining valid estimates, and evaluate the present status of knowledge of these parameters.
Factors influencing the parameterization of tropical anvils within GCMs
Bradley, M.M.; Chin, H.N.S.
1994-03-01
The overall goal of this project is to improve the representation of anvil clouds and their effects in general circulation models (GCMs). We have concentrated on an important portion of the overall goal; the evolution of cumulus-generated anvil clouds and their effects on the large-scale environment. Because of the large range of spatial and temporal scales involved, we have been using a multi-scale approach. For the early-time generation and development of the citrus anvil we are using a cloud-scale model with a horizontal resolution of 1-2 kilometers, while for the transport of anvils by the large-scale flow we are using a mesoscale model with a horizontal resolution of 10-40 kilometers. The eventual goal is to use the information obtained from these simulations, together with available observations to develop an improved cloud parameterization for use in GCMS. The cloud-scale simulation of a midlatitude squall line case and the mesoscale study of a tropical anvil using an anvil generator were presented at the last ARM science team meeting. This paper concentrates on the cloud-scale study of a tropical squall line. Results are compared with its midlatitude counterparts to further our understanding of the formation mechanism of anvil clouds and the sensitivity of radiation to their optical properties.
Parameterization of meandering phenomenon in a stable atmospheric boundary layer
NASA Astrophysics Data System (ADS)
Carvalho, Jonas da Costa; Degrazia, Gervásio Annes; de Vilhena, Marco Túlio; Magalhães, Sergio Garcia; Goulart, Antonio G.; Anfossi, Domenico; Acevedo, Otávio Costa; Moraes, Osvaldo L. L.
2006-08-01
Accounting for the current knowledge of the stable atmospheric boundary layer (ABL) turbulence structure and characteristics, a new formulation for the meandering parameters to be used in a Lagrangian stochastic particle turbulent diffusion model has been derived. That is, expressions for the parameters controlling the meandering oscillation frequency in low wind speed stable conditions are proposed. The classical expression for the meandering autocorrelation function, the turbulent statistical diffusion theory and ABL similarity theory are employed to estimate these parameters. In addition, this new parameterization was introduced into a particular Lagrangian stochastic particle model, which is called Iterative Langevin solution for low wind, validated with the data of Idaho National Laboratory experiments, and compared with others diffusion models. The results of this new approach are shown to agree with the measurements of Idaho experiments and also with those of the other atmospheric diffusion models. The major advance shown in this study is the formulation of the meandering parameters expressed in terms of the characteristic scales (velocity and length scales) describing the physical structure of a turbulent stable boundary layer. These similarity formulas can be used to simulate meandering enhanced diffusion of passive scalars in a low wind speed stable ABL.
New layer thickness parameterization of diffusive convection in the ocean
NASA Astrophysics Data System (ADS)
Zhou, Sheng-Qi; Lu, Yuan-Zheng; Song, Xue-Long; Fer, Ilker
2016-03-01
In the present study, a new parameterization is proposed to describe the convecting layer thickness in diffusive convection. By using in situ observational data of diffusive convection in the lakes and oceans, a wide range of stratification and buoyancy flux is obtained, where the buoyancy frequency N varies between 10-4 and 0.1 s-1 and the heat-related buoyancy flux qT varies between 10-12 and 10-7 m2 s-3. We construct an intrinsic thickness scale, H0 =[qT3 / (κTN8) ] 1 / 4, here κT is the thermal diffusivity. H0 is suggested to be the scale of an energy-containing eddy and it can be alternatively represented as H0 = ηRebPr1/4, here η is the dissipation length scale, Reb is the buoyant Reynolds number, and Pr is the Prandtl number. It is found that the convective layer thickness H is directly linked to the stability ratio Rρ and H0 with the form of H ∼ (Rρ - 1)2H0. The layer thickness can be explained by the convective instability mechanism. To each convective layer, its thickness H reaches a stable value when its thermal boundary layer develops to be a new convecting layer.
Parameterization of mires in a numerical weather prediction model
NASA Astrophysics Data System (ADS)
Yurova, Alla; Tolstykh, Mikhail; Nilsson, Mats; Sirin, Andrey
2014-11-01
Mires (peat-accumulating wetlands) occupy 8.1% of Russian territory and are especially numerous in the western Siberian Lowlands, where they can significantly modify atmospheric heat and water balances. They also influence air temperatures and humidity in the boundary layers closest to the earth's surface. The purpose of our study was to incorporate the influence of mires into the SL-AV numerical weather prediction model, which is used operationally in the Hydrometeorological Center of Russia. This was done by adjusting the multilayer soil component (by modifying the peat thermal conductivity in the heat diffusion equation and reformulating the lower boundary condition for Richard's equation), and reformulating both the evapotranspiration and runoff from mires. When evaporation from mires was incorporated into the SL-AV model, the latent heat flux in the areas dominated by mires increased strongly, resulting in surface cooling and hence reductions in the sensible heat flux and outgoing terrestrial long-wave radiation. Presented results show that including mires significantly decreased the bias and RMSE of predictions of temperature and relative humidity 2 m above the ground for lead times of 12, 36, and 60 h from 00 h Coordinated Universal Time (evening conditions), but did not eliminate the bias in forecasts for lead times of 24, 48, and 72 h (morning conditions) in Siberia. Different parameterizations of mire evapotranspiration are also compared.
Comparing in situ and satellite-based parameterizations of oceanic whitecaps
NASA Astrophysics Data System (ADS)
Paget, Aaron C.; Bourassa, Mark A.; Anguelova, Magdalena D.
2015-04-01
The majority of the parameterizations developed to estimate whitecap fraction uses a stability-dependent 10 m wind (U10) measured in situ, but recent efforts to use satellite-reported equivalent neutral winds (U10EN) to estimate whitecap fraction with the same parameterizations introduce additional error. This study identifies and quantifies the differences in whitecap parameterizations caused by U10 and U10EN for the active and total whitecap fractions. New power law coefficients are presented for both U10 and U10EN parameterizations based on available in situ whitecap observations. One-way analysis of variance (ANOVA) tests are performed on the residuals of the whitecap parameterizations and the whitecap observations and identify that parameterizations in terms of U10 and U10EN perform similarly. The parameterizations are also tested against the satellite-based WindSat Whitecap Database to assess differences. The improved understanding aids in estimating whitecap fraction globally using satellite products and in determining the global effects of whitecaps on air-sea processes and remote sensing of the surface.
Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J
2004-05-06
To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands that the GCM parameterizations of unresolved processes, in particular, should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations then can be tested in the same framework. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the U.S. Department of Energy is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM.
Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...
2015-06-30
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into amore » microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.« less
A trans-dimensional polynomial-spline parameterization for gradient-based geoacoustic inversion.
Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan
2014-10-01
This paper presents a polynomial spline-based parameterization for trans-dimensional geoacoustic inversion. The parameterization is demonstrated for both simulated and measured data and shown to be an effective method of representing sediment geoacoustic profiles dominated by gradients, as typically occur, for example, in muddy seabeds. Specifically, the spline parameterization is compared using the deviance information criterion (DIC) to the standard stack-of-homogeneous layers parameterization for the inversion of bottom-loss data measured at a muddy seabed experiment site on the Malta Plateau. The DIC is an information criterion that is well suited to trans-D Bayesian inversion and is introduced to geoacoustics in this paper. Inversion results for both parameterizations are in good agreement with measurements on a sediment core extracted at the site. However, the spline parameterization more accurately resolves the power-law like structure of the core density profile and provides smaller overall uncertainties in geoacoustic parameters. In addition, the spline parameterization is found to be more parsimonious, and hence preferred, according to the DIC. The trans-dimensional polynomial spline approach is general, and applicable to any inverse problem for gradient-based profiles. [Work supported by ONR.].
A cumulus parameterization including mass fluxes, vertical momentum dynamics, and mesoscale effects
Donner, L.J. )
1993-03-15
A formulation for parameterizing cumulus convection, which treats cumulus vertical momentum dynamics and mass fluxes consistently, is presented. This approach predicts the penetrative extent of cumulus updrafts on the basis of their vertical momentum and provides a basis for treating cumulus microphysics using formulations that depend on vertical velocity. Treatments for cumulus microphysics are essential if the water budgets of convective systems are to be evaluated for treating mesoscale stratiform processes associated with convection, which are important for radiative interactions influencing climate. The water budget of the cumulus updrafts is used to drive a semi-empirical parameterization for the large-scale effects of the mesoscale circulations associated with deep convection. The parameterization was applied to two tropical thermodynamic profiles whose diagnosed forcing by convective systems differed significantly. The deepest of the updrafts penetrated the upper troposphere, while the shallower updrafts penetrated into the region of the mesoscale anvil. The relative numbers of cumulus updrafts of characteristic vertical velocities comprising the parameterized ensemble corresponded well with available observations. The large-scale heating produced by the ensemble without mesoscale circulations was concentrated at lower heights than observed or was characterized by excessive peak magnitudes. An unobserved large-scale source of water vapor was produced in the middle troposphere. When the parameterization for mesoscale effects was added, the large-scale thermal and moisture forcing predicted by the parameterization agreed well with observations for both cases. The significance of mesoscale processes suggests that future cumulus parameterization development will need to treat some radiative processes.
Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...
2015-12-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into amore » microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Model computational expense is estimated, and sensitivity to the number of subcolumns is investigated. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in shortwave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation.« less
Multiple scattering technique lidar
NASA Technical Reports Server (NTRS)
Bissonnette, Luc R.
1992-01-01
The Bernouilli-Ricatti equation is based on the single scattering description of the lidar backscatter return. In practice, especially in low visibility conditions, the effects of multiple scattering can be significant. Instead of considering these multiple scattering effects as a nuisance, we propose here to use them to help resolve the problems of having to assume a backscatter-to-extinction relation and specifying a boundary value for a position far remote from the lidar station. To this end, we have built a four-field-of-view lidar receiver to measure the multiple scattering contributions. The system has been described in a number of publications that also discuss preliminary results illustrating the multiple scattering effects for various environmental conditions. Reported here are recent advances made in the development of a method of inverting the multiple scattering data for the determination of the aerosol scattering coefficient.
Assessment of Noah land surface model with various runoff parameterizations over a Tibetan river
NASA Astrophysics Data System (ADS)
Zheng, Donghai; Van Der Velde, Rogier; Su, Zhongbo; Wen, Jun; Wang, Xin
2017-02-01
Runoff parameterizations currently adopted by the (i) Noah-MP model, (ii) Community Land Model (CLM), and (iii) CLM with variable infiltration capacity hydrology (CLM-VIC) are incorporated into the structure of Noah land surface model, and the impact of these parameterizations on the runoff simulations is investigated for a Tibetan river. Four numerical experiments are conducted with the default Noah and three aforementioned runoff parameterizations. Each experiment is forced with the same set of atmospheric forcing, vegetation, and soil parameters. In addition, the Community Earth System Model database provides the maximum surface saturated area parameter for the Noah-MP and CLM parameterizations. A single-year recurrent spin-up is adopted for the initialization of each model run to achieve equilibrium states. Comparison with discharge measurements shows that each runoff parameterization produces significant differences in the separation of total runoff into surface and subsurface components and that the soil water storage-based parameterizations (Noah and CLM-VIC) outperform the groundwater table-based parameterizations (Noah-MP and CLM) for the seasonally frozen and high-altitude Tibetan river. A parameter sensitivity experiment illustrates that this underperformance of the groundwater table-based parameterizations cannot be resolved through calibration. Further analyses demonstrate that the simulations of other surface water and energy budget components are insensitive to the selected runoff parameterizations, due to the strong control of the atmosphere on simulated land surface fluxes induced by the diurnal dependence of the roughness length for heat transfer and the large water retention capacity of the highly organic top soils over the plateau.
Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds
NASA Astrophysics Data System (ADS)
Yun, Yuxing; Penner, Joyce E.
2012-04-01
A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux at top of atmosphere and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.
Scattering resonances in the extreme quantum limit
NASA Astrophysics Data System (ADS)
Hersch, Jesse Shines
This thesis addresses topics in low energy scattering in quantum mechanics, in particular, resonance phenomena. Hence the title: the phrase ``extreme quantum limit'' refers to the situation when the wavelengths of the particles in the system are larger than every other scale, so that the behavior is far into the quantum regime. A powerful tool in the problems of low energy scattering is the point scatterer model, and will be used extensively throughout the thesis. Therefore, we begin with a thorough introduction to this model in Chapter 2. As a first application of the point scatterer model, we will investigate the phenomenon of the proximity resonance, which is one example of strange quantum behavior appearing at low energy. Proximity resonances will be addressed theoretically in Chapter 3, and experimentally in Chapter 4. Threshold resonances, another type of low energy scattering resonance, are considered in Chapter 5, along with their connection to the Efimov and Thomas effects, and scattering in the presence of an external confining potential. Although the point scatterer model will serve us well in the work presented here, it does have its limitations. These limitations will be removed in Chapter 6, where we describe how to extend the model to include higher partial waves. In Chapter 7, we extend the model one step further, and illustrate how to treat vector wave scattering with the model. Finally, in Chapter 8 we will depart from the topic of low energy scattering and investigate the influence of diffraction on an open quantum mechanical system, again both experimentally and theoretically.
Layer filtering for seafloor scatterers imaging.
Pinson, S; Holland, C W
2015-05-01
The image source method in acoustics is well known to simulate reverberation. It has also been recently used for characterization of seafloor sound-speed structure. The idea is to detect image sources by imaging techniques to obtain information about the environment. In this paper, the idea is to use the detection of image sources to remove reflections from plane interfaces in recorded signals and perform imaging with this filtered signal. This imaging process highlights scatterers because their wave front shapes are different than those from plane interfaces. Applications can be in seafloor buried object detection or scattering analysis from interface roughnesses or volume heterogeneities.
Improvement of the GEOS-5 AGCM upon Updating the Air-Sea Roughness Parameterization
NASA Technical Reports Server (NTRS)
Garfinkel, C. I.; Molod, A.; Oman, L. D.; Song, I.-S.
2011-01-01
The impact of an air-sea roughness parameterization over the ocean that more closely matches recent observations of air-sea exchange is examined in the NASA Goddard Earth Observing System, version 5 (GEOS-5) atmospheric general circulation model. Surface wind biases in the GEOS-5 AGCM are decreased by up to 1.2m/s. The new parameterization also has implications aloft as improvements extend into the stratosphere. Many other GCMs (both for operational weather forecasting and climate) use a similar class of parameterization for their air-sea roughness scheme. We therefore expect that results from GEOS-5 are relevant to other models as well.
Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.
2002-01-01
Ranft has provided parameterizations of Lorentz invariant differential cross sections for pion and nucleon production in pion-proton collisions that are compared to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formula for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed are for charged pions in the initial state and both charged and neutral pions in the final state.
Towards a parameterization of convective wind gusts in Sahel
NASA Astrophysics Data System (ADS)
Largeron, Yann; Guichard, Françoise; Bouniol, Dominique; Couvreux, Fleur; Birch, Cathryn; Beucher, Florent
2014-05-01
] who focused on the wet tropical Pacific region, and linked wind gusts to convective precipitation rates alone, here, we also analyse the subgrid wind distribution during convective events, and quantify the statistical moments (variance, skewness and kurtosis) in terms of mean wind speed and convective indexes such as DCAPE. Next step of the work will be to formulate a parameterization of the cold pool convective gust from those probability density functions and analytical formulaes obtained from basic energy budget models. References : [Carslaw et al., 2010] A review of natural aerosol interactions and feedbacks within the earth system. Atmospheric Chemistry and Physics, 10(4):1701{1737. [Engelstaedter et al., 2006] North african dust emissions and transport. Earth-Science Reviews, 79(1):73{100. [Knippertz and Todd, 2012] Mineral dust aerosols over the sahara: Meteorological controls on emission and transport and implications for modeling. Reviews of Geophysics, 50(1). [Marsham et al., 2011] The importance of the representation of deep convection for modeled dust-generating winds over west africa during summer.Geophysical Research Letters, 38(16). [Marticorena and Bergametti, 1995] Modeling the atmospheric dust cycle: 1. design of a soil-derived dust emission scheme. Journal of Geophysical Research, 100(D8):16415{16. [Menut, 2008] Sensitivity of hourly saharan dust emissions to ncep and ecmwf modeled wind speed. Journal of Geophysical Research: Atmospheres (1984{2012), 113(D16). [Pierre et al., 2012] Impact of vegetation and soil moisture seasonal dynamics on dust emissions over the sahel. Journal of Geophysical Research: Atmospheres (1984{2012), 117(D6). [Redelsperger et al., 2000] A parameterization of mesoscale enhancement of surface fluxes for large-scale models. Journal of climate, 13(2):402{421.
Algorithmic scatter correction in dual-energy digital mammography
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei
2013-11-15
Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiations, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures.Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with breast tissue equivalent phantom and calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: image without scatter correction, image with scatter correction using pinhole-array interpolation method, and image with scatter correction using the authors' algorithmic method.Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of
Impact of Roughness Parameterization on Mistral and Tramontane Simulations
NASA Astrophysics Data System (ADS)
Obermann, Anika; Edelmann, Benedikt; Ahrens, Bodo
2016-04-01
The Mistral and Tramontane are mesoscale winds in the Mediterranean region that travel through valleys in southern France. The cold and dry Mistral blows from the north to northwest, and travels down the Rhône valley, between the Alps and Massif Central. The Tramontane travels the Aude valley between the Massif Central and Pyrenees. Over the sea, these winds cause deep-water generation, and thus impact the hydrological cycle of the Mediterranean Sea. The occurrence and characteristics of Mistral and Tramontane depend on the synoptic situation, the channeling effects through mountain barriers, and land and sea surface characteristics. We evaluate Mistral and Tramontane wind speed and direction patterns in several regional climate models from the MedCORDEX framework with respect to these challenges for modeling. The effect of sea surface roughness parameterization on the quality of wind speed and direction modeling is evaluated. Emphasis is on spatial patterns in the areas of Mistral and Tramontane as well as the overlapping zone. The wind speed development and error propagation along the wind tracks are evaluated. Windy days (with Mistral and Tramontane) are distinguished from not windy days. A Bayesian Network is used to filter for days on which model sea level pressure fields show a Mistral/Tramontane pattern or not. Furthermore, time series of Mistral and Tramontane days events in historical and projection runs are derived from sea level pressure patterns. The development of Mistral and Tramontane days per year and the average length of such events are studied, as well as the development of wind speeds.
Cirrus cloud model parameterizations: Incorporating realistic ice particle generation
NASA Technical Reports Server (NTRS)
Sassen, Kenneth; Dodd, G. C.; Starr, David OC.
1990-01-01
Recent cirrus cloud modeling studies have involved the application of a time-dependent, two dimensional Eulerian model, with generalized cloud microphysical parameterizations drawn from experimental findings. For computing the ice versus vapor phase changes, the ice mass content is linked to the maintenance of a relative humidity with respect to ice (RHI) of 105 percent; ice growth occurs both with regard to the introduction of new particles and the growth of existing particles. In a simplified cloud model designed to investigate the basic role of various physical processes in the growth and maintenance of cirrus clouds, these parametric relations are justifiable. In comparison, the one dimensional cloud microphysical model recently applied to evaluating the nucleation and growth of ice crystals in cirrus clouds explicitly treated populations of haze and cloud droplets, and ice crystals. Although these two modeling approaches are clearly incompatible, the goal of the present numerical study is to develop a parametric treatment of new ice particle generation, on the basis of detailed microphysical model findings, for incorporation into improved cirrus growth models. For example, the relation between temperature and the relative humidity required to generate ice crystals from ammonium sulfate haze droplets, whose probability of freezing through the homogeneous nucleation mode are a combined function of time and droplet molality, volume, and temperature. As an example of this approach, the results of cloud microphysical simulations are presented showing the rather narrow domain in the temperature/humidity field where new ice crystals can be generated. The microphysical simulations point out the need for detailed CCN studies at cirrus altitudes and haze droplet measurements within cirrus clouds, but also suggest that a relatively simple treatment of ice particle generation, which includes cloud chemistry, can be incorporated into cirrus cloud growth.
Search for subgrid scale parameterization by projection pursuit regression
NASA Technical Reports Server (NTRS)
Meneveau, C.; Lund, T. S.; Moin, Parviz
1992-01-01
The dependence of subgrid-scale stresses on variables of the resolved field is studied using direct numerical simulations of isotropic turbulence, homogeneous shear flow, and channel flow. The projection pursuit algorithm, a promising new regression tool for high-dimensional data, is used to systematically search through a large collection of resolved variables, such as components of the strain rate, vorticity, velocity gradients at neighboring grid points, etc. For the case of isotropic turbulence, the search algorithm recovers the linear dependence on the rate of strain (which is necessary to transfer energy to subgrid scales) but is unable to determine any other more complex relationship. For shear flows, however, new systematic relations beyond eddy viscosity are found. For the homogeneous shear flow, the results suggest that products of the mean rotation rate tensor with both the fluctuating strain rate and fluctuating rotation rate tensors are important quantities in parameterizing the subgrid-scale stresses. A model incorporating these terms is proposed. When evaluated with direct numerical simulation data, this model significantly increases the correlation between the modeled and exact stresses, as compared with the Smagorinsky model. In the case of channel flow, the stresses are found to correlate with products of the fluctuating strain and rotation rate tensors. The mean rates of rotation or strain do not appear to be important in this case, and the model determined for homogeneous shear flow does not perform well when tested with channel flow data. Many questions remain about the physical mechanisms underlying these findings, about possible Reynolds number dependence, and, given the low level of correlations, about their impact on modeling. Nevertheless, demonstration of the existence of causal relations between sgs stresses and large-scale characteristics of turbulent shear flows, in addition to those necessary for energy transfer, provides important
Evapotranspiration parameterizations at a grass site in Florida, USA
Rizou, M.; Sumner, David M.; Nnadi, F.
2007-01-01
In spite of the fact that grasslands account for about 40% of the ice-free global terrestrial land cover, their contribution to the surface exchanges of energy and water in local and regional scale is so far uncertain. In this study, the sensitivity of evapotranspiration (ET) and other energy fluxes to wetness variables, namely the volumetric Soil Water Content (SWC) and Antecedent Precipitation Index (API), over a non-irrigated grass site in Central Florida, USA (28.049 N, 81.400 W) were investigated. Eddy correlation and soil water content measurements were taken by USGS (U.S. Geological Survey) at the grass study site, within 100 m of a SFWMD (South Florida Water Management District) weather station. The soil is composed of fine sands and it is mainly covered by Paspalum notatum (bahia grass). Variable soil wetness conditions with API bounds of about 2 to 160 mm and water table levels of 0.03 to 1.22 m below ground surface, respectively, were observed throughout the year 2004. The Bowen ratio exhibited an average of 1 and values larger than 2 during few dry days. The daytime average ET was classified into two stages, first stage (energy-limited) and second stage (water- limited) based on the water availability. The critical values of API and SWC were found to be about 56 mm and 0.17 respectively, with the second one being approximately 33% of the SWC at saturation. The ET values estimated by the simple Priestley-Taylor (PT) method were compared to the actual values. The PT coefficient varied from a low bound of approximately 0.4 to a peak of 1.21. Simple relationships for the PT empirical factor were employed in terms of SWC and API to improve the accuracy of the second stage observations. The results of the ET parameterizations closely match eddy-covariance flux values on daily and longer time steps.
Parameterization of small intestinal water volume using PBPK modeling.
Maharaj, Anil; Fotaki, Nikoletta; Edginton, Andrea
2015-01-25
To facilitate accurate predictions of oral drug disposition, mechanistic absorption models require optimal parameterization. Furthermore, parameters should maintain a biological basis to establish confidence in model predictions. This study will serve to calculate an optimal parameter value for small intestinal water volume (SIWV) using a model-based approach. To evaluate physiologic fidelity, derived volume estimates will be compared to experimentally-based SIWV determinations. A compartmental absorption and transit (CAT) model, created in Matlab-Simulink®, was integrated with a whole-body PBPK model, developed in PK-SIM 5.2®, to provide predictions of systemic drug disposition. SIWV within the CAT model was varied between 52.5mL and 420mL. Simulations incorporating specific SIWV values were compared to pharmacokinetic data from compounds exhibiting solubility induced non-proportional changes in absorption using absolute average fold-error. Correspondingly, data pertaining to oral administration of acyclovir and chlorothiazide were utilized to derive estimates of SIWV. At 400mg, a SIWV of 116mL provided the best estimates of acyclovir plasma concentrations. A similar SIWV was found to best depict the urinary excretion pattern of chlorothiazide at a dose of 100mg. In comparison, experimentally-based estimates of SIWV within adults denote a central tendency between 86 and 167mL. The derived SIWV (116mL) represents the optimal parameter value within the context of the developed CAT model. This result demonstrates the biological basis of the widely utilized CAT model as in vivo SIWV determinations correspond with model-based estimates.
A statistically derived parameterization for the collagen triple-helix.
Rainey, Jan K; Goh, M Cynthia
2002-11-01
The triple-helix is a unique secondary structural motif found primarily within the collagens. In collagen, it is a homo- or hetero-tripeptide with a repeating primary sequence of (Gly-X-Y)(n), displaying characteristic peptide backbone dihedral angles. Studies of bulk collagen fibrils indicate that the triple-helix must be a highly repetitive secondary structure, with very specific constraints. Primary sequence analysis shows that most collagen molecules are primarily triple-helical; however, no high-resolution structure of any entire protein is yet available. Given the drastic morphological differences in self-assembled collagen structures with subtle changes in assembly conditions, a detailed knowledge of the relative locations of charged and sterically bulky residues in collagen is desirable. Its repetitive primary sequence and highly conserved secondary structure make collagen, and the triple-helix in general, an ideal candidate for a general parameterization for prediction of residue locations and for the use of a helical wheel in the prediction of residue orientation. Herein, a statistical analysis of the currently available high-resolution X-ray crystal structures of model triple-helical peptides is performed to produce an experimentally based parameter set for predicting peptide backbone and C(beta) atom locations for the triple-helix. Unlike existing homology models, this allows easy prediction of an entire triple-helix structure based on all existing high-resolution triple-helix structures, rather than only on a single structure or on idealized parameters. Furthermore, regional differences based on the helical propensity of residues may be readily incorporated. The parameter set is validated in terms of the predicted bond lengths, backbone dihedral angles, and interchain hydrogen bonding.
Parameterizations for shielding electron accelerators based on Monte Carlo studies
P. Degtyarenko; G. Stapleton
1996-10-01
Numerous recipes for designing lateral slab neutron shielding for electron accelerators are available and each generally produces rather similar results for shield thicknesses of about 2 m of concrete and for electron beams with energy in the 1 to 10 GeV region. For thinner or much thicker shielding the results tend to diverge and the standard recipes require modification. Likewise for geometries other than lateral to the beam direction further corrections are required so that calculated results are less reliable and hence additional and costly conservatism is needed. With the adoption of Monte Carlo (MC) methods of transporting particles a much more powerful way of calculating radiation dose rates outside shielding becomes available. This method is not constrained by geometry, although deep penetration problems need special statistical treatment, and is an excellent approach to solving any radiation transport problem providing the method has been properly checked against measurements and is free from the well known errors common to such computer methods. This present paper utilizes the results of MC calculations based on a nuclear fragmentation model named DINREG using the MC transport code GEANT and models them with the normal two parameter shielding expressions. Because the parameters can change with electron beam energy, angle to the electron beam direction and target material, the parameters are expressed as functions of some of these variables to provide a universal equations for shielding electron beams which can used rather simply for deep penetration problems in simple geometry without the time consuming computations needed in the original MC programs. A particular problem with using simple parameterizations based on the uncollided flux is that approximations based on spherical geometry might not apply to the more common cylindrical cases used for accelerator shielding. This source of error has been discussed at length by Stevenson and others. To study
Observations and parameterization of the stratospheric electrical conductivity
NASA Astrophysics Data System (ADS)
Hu, Hua; Holzworth, Robert H.
1996-12-01
conductivity is parameterized based on the measurements, and a simple empirical model is presented in geographic coordinates.
Parameterizations of Dry Deposition for the Industrial Source Complex Model
NASA Astrophysics Data System (ADS)
Wesely, M. L.; Doskey, P. V.; Touma, J. S.
2002-05-01
Improved algorithms have been developed to simulate the dry deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex model system. The dry deposition velocities are described in conventional resistance schemes, for which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways to uptake of gases at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. Standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPS, a new technique based on laboratory study results and theoretical considerations has been developed to provide a means to evaluate the role of lipid solubility on uptake by the waxy outer cuticle of vegetative plant leaves. The dry deposition velocities of particulate HAPs are simulated with a resistance scheme in which deposition velocity is described for two size modes: a fine mode with particles less than about 2.5 microns in diameter and a coarse mode with larger particles but excluding very coarse particles larger than about 10 microns in diameter. For the fine mode, the deposition velocity is calculated with a parameterization based on observations of sulfate dry deposition. For the coarse mode, a representative settling velocity is assumed. Then the total deposition velocity is estimated as the sum of the two deposition velocities weighted according to the amount of mass expected in the two modes.
A parameterization of nuclear track profiles in CR-39 detector
NASA Astrophysics Data System (ADS)
Azooz, A. A.; Al-Nia'emi, S. H.; Al-Jubbori, M. A.
2012-11-01
In this work, the empirical parameterization describing the alpha particles’ track depth in CR-39 detectors is extended to describe longitudinal track profiles against etching time for protons and alpha particles. MATLAB based software is developed for this purpose. The software calculates and plots the depth, diameter, range, residual range, saturation time, and etch rate versus etching time. The software predictions are compared with other experimental data and with results of calculations using the original software, TRACK_TEST, developed for alpha track calculations. The software related to this work is freely downloadable and performs calculations for protons in addition to alpha particles. Program summary Program title: CR39 Catalog identifier: AENA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENA_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Copyright (c) 2011, Aasim Azooz Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met • Redistributions of source code must retain the above copyright, this list of conditions and the following disclaimer. • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution This software is provided by the copyright holders and contributors “as is” and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and
Expressive Single Scattering for Light Shaft Stylization.
Kol, Timothy R; Klehm, Oliver; Seidel, Hans-Peter; Eisemann, Elmar
2016-04-14
Light scattering in participating media is a natural phenomenon that is increasingly featured in movies and games, as it is visually pleasing and lends realism to a scene. In art, it may further be used to express a certain mood or emphasize objects. Here, artists often rely on stylization when creating scattering effects, not only because of the complexity of physically correct scattering, but also to increase expressiveness. Little research, however, focuses on artistically influencing the simulation of the scattering process in a virtual 3D scene. We propose novel stylization techniques, enabling artists to change the appearance of single scattering effects such as light shafts. Users can add, remove, or enhance light shafts using occluder manipulation. The colors of the light shafts can be stylized and animated using easily modifiable transfer functions. Alternatively, our system can optimize a light map given a simple user input for a number of desired views in the 3D world. Finally, we enable artists to control the heterogeneity of the underlying medium. Our stylized scattering solution is easy to use and compatible with standard rendering pipelines. It works for animated scenes and can be executed in real time to provide the artist with quick feedback.
Parameterizing Aggregation Rates: Results of cold temperature ice-ash hydrometeor experiments
NASA Astrophysics Data System (ADS)
Courtland, L. M.; Dufek, J.; Mendez, J. S.; McAdams, J.
2014-12-01
Recent advances in the study of tephra aggregation have indicated that (i) far-field effects of tephra sedimentation are not adequately resolved without accounting for aggregation processes that preferentially remove the fine ash fraction of volcanic ejecta from the atmosphere as constituent pieces of larger particles, and (ii) the environmental conditions (e.g. humidity, temperature) prevalent in volcanic plumes may significantly alter the types of aggregation processes at work in different regions of the volcanic plume. The current research extends these findings to explore the role of ice-ash hydrometeor aggregation in various plume environments. Laboratory experiments utilizing an ice nucleation chamber allow us to parameterize tephra aggregation rates under the cold (0 to -50 C) conditions prevalent in the upper regions of volcanic plumes. We consider the interaction of ice-coated tephra of variable thickness grown in a controlled environment. The ice-ash hydrometers interact collisionally and the interaction is recorded by a number of instruments, including high speed video to determine if aggregation occurs. The electric charge on individual particles is examined before and after collision to examine the role of electrostatics in the aggregation process and to examine the charge exchange process. We are able to examine how sticking efficiency is related to both the relative abundance of ice on a particle as well as to the magnitude of the charge carried by the hydrometeor. We here present preliminary results of these experiments, the first to constrain aggregation efficiency of ice-ash hydrometeors, a parameter that will allow tephra dispersion models to use near-real-time meteorological data to better forecast particle residence time in the atmosphere.
Liou, Kuo-Nan
2016-02-09
Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) Develop an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions. (2) Innovate a stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains for understanding and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and coupled mountain-mountain flux. “Exact” 3D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer program readily available in climate models. Subsequently, Parameterizations of the deviations of 3D from PP results for five flux components are carried out by means of the multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for flux deviations and successfully incorporated this efficient parameterization into WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme that has been included in the WRF physics package. Incorporating this 3D parameterization program, we conducted simulations of WRF and CCSM4 to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transition and the interannual variability for snowmelt, cloud cover, and precipitation over the Western United States presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the
Adaptive multi-scale parameterization for one-dimensional flow in unsaturated porous media
NASA Astrophysics Data System (ADS)
Hayek, Mohamed; Lehmann, François; Ackerer, Philippe
2008-01-01
In the analysis of the unsaturated zone, one of the most challenging problems is to use inverse theory in the search for an optimal parameterization of the porous media. Adaptative multi-scale parameterization consists in solving the problem through successive approximations by refining the parameter at the next finer scale all over the domain and stopping the process when the refinement does not induce significant decrease of the objective function any more. In this context, the refinement indicators algorithm provides an adaptive parameterization technique that opens the degrees of freedom in an iterative way driven at first order by the model to locate the discontinuities of the sought parameters. We present a refinement indicators algorithm for adaptive multi-scale parameterization that is applicable to the estimation of multi-dimensional hydraulic parameters in unsaturated soil water flow. Numerical examples are presented which show the efficiency of the algorithm in case of noisy data and missing data.
Parameterized Cross Sections for Pion Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.
2000-01-01
An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and knowledge of the output particle spectrum is required when given the input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of different cross section parameterizations. Therefore, parameterizations of neutral and charged pion cross sections are provided that give a very accurate description of the experimental data. Lorentz invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.
NASA Technical Reports Server (NTRS)
Stephenson-Graves, D.
1982-01-01
An analysis is performed to qualitatively compare the seasonal variation in emitted longwave radiation over land and over water areas as determined from 12 months of Nimbus 6 satellite data with that defined from parameterizations of this radiation budget component. These variations are noted when land and water surface areas are mapped to corresponding areas at the 'top' of the atmosphere. Variations of a surface-temperature-dependent parameterization of emitted longwave radiation originally suggested by Budyko (1969) are considered. The longwave radiation parameterizations indicate small differences between land and water profiles of emitted longwave radiation at the top of an atmospheric column in low latitudes in comparison to large differences in this feature shown to exist in the satellite data. The small differences are noted in linear parameterizations of emitted flux when zonally-averaged satellite data are used to define equation coefficients.
Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model
This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.
Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Topical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the--art global and regional
Turbulence Parameterizations for Convective Boundary Layers in High-Resolution Mesoscale Models
2003-12-01
radars are especially dependent on clear weather conditions for effective operations. For example, dust storms and low cloud cover were weather events...PAGES 160 14. SUBJECT TERMS Grid Resolution, Parameterizations, Boundary Layer, Mesoscale Modeling, COAMPS . 16. PRICE CODE 17. SECURITY...Parameterizations in COAMPS using aircraft measurements. This work was also supported in part by a grant of computer time from the DOD high
Parameterized spectral distributions for meson production in proton-proton collisions
NASA Technical Reports Server (NTRS)
Schneider, John P.; Norbury, John W.; Cucinotta, Francis A.
1995-01-01
Accurate semiempirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations, which depend on both the outgoing meson parallel momentum and the incident proton kinetic energy, are able to be reduced to very simple analytical formulas suitable for cosmic ray transport through spacecraft walls, interstellar space, the atmosphere, and meteorites.
A New Visibility Parameterization for Warm-Fog Applications in Numerical Weather Prediction Models
NASA Astrophysics Data System (ADS)
Gultepe, I.; Müller, M. D.; Boybeyi, Z.
2006-11-01
The objective of this work is to suggest a new warm-fog visibility parameterization scheme for numerical weather prediction (NWP) models. In situ observations collected during the Radiation and Aerosol Cloud Experiment, representing boundary layer low-level clouds, were used to develop a parameterization scheme between visibility and a combined parameter as a function of both droplet number concentration Nd and liquid water content (LWC). The current NWP models usually use relationships between extinction coefficient and LWC. A newly developed parameterization scheme for visibility, Vis = f(LWC, Nd), is applied to the NOAA Nonhydrostatic Mesoscale Model. In this model, the microphysics of fog was adapted from the 1D Parameterized Fog (PAFOG) model and then was used in the lower 1.5 km of the atmosphere. Simulations for testing the new parameterization scheme are performed in a 50-km innermost-nested simulation domain using a horizontal grid spacing of 1 km centered on Zurich Unique Airport in Switzerland. The simulations over a 10-h time period showed that visibility differences between old and new parameterization schemes can be more than 50%. It is concluded that accurate visibility estimates require skillful LWC as well as Nd estimates from forecasts. Therefore, the current models can significantly over-/underestimate Vis (with more than 50% uncertainty) depending on environmental conditions. Inclusion of Nd as a prognostic (or parameterized) variable in parameterizations would significantly improve the operational forecast models.
A parameterization for the absorption of solar radiation by water vapor in the earth's atmosphere
NASA Technical Reports Server (NTRS)
Wang, W.-C.
1976-01-01
A parameterization for the absorption of solar radiation as a function of the amount of water vapor in the earth's atmosphere is obtained. Absorption computations are based on the Goody band model and the near-infrared absorption band data of Ludwig et al. A two-parameter Curtis-Godson approximation is used to treat the inhomogeneous atmosphere. Heating rates based on a frequently used one-parameter pressure-scaling approximation are also discussed and compared with the present parameterization.
Zero-D sensitivity studies with the NCAR CCM land surface parameterization scheme
NASA Astrophysics Data System (ADS)
Henderson-Sellers, A.; Wilson, M. F.; Dickinson, R. E.
1986-05-01
The boundary package of a version of the NCAR Community Climate Model was run as a stand alone zero-dimensional model. Soil data and a soil parameterization scheme were added to the vegetation parameterization. Sensitivity experiments, including conditions representative of a low latitude evergreen forest, a sand desert, a high latitude coniferous forest, high latitude tundra, and prairie grassland were undertaken. The land surface scheme shows the greatest sensitivity to soil texture variation, particularly to changes in hydraulic conductivity and diffusivity.
Partially strong WW scattering
Cheung Kingman; Chiang Chengwei; Yuan Tzuchiang
2008-09-01
What if only a light Higgs boson is discovered at the CERN LHC? Conventional wisdom tells us that the scattering of longitudinal weak gauge bosons would not grow strong at high energies. However, this is generally not true. In some composite models or general two-Higgs-doublet models, the presence of a light Higgs boson does not guarantee complete unitarization of the WW scattering. After partial unitarization by the light Higgs boson, the WW scattering becomes strongly interacting until it hits one or more heavier Higgs bosons or other strong dynamics. We analyze how LHC experiments can reveal this interesting possibility of partially strong WW scattering.
Advancing x-ray scattering metrology using inverse genetic algorithms
NASA Astrophysics Data System (ADS)
Hannon, Adam F.; Sunday, Daniel F.; Windover, Donald; Joseph Kline, R.
2016-07-01
We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real-space structure in periodic gratings measured using critical dimension small-angle x-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real-space structure of our nanogratings. The study shows that for x-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.
Advancing X-ray scattering metrology using inverse genetic algorithms.
Hannon, Adam F; Sunday, Daniel F; Windover, Donald; Kline, R Joseph
2016-01-01
We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real space structure in periodic gratings measured using critical dimension small angle X-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real space structure of our nanogratings. The study shows that for X-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.
Xie, S.; Cederwall, R.T.; Yio, J.J.; Xu, K.M.
2001-05-17
Parameterization of cumulus convection in general circulation model (GCM) has been recognized as one of the most important and complex issues in the model physical parameterizations. In earlier studies, most cumulus parameterizations were developed and evaluated using data observed over tropical oceans, such as the GATE (the Global Atmospheric Research Program's Atlantic Tropical Experiment) data. This is partly due to inadequate field measurements in the midlatitudes. In this study, we compare and evaluate a total of eight types of the state-of-the-art cumulus parameterizations used in fifteen Single-Column Models (SCM) under the summertime midlatitude continental conditions using the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) summer 1997 Intensive Operational Period (IOP) data, which covers several continental convection events. The purpose is to systematically compare and evaluate the performance of these cumulus parameterizations under summertime midlatitude continental conditions. Through the study we hope to identify strengths and weaknesses of these cumulus parameterizations that will lead to further improvements. Here, we briefly present our most interesting results. A full description of this study can be seen in Xie et al. (2001).
Using a resolution function to regulate parameterizations of oceanic mesoscale eddy effects
NASA Astrophysics Data System (ADS)
Hallberg, Robert
2013-12-01
Mesoscale eddies play a substantial role in the dynamics of the ocean, but the dominant length-scale of these eddies varies greatly with latitude, stratification and ocean depth. Global numerical ocean models with spatial resolutions ranging from 1° down to just a few kilometers include both regions where the dominant eddy scales are well resolved and regions where the model's resolution is too coarse for the eddies to form, and hence eddy effects need to be parameterized. However, common parameterizations of eddy effects via a Laplacian diffusion of the height of isopycnal surfaces (a Gent-McWilliams diffusivity) are much more effective at suppressing resolved eddies than in replicating their effects. A variant of the Phillips model of baroclinic instability illustrates how eddy effects might be represented in ocean models. The ratio of the first baroclinic deformation radius to the horizontal grid spacing indicates where an ocean model could explicitly simulate eddy effects; a function of this ratio can be used to specify where eddy effects are parameterized and where they are explicitly modeled. One viable approach is to abruptly disable all the eddy parameterizations once the deformation radius is adequately resolved; at the discontinuity where the parameterization is disabled, isopycnal heights are locally flattened on the one side while eddies grow rapidly off of the enhanced slopes on the other side, such that the total parameterized and eddy fluxes vary continuously at the discontinuity in the diffusivity. This approach should work well with various specifications for the magnitude of the eddy diffusivities.
NASA Astrophysics Data System (ADS)
Wróbel, Iwona; Piskozub, Jacek
2016-04-01
Wind speed has a disproportionate role in the forming of the climate as well it is important part in calculate of the air-sea interaction thanks which we can study climate change. It influences on mass, momentum and energy fluxes and the standard way of parametrizing those fluxes is use this variable. However, the very functions used to calculate fluxes from winds have evolved over time and still have large differences (especially in the case of aerosol sources function). As we have shown last year at the EGU conference (PICO presentation EGU2015-11206-1) and in recent public article (OSD 12,C1262-C1264,2015) there is a lot of uncertainties in the case of air-sea CO2 fluxes. In this study we calculated regional and global mass and momentum fluxes based on several wind speed climatologies. To do this we use wind speed from satellite data in FluxEngine software created within OceanFlux GHG Evolution project. Our main area of interest is European Arctic because of the interesting air-sea interaction physics (six-monthly cycle, strong wind and ice cover) but because of better data coverage we have chosen the North Atlantic as a study region to make it possible to compare the calculated fluxes to measured ones. An additional reason was the importance of the area for the North Hemisphere climate, and especially for Europe. The study is related to an ESA funded OceanFlux GHG Evolution project and is meant to be part of a PhD thesis (of I.W) funded by Centre of Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). We have used a modified version FluxEngine, a tool created within an earlier ESA funded project (OceanFlux Greenhouse Gases) for calculating trace gas fluxes to derive two purely wind driven (at least in the simplified form used in their parameterizations) fluxes. The modifications included removing gas transfer velocity formula from the toolset and replacing it with the respective formulas for momentum transfer and mass (aerosol production
Submoment expansion of neutron-scattering sources
Williams, M.L.
2000-02-01
The submoment method was originally introduced to compute spherical harmonic moments of the neutron elastic-scattering source for discrete ordinates calculations with pointwise nuclear data. This work extends the submoment method to include discrete-level inelastic, as well as elastic, S-wave reactions. New applications of the submoment expansion to compute spherical harmonic moments of the slowing-down density and the elastic removal rate are also presented. Numerical stability and computational considerations are discussed.
Rainfall droplet size distributions (DSD) parameterization: physics and sensibility
NASA Astrophysics Data System (ADS)
Cecchini, M. A.; Machado, L.
2014-12-01
The CHUVA project (Cloud processes of tHe main precipitation systems in Brazil: A contribUtion to cloud resolVing modeling and to the GPM (GlobAl Precipitation Measurement)) is a Brazillian experiment that aims to understand the several cloud processes that occur in different precipitating regimes. At present, the CHUVA project has conducted 6 field campaigns, the last one being in Manaus jointly with GoAmazon, IARA and ACRIDICON. The main focus of the present study is to bring into perspective the different characteristics of precipitation that reaches the surface in Brazil over several locations. To do so, disdrometer data is analyzed in detail, employing a Gamma fit for each DSD measurement which provides the respective parameters to be studied. Those are disposed in a 3D space, each axis corresponding to one parameter, and the patterns are analyzed. A correlation between the Gamma parameters is defined as a parametric surface that fits the observations with errors smaller than 10% and R2 greater than 0.95. In this way, one parameter can be estimated with respect to the other two, reducing the degrees of freedom of the problem from 3 to 2. As the 3 parameters are defined over this surface, it's possible to obtain a surface representing integral DSD properties such as rainfall intensity (RI). Sensibilities tests are conducted on this estimation and also on other DSD characteristics such as total droplet concentrations and mean mass-weighted diameter. It's shown that the DSD integral properties are generally very sensitive to the Gamma parameters. Nonetheless, the sensibility varies over the surface, being higher in a region where the parameters are not balanced (i.e. a relatively high value in one parameter and low values on the other two). It's suggested that any study proposing parameterization/estimation of DSD properties should be aware of this region of high sensitivity. To further the collaboration with GoAmazon and ACRIDICON, the disdrometer results
Parameterization and Monte Carlo solutions to PDF evolution equations
NASA Astrophysics Data System (ADS)
Suciu, Nicolae; Schüler, Lennart; Attinger, Sabine; Knabner, Peter
2015-04-01
The probability density function (PDF) of the chemical species concentrations transported in random environments is governed by unclosed evolution equations. The PDF is transported in the physical space by drift and diffusion processes described by coefficients derived by standard upscaling procedures. Its transport in the concentration space is described by a drift determined by reaction rates, in a closed form, as well as a term accounting for the sub-grid mixing process due to molecular diffusion and local scale hydrodynamic dispersion. Sub-grid mixing processes are usually described by models of the conditionally averaged diffusion flux or models of the conditional dissipation rate. We show that in certain situations mixing terms can also be derived, in the form of an Itô process, from simulated or measured concentration time series. Monte Carlo solutions to PDF evolution equations are usually constructed with systems of computational particles, which are well suited for highly dimensional advection-dominated problems. Such solutions require the fulfillment of specific consistency conditions relating the statistics of the random concentration field, function of both space and time, to that of the time random function describing an Itô process in physical and concentration spaces which governs the evolution of the system of particles. We show that the solution of the Fokker-Planck equation for the concentration-position PDF of the Itô process coincides with the solution of the PDF equation only for constant density flows in spatially statistically homogeneous systems. We also find that the solution of the Fokker-Planck equation is still equivalent to the solution of the PDF equation weighted by the variable density or by other conserved scalars. We illustrate the parameterization of the sub-grid mixing by time series and the Monte Carlo solution for a problem of contaminant transport in groundwater. The evolution of the system of computational particles whose
NASA Astrophysics Data System (ADS)
Stover, John C.
1991-12-01
Optical scatter is a bothersome source of optical noise, limits resolution and reduces system throughput. However, it is also an extremely sensitive metrology tool. It is employed in a wide variety of applications in the optics industry (where direct scatter measurement is of concern) and is becoming a popular indirect measurement in other industries where its measurement in some form is an indicator of another component property - like roughness, contamination or position. This paper presents a brief review of the current state of this technology as it emerges from university and government laboratories into more general industry use. The bidirectional scatter distribution function (or BSDF) has become the common format for expressing scatter data and is now used almost universally. Measurements made at dozens of laboratories around the country cover the spectrum from the uv to the mid- IR. Data analysis of optical component scatter has progressed to the point where a variety of analysis tools are becoming available for discriminating between the various sources of scatter. Work has progressed on the analysis of rough surface scatter and the application of these techniques to some challenging problems outside the optical industry. Scatter metrology is acquiring standards and formal test procedures. The available scatter data base is rapidly expanding as the number and sophistication of measurement facilities increases. Scatter from contaminants is continuing to be a major area of work as scatterometers appear in vacuum chambers at various laboratories across the country. Another area of research driven by space applications is understanding the non-topographic sources of mid-IR scatter that are associated with Beryllium and other materials. The current flurry of work in this growing area of metrology can be expected to continue for several more years and to further expand to applications in other industries.
Gao, Weigang; Wesely, M.L.
1994-01-01
The removal of gaseous substances from the atmosphere by dry deposition represents an important sink in the atmospheric budget for many trace gases. The surface removal rate, therefore, needs be described quantitatively in modeling atmospheric transport and chemistry with regional- and global-scale models. Because the uptake capability of a terrestrial surface is strongly influenced by the type and condition of its vegetation, the seasonal and spatial changes in vegetation should be described in considerable detail in large-scale models. The objective of the present study is to develop a model that links remote sensing data from satellites with the RADM dry deposition module to provide a parameterization of dry deposition over large scales with improved temporal and spatial coverage. This paper briefly discusses the modeling methods and initial results obtained by applying the improved dry deposition module to a tallgrass prairie, for which measurements of O{sub 3} dry deposition and simultaneously obtained satellite remote sensing data are available.
Purely bianisotropic scatterers
NASA Astrophysics Data System (ADS)
Albooyeh, M.; Asadchy, V. S.; Alaee, R.; Hashemi, S. M.; Yazdi, M.; Mirmoosa, M. S.; Rockstuhl, C.; Simovski, C. R.; Tretyakov, S. A.
2016-12-01
The polarization response of molecules or meta-atoms to external electric and magnetic fields, which defines the electromagnetic properties of materials, can either be direct (electric field induces electric moment and magnetic field induces magnetic moment) or indirect (magnetoelectric coupling in bianisotropic scatterers). Earlier studies suggest that there is a fundamental bound on the indirect response of all passive scatterers: It is believed to be always weaker than the direct one. In this paper, we prove that there exist scatterers which overcome this bound substantially. Moreover, we show that the amplitudes of electric and magnetic polarizabilities can be negligibly small as compared to the magnetoelectric coupling coefficients. However, we prove that if at least one of the direct-excitation coefficients vanishes, magnetoelectric coupling effects in passive scatterers cannot exist. Our findings open a way to a new class of electromagnetic scatterers and composite materials.
Inelastic Light Scattering Processes
NASA Technical Reports Server (NTRS)
Fouche, Daniel G.; Chang, Richard K.
1973-01-01
Five different inelastic light scattering processes will be denoted by, ordinary Raman scattering (ORS), resonance Raman scattering (RRS), off-resonance fluorescence (ORF), resonance fluorescence (RF), and broad fluorescence (BF). A distinction between fluorescence (including ORF and RF) and Raman scattering (including ORS and RRS) will be made in terms of the number of intermediate molecular states which contribute significantly to the scattered amplitude, and not in terms of excited state lifetimes or virtual versus real processes. The theory of these processes will be reviewed, including the effects of pressure, laser wavelength, and laser spectral distribution on the scattered intensity. The application of these processes to the remote sensing of atmospheric pollutants will be discussed briefly. It will be pointed out that the poor sensitivity of the ORS technique cannot be increased by going toward resonance without also compromising the advantages it has over the RF technique. Experimental results on inelastic light scattering from I(sub 2) vapor will be presented. As a single longitudinal mode 5145 A argon-ion laser line was tuned away from an I(sub 2) absorption line, the scattering was observed to change from RF to ORF. The basis, of the distinction is the different pressure dependence of the scattered intensity. Nearly three orders of magnitude enhancement of the scattered intensity was measured in going from ORF to RF. Forty-seven overtones were observed and their relative intensities measured. The ORF cross section of I(sub 2) compared to the ORS cross section of N2 was found to be 3 x 10(exp 6), with I(sub 2) at its room temperature vapor pressure.
NASA Astrophysics Data System (ADS)
Wiston, Modise; McFiggans, Gordon; Schultz, David
2015-04-01
In this study, we perform a simulation of the spatial distributions of particle and gas concentrations from a significantly large source of pollution event during a dry season in southern Africa and their interactions with cloud processes. Specific focus is on the extent to which cloud-aerosol interactions are affected by various inputs (i.e. emissions) and parameterizations and feedback mechanisms in a coupled mesoscale chemistry-meteorology model -herein Weather Research and Forecasting model with chemistry (WRF-Chem). The southern African dry season (May-Sep) is characterised by biomass burning (BB) type of pollution. During this period, BB particles are frequently observed over the subcontinent, at the same time a persistent deck of stratocumulus covers the south West African coast, favouring long-range transport over the Atlantic Ocean of aerosols above clouds. While anthropogenic pollutants tend to spread more over the entire domain, biomass pollutants are concentrated around the burning areas, especially the savannah and tropical rainforest of the Congo Basin. BB is linked to agricultural practice at latitudes south of 10° N. During an intense burning event, there is a clear signal of strong interactions of aerosols and cloud microphysics. These species interfere with the radiative budget, and directly affect the amount of solar radiation reflected and scattered back to space and partly absorbed by the atmosphere. Aerosols also affect cloud microphysics by acting as cloud condensation nuclei (CCN), modifying precipitation pattern and the cloud albedo. Key area is to understand the role of pollution on convective cloud processes and its impacts on cloud dynamics. The hypothesis is that an environment of potentially high pollution enables the probability of interactions between co-located aerosols and cloud layers. To investigate this hypothesis, we outline an approach to integrate three elements: i) focusing on regime(s) where there are strong indications of
A Fast Radiative Transfer Parameterization Under Cloudy Condition in Solar Spectral Region
NASA Astrophysics Data System (ADS)
Yang, Q.; Liu, X.; Yang, P.; Wang, C.
2014-12-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) system, which is proposed and developed by NASA, will directly measure the Earth's thermal infrared spectrum (IR), the spectrum of solar radiation reflected by the Earth and its atmosphere (RS), and radio occultation (RO). IR, RS, and RO measurements provide information on the most critical but least understood climate forcings, responses, and feedbacks associated with the vertical distribution of atmospheric temperature and water vapor, broadband reflected and emitted radiative fluxes, cloud properties, surface albedo, and surface skin temperature. To perform Observing System Simulation Experiments (OSSE) for long term climate observations, accurate and fast radiative transfer models are needed. The principal component-based radiative transfer model (PCRTM) is one of the efforts devoted to the development of fast radiative transfer models for simulating radiances and reflecatance observed by various hyperspectral instruments. Retrieval algorithm based on PCRTM forward model has been developed for AIRS, NAST, IASI, and CrIS. It is very fast and very accurate relative to the training radiative transfer model. In this work, we are extending PCRTM to UV-VIS-near IR spectral region. To implement faster cloudy radiative transfer calculations, we carefully investigated the radiative transfer process under cloud condition. The cloud bidirectional reflectance was parameterized based on off-line 36-stream multiple scattering calculations while few other lookup tables were generated to describe the effective transmittance and reflectance of the cloud-clear-sky coupling system in solar spectral region. The bidirectional reflectance or the irradiance measured by satellite may be calculated using a simple fast radiative transfer model providing the type of cloud (ice or water), optical depth of the cloud, optical depth of both atmospheric trace gases above and below clouds, particle size of the cloud, as well
NASA Astrophysics Data System (ADS)
Decloedt, Thomas; Luther, Douglas S.
2012-11-01
The spatial distributions of the diapycnal diffusivity predicted by two abyssal mixing schemes are compared to each other and to observational estimates based on microstructure surveys and large-scale hydrographic inversions. The parameterizations considered are the tidal mixing scheme by Jayne, St. Laurent and co-authors (JSL01) and the Roughness Diffusivity Model (RDM) by Decloedt and Luther. Comparison to microstructure surveys shows that both parameterizations are conservative in estimating the vertical extent to which bottom-intensified mixing penetrates into the stratified water column. In particular, the JSL01 exponential vertical structure function with fixed scale height decays to background values much nearer topography than observed. JSL01 and RDM yield dramatically different horizontal spatial distributions of diapycnal diffusivity, which would lead to quite different circulations in OGCMs, yet they produce similar basin-averaged diffusivity profiles. Both parameterizations are shown to yield smaller basin-mean diffusivity profiles than hydrographic inverse estimates for the major ocean basins, by factors ranging from 3 up to over an order of magnitude. The canonical 10-4 m2 s-1abyssal diffusivity is reached by the parameterizations only at depths below 3 km. Power consumption by diapycnal mixing below 1 km of depth, between roughly 32°S and 48°N, for the RDM and JSL01 parameterizations is 0.40 TW & 0.28 TW, respectively. The results presented here suggest that present-day mixing parameterizations significantly underestimate abyssal mixing. In conjunction with other recently published studies, a plausible interpretation is that parameterizing the dissipation of bottom-generated internal waves is not sufficient to approximate the global spatial distribution of diapycnal mixing in the abyssal ocean.
Shallow cumuli ensemble statistics for development of a stochastic parameterization
NASA Astrophysics Data System (ADS)
Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs
2014-05-01
According to a conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large scale forcing produces an unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection, as it is known from the empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to the given large scale forcing. Moreover, this distribution gets broader with the increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the sub-grid states distribution spread and shape on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation model (LES) and a cloud tracking algorithm, followed by a conditional sampling of clouds at the cloud base level, to retrieve the information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. Stochastic model simulates a compound random process, with the number of convective elements drawn from a
Parameterization of air sea gas fluxes at extreme wind speeds
NASA Astrophysics Data System (ADS)
McNeil, Craig; D'Asaro, Eric
2007-06-01
Hurricane Frances data set. Although all the model parameters cannot be determined uniquely, some features are clear. The fluxes due to the surface equilibration terms, estimated both from data and from model inversions, increase rapidly at high wind speed but are still far below those predicted using the cubic parameterization of Wanninkhof and McGillis [Wannikhof, R. and McGillis, W.R., 1999. A cubic relationship between air-sea CO 2 exchange and wind speed. Geophysical Research Letters, 26:1889-1892.] at high wind speed. The fluxes due to gas injection terms increase with wind speed even more rapidly, causing bubble injection to dominate at the highest wind speeds.
NASA Astrophysics Data System (ADS)
Titos, G.; Cazorla, A.; Zieger, P.; Andrews, E.; Lyamani, H.; Granados-Muñoz, M. J.; Olmo, F. J.; Alados-Arboledas, L.
2016-09-01
Knowledge of the scattering enhancement factor, f(RH), is important for an accurate description of direct aerosol radiative forcing. This factor is defined as the ratio between the scattering coefficient at enhanced relative humidity, RH, to a reference (dry) scattering coefficient. Here, we review the different experimental designs used to measure the scattering coefficient at dry and humidified conditions as well as the procedures followed to analyze the measurements. Several empirical parameterizations for the relationship between f(RH) and RH have been proposed in the literature. These parameterizations have been reviewed and tested using experimental data representative of different hygroscopic growth behavior and a new parameterization is presented. The potential sources of error in f(RH) are discussed. A Monte Carlo method is used to investigate the overall measurement uncertainty, which is found to be around 20-40% for moderately hygroscopic aerosols. The main factors contributing to this uncertainty are the uncertainty in RH measurement, the dry reference state and the nephelometer uncertainty. A literature survey of nephelometry-based f(RH) measurements is presented as a function of aerosol type. In general, the highest f(RH) values were measured in clean marine environments, with pollution having a major influence on f(RH). Dust aerosol tended to have the lowest reported hygroscopicity of any of the aerosol types studied. Major open questions and suggestions for future research priorities are outlined.
Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM
NASA Technical Reports Server (NTRS)
Yao, Mao-Sung; Cheng, Ye
2013-01-01
The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report (AR5).Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or with only the local turbulence parameterization, these two quantities improve their vertical structures, but the vertical transport of water vapor is still weak in the planetary boundary layers (PBLs); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures in all latitudes due to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in computing nonlocal transports, turbulent length scale, and PBL height and shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over oceans. This suggests that a cloud-producing scheme needs to be constructed in a framework that also takes the turbulence into consideration.
On parameterization of the inverse problem for estimating aquifer properties using tracer data
Kowalsky, M. B.; Finsterle, Stefan A.; Williams, Kenneth H.; Murray, Christopher J.; Commer, Michael; Newcomer, Darrell R.; Englert, Andreas L.; Steefel, Carl I.; Hubbard, Susan
2012-06-11
We consider a field-scale tracer experiment conducted in 2007 in a shallow uranium-contaminated aquifer at Rifle, Colorado. In developing a reliable approach for inferring hydrological properties at the site through inverse modeling of the tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance. We present an approach for hydrological inversion of the tracer data and explore, using a 2D synthetic example at first, how parameterization affects the solution, and how additional characterization data could be incorporated to reduce uncertainty. Specifically, we examine sensitivity of the results to the configuration of pilot points used in a geostatistical parameterization, and to the sampling frequency and measurement error of the concentration data. A reliable solution of the inverse problem is found when the pilot point configuration is carefully implemented. In addition, we examine the use of a zonation parameterization, in which the geometry of the geological facies is known (e.g., from geophysical data or core data), to reduce the non-uniqueness of the solution and the number of unknown parameters to be estimated. When zonation information is only available for a limited region, special treatment in the remainder of the model is necessary, such as using a geostatistical parameterization. Finally, inversion of the actual field data is performed using 2D and 3D models, and results are compared with slug test data.
Albedo of coastal landfast sea ice in Prydz Bay, Antarctica: Observations and parameterization
NASA Astrophysics Data System (ADS)
Yang, Qinghua; Liu, Jiping; Leppäranta, Matti; Sun, Qizhen; Li, Rongbin; Zhang, Lin; Jung, Thomas; Lei, Ruibo; Zhang, Zhanhai; Li, Ming; Zhao, Jiechen; Cheng, Jingjing
2016-05-01
The snow/sea-ice albedo was measured over coastal landfast sea ice in Prydz Bay, East Antarctica (off Zhongshan Station) during the austral spring and summer of 2010 and 2011. The variation of the observed albedo was a combination of a gradual seasonal transition from spring to summer and abrupt changes resulting from synoptic events, including snowfall, blowing snow, and overcast skies. The measured albedo ranged from 0.94 over thick fresh snow to 0.36 over melting sea ice. It was found that snow thickness was the most important factor influencing the albedo variation, while synoptic events and overcast skies could increase the albedo by about 0.18 and 0.06, respectively. The in-situ measured albedo and related physical parameters (e.g., snow thickness, ice thickness, surface temperature, and air temperature) were then used to evaluate four different snow/ice albedo parameterizations used in a variety of climate models. The parameterized albedos showed substantial discrepancies compared to the observed albedo, particularly during the summer melt period, even though more complex parameterizations yielded more realistic variations than simple ones. A modified parameterization was developed, which further considered synoptic events, cloud cover, and the local landfast sea-ice surface characteristics. The resulting parameterized albedo showed very good agreement with the observed albedo.
Incorporation of a Gravity Wave Momentum Deposition Parameterization into the VTGCM
NASA Astrophysics Data System (ADS)
Brecht, A. S.; Zalucha, A. M.; Bougher, S. W.; Rafkin, S. C.; Alexander, M.
2011-12-01
The National Center for Atmospheric Research (NCAR) thermospheric general circulation model for Venus (VTGCM) is a three dimensional model that can calculate temperatures, zonal winds, meridional winds, vertical winds, and concentration of specific species. The calculated nightside warm region (near ~100 km) and the O2-IR and NO-UV nightglow intensity distributions have been produced to represent mean conditions observed by Venus Express data and ground-based observations with the use of Rayleigh friction (Brecht et al. JGR, 2011). Rayleigh friction is implemented to parameterize gravity wave momentum drag effects on the global mean zonal wind flow. The purpose is to obtain a first order approximation of the necessary drag to reproduce observations. In addition, Rayleigh friction provides guidelines for the implementation and adjustment of a gravity wave momentum deposition scheme. Most recently, the Alexander and Dunkerton (AMS, 1999) gravity wave momentum parameterization has been incorporated into the VTGCM. The parameterization is designed to deposit momentum fluxes locally and totally at the altitude of wave breaking. Further, it allows waves to continue to propagate above the breaking altitude. Specific fields will be shown to illustrate the impacts the parameterization has on the global circulation (i.e. temperatures, zonal winds, and night airglow distributions (O2 IR and NO UV)). In addition, the chosen values for parameters will be discussed and their importance for depositing the gravity wave momentum. The gravity wave momentum parameterization launches waves from the cloud region within the VTGCM and provides a strong source for asymmetrical global winds.
Radiative flux and forcing parameterization error in aerosol-free clear skies
Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...
2015-07-03
This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m2, while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelatedmore » to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.« less
Parameterized signal calibration for NMR cryoporometry experiment without external standard
NASA Astrophysics Data System (ADS)
Stoch, Grzegorz; Krzyżak, Artur T.
2016-08-01
In cryoporometric experiments non-linear effects associated with the sample and the probehead bring unwanted contributions to the total signal along with the change of temperature. The elimination of these influences often occurs with the help of an intermediate measurement of a separate liquid sample. In this paper we suggest an alternative approach under certain assumptions, solely based on data from the target experiment. In order to obtain calibration parameters the method uses all of these raw data points. Its reliability is therefore enhanced as compared to other methods based on lesser number of data points. Presented approach is automatically valid for desired temperature range. The need for intermediate measurement is removed and parameters for such a calibration are naturally adapted to the individual sample-probehead combination.
Cloud forcing in Arctic polynyas: Climatology, parameterization, and modeling
NASA Astrophysics Data System (ADS)
Key, Erica
Cloud and radiation data gathered in four polynyas across the Western Arctic span a decade of extreme environmental variability that culminated in the furthest retreat of sea ice cover on satellite record. These polynyas, oases of open water within the pack ice, are areas of intense surface exchange and serve as small-scale natural models of all active polar processes. Each of the studied polynyas is uniquely forced and maintained, resulting in an ensemble which representatively samples pan-Arctic variability. Cloud amount in each polynya, as analyzed to WMO standards by a meteorologist from time-lapse imagery collected using a hemispheric mirror, exceeded previous observational estimates of 80%. Calculations of surface cloud radiative forcing point to Arctic clouds' tendency toward scattering incoming shortwave radiation over re-emission of radiation in the longwave from cloud base. Sensitivity of this cloud forcing to variations in albedo, aerosol loading, and cloud microphysics, calculated with a polar-optimized radiative transfer model, indicate that small changes in snow and ice cover elicit stronger responses than heavy aerosol loading, changing particle effective radius, or liquid water content, especially at small solar zenith angles. Results obtained locally within polynyas are given regional relevance through the use of CASPR (Cloud and Surface Parameter Retrieval) algorithms and AVHRR Polar Pathfinder data.
NON-GAUSSIAN SCATTER IN CLUSTER SCALING RELATIONS
Shaw, Laurie D.; Holder, Gilbert P.; Dudley, Jonathan
2010-06-10
We investigate the impact of non-Gaussian scatter in the cluster mass-observable scaling relation on the mass and redshift distribution of clusters detected by wide area surveys. We parameterize non-Gaussian scatter by incorporating the third and fourth moments (skewness and kurtosis) into the distribution P(M {sub obs}|M). We demonstrate that the effect of the higher order moments becomes important when the product of the standard deviation of P(M {sub obs}|M) and the slope of the mass function is greater than unity. For high scatter mass indicators it is therefore necessary for the survey, limiting mass threshold to be less than 10{sup 14} h {sup -1} M {sub sun}, to prevent the skewness from having a significant impact on the observed number counts, particularly at high redshift. We also show that an unknown level of non-Gaussianity in the scatter is equivalent to an additional uncertainty on the variance in P(M {sub obs}|M) and thus may limit the constraints that can be placed on {sigma}{sub 8} and the dark energy equation of state parameter w.
NASA Astrophysics Data System (ADS)
Kubo, S.; Nishiura, M.; Tanaka, K.; Moseev, D.; Ogasawara, S.; Shimozuma, T.; Yoshimura, Y.; Igami, H.; Takahashi, H.; Tsujimura, T. I.; Makino, R.
2016-06-01
High-power gyrotrons prepared for the electron cyclotron heating at 77 GHz has been used for a collective Thomson scattering (CTS) study in LHD. Due to the difficulty in removing fundamental and/or second harmonic resonance in the viewing line of sight, the subtraction of the background ECE from measured signal was performed by modulating the probe beam power from a gyrotron. The separation of the scattering component from the background has been performed successfully taking into account the response time difference between both high-energy and bulk components. The other separation was attempted by fast scanning the viewing beam across the probing beam. It is found that the intensity of the scattered spectrum corresponding to the bulk and high energy components were almost proportional to the calculated scattering volume in the relatively low density region, while appreciable background scattered component remains even in the off volume in some high density cases. The ray-trace code TRAVIS is used to estimate the change in the scattering volume due to probing and receiving beam deflection effect.
2015-06-13
The Berkeley Out-of-Order Machine (BOOM): An Industry- Competitive, Synthesizable, Parameterized RISC-V Processor Christopher Celio David A...Synthesizable, Parameterized RISC-V Processor Christopher Celio, David Patterson, and Krste Asanović University of California, Berkeley, California 94720...Order Machine BOOM is a synthesizable, parameterized, superscalar out- of-order RISC-V core designed to serve as the prototypical baseline processor
A discrete variable representation for electron-hydrogen atom scattering
Gaucher, Lionel Francis
1994-08-01
A discrete variable representation (DVR) suitable for treating the quantum scattering of a low energy electron from a hydrogen atom is presented. The benefits of DVR techniques (e.g. the removal of the requirement of calculating multidimensional potential energy matrix elements and the availability of iterative sparse matrix diagonalization/inversion algorithms) have for many years been applied successfully to studies of quantum molecular scattering. Unfortunately, the presence of a Coulomb singularity at the electrically unshielded center of a hydrogen atom requires high radial grid point densities in this region of the scattering coordinate, while the presence of finite kinetic energy in the asymptotic scattering electron also requires a sufficiently large radial grid point density at moderate distances from the nucleus. The constraints imposed by these two length scales have made application of current DVR methods to this scattering event difficult.
Environment scattering in GADRAS.
Thoreson, Gregory G.; Mitchell, Dean J; Theisen, Lisa Anne; Harding, Lee T.
2013-09-01
Radiation transport calculations were performed to compute the angular tallies for scattered gamma-rays as a function of distance, height, and environment. Greens Functions were then used to encapsulate the results a reusable transformation function. The calculations represent the transport of photons throughout scattering surfaces that surround sources and detectors, such as the ground and walls. Utilization of these calculations in GADRAS (Gamma Detector Response and Analysis Software) enables accurate computation of environmental scattering for a variety of environments and source configurations. This capability, which agrees well with numerous experimental benchmark measurements, is now deployed with GADRAS Version 18.2 as the basis for the computation of scattered radiation.
Rayleigh Scattering Diagnostics Workshop
NASA Technical Reports Server (NTRS)
Seasholtz, Richard (Compiler)
1996-01-01
The Rayleigh Scattering Diagnostics Workshop was held July 25-26, 1995 at the NASA Lewis Research Center in Cleveland, Ohio. The purpose of the workshop was to foster timely exchange of information and expertise acquired by researchers and users of laser based Rayleigh scattering diagnostics for aerospace flow facilities and other applications. This Conference Publication includes the 12 technical presentations and transcriptions of the two panel discussions. The first panel was made up of 'users' of optical diagnostics, mainly in aerospace test facilities, and its purpose was to assess areas of potential applications of Rayleigh scattering diagnostics. The second panel was made up of active researchers in Rayleigh scattering diagnostics, and its purpose was to discuss the direction of future work.
NASA Technical Reports Server (NTRS)
Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.
1990-01-01
A 5-state close-coupling calculation (5s-5p-4d-6s-6p) was carried out for positron-Rb scattering in the energy range 3.7 to 28.0 eV. In contrast to the results of similar close-coupling calculations for positron-Na and positron-K scattering the (effective) total integrated cross section has an energy dependence which is contrary to recent experimental measurements.
CONTINUOUS ROTATION SCATTERING CHAMBER
Verba, J.W.; Hawrylak, R.A.
1963-08-01
An evacuated scattering chamber for use in observing nuclear reaction products produced therein over a wide range of scattering angles from an incoming horizontal beam that bombards a target in the chamber is described. A helically moving member that couples the chamber to a detector permits a rapid and broad change of observation angles without breaching the vacuum in the chamber. Also, small inlet and outlet openings are provided whose size remains substantially constant. (auth)
Microcavity Enhanced Raman Scattering
NASA Astrophysics Data System (ADS)
Petrak, Benjamin J.
Raman scattering can accurately identify molecules by their intrinsic vibrational frequencies, but its notoriously weak scattering efficiency for gases presents a major obstacle to its practical application in gas sensing and analysis. This work explores the use of high finesse (≈50 000) Fabry-Perot microcavities as a means to enhance Raman scattering from gases. A recently demonstrated laser ablation method, which carves out a micromirror template on fused silica--either on a fiber tip or bulk substrates-- was implemented, characterized, and optimized to fabricate concave micromirror templates ˜10 mum diameter and radius of curvature. The fabricated templates were coated with a high-reflectivity dielectric coating by ion-beam sputtering and were assembled into microcavities ˜10 mum long and with a mode volume ˜100 mum 3. A novel gas sensing technique that we refer to as Purcell enhanced Raman scattering (PERS) was demonstrated using the assembled microcavities. PERS works by enhancing the pump laser's intensity through resonant recirculation at one longitudinal mode, while simultaneously, at a second mode at the Stokes frequency, the Purcell effect increases the rate of spontaneous Raman scattering by a change to the intra-cavity photon density of states. PERS was shown to enhance the rate of spontaneous Raman scattering by a factor of 107 compared to the same volume of sample gas in free space scattered into the same solid angle subtended by the cavity. PERS was also shown capable of resolving several Raman bands from different isotopes of CO2 gas for application to isotopic analysis. Finally, the use of the microcavity to enhance coherent anti-Stokes Raman scattering (CARS) from CO2 gas was demonstrated.
NASA Astrophysics Data System (ADS)
Suselj, K.; Suzuki, K.; Teixeira, J.
2014-12-01
A new mixing parameterization for climate and weather prediction models is developed. The new parameterization represents boundary layer mixing, and non-precipitating and precipitating convection processes in a unified and physically consistent manner. The parameterization builds on a previously tested stochastic multiple-plume eddy-diffusivity/mass-flux (EDMF) approach. The new parameterization includes a realistic model for microphysical processes as part of the mass-flux parameterization, and a parameterization for convective downdrafts. A method to solve the mass-flux dynamics and microphysics simultaneously is developed. This method avoids the need for an iterative solution of the equations and is numerically stable. The new EDMF parameterization is implemented in a single-column model (SCM) and we show that the model is able to capture essential features of moist boundary layers, ranging from the stratocumulus, shallow and precipitating cumulus regimes. Detailed comparisons of a few important cases with LES results are shown to confirm the robustness of the present approach. This new parameterization provides an important step towards a fully unified parameterization of boundary layer, shallow and deep convection.
The Parameterization of Solid Metal-Liquid Metal Partitioning of Siderophile Elements
NASA Technical Reports Server (NTRS)
Chabot, N. L.; Jones, J. H.
2003-01-01
The composition of a metallic liquid can significantly affect the partitioning behavior of elements. For example, some experimental solid metal-liquid metal partition coefficients have been shown to increase by three orders of magnitude with increasing S-content of the metallic liquid. Along with S, the presence of other light elements, such as P and C, has also been demonstrated to affect trace element partitioning behavior. Understanding the effects of metallic composition on partitioning behavior is important for modeling the crystallization of magmatic iron meteorites and the chemical effects of planetary differentiation. It is thus useful to have a mathematical expression that parameterizes the partition coefficient as a function of the composition of the metal. Here we present a revised parameterization method, which builds on the theory of the current parameterization of Jones and Malvin and which better handles partitioning in multi-light-element systems.
NASA Astrophysics Data System (ADS)
Grell, Georg A.; Dévényi, Dezső
2002-07-01
A new convective parameterization is introduced that can make use of a large variety of assumptions previously introduced in earlier formulations. The assumptions are chosen so that they will generate a large spread in the solution. We then show two methods in which ensemble and data assimilation techniques may be used to find the best value to feed back to the larger scale model. First, we can use simple statistical methods to find the most probable solution. Second, the ensemble probability density function can be considered as an appropriate ``prior'' (a'priori density) for Bayesian data assimilation. Using this ``prior'', and information about observation likelihood, measured meteorological or climatological data can be directly assimilated into model fields. Given proper observations, the application of this technique is not restricted to convective parameterizations, but may be applied to other parameterizations as well.
A numerical method for parameterization of atmospheric chemistry - Computation of tropospheric OH
NASA Technical Reports Server (NTRS)
Spivakovsky, C. M.; Wofsy, S. C.; Prather, M. J.
1990-01-01
An efficient and stable computational scheme for parameterization of atmospheric chemistry is described. The 24-hour-average concentration of OH is represented as a set of high-order polynomials in variables such as temperature, densities of H2O, CO, O3, and NO(t) (defined as NO + NO2 + NO3 + 2N2O5 + HNO2 + HNO4) as well as variables determining solar irradiance: cloud cover, density of the overhead ozone column, surface albedo, latitude, and solar declination. This parameterization of OH chemistry was used in the three-dimensional study of global distribution of CH3CCl3. The proposed computational scheme can be used for parameterization of rates of chemical production and loss or of any other output of a full chemical model.
Walcek, C.J.
1992-10-30
This research program utilizes satellite and surface-derived cloud observations together with standard meteorological measurements to evaluate and improve our ability to accurately diagnose cloud coverage. Results are to be used to compliment existing or future parameterizations of cloud effects in general circulation models, since nearly all cloud parameterizations must specify a fractional area of cloud coverage when calculating radiative or dynamic cloud effects, and current parameterizations rely on rather crude cloud cover estimates. We have compiled and reviewed a list of formulations used by various climate research groups to specify cloud cover. We find considerable variability between formulations used by various climate and meteorology models, and under some conditions, one formulation will produce a zero cloud amount, while an alternate formulation calculates 95% cloud cover under the same environmental conditions. All formulations hypothesize that cloud cover is predominantly determined by the average relative humidity, although some formulations allow local temperature lapse rates and vertical velocities to influence cloud amount.
A note on: "A Gaussian-product stochastic Gent-McWilliams parameterization"
NASA Astrophysics Data System (ADS)
Jansen, Malte F.
2017-02-01
This note builds on a recent article by Grooms (2016), which introduces a new stochastic parameterization for eddy buoyancy fluxes. The closure proposed by Grooms accounts for the fact that eddy fluxes arise as the product of two approximately Gaussian variables, which in turn leads to a distinctly non-Gaussian distribution. The directionality of the stochastic eddy fluxes, however, remains somewhat ad-hoc and depends on the reference frame of the chosen coordinate system. This note presents a modification of the approach proposed by Grooms, which eliminates this shortcoming. Eddy fluxes are computed based on a stochastic mixing length model, which leads to a frame invariant formulation. As in the original closure proposed by Grooms, eddy fluxes are proportional to the product of two Gaussian variables, and the parameterization reduces to the Gent and McWilliams parameterization for the mean buyoancy fluxes.
A second-order Budkyo-type parameterization of landsurface hydrology
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1982-01-01
A simple, second order parameterization of the water fluxes at a land surface for use as the appropriate boundary condition in general circulation models of the global atmosphere was developed. The derived parameterization incorporates the high nonlinearities in the relationship between the near surface soil moisture and the evaporation, runoff and percolation fluxes. Based on the one dimensional statistical dynamic derivation of the annual water balance, it makes the transition to short term prediction of the moisture fluxes, through a Taylor expansion around the average annual soil moisture. A comparison of the suggested parameterization is made with other existing techniques and available measurements. A thermodynamic coupling is applied in order to obtain estimations of the surface ground temperature.
NASA Technical Reports Server (NTRS)
Suarez, M. J.; Arakawa, A.; Randall, D. A.
1983-01-01
A planetary boundary layer (PBL) parameterization for general circulation models (GCMs) is presented. It uses a mixed-layer approach in which the PBL is assumed to be capped by discontinuities in the mean vertical profiles. Both clear and cloud-topped boundary layers are parameterized. Particular emphasis is placed on the formulation of the coupling between the PBL and both the free atmosphere and cumulus convection. For this purpose a modified sigma-coordinate is introduced in which the PBL top and the lower boundary are both coordinate surfaces. The use of a bulk PBL formulation with this coordinate is extensively discussed. Results are presented from a July simulation produced by the UCLA GCM. PBL-related variables are shown, to illustrate the various regimes the parameterization is capable of simulating.
Whys and Hows of the Parameterized Interval Analyses: A Guide for the Perplexed
NASA Astrophysics Data System (ADS)
Elishakoff, I.
2013-10-01
Novel elements of the parameterized interval analysis developed in [1, 2] are emphasized in this response, to Professor E.D. Popova, or possibly to others who may be perplexed by the parameterized interval analysis. It is also shown that the overwhelming majority of comments by Popova [3] are based on a misreading of our paper [1]. Partial responsibility for this misreading can be attributed to the fact that explanations provided in [1] were laconic. These could have been more extensive in view of the novelty of our approach [1, 2]. It is our duty, therefore, to reiterate, in this response, the whys and hows of parameterization of intervals, introduced in [1] to incorporate the possibly available information on dependencies between various intervals describing the problem at hand. This possibility appears to have been discarded by the standard interval analysis, which may, as a result, lead to overdesign, leading to the possible divorce of engineers from the otherwise beautiful interval analysis.
A review of recent research on improvement of physical parameterizations in the GLA GCM
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.
1990-01-01
A systematic assessment of the effect of a series of improvements in physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) are summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB GCM simulations with that of the earlier slab soil hydrology GCM (SSH-GCM) simulations. In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating to the land. The perpetual July simulations with and without interactive soil moisture shows that 30 to 40 day oscillations may be a natural mode of the simulated earth atmosphere system.
Organic Aerosol Volatility Parameterizations and Their Impact on Atmospheric Composition and Climate
NASA Technical Reports Server (NTRS)
Tsigaridis, Kostas; Bauer, Susanne E.
2015-01-01
Despite their importance and ubiquity in the atmosphere, organic aerosols are still very poorly parameterized in global models. This can be explained by two reasons: first, a very large number of unconstrained parameters are involved in accurate parameterizations, and second, a detailed description of semi-volatile organics is computationally very expensive. Even organic aerosol properties that are known to play a major role in the atmosphere, namely volatility and aging, are poorly resolved in global models, if at all. Studies with different models and different parameterizations have not been conclusive on whether the additional complexity improves model simulations, but the added diversity of the different host models used adds an unnecessary degree of variability in the evaluation of results that obscures solid conclusions. Aerosol microphysics do not significantly alter the mean OA vertical profile or comparison with surface measurements. This might not be the case for semi-volatile OA with microphysics.
Organic Aerosol Volatility Parameterizations and Their Impact on Atmospheric Composition and Climate
NASA Technical Reports Server (NTRS)
Tsigaridis, Konsta; Bauer, Susanne E.
2015-01-01
Despite their importance and ubiquity in the atmosphere, organic aerosols are still very poorly parameterized in global models. This can be explained by two reasons: first, a very large number of unconstrained parameters are involved in accurate parameterizations, and second, a detailed description of semi-volatile organics is computationally very expensive. Even organic aerosol properties that are known to play a major role in the atmosphere, namely volatility and aging, are poorly resolved in global models, if at all. Studies with different models and different parameterizations have not been conclusive on whether the additional complexity improves model simulations, but the added diversity of the different host models used adds an unnecessary degree of variability in the evaluation of results that obscures solid conclusions.
Impact of a scale-aware cumulus parameterization in an operational NWP system modeling system
NASA Astrophysics Data System (ADS)
Chen, Baode; Yang, Yuhua; Wang, Xiaofeng
2014-05-01
To better understand the behavior of convective schemes across the grey zone, we carried out one-month (July of 2013) realtime-like experiment with an operational NWP system modeling system which includes the ADAS data assimilation scheme and WRF forecast model. The Grell-Freitas cumulus parameterization scheme, which is a scale-aware convective parameterization scheme and has been developed to better handle the transition in behavior of the sub-grid scale convective processes through the grey zone, was used in different resolution (15km, 9km and 3km) model set-up. Subjective and quantitative evaluations of the forecasts were conducted and the skills of the different experimental forecasts relatively to existing forecasting guidance were compared. A summary of the preliminary findings about the proportion of resolved vs unresolved physical processes in the gray zone will be presented along with a discussion of the potential operational impacts of the cumulus parameterization.
NASA Astrophysics Data System (ADS)
Hassane, Mamadou Maina F. Z.; Ackerer, P.
2017-02-01
In the context of parameter identification by inverse methods, an optimized adaptive downscaling parameterization is described in this work. The adaptive downscaling parameterization consists of (i) defining a parameter mesh for each parameter, independent of the flow model mesh, (ii) optimizing the parameters set related to the parameter mesh, and (iii) if the match between observed and computed heads is not accurate enough, creating a new parameter mesh via refinement (downscaling) and performing a new optimization of the parameters. Refinement and coarsening indicators are defined to optimize the parameter mesh refinement. The robustness of the refinement and coarsening indicators was tested by comparing the results of inversions using refinement without indicators, refinement with only refinement indicators and refinement with coarsening and refinement indicators. These examples showed that the indicators significantly reduce the number of degrees of freedom necessary to solve the inverse problem without a loss of accuracy. They, therefore, limit over-parameterization.
Parameterization of grounding-line cliff failure in an Antarctic ice sheet model
NASA Astrophysics Data System (ADS)
Pollard, David; DeConto, Robert
2014-05-01
Two mechanisms have recently been added to a 3-D ice-sheet model that can produce drastic retreat into East Antarctic sub-glacial basins during past warm periods, as implied by (albeit uncertain) geologic evidence. The two mechanisms, (1) structural failure of large tidewater cliffs, and (2) enhanced ice-shelf calving due to meltwater draining into crevasses, present challenges in their parameterization within coarse-grid models. Here we describe details and choices in the parameterization of structural failure at deep grounding lines, its incorporation into the large-scale dynamical equations, and the sensitivity of model results to these choices. In addition, a parameterization of melt-enhanced calving is described, along with a simple representation of the clogging effects of ice melange in narrow seaways, and their effects on Antarctic simulations.
Parameterization of Forest Canopies with the PROSAIL Model
NASA Astrophysics Data System (ADS)
Austerberry, M. J.; Grigsby, S.; Ustin, S.
2013-12-01
Particularly in forested environments, arboreal characteristics such as Leaf Area Index (LAI) and Leaf Inclination Angle have a large impact on the spectral characteristics of reflected radiation. The reflected spectrum can be measured directly with satellites or airborne instruments, including the MASTER and AVIRIS instruments. This particular project dealt with spectral analysis of reflected light as measured by AVIRIS compared to tree measurements taken from the ground. Chemical properties of leaves including pigment concentrations and moisture levels were also measured. The leaf data was combined with the chemical properties of three separate trees, and served as input data for a sequence of simulations with the PROSAIL Model, a combination of PROSPECT and Scattering by Arbitrarily Inclined Leaves (SAIL) simulations. The output was a computed reflectivity spectrum, which corresponded to the spectra that were directly measured by AVIRIS for the three trees' exact locations within a 34-meter pixel resolution. The input data that produced the best-correlating spectral output was then cross-referenced with LAI values that had been obtained through two entirely separate methods, NDVI extraction and use of the Beer-Lambert law with airborne LiDAR. Examination with regressive techniques between the measured and modeled spectra then enabled a determination of the trees' probable structure and leaf parameters. Highly-correlated spectral output corresponded well to specific values of LAI and Leaf Inclination Angle. Interestingly, it appears that varying Leaf Angle Distribution has little or no noticeable effect on the PROSAIL model. Not only is the effectiveness and accuracy of the PROSAIL model evaluated, but this project is a precursor to direct measurement of vegetative indices exclusively from airborne or satellite observation.
Dam removal increases American eel abundance in distant headwater streams
Hitt, Nathaniel P.; Eyler, Sheila; Wofford, John E.B.
2012-01-01
American eel Anguilla rostrata abundances have undergone significant declines over the last 50 years, and migration barriers have been recognized as a contributing cause. We evaluated eel abundances in headwater streams of Shenandoah National Park, Virginia, to compare sites before and after the removal of a large downstream dam in 2004 (Embrey Dam, Rappahannock River). Eel abundances in headwater streams increased significantly after the removal of Embrey Dam. Observed eel abundances after dam removal exceeded predictions derived from autoregressive models parameterized with data prior to dam removal. Mann–Kendall analyses also revealed consistent increases in eel abundances from 2004 to 2010 but inconsistent temporal trends before dam removal. Increasing eel numbers could not be attributed to changes in local physical habitat (i.e., mean stream depth or substrate size) or regional population dynamics (i.e., abundances in Maryland streams or Virginia estuaries). Dam removal was associated with decreasing minimum eel lengths in headwater streams, suggesting that the dam previously impeded migration of many small-bodied individuals (<300 mm TL). We hypothesize that restoring connectivity to headwater streams could increase eel population growth rates by increasing female eel numbers and fecundity. This study demonstrated that dams may influence eel abundances in headwater streams up to 150 river kilometers distant, and that dam removal may provide benefits for eel management and conservation at the landscape scale.
Subgrid-scale parameterization and low-frequency variability: a response theory approach
NASA Astrophysics Data System (ADS)
Demaeyer, Jonathan; Vannitsem, Stéphane
2016-04-01
Weather and climate models are limited in the possible range of resolved spatial and temporal scales. However, due to the huge space- and time-scale ranges involved in the Earth System dynamics, the effects of many sub-grid processes should be parameterized. These parameterizations have an impact on the forecasts or projections. It could also affect the low-frequency variability present in the system (such as the one associated to ENSO or NAO). An important question is therefore to know what is the impact of stochastic parameterizations on the Low-Frequency Variability generated by the system and its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on the Ruelle's response theory and proposed in Wouters and Lucarini (2012). We test this approach in the context of a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), for which a part of the atmospheric modes is considered as unresolved. A natural separation of the phase-space into a slow invariant set and its fast complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, the fluctuation and the long memory terms. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach of scale separation opens new avenues of subgrid-scale parameterizations in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03 003.
Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Hälg, Roger A.; Baiocco, Giorgio; Lomax, Tony
2016-08-01
The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate, besides the dose, also the energy spectra, in order to obtain quantities which could be a measure of the biological effectiveness and test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was Monte Carlo simulated using GEANT. Based on the simulated neutron spectra for three different proton beam energies a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with a satisfying precision up to 85 cm away from the proton pencil beam when compared to the results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factors and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy. The pencil beam algorithm has
Engelmann spruce site index models: a comparison of model functions and parameterizations.
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem analyzed Engelmann spruce site trees sampled across the Engelmann Spruce - Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criteria and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use as the best fitting methods have the most barriers in their application in terms of data and software requirements.
Saa, Pedro; Nielsen, Lars K
2015-04-01
Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. Particularly, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The former integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0> ΔGr >-2 kJ/mol), a transition region (-2> ΔGr >-20 kJ/mol) and a constant elasticity region (ΔGr <-20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only the kinetic
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike’s Information Criteria and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use as the best fitting methods have the most barriers in their application in terms of data and software requirements. PMID:25853472
NASA Astrophysics Data System (ADS)
Haus, R.; Kappel, D.; Arnold, G.
2017-03-01
Thermal cooling rates QC and solar heating rates QH in the atmosphere of Venus at altitudes between 0 and 100 km are investigated using the radiative transfer and radiative balance simulation techniques described by Haus et al. (2015b, 2016). QC strongly responds to temperature profile and cloud parameter changes, while QH is less sensitive to these parameters. The latter mainly depends on solar insolation conditions and the unknown UV absorber distribution. A parameterization approach is developed that permits a fast and reliable calculation of temperature change rates Q for different atmospheric model parameters and that can be applied in General Circulation Models to investigate atmospheric dynamics. A separation of temperature, cloud parameter, and unknown UV absorber influences is performed. The temperature response parameterization relies on a specific altitude and latitude-dependent cloud model. It is based on an algorithm that characterizes Q responses to a broad range of temperature perturbations at each level of the atmosphere using the Venus International Reference Atmosphere (VIRA) as basis temperature model. The cloud response parameterization considers different temperature conditions and a range of individual cloud mode factors that additionally change cloud optical depths as determined by the initial latitude-dependent model. A QH response parameterization for abundance changes of the unknown UV absorber is also included. Deviations between accurate calculation and parameterization results are in the order of a few tenths of K/day at altitudes below 90 km. The parameterization approach is used to investigate atmospheric radiative equilibrium (RE) conditions. Polar mesospheric RE temperatures above the cloud top are up to 70 K lower and equatorial temperatures up to 10 K higher than observed values. This radiative forcing field is balanced by dynamical processes that maintain the observed thermal structure.
Effective Tree Scattering at L-Band
NASA Technical Reports Server (NTRS)
Kurum, Mehmet; ONeill, Peggy E.; Lang, Roger H.; Joseph, Alicia T.; Cosh, Michael H.; Jackson, Thomas J.
2011-01-01
For routine microwave Soil Moisture (SM) retrieval through vegetation, the tau-omega [1] model [zero-order Radiative Transfer (RT) solution] is attractive due to its simplicity and eases of inversion and implementation. It is the model used in baseline retrieval algorithms for several planned microwave space missions, such as ESA's Soil Moisture Ocean Salinity (SMOS) mission (launched November 2009) and NASA's Soil Moisture Active Passive (SMAP) mission (to be launched 2014/2015) [2 and 3]. These approaches are adapted for vegetated landscapes with effective vegetation parameters tau and omega by fitting experimental data or simulation outputs of a multiple scattering model [4-7]. The model has been validated over grasslands, agricultural crops, and generally light to moderate vegetation. As the density of vegetation increases, sensitivity to the underlying SM begins to degrade significantly and errors in the retrieved SM increase accordingly. The zero-order model also loses its validity when dense vegetation (i.e. forest, mature corn, etc.) includes scatterers, such as branches and trunks (or stalks in the case of corn), which are large with respect to the wavelength. The tau-omega model (when applied over moderately to densely vegetated landscapes) will need modification (in terms of form or effective parameterization) to enable accurate characterization of vegetation parameters with respect to specific tree types, anisotropic canopy structure, presence of leaves and/or understory. More scattering terms (at least up to first-order at L-band) should be included in the RT solutions for forest canopies [8]. Although not really suitable to forests, a zero-order tau-omega model might be applied to such vegetation canopies with large scatterers, but that equivalent or effective parameters would have to be used [4]. This requires that the effective values (vegetation opacity and single scattering albedo) need to be evaluated (compared) with theoretical definitions of
NASA Astrophysics Data System (ADS)
Mihailovi, Dragutin T.; Rajkovi, Borivoj; Lali, Branislava; Deki, Ljiljana
1995-11-01
The correct simulation of the sensible and latent heat fluxes from a non-plant-covered surface is very important in designing the surface scheme for modeling the processes in the land air exchange. However, using different bare soil evaporation schemes in land surface parameterization, an error in partitioning the surface fluxes can be introduced.In parameterization of evaporation from a non-plant-covered surface in resistance representation, the and approaches are commonly used in corresponding formulas where the and are functions of soil water content. The performance of different schemes within these approaches is briefly discussed. For that purpose six schemes, based on different dependence or on volumetric soil moisture content and its saturated value, are used.The latent and sensible heat fluxes and the ground temperature outputs were obtained from the numerical tests using the foregoing schemes. The tests were based on time integrations by the bare soil parameterization scheme using real data. The datasets obtained over the experimental site in Rimski anevi, Yugoslavia, on chernozem soil were used.The obtained values of the latent and sensible heat fluxes and the ground temperature were compared with the observed values. Finally, their variability was considered using a simple root-mean-square analysis.
A parameterization for longwave surface radiation from sun-synchronous satellite data
NASA Technical Reports Server (NTRS)
Gupta, Shashi K.
1989-01-01
A parameterization is presented for computing downward, upward, and net longwave radiation at the earth's surface using data from NOAA sun-synchronous satellites. The parameterization is applied to satellite soundings for April, 1982 over a large region of the tropical Pacific Ocean. Sensitivity studies were used to estimate the random and systematic errors in computed fluxes due to probable errors in TOVS-derived parameters. It is suggested that large biases in the results due to errors in TOVS-derived parameters may be corrected with data from the International Satellite Cloud Climatology Project.
Carbonic anhydrase binding site parameterization in OPLS-AA force field.
Bernadat, Guillaume; Supuran, Claudiu T; Iorga, Bogdan I
2013-03-15
The parameterization of carbonic anhydrase binding site in OPLS-AA force field was performed using quantum chemistry calculations. Both OH2 and OH(-) forms of the binding site were considered, showing important differences in terms of atomic partial charges. Three different parameterization protocols were used, and the results obtained highlighted the importance of including an extended binding site in the charge calculation. The force field parameters were subsequently validated using standard molecular dynamics simulations. The results presented in this work should greatly facilitate the use of molecular dynamics simulations for studying the carbonic anhydrase, and more generally, the metallo-enzymes.
Assessment of recent resolution and parameterization changes in the GLA fourth order GCM
NASA Technical Reports Server (NTRS)
Helfand, H. M.; Sud, Y. C.; Takacs, L. L.; Jusem, J. C.; Molod, A. M.
1988-01-01
The Goddard Laboratory for Atmospheres' fourth-order GCM is under evaluation for the impact on model integrations of enhanced horizontal and vertical resolution, as well as the effects of such novel parameterization schemes as that of gravity-wave-drag, the Arakawa-Schubert (1974) cumulus parameterization, and an explicitly-resolved planetary boundary layer. While the doubling of the GMC's horizontal resolution to 2 deg in latitude and 2.5 deg in longitude has improved the model's predictive skill for 6-7 day forecasts, systematic errors associated with the model's climate drift lead to a deterioration in predictions for longer forecasts.
RedMDStream: Parameterization and Simulation Toolbox for Coarse-Grained Molecular Dynamics Models
Leonarski, Filip; Trylska, Joanna
2015-01-01
Coarse-grained (CG) models in molecular dynamics (MD) are powerful tools to simulate the dynamics of large biomolecular systems on micro- to millisecond timescales. However, the CG model, potential energy terms, and parameters are typically not transferable between different molecules and problems. So parameterizing CG force fields, which is both tedious and time-consuming, is often necessary. We present RedMDStream, a software for developing, testing, and simulating biomolecules with CG MD models. Development includes an automatic procedure for the optimization of potential energy parameters based on metaheuristic methods. As an example we describe the parameterization of a simple CG MD model of an RNA hairpin. PMID:25902423
NASA Astrophysics Data System (ADS)
Wang, Yingjun; Benson, David J.
2016-12-01
In this paper, an approach based on the fast point-in-polygon (PIP) algorithm and trimmed elements is proposed for isogeometric topology optimization (TO) with arbitrary geometric constraints. The isogeometric parameterized level-set-based TO method, which directly uses the non-uniform rational basis splines (NURBS) for both level set function (LSF) parameterization and objective function calculation, provides higher accuracy and efficiency than previous methods. The integration of trimmed elements is completed by the efficient quadrature rule that can design the quadrature points and weights for arbitrary geometric shape. Numerical examples demonstrate the efficiency and flexibility of the method.
Xie, S; Cederwall, R T; Yio, J; Xu, K-M
2001-05-17
Parameterization of cumulus convection in general circulation model (GCM) has been recognized as one of the most important and complex issues in the model physical parameterizations. In earlier studies, most cumulus parameterizations were developed and evaluated using data observed over tropical oceans, such as the GATE (the Global Atmospheric Research Program's Atlantic Tropical Experiment) data. This is partly due to inadequate field measurements in the midlatitudes. In this study, we compare and evaluate a total of eight types of the state-of-the-art cumulus parameterizations used in fifteen Single-Column Models (SCM) under the summertime midlatitude continental conditions using the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) summer 1997 Intensive Operational Period (IOP) data, which covers several continental convection events. The purpose is to systematically compare and evaluate the performance of these cumulus parameterizations under summertime midlatitude continental conditions. Through the study we hope to identify strengths and weaknesses of these cumulus parameterizations that will lead to further improvements. Here, we briefly present our most interesting results. A full description of this study can be seen in Xie et al. (2001). The authors conclude that: (1) The SCM simulation errors are closely related to problems with model cumulus parameterizations. The schemes with triggering based on CAPE generally produce more active cumulus convection than the schemes with triggering based on local parcel buoyancy over land surface at midlatitudes since CAPE is usually large and is mainly determined by the strong solar diurnal heating there. The use of positive CAPE to trigger model convection can lead to an overestimation of convection during the daytime. This results in warmer/drier atmospheres in the former and cooler/more moist atmospheres in the latter. (2) A non-penetrative convection scheme can underestimate the depth of
Parameterization of the electron beam output factors of a 25-MeV linear accelerator
McParland, B.J.
1987-07-01
A new parameterization of the output factors of an electron beam has been developed. The output factors for the electron beams of an AECL Therac-25 have been determined for a variety of square and rectangular fields using ionization measurements and thermoluminescent dosimetry. The data were then least-squares fit by a semiempirical equation which treats the two field dimensions as variables. Such a parameterization allows computer-generated tables of output factors to be manufactured. The calculated values agree with the measured data in most cases to within the +- 1% experimental uncertainty. A comparison between this method of calculating output factors and two conventional methods is also presented.
A parameterization of the electron beam output factors of a 25-MeV linear accelerator.
McParland, B J
1987-01-01
A new parameterization of the output factors of an electron beam has been developed. The output factors for the electron beams of an AECL Therac-25 have been determined for a variety of square and rectangular fields using ionization measurements and thermoluminescent dosimetry. The data were then least-squares fit by a semiempirical equation which treats the two field dimensions as variables. Such a parameterization allows computer-generated tables of output factors to be manufactured. The calculated values agree with the measured data in most cases to within the +/- 1% experimental uncertainty. A comparison between this method of calculating output factors and two conventional methods is also presented.
NASA Technical Reports Server (NTRS)
Glaessgen, Edward H.; Saether, Erik; Phillips, Dawn R.; Yamakov, Vesselin
2006-01-01
A multiscale modeling strategy is developed to study grain boundary fracture in polycrystalline aluminum. Atomistic simulation is used to model fundamental nanoscale deformation and fracture mechanisms and to develop a constitutive relationship for separation along a grain boundary interface. The nanoscale constitutive relationship is then parameterized within a cohesive zone model to represent variations in grain boundary properties. These variations arise from the presence of vacancies, intersticies, and other defects in addition to deviations in grain boundary angle from the baseline configuration considered in the molecular dynamics simulation. The parameterized cohesive zone models are then used to model grain boundaries within finite element analyses of aluminum polycrystals.
NASA Astrophysics Data System (ADS)
Heeb, Peter; Tschanun, Wolfgang; Buser, Rudolf
2012-03-01
A comprehensive and completely parameterized model is proposed to determine the related electrical and mechanical dynamic system response of a voltage-driven capacitive coupled micromechanical switch. As an advantage over existing parameterized models, the model presented in this paper returns within few seconds all relevant system quantities necessary to design the desired switching cycle. Moreover, a sophisticated and detailed guideline is given on how to engineer a MEMS switch. An analytical approach is used throughout the modelling, providing representative coefficients in a set of two coupled time-dependent differential equations. This paper uses an equivalent mass moving along the axis of acceleration and a momentum absorption coefficient. The model describes all the energies transferred: the energy dissipated in the series resistor that models the signal attenuation of the bias line, the energy dissipated in the squeezed film, the stored energy in the series capacitor that represents a fixed separation in the bias line and stops the dc power in the event of a short circuit between the RF and dc path, the energy stored in the spring mechanism, and the energy absorbed by mechanical interaction at the switch contacts. Further, the model determines the electrical power fed back to the bias line. The calculated switching dynamics are confirmed by the electrical characterization of the developed RF switch. The fabricated RF switch performs well, in good agreement with the modelled data, showing a transition time of 7 µs followed by a sequence of bounces. Moreover, the scattering parameters exhibit an isolation in the off-state of >8 dB and an insertion loss in the on-state of <0.6 dB up to frequencies of 50 GHz. The presented model is intended to be integrated into standard circuit simulation software, allowing circuit engineers to design the switch bias line, to minimize induced currents and cross actuation, as well as to find the mechanical structure dimensions
SCRIT electron scattering facility
NASA Astrophysics Data System (ADS)
Tsukada, Kyo
2014-09-01
Electron scattering is the most powerful and reliable tool to investigate the nuclear structure because this reaction has the great advantage that the electron is structureless particle and its interaction is well described by the quantum electrodynamics. As is well known, the charge density distributions of many stable nuclei were determined by elastic electron scattering. Recently, many efforts for studies of unstable nuclei have been made, and the precise information of the structure of unstabe nuclei have been strongly desired. However, due to the difficulty of preparing a short-lived unstable nuclear target, there is no electron scattering on unstable nuclei with a few important exceptions, such as on 3H, 14C and so on. Under these circumstances, we have established a completely new target-forming technique, namely SCRIT (Self-Confining Radioactive isotope Ion Target) which makes electron scattering on unstable nuclei possible. A Dedicated electron scattering facility at RIKEN consists of an electron accelerator with the SCRIT system, an ERIS (Electron-beam-driven RI separator for SCRIT), and a WiSES (Window-frame Spectrometer for Electron Scattering). Feasibility test of the SCRIT and ERIS system have been successfully carried out using the stable nuclei, and more than 1026 [cm-2s-1] luminosity was already achieved. Furthermore, 132Sn, which is one of the important target at the beginning of this project, was also successfully separated in the ERIS. The WiSES with momentum resolution of Δp/p ~ 10-3 consisting of the wide acceptance dipole magnet, two set of drift chambers together with trigger scintillation hodoscope is under construction. Electron scattering on unstable nuclei will start within a year. In this talk, the introduction of our project and the progress of the preparation status will be presented.
Anthony Prenni; Kreidenweis, Sonia M.
2012-09-28
Clouds play an important role in weather and climate. In addition to their key role in the hydrologic cycle, clouds scatter incoming solar radiation and trap infrared radiation from the surface and lower atmosphere. Despite their importance, feedbacks involving clouds remain as one of the largest sources of uncertainty in climate models. To better simulate cloud processes requires better characterization of cloud microphysical processes, which can affect the spatial extent, optical depth and lifetime of clouds. To this end, we developed a new parameterization to be used in numerical models that describes the variation of ice nuclei (IN) number concentrations active to form ice crystals in mixed-phase (water droplets and ice crystals co-existing) cloud conditions as these depend on existing aerosol properties and temperature. The parameterization is based on data collected using the Colorado State University continuous flow diffusion chamber in aircraft and ground-based campaigns over a 14-year period, including data from the DOE-supported Mixed-Phase Arctic Cloud Experiment. The resulting relationship is shown to more accurately represent the variability of ice nuclei distributions in the atmosphere compared to currently used parameterizations based on temperature alone. When implemented in one global climate model, the new parameterization predicted more realistic annually averaged cloud water and ice distributions, and cloud radiative properties, especially for sensitive higher latitude mixed-phase cloud regions. As a test of the new global IN scheme, it was compared to independent data collected during the 2008 DOE-sponsored Indirect and Semi-Direct Aerosol Campaign (ISDAC). Good agreement with this new data set suggests the broad applicability of the new scheme for describing general (non-chemically specific) aerosol influences on IN number concentrations feeding mixed-phase Arctic stratus clouds. Finally, the parameterization was implemented into a regional
Parameterization and analysis of 3-D radiative transfer in clouds
Varnai, Tamas
2012-03-16
This report provides a summary of major accomplishments from the project. The project examines the impact of radiative interactions between neighboring atmospheric columns, for example clouds scattering extra sunlight toward nearby clear areas. While most current cloud models don't consider these interactions and instead treat sunlight in each atmospheric column separately, the resulting uncertainties have remained unknown. This project has provided the first estimates on the way average solar heating is affected by interactions between nearby columns. These estimates have been obtained by combining several years of cloud observations at three DOE Atmospheric Radiation Measurement (ARM) Climate Research Facility sites (in Alaska, Oklahoma, and Papua New Guinea) with simulations of solar radiation around the observed clouds. The importance of radiative interactions between atmospheric columns was evaluated by contrasting simulations that included the interactions with those that did not. This study provides lower-bound estimates for radiative interactions: It cannot consider interactions in cross-wind direction, because it uses two-dimensional vertical cross-sections through clouds that were observed by instruments looking straight up as clouds drifted aloft. Data from new DOE scanning radars will allow future radiative studies to consider the full three-dimensional nature of radiative processes. The results reveal that two-dimensional radiative interactions increase overall day-and-night average solar heating by about 0.3, 1.2, and 4.1 Watts per meter square at the three sites, respectively. This increase grows further if one considers that most large-domain cloud simulations have resolutions that cannot specify small-scale cloud variability. For example, the increases in solar heating mentioned above roughly double for a fairly typical model resolution of 1 km. The study also examined the factors that shape radiative interactions between atmospheric columns and
NASA Astrophysics Data System (ADS)
Piskozub, Jacek; Wróbel, Iwona
2016-04-01
The North Atlantic is a crucial region for both ocean circulation and the carbon cycle. Most of ocean deep waters are produced in the basin making it a large CO2 sink. The region, close to the major oceanographic centres has been well covered with cruises. This is why we have performed a study of net CO2 flux dependence upon the choice of gas transfer velocity k parameterization for this very region: the North Atlantic including European Arctic Seas. The study has been a part of a ESA funded OceanFlux GHG Evolution project and, at the same time, a PhD thesis (of I.W) funded by Centre of Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). Early results have been presented last year at EGU 2015 as a PICO presentation EGU2015-11206-1. We have used FluxEngine, a tool created within an earlier ESA funded project (OceanFlux Greenhouse Gases) to calculate the North Atlantic and global fluxes with different gas transfer velocity formulas. During the processing of the data, we have noticed that the North Atlantic results for different k formulas are more similar (in the sense of relative error) that global ones. This was true both for parameterizations using the same power of wind speed and when comparing wind squared and wind cubed parameterizations. This result was interesting because North Atlantic winds are stronger than the global average ones. Was the flux result similarity caused by the fact that the parameterizations were tuned to the North Atlantic area where many of the early cruises measuring CO2 fugacities were performed? A closer look at the parameterizations and their history showed that not all of them were based on North Atlantic data. Some of them were tuned to the South Ocean with even stronger winds while some were based on global budgets of 14C. However we have found two reasons, not reported before in the literature, for North Atlantic fluxes being more similar than global ones for different gas transfer velocity parametrizations
Phenol removal pretreatment process
Hames, Bonnie R.
2004-04-13
A process for removing phenols from an aqueous solution is provided, which comprises the steps of contacting a mixture comprising the solution and a metal oxide, forming a phenol metal oxide complex, and removing the complex from the mixture.
Krawiec, Donald F.; Kraf, Robert J.; Houser, Robert J.
1988-01-01
An apparatus for removing debris from a turbomachine. The apparatus includes housing and remotely operable viewing and grappling mechanisms for the purpose of locating and removing debris lodged between adjacent blades in a turbomachine.
Wart removers are medicines used to get rid of warts. Warts are small growths on the skin that are caused by a virus. They are usually painless. Wart remover poisoning occurs when someone swallows or uses ...
Spider veins: How are they removed? I have spider veins on my legs. What options are available ... M.D. Several options are available to remove spider veins — thin red lines or weblike networks of ...
Laparoscopic Adrenal Gland Removal
... malignant. Laparoscopic Adrenal Gland Removal What are the Advantages of Laparoscopic Adrenal Gland Removal? In the past, ... of procedure and the patients overall condition. Common advantages are: Less postoperative pain Shorter hospital stay Quicker ...
Riley, David G.; Gill, Clare A.; Herring, Andy D.; Riggs, Penny K.; Sawyer, Jason E.; Sanders, James O.
2014-01-01
Gestation length, birth weight, and weaning weight of F2 Nelore-Angus calves (n = 737) with designed extensive full-sibling and half-sibling relatedness were evaluated for association with 34,957 SNP markers. In analyses of birth weight, random relatedness was modeled three ways: 1) none, 2) random animal, pedigree-based relationship matrix, or 3) random animal, genomic relationship matrix. Detected birth weight-SNP associations were 1,200, 735, and 31 for those parameterizations respectively; each additional model refinement removed associations that apparently were a result of the built-in stratification by relatedness. Subsequent analyses of gestation length and weaning weight modeled genomic relatedness; there were 40 and 26 trait-marker associations detected for those traits, respectively. Birth weight associations were on BTA14 except for a single marker on BTA5. Gestation length associations included 37 SNP on BTA21, 2 on BTA27 and one on BTA3. Weaning weight associations were on BTA14 except for a single marker on BTA10. Twenty-one SNP markers on BTA14 were detected in both birth and weaning weight analyses. PMID:25249774
Riley, David G; Gill, Clare A; Herring, Andy D; Riggs, Penny K; Sawyer, Jason E; Sanders, James O
2014-09-01
Gestation length, birth weight, and weaning weight of F2 Nelore-Angus calves (n = 737) with designed extensive full-sibling and half-sibling relatedness were evaluated for association with 34,957 SNP markers. In analyses of birth weight, random relatedness was modeled three ways: 1) none, 2) random animal, pedigree-based relationship matrix, or 3) random animal, genomic relationship matrix. Detected birth weight-SNP associations were 1,200, 735, and 31 for those parameterizations respectively; each additional model refinement removed associations that apparently were a result of the built-in stratification by relatedness. Subsequent analyses of gestation length and weaning weight modeled genomic relatedness; there were 40 and 26 trait-marker associations detected for those traits, respectively. Birth weight associations were on BTA14 except for a single marker on BTA5. Gestation length associations included 37 SNP on BTA21, 2 on BTA27 and one on BTA3. Weaning weight associations were on BTA14 except for a single marker on BTA10. Twenty-one SNP markers on BTA14 were detected in both birth and weaning weight analyses.
NASA Astrophysics Data System (ADS)
Ganzeveld, Laurens; Lelieveld, Jos
1995-10-01
A dry deposition scheme has been developed for the chemistry general circulation model to improve the description of the removal of chemically reactive trace gases at the earth's surface. The chemistry scheme simulates background CH4-CO-NOx- HOx photochemistry and calculates concentrations of, for example, HNO3, NOx, and O3. A resistance analog is used to parameterize the dry deposition velocity for these gases. The aerodynamic resistance is calculated from the model boundary layer stability, wind speed, and surface roughness, and a quasi-laminar boundary layer resistance is incorporated. The stomatal resistance is explicitly calculated and combined with representative cuticle and mesophyll resistances for each trace gas. The new scheme contributes to internal consistency in the model, in particular with respect to diurnal and seasonal cycles in both the chemistry and the planetary boundary layer processes and surface characteristics that control dry deposition. Evaluation of the model indicates satisfactory agreement between calculated and observed deposition velocities. Comparison of the results with model simulations in which the deposition velocity was kept constant indicates significant relative differences in deposition fluxes and surface layer trace gas concentrations up to about ±35%. Shortcomings are discussed, for example, violation of the constant flux approach for the surface layer, the lacking canopy description, and effects of surface water layers.
Monte Carlo eikonal scattering
NASA Astrophysics Data System (ADS)
Gibbs, W. R.; Dedonder, J. P.
2012-08-01
Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering.Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles.Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed.Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation, and with realistic densities for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations.Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles which vary with the nuclear pairs and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm-1.
NASA Technical Reports Server (NTRS)
Schaetzel, Klaus
1989-01-01
Since the development of laser light sources and fast digital electronics for signal processing, the classical discipline of light scattering on liquid systems experienced a strong revival plus an enormous expansion, mainly due to new dynamic light scattering techniques. While a large number of liquid systems can be investigated, ranging from pure liquids to multicomponent microemulsions, this review is largely restricted to applications on Brownian particles, typically in the submicron range. Static light scattering, the careful recording of the angular dependence of scattered light, is a valuable tool for the analysis of particle size and shape, or of their spatial ordering due to mutual interactions. Dynamic techniques, most notably photon correlation spectroscopy, give direct access to particle motion. This may be Brownian motion, which allows the determination of particle size, or some collective motion, e.g., electrophoresis, which yields particle mobility data. Suitable optical systems as well as the necessary data processing schemes are presented in some detail. Special attention is devoted to topics of current interest, like correlation over very large lag time ranges or multiple scattering.
Parameterizations of Cloud Microphysics and Indirect Aerosol Effects
Tao, Wei-Kuo
2014-05-19
/hail. Each type is described by a special size distribution function containing 33 categories (bins). Atmospheric aerosols are also described using number density size-distribution functions (containing 33 bins). Droplet nucleation (activation) is derived from the analytical calculation of super-saturation, which is used to determine the sizes of aerosol particles to be activated and the corresponding sizes of nucleated droplets. Primary nucleation of each type of ice crystal takes place within certain temperature ranges. A detailed description of these explicitly parameterized processes can be found in Khain and Sednev (1996) and Khain et al. (1999, 2001). 2.3 Case Studies Three cases, a tropical oceanic squall system observed during TOGA COARE (Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment, which occurred over the Pacific Ocean warm pool from November 1992 to February 1993), a midlatitude continental squall system observed during PRESTORM (Preliminary Regional Experiment for STORM-Central, which occurred in Kansas and Oklahoma during May-June 1985), and mid-afternoon convection observed during CRYSTAL-FACE (Cirrus Regional Study of Tropical Anvils and Cirrus Layers – Florida Area Cumulus Experiment, which occurred in Florida during July 2002), will be used to examine the impact of aerosols on deep, precipitating systems. 3. SUMMARY of RESULTS • For all three cases, higher CCN produces smaller cloud droplets and a narrower spectrum. Dirty conditions delay rain formation, increase latent heat release above the freezing level, and enhance vertical velocities at higher altitude for all cases. Stronger updrafts, deeper mixed-phase regions, and more ice particles are simulated with higher CCN in good agreement with observations. • In all cases, rain reaches the ground early with lower CCN. Rain suppression is also evident in all three cases with high CCN in good agreement with observations (Rosenfeld, 1999, 2000 and others). Rain
Not Available
1988-03-31
The directive contains general policy guidelines regarding removal program priorities as it specifically relates to the 10 regional offices. Emphasis is placed on addressing the most serious public health and environmental threats (classic emergencies, time-critical removals at NPL sites, and time-critical removals at non-NPL sites). Regions are urged to pursue cleanup by the responsible parties (RP) and manage the removal program within the boundaries of their resources.
Weakly supervised glasses removal
NASA Astrophysics Data System (ADS)
Wang, Zhicheng; Zhou, Yisu; Wen, Lijie
2015-03-01
Glasses removal is an important task on face recognition, in this paper, we provide a weakly supervised method to remove eyeglasses from an input face image automatically. We choose sparse coding as face reconstruction method, and optical flow to find exact shape of glasses. We combine the two processes iteratively to remove glasses more accurately. The experimental results reveal that our method works much better than these algorithms alone, and it can remove various glasses to obtain natural looking glassless facial images.
Fiber optic probe for light scattering measurements
Nave, S.E.; Livingston, R.R.; Prather, W.S.
1993-01-01
This invention is comprised of a fiber optic probe and a method for using the probe for light scattering analyses of a sample. The probe includes a probe body with an inlet for admitting a sample into an interior sample chamber, a first optical fiber for transmitting light from a source into the chamber, and a second optical fiber for transmitting light to a detector such as a spectrophotometer. The interior surface of the probe carries a coating that substantially prevents non-scattered light from reaching the second fiber. The probe is placed in a region where the presence and concentration of an analyte of interest are to be detected, and a sample is admitted into the chamber. Exciting light is transmitted into the sample chamber by the first fiber, where the light interacts with the sample to produce Raman-scattered light. At least some of the Raman- scattered light is received by the second fiber and transmitted to the detector for analysis. Two Raman spectra are measured, at different pressures. The first spectrum is subtracted from the second to remove background effects, and the resulting sample Raman spectrum is compared to a set of stored library spectra to determine the presence and concentration of the analyte.
Fiber optic probe for light scattering measurements
Nave, Stanley E.; Livingston, Ronald R.; Prather, William S.
1995-01-01
A fiber optic probe and a method for using the probe for light scattering analyses of a sample. The probe includes a probe body with an inlet for admitting a sample into an interior sample chamber, a first optical fiber for transmitting light from a source into the chamber, and a second optical fiber for transmitting light to a detector such as a spectrophotometer. The interior surface of the probe carries a coating that substantially prevents non-scattered light from reaching the second fiber. The probe is placed in a region where the presence and concentration of an analyte of interest are to be detected, and a sample is admitted into the chamber. Exciting light is transmitted into the sample chamber by the first fiber, where the light interacts with the sample to produce Raman-scattered light. At least some of the Raman-scattered light is received by the second fiber and transmitted to the detector for analysis. Two Raman spectra are measured, at different pressures. The first spectrum is subtracted from the second to remove background effects, and the resulting sample Raman spectrum is compared to a set of stored library spectra to determine the presence and concentration of the analyte.
Presentation will discuss the state-of-art technology for removal of arsenic from drinking water. Presentation includes results of several EPA field studies on removal of arsenic from existing arsenic removal plants and key results from several EPA sponsored research studies. T...
Electromagnetic scattering theory
NASA Technical Reports Server (NTRS)
Bird, J. F.; Farrell, R. A.
1986-01-01
Electromagnetic scattering theory is discussed with emphasis on the general stochastic variational principle (SVP) and its applications. The stochastic version of the Schwinger-type variational principle is presented, and explicit expressions for its integrals are considered. Results are summarized for scalar wave scattering from a classic rough-surface model and for vector wave scattering from a random dielectric-body model. Also considered are the selection of trial functions and the variational improvement of the Kirchhoff short-wave approximation appropriate to large size-parameters. Other applications of vector field theory discussed include a general vision theory and the analysis of hydromagnetism induced by ocean motion across the geomagnetic field. Levitational force-torque in the magnetic suspension of the disturbance compensation system (DISCOS), now deployed in NOVA satellites, is also analyzed using the developed theory.
NASA Astrophysics Data System (ADS)
Gomez, Humberto
2016-06-01
The CHY representation of scattering amplitudes is based on integrals over the moduli space of a punctured sphere. We replace the punctured sphere by a double-cover version. The resulting scattering equations depend on a parameter Λ controlling the opening of a branch cut. The new representation of scattering amplitudes possesses an enhanced redundancy which can be used to fix, modulo branches, the location of four punctures while promoting Λ to a variable. Via residue theorems we show how CHY formulas break up into sums of products of smaller (off-shell) ones times a propagator. This leads to a powerful way of evaluating CHY integrals of generic rational functions, which we call the Λ algorithm.
NASA Astrophysics Data System (ADS)
Bahadur, Birendra
The following sections are included: * INTRODUCTION * CELL DESIGNING * EXPERIMENTAL OBSERVATIONS IN NEMATICS RELATED WITH DYNAMIC SCATTERING * Experimental Observations at D.C. Field and Electrode Effects * Experimental Observation at Low Frequency A.C. Fields * Homogeneously Aligned Nematic Regime * Williams Domains * Dynamic Scattering * Experimental Observation at High Frequency A.C. Field * Other Experimental Observations * THEORETICAL INTERPRETATIONS * Felici Model * Carr-Helfrich Model * D.C. Excitation * Dubois-Violette, de Gennes and Parodi Model * Low Freqency or Conductive Regime * High Frequency or Dielectric Regime * DYNAMIC SCATTERING IN SMECRIC A PHASE * ELECTRO-OPTICAL CHARACTERISTICS AND LIMITATIONS * Contrast Ratio vs. Voltage, Viewing Angle, Cell Gap, Wavelength and Temperature * Display Current vs. Voltage, Cell Gap and Temperature * Switching Time * Effect of Alignment * Effect of Conductivity, Temperature and Frequency * Addressing of DSM LCDs * Limitations of DSM LCDs * ACKNOWLEDGEMENTS * REFERENCES
ZALIZNYAK,I.A.; LEE,S.H.
2004-07-30
Much of our understanding of the atomic-scale magnetic structure and the dynamical properties of solids and liquids was gained from neutron-scattering studies. Elastic and inelastic neutron spectroscopy provided physicists with an unprecedented, detailed access to spin structures, magnetic-excitation spectra, soft-modes and critical dynamics at magnetic-phase transitions, which is unrivaled by other experimental techniques. Because the neutron has no electric charge, it is an ideal weakly interacting and highly penetrating probe of matter's inner structure and dynamics. Unlike techniques using photon electric fields or charged particles (e.g., electrons, muons) that significantly modify the local electronic environment, neutron spectroscopy allows determination of a material's intrinsic, unperturbed physical properties. The method is not sensitive to extraneous charges, electric fields, and the imperfection of surface layers. Because the neutron is a highly penetrating and non-destructive probe, neutron spectroscopy can probe the microscopic properties of bulk materials (not just their surface layers) and study samples embedded in complex environments, such as cryostats, magnets, and pressure cells, which are essential for understanding the physical origins of magnetic phenomena. Neutron scattering is arguably the most powerful and versatile experimental tool for studying the microscopic properties of the magnetic materials. The magnitude of the cross-section of the neutron magnetic scattering is similar to the cross-section of nuclear scattering by short-range nuclear forces, and is large enough to provide measurable scattering by the ordered magnetic structures and electron spin fluctuations. In the half-a-century or so that has passed since neutron beams with sufficient intensity for scattering applications became available with the advent of the nuclear reactors, they have became indispensable tools for studying a variety of important areas of modern science
Quaglioni, S; Navratil, P; Roth, R
2009-12-15
The exact treatment of nuclei starting from the constituent nucleons and the fundamental interactions among them has been a long-standing goal in nuclear physics. Above all nuclear scattering and reactions, which require the solution of the many-body quantum-mechanical problem in the continuum, represent an extraordinary theoretical as well as computational challenge for ab initio approaches.We present a new ab initio many-body approach which derives from the combination of the ab initio no-core shell model with the resonating-group method [4]. By complementing a microscopic cluster technique with the use of realistic interactions, and a microscopic and consistent description of the nucleon clusters, this approach is capable of describing simultaneously both bound and scattering states in light nuclei. We will discuss applications to neutron and proton scattering on sand light p-shell nuclei using realistic nucleon-nucleon potentials, and outline the progress toward the treatment of more complex reactions.
Ab initio parameterization of YFF1, a universal force field for drug-design applications.
Yakovenko, Olexandr Ya; Li, Yvonne Y; Oliferenko, Alexander A; Vashchenko, Ganna M; Bdzhola, Volodymyr G; Jones, Steven J M
2012-02-01
The YFF1 is a new universal molecular mechanic force field designed for drug discovery purposes. The electrostatic part of YFF1 has already been parameterized to reproduce ab initio calculated dipole and quadrupole moments. Now we report a parameterization of the van der Waals interactions (vdW) for the same atom types that were previously defined. The 6-12 Lennard-Jones potential terms were parameterized against homodimerization energies calculated at the MP2/6-31 G level of theory. The Boys-Bernardi counterpoise correction was employed to account for the basis-set superposition error. As a source of structural information we used about 2,400 neutral compounds from the ZINC2007 database. About 6,600 homodimeric configurations were generated from this dataset. A special "closure" procedure was designed to accelerate the parameters fitting. As a result, dimerization energies of small organic compounds are reproduced with an average unsigned error of 1.1 kcal mol(-1). Although the primary goal of this work was to parameterize nonbonded interactions, bonded parameters were also derived, by fitting to PM6 semiempirically optimized geometries of approximately 20,000 compounds.
Evans, J.L.; Frank, W.M.; Young, G.S.
1996-04-01
Successful simulations of the global circulation and climate require accurate representation of the properties of shallow and deep convective clouds, stable-layer clouds, and the interactions between various cloud types, the boundary layer, and the radiative fluxes. Each of these phenomena play an important role in the global energy balance, and each must be parameterized in a global climate model. These processes are highly interactive. One major problem limiting the accuracy of parameterizations of clouds and other processes in general circulation models (GCMs) is that most of the parameterization packages are not linked with a common physical basis. Further, these schemes have not, in general, been rigorously verified against observations adequate to the task of resolving subgrid-scale effects. To address these problems, we are designing a new Integrated Cumulus Ensemble and Turbulence (ICET) parameterization scheme, installing it in a climate model (CCM2), and evaluating the performance of the new scheme using data from Atmospheric Radiation Measurement (ARM) Program Cloud and Radiation Testbed (CART) sites.
1997-09-30
regions, and assimilation methods . In general, the accuracy of both short-term and long-term predictions improves signif- icantly with the assimilation of...Foundation (TAO Project). Assimilation methods for nonlinear Lagrangian processes and parameterization of turbu- lent phenomena using stochastic models are
Toward the theory of homogeneous nucleation and its parameterization for cloud models
NASA Astrophysics Data System (ADS)
Khvorostyanov, Vitaly; Sassen, Kenneth
Following the classical approach in homogeneous nucleation theory, a general but simple expression for the homogeneous freezing rate is derived with account for solution and curvature effects, and applied to the examples of ammonium sulfate and sulfuric acid solutions. After showing that this method compares well with other approaches, a parameterization suitable for use in various cloud models is discussed.
A New Approach to the Parameterization Method for Lagrangian Tori of Hamiltonian Systems
NASA Astrophysics Data System (ADS)
Villanueva, Jordi
2017-04-01
We compute invariant Lagrangian tori of analytic Hamiltonian systems by the parameterization method. Under Kolmogorov's non-degeneracy condition, we look for an invariant torus of the system carrying quasi-periodic motion with fixed frequencies. Our approach consists in replacing the invariance equation of the parameterization of the torus by three conditions which are altogether equivalent to invariance. We construct a quasi-Newton method by solving, approximately, the linearization of the functional equations defined by these three conditions around an approximate solution. Instead of dealing with the invariance error as a single source of error, we consider three different errors that take account of the Lagrangian character of the torus and the preservation of both energy and frequency. The condition of convergence reflects at which level contributes each of these errors to the total error of the parameterization. We do not require the system to be nearly integrable or to be written in action-angle variables. For nearly integrable Hamiltonians, the Lebesgue measure of the holes between invariant tori predicted by this parameterization result is of O(ɛ ^{1/2}), where ɛ is the size of the perturbation. This estimate coincides with the one provided by the KAM theorem.
A Comparison of Cumulus Parameterizations in Idealized Sea-Breeze Simulations
NASA Technical Reports Server (NTRS)
Cohen, Charles; Arnold, James E. (Technical Monitor)
2001-01-01
Four cumulus parameterizations in the Penn State-NCAR model MM5 are compared in idealized sea-breeze simulations, with the aim of discovering why they work as they do. The most realistic results appear to be those using the Kain-Fritsch scheme. Rainfall is significantly delayed with the Betts-Miller-Janjic scheme, due to the method of computing the reference sounding. This method can be corrected, but downdrafts should be added in a physically realistic manner. Even without downdrafts, a corrected version of the BMJ scheme produces nearly the same timing and location of deep convection as the KF scheme, despite the very different physics. In order to simulate the correct timing of the rainfall, a minimum amount of mass is required in the layer that is the source of a parameterized updraft. The Grell parameterization, in the present simulation, always derives the updraft from the top of the mixed layer, where vertical advection predominates over horizontal advection in increasing the moist static energy. This makes the application of the quasi-equilibrium closure more correct than it would be if the updrafts were always derived from the most unstable layer, but it evades the question of whether or not horizontal advection generates instability. Using different physics, the parameterizations produce significantly different cloud-top heights.
Representing Soil Moisture Heterogeneity in the "Super-Parameterized" Community Earth System Model
NASA Astrophysics Data System (ADS)
Kraus, P. M.; Denning, S.
2014-12-01
An approach to representing soil moisture heterogeneity in land-surface models using bins of soil moisture, advancing on the method developed by Sellers et al., 2003 is presented. Structuring land-surface models in this fashion presents a desirable structure for coupling to atmospheric models utilizing the "multi-scale modeling framework", called "super-parameterization" in the Community Earth System Model, CESM. The multi-scale modeling framework substitutes conventional cloud parameterizations with a 2-D cloud-resolving model. By considering soil moisture heterogeneity, the land-surface model is able to utilize the distribution of precipitation simulated by the cloud-resolving model in the super-parameterization, rather than it's summed total.Additionally, treatments of gravitational drainage and runoff in the binned model are proposed and assessed. This is, in general, a conceptual addition to the binned approach of Sellers et al.; but it is also particularly motivated by the fine grid resolution of the cloud-resolving model used in super-parameterization, typically 2km.Preliminary results suggest that the binned-approach improves model representation of dry-down following rain events and may help mitigate some of the excessive latent heat fluxes simulated by the standard land model in the CESM.
NASA Astrophysics Data System (ADS)
Madhulatha, A.; Rajeevan, M.
2017-01-01
Main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of mesoscale convective system (MCS) occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted by considering various planetary boundary layer, microphysics, and cumulus parameterization schemes. Performances of different schemes are evaluated by examining boundary layer, reflectivity, and precipitation features of MCS using ground-based and satellite observations. Among various physical parameterization schemes, Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce deep boundary layer height by simulating warm temperatures necessary for storm initiation; Thompson (THM) microphysics scheme is capable to simulate the reflectivity by reasonable distribution of different hydrometeors during various stages of system; Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation by proper representation of convective instability associated with MCS. Present analysis suggests that MYJ, a local turbulent kinetic energy boundary layer scheme, which accounts strong vertical mixing; THM, a six-class hybrid moment microphysics scheme, which considers number concentration along with mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme, which adjusts thermodynamic profiles based on climatological profiles might have contributed for better performance of respective model simulations. Numerical simulation carried out using the above combination of schemes is able to capture storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.
Sensitivity of Cloud Microphysics to Choice of Physics Parameterizations : a Crm Study
NASA Astrophysics Data System (ADS)
Sarangi, C.; Tripathi, S. N.
2013-12-01
Weather Research and Forecasting regional meteorological model coupled with chemistry (WRF-Chem) is being used widely to study direct and indirect effect of aerosols. The results of a numerical study on the impacts of aerosols on meteorology and microphysics depend on the accuracy with which the model parameterizes any weather conditions. This study investigates the sensitivity of simulated hydrometeors at 3 km resolution over Northern India, to different types of microphysics parameterizations, such as ETA microphysics (only warm rain processes are parameterized), LIN microphysics (single moment including ice processes) and Morrison microphysics (double moment including ice processes) and planetary boundary layer parameterizations (Yonsei and MYJ). WRF's ability to simulate the vertical and horizontal distribution of hydrometeors using cloud resolving mode is evaluated using in-situ aircraft measurements of hydrometeors during CAIPEEX campaign and cloud products from MODIS satellite. The results suggest that the model underestimates the mass concentration of hydrometeors, but can reasonably simulate the distribution of hydrometeors at most places, except a few places where the hydrometeors are modeled simulated at higher altitude than observed. Ongoing work includes aerosol impact on these simulated hydrometeors distribution.
Knight, Jennifer L; Yesselman, Joseph D; Brooks, Charles L
2013-04-30
Multipurpose atom-typer for CHARMM (MATCH), an atom-typing toolset for molecular mechanics force fields, was recently developed in our laboratory. Here, we assess the ability of MATCH-generated parameters and partial atomic charges to reproduce experimental absolute hydration free energies for a series of 457 small neutral molecules in GBMV2, Generalized Born with a smooth SWitching (GBSW), and fast analytical continuum treatment of solvation (FACTS) implicit solvent models. The quality of hydration free energies associated with small molecule parameters obtained from ParamChem, SwissParam, and Antechamber are compared. Given optimized surface tension coefficients for scaling the surface area term in the nonpolar contribution, these automated parameterization schemes with GBMV2 and GBSW demonstrate reasonable agreement with experimental hydration free energies (average unsigned errors of 0.9-1.5 kcal/mol and R(2) of 0.63-0.87). GBMV2 and GBSW consistently provide slightly more accurate estimates than FACTS, whereas Antechamber parameters yield marginally more accurate estimates than the current generation of MATCH, ParamChem, and SwissParam parameterization strategies. Modeling with MATCH libraries that are derived from different CHARMM topology and parameter files highlights the importance of having sufficient coverage of chemical space within the underlying databases of these automated schemes and the benefit of targeting specific functional groups for parameterization efforts to maximize both the breadth and the depth of the parameterized space.
A Dynamically Computed Convective Time Scale for the Kain–Fritsch Convective Parameterization Scheme
Many convective parameterization schemes define a convective adjustment time scale τ as the time allowed for dissipation of convective available potential energy (CAPE). The Kain–Fritsch scheme defines τ based on an estimate of the advective time period for deep con...
IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION FOR FINE-SCALE SIMULATIONS
The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations ( 1 - km horizontal grid spacing ). The UCP accounts for dr...
Regularized kernel PCA for the efficient parameterization of complex geological models
NASA Astrophysics Data System (ADS)
Vo, Hai X.; Durlofsky, Louis J.
2016-10-01
The use of geological parameterization procedures enables high-fidelity geomodels to be represented in terms of relatively few variables. Such parameterizations are particularly useful when the subspace representation is constructed to implicitly capture the key geological features that appear in prior geostatistical realizations. In this case, the parameterization can be used very effectively within a data assimilation framework. In this paper, we extend and apply geological parameterization techniques based on kernel principal component analysis (KPCA) for the representation of complex geomodels characterized by non-Gaussian spatial statistics. KPCA involves the application of PCA in a high-dimensional feature space and the subsequent reverse mapping of the feature-space model back to physical space. This reverse mapping, referred to as the pre-image problem, can be challenging because it (formally) involves a nonlinear minimization. In this work, a new explicit pre-image procedure, which avoids many of the problems with existing approaches, is introduced. To achieve (ensemble-level) flow responses in close agreement with those from reference geostatistical realizations, a bound-constrained, regularized version of KPCA, referred to as R-KPCA, is also introduced. R-KPCA can be viewed as a post-processing of realizations generated using KPCA. The R-KPCA representation is incorporated into an adjoint-gradient-based data assimilation procedure, and its use for history matching a complex deltaic fan system is demonstrated. Matlab code for the KPCA and R-KPCA procedures is provided online as Supplementary Material.
Creating a parameterized model of a CMOS transistor with a gate of enclosed layout
NASA Astrophysics Data System (ADS)
Vinogradov, S. M.; Atkin, E. V.; Ivanov, P. Y.
2016-02-01
The method of creating a parameterized spice model of an N-channel transistor with a gate of enclosed layout is considered. Formulas and examples of engineering calculations for use of models in the computer-aided Design environment of Cadence Vitruoso are presented. Calculations are made for the CMOS technology with 180 nm design rules of the UMC.
NASA Astrophysics Data System (ADS)
Zedler, S. E.; Niiler, P. P.; Stammer, D.; Terrill, E.; Morzel, J.
2009-04-01
The drag coefficient parameterization of wind stress is investigated for tropical storm conditions using model sensitivity studies. The Massachusetts Institute of Technology (MIT) Ocean General Circulation Model was run in a regional setting with realistic stratification and forcing fields representing Hurricane Frances, which in early September 2004 passed east of the Caribbean Leeward Island chain. The model was forced with a NOAA-HWIND wind speed product after converting it to wind stress using four different drag coefficient parameterizations. Respective model results were tested against in situ measurements of temperature profiles and velocity, available from an array of 22 surface drifters and 12 subsurface floats. Changing the drag coefficient parameterization from one that saturated at a value of 2.3 × 10-3 to a constant drag coefficient of 1.2 × 10-3 reduced the standard deviation difference between the simulated minus the measured sea surface temperature change from 0.8°C to 0.3°C. Additionally, the standard deviation in the difference between simulated minus measured high pass filtered 15-m current speed reduced from 15 cm/s to 5 cm/s. The maximum difference in sea surface temperature response when two different turbulent mixing parameterizations were implemented was 0.3°C, i.e., only 11% of the maximum change of sea surface temperature caused by the storm.
Technology Transfer Automated Retrieval System (TEKTRAN)
Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...
Random number generation from spontaneous Raman scattering
NASA Astrophysics Data System (ADS)
Collins, M. J.; Clark, A. S.; Xiong, C.; Mägi, E.; Steel, M. J.; Eggleton, B. J.
2015-10-01
We investigate the generation of random numbers via the quantum process of spontaneous Raman scattering. Spontaneous Raman photons are produced by illuminating a highly nonlinear chalcogenide glass ( As 2 S 3 ) fiber with a CW laser at a power well below the stimulated Raman threshold. Single Raman photons are collected and separated into two discrete wavelength detuning bins of equal scattering probability. The sequence of photon detection clicks is converted into a random bit stream. Postprocessing is applied to remove detector bias, resulting in a final bit rate of ˜650 kb/s. The collected random bit-sequences pass the NIST statistical test suite for one hundred 1 Mb samples, with the significance level set to α = 0.01 . The fiber is stable, robust and the high nonlinearity (compared to silica) allows for a short fiber length and low pump power favourable for real world application.
NASA Astrophysics Data System (ADS)
Basarab, B.; Fuchs, B.; Rutledge, S. A.
2013-12-01
Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare
NASA Astrophysics Data System (ADS)
Zheng, Z.; Zhang, W.; Xu, J.
2011-12-01
As a key component of the global water cycle, runoff plays an important role in earth climate system by affecting the land surface water and energy balance. Realistic runoff parameterization within land surface model (LSM) is significant for accurate land surface modeling and numerical weather and climate prediction. Hence, optimization and refinement of runoff formulation in LSM can further improve model predictive capability of surface-to-atmosphere fluxes which influences the complex interactions between the land surface and atmosphere. Moreover, the performance of runoff simulation in LSM would essential to drought and flood prediction and warning. In this study, a new runoff parameterization named XXT (Xin'anjiang x TOPMODEL) was developed by introducing the water table depth into the soil moisture storage capacity distribution curve (SMSCC) from Xin'anjiang model for surface runoff calculation improvement and then integrating with a TOPMODEL-based groundwater scheme. Several studies had already found a strong correlation between the water table depth and land surface processes. In this runoff parameterization, the dynamic variation of surface and subsurface runoff calculation is connected in a systematic way through the change of water table depth. The XXT runoff parameterization was calibrated and validated with datasets both from observation and Weather Research & Forecasting model (WRF) outputs, the results with high Nash-efficiency coefficient indicated that it has reliable capability of runoff simulation in different climate regions. After model test, the XXT runoff parameterization is coupled with the unified Noah LSM 3.2 instead of simple water balance model (SWB) in order to alleviate the runoff simulating bias which may lead to poor energy partition and evaporation. The impact of XXT is investigated through application of a whole year (1998) simulation at surface flux site of Champaign, Illinois (40.01°N, 88.37°W). The results show that Noah
NASA Astrophysics Data System (ADS)
Aronson, E. L.; Helliker, B. R.; Strode, S. A.; Pawson, S.
2011-12-01
Global soil methane consumption was estimated using multiple regression-based parameterizations by vegetation type from a meta-dataset created from 780 published methane flux measurements. The average global estimates for soil consumption by extrapolation, without taking snow cover into account, totaled 54-60 Tg annually. The parameterizations were based on air temperature and precipitation output variables reported in the literature and gathered in the meta-dataset. These variables were matched to similar ones reported in the Goddard Earth Observing System (GEOS) global climate model. The methane uptake response to increasing precipitation and temperature varied between vegetation types. The parameterizations for methane fluxes by vegetation type were included in a 20 year, free-running, tagged-methane run of the GEOS-5 model constrained by real observations of sea surface temperature. Snow cover was assumed to block methane diffusion into the soil and therefore result in zero consumption of methane in snow-covered soils. The parameterization estimates was slightly higher than previous estimates of global methane consumption, at around 37 Tg annually. The resultant global surface methane concentration was then compared to observed methane concentrations from NOAA Global Monitoring Division sites worldwide, with varying agreement. The parameterization for the vegetation type "Needleleaf Trees" predicted methane consumption in a study site located in the NJ Pinelands, which was studied in 2009. The estimate of methane consumption by the vegetation type "Broadleaf Evergreen Trees" was found to have the greatest error, which may indicate that the factors on which the parameterization was based are of minor importance in regulating methane flux within this vegetation type. The results were compared to offline runs of the parameterizations without the snow-cover compensation, which resulted in global rates of almost double the methane consumption. Since there have been
Graphitic packing removal tool
Meyers, Kurt Edward; Kolsun, George J.
1997-01-01
Graphitic packing removal tools for removal of the seal rings in one piece. he packing removal tool has a cylindrical base ring the same size as the packing ring with a surface finish, perforations, knurling or threads for adhesion to the seal ring. Elongated leg shanks are mounted axially along the circumferential center. A slit or slits permit insertion around shafts. A removal tool follower stabilizes the upper portion of the legs to allow a spanner wrench to be used for insertion and removal.
Graphitic packing removal tool
Meyers, K.E.; Kolsun, G.J.
1997-11-11
Graphitic packing removal tools for removal of the seal rings in one piece are disclosed. The packing removal tool has a cylindrical base ring the same size as the packing ring with a surface finish, perforations, knurling or threads for adhesion to the seal ring. Elongated leg shanks are mounted axially along the circumferential center. A slit or slits permit insertion around shafts. A removal tool follower stabilizes the upper portion of the legs to allow a spanner wrench to be used for insertion and removal. 5 figs.
Graphitic packing removal tool
Meyers, K.E.; Kolsun, G.J.
1996-12-31
Graphitic packing removal tools are described for removal of the seal rings in one piece from valves and pumps. The packing removal tool has a cylindrical base ring the same size as the packing ring with a surface finish, perforations, knurling or threads for adhesion to the seal ring. Elongated leg shanks are mounted axially along the circumferential center. A slit or slits permit insertion around shafts. A removal tool follower stabilizes the upper portion of the legs to allow a spanner wrench to be used for insertion and removal.
NASA Astrophysics Data System (ADS)
Gao, Yanchun; Gan, Guojing; Liu, Maofeng; Wang, Jinfeng
2016-10-01
Soil evaporation is an important component in the water and energy cycles on land, especially for areas that are moderately or densely covered by bare soil. Soil evaporation parameterizations that scale down potential evaporation with the soil surface temperature (Ts) and/or the air humidity are regionally applicable because of the advantage of omitting pixel-scale near-surface soil moisture. In this paper, we provide an intercomparison study among these parameterizations. Potential evaporation indices are estimated from the Priestley-Taylor method, the Penman method, and the mass transfer method (with or without Ts). The surface dryness indices that indicate the water availability of the soil surface are based on Ts and/or the air humidity. We establish and evaluate ten such soil evaporation parameterizations through combinations of different types of potential evaporation indices and surface dryness indices at near-instantaneous scales (30 min). The results show that incorporating the soil temperature in the surface dryness index instead of the potential evaporation index can improve soil evaporation estimations. Poorer but still reasonable estimations are achieved when only the air humidity-based surface dryness index is used. In addition, the energy balance factor is crucial in the surface dryness indices. Our study indicates that the potential evaporation indices that are based on the Penman equation are generally more useful and robust than those that are based on the Priestley-Taylor approach or the mass transfer method. However, when the surface dryness index is only based on air humidity data, the Priestley-Taylor potential evaporation index performs as well as the index that is estimated from the Penman equation. In contrast, a soil evaporation parameterization that estimates the potential evaporation through the mass transfer method (with Ts) and the surface dryness index from the soil moisture content did not perform as well as the above ten
Effect of physical parameterization schemes on track and intensity of cyclone LAILA using WRF model
NASA Astrophysics Data System (ADS)
Kanase, Radhika D.; Salvekar, P. S.
2015-08-01
The objective of the present study is to investigate in detail the sensitivity of cumulus parameterization (CP), planetary boundary layer (PBL) parameterization, microphysics parameterization (MP) on the numerical simulation of severe cyclone LAILA over Bay of Bengal using Weather Research & Forecasting (WRF) model. The initial and boundary conditions are supplied from GFS data of 1° × 1° resolution and the model is integrated in three `twoway' interactive nested domains at resolutions of 60 km, 20 km and 6.6 km. Total three sets of experiments are performed. First set of experiments include sensitivity of Cumulus Parameterization (CP) schemes, while second and third set of experiments is carried out to check the sensitivity of different PBL and Microphysics Parameterization (MP) schemes. The fourth set contains initial condition sensitivity experiments. For first three sets of experiments, 0000 UTC 17 May 2010 is used as initial condition. In CP sensitivity experiments, the track and intensity is well simulated by Betts-Miller-Janjic (BMJ) schemes. The track and intensity of LAILA is very sensitive to the representation of large scale environmental flow in CP scheme as well as to the initial vertical wind shear values. The intensity of the cyclone is well simulated by YSU scheme and it depends upon the mixing treatment in and above PBL. Concentration of frozen hydrometeors, such as graupel in WSM6 MP scheme and latent heat released during auto conversion of hydrometeors may be responsible for storm intensity. An additional set of experiments with different initial vortex intensity shows that, small differences in the initial wind fields have profound impact on both track and intensity of the cyclone. The representation of the mid-tropospheric heating in WSM6 is mainly controlled by amount of graupel hydrometeor and thus might be one of the possible causes in modulating the storm's intensity.
Semiprognostic test of the Arakawa-Schubert cumulus parameterization using simulated data
Kuan-Man Xu; Aki Arakawa )
1992-12-15
The Arakawa-Schubert (A-S) cumulus parameterization is evaluated by performing semiprognostic tests against data simulated by a cumulus ensemble model (CEM). The CEM is a two-dimensional cloud model for simulating the formation of an ensemble of cumulus clouds under prescribed large-scale conditions. Three simulations, two with vertical wind shear and one without, are performed with identical (time-varying) large-scale advective effects. Detailed comparisons were made between the results of simulation and parameterization. The results include comparisions of surface precipitation rate, apparent heat source, apparent moisture sink, updraft mass flux, and downdraft mass flux. Two different sets of tests were performed. One was the standard A-S parameterization with the cloud work function (CWF) quasi equilibrium, and the other allowed CWF nonequilibrium by accounting for the simulated time change of the CWF. The tests show that the A-S parameterization is valid despite mesoscale organization in cumulus convection. The assumption of CWF quasi equilibrium is more accurate for inputs averaged over smaller subdomain sizes that resolve some mesoscale processes. Errors due to the nondiagnostic aspect of cumulus convection are more significant for inputs averaged over larger subdomain sizes. Errors due to the inherent nondeterministic aspect of cumulus convection appear to be more significant for inputs averaged over smaller subdomain sizes. A modified A-S parameterization with a convective-scale downdraft formulation was also tested against the simulated data. The inclusion of downdrafts slightly improves the results of semiprognostic tests. The impact of downdrafts on the subcloud layer may depend significantly on the subdomain size. 20 refs., 16 figs.
NASA Astrophysics Data System (ADS)
Kumar, P.; Sokolik, I. N.; Nenes, A.
2008-09-01
Dust and black carbon aerosol have long been known to have potentially important and diverse impacts on cloud droplet formation. Most studies to date focus on the soluble fraction of such particles, and ignore interactions of the insoluble fraction with water vapor (even if known to be hydrophilic). To address this gap, we develop a new parameterization framework that considers cloud droplet formation within an ascending air parcel containing insoluble (but wettable) particles mixed with aerosol containing an appreciable soluble fraction. Activation of particles with a soluble fraction is described through well-established Köhler Theory, while the activation of hydrophilic insoluble particles is treated by "adsorption-activation" theory. In the latter, water vapor is adsorbed onto insoluble particles, the activity of which is described by a multilayer Frankel-Halsey-Hill (FHH) adsorption isotherm modified to account for particle curvature. We further develop FHH activation theory, and i) find combinations of the adsorption parameters AFHH, BFHH for which activation into cloud droplets is not possible, and, ii) express activation properties (critical supersaturation) that follow a simple power law with respect to dry particle diameter. Parameterization formulations are developed for sectional and lognormal aerosol size distribution functions. The new parameterization is tested by comparing the parameterized cloud droplet number concentration against predictions with a detailed numerical cloud model, considering a wide range of particle populations, cloud updraft conditions, water vapor condensation coefficient and FHH adsorption isotherm characteristics. The agreement between parameterization and parcel model is excellent, with an average error of 10% and R2 ~0.98.
Roy-Steiner-equation analysis of pion-nucleon scattering
NASA Astrophysics Data System (ADS)
Hoferichter, Martin; Ruiz de Elvira, Jacobo; Kubis, Bastian; Meißner, Ulf-G.
2016-04-01
We review the structure of Roy-Steiner equations for pion-nucleon scattering, the solution for the partial waves of the t-channel process ππ → N ¯ N, as well as the high-accuracy extraction of the pion-nucleon S-wave scattering lengths from data on pionic hydrogen and deuterium. We then proceed to construct solutions for the lowest partial waves of the s-channel process πN → πN and demonstrate that accurate solutions can be found if the scattering lengths are imposed as constraints. Detailed error estimates of all input quantities in the solution procedure are performed and explicit parameterizations for the resulting low-energy phase shifts as well as results for subthreshold parameters and higher threshold parameters are presented. Furthermore, we discuss the extraction of the pion-nucleon σ-term via the Cheng-Dashen low-energy theorem, including the role of isospin-breaking corrections, to obtain a precision determination consistent with all constraints from analyticity, unitarity, crossing symmetry, and pionic-atom data. We perform the matching to chiral perturbation theory in the subthreshold region and detail the consequences for the chiral convergence of the threshold parameters and the nucleon mass.
Precision Neutron Scattering Length Measurements with Neutron Interferometry
NASA Astrophysics Data System (ADS)
Huber, M. G.; Arif, M.; Jacobson, D. L.; Pushin, D. A.; Abutaleb, M. O.; Shahi, C. B.; Wietfeldt, F. E.; Black, T. C.
2011-10-01
Since its inception, single-crystal neutron interferometry has often been utilized for precise neutron scattering length, b, measurements. Scattering length data of light nuclei is particularly important in the study of few nucleon interactions as b can be predicted by two + three nucleon interaction (NI) models. As such they provide a critical test of the accuracy 2+3 NI models. Nuclear effective field theories also make use of light nuclei b in parameterizing mean-field behavior. The NIST neutron interferometer and optics facility has measured b to less than 0.8% relative uncertainty in polarized 3He and to less than 0.1% relative uncertainty in H, D, and unpolarized 3He. A neutron interferometer consists of a perfect silicon crystal machined such that there are three separate blades on a common base. Neutrons are Bragg diffracted in the blades to produce two spatially separate (yet coherent) beam paths much like an optical Mach-Zehnder interferometer. A gas sample placed in one of the beam paths of the interferometer causes a phase difference between the two paths which is proportional to b. This talk will focus on the latest scattering length measurement for n-4He which ran at NIST in Fall/Winter 2010 and is currently being analyzed.
Small Angle Neutron Scattering
Urban, Volker S
2012-01-01
Small Angle Neutron Scattering (SANS) probes structural details at the nanometer scale in a non-destructive way. This article gives an introduction to scientists who have no prior small-angle scattering knowledge, but who seek a technique that allows elucidating structural information in challenging situations that thwart approaches by other methods. SANS is applicable to a wide variety of materials including metals and alloys, ceramics, concrete, glasses, polymers, composites and biological materials. Isotope and magnetic interactions provide unique methods for labeling and contrast variation to highlight specific structural features of interest. In situ studies of a material s responses to temperature, pressure, shear, magnetic and electric fields, etc., are feasible as a result of the high penetrating power of neutrons. SANS provides statistical information on significant structural features averaged over the probed sample volume, and one can use SANS to quantify with high precision the structural details that are observed, for example, in electron microscopy. Neutron scattering is non-destructive; there is no need to cut specimens into thin sections, and neutrons penetrate deeply, providing information on the bulk material, free from surface effects. The basic principles of a SANS experiment are fairly simple, but the measurement, analysis and interpretation of small angle scattering data involves theoretical concepts that are unique to the technique and that are not widely known. This article includes a concise description of the basics, as well as practical know-how that is essential for a successful SANS experiment.
Nanowire electron scattering spectroscopy
NASA Technical Reports Server (NTRS)
Hunt, Brian D. (Inventor); Bronikowski, Michael (Inventor); Wong, Eric W. (Inventor); von Allmen, Paul (Inventor); Oyafuso, Fabiano A. (Inventor)
2009-01-01
Methods and devices for spectroscopic identification of molecules using nanoscale wires are disclosed. According to one of the methods, nanoscale wires are provided, electrons are injected into the nanoscale wire; and inelastic electron scattering is measured via excitation of low-lying vibrational energy levels of molecules bound to the nanoscale wire.
Fluorescence and Light Scattering
ERIC Educational Resources Information Center
Clarke, Ronald J.; Oprysa, Anna
2004-01-01
The aim of the mentioned experiment is to aid students in developing tactics for distinguishing between signals originating from fluorescence and light scattering. Also, the experiment provides students with a deeper understanding of the physicochemical bases of each phenomenon and shows that the techniques are actually related.
Small angle neutron scattering
NASA Astrophysics Data System (ADS)
Cousin, Fabrice
2015-10-01
Small Angle Neutron Scattering (SANS) is a technique that enables to probe the 3-D structure of materials on a typical size range lying from ˜ 1 nm up to ˜ a few 100 nm, the obtained information being statistically averaged on a sample whose volume is ˜ 1 cm3. This very rich technique enables to make a full structural characterization of a given object of nanometric dimensions (radius of gyration, shape, volume or mass, fractal dimension, specific area…) through the determination of the form factor as well as the determination of the way objects are organized within in a continuous media, and therefore to describe interactions between them, through the determination of the structure factor. The specific properties of neutrons (possibility of tuning the scattering intensity by using the isotopic substitution, sensitivity to magnetism, negligible absorption, low energy of the incident neutrons) make it particularly interesting in the fields of soft matter, biophysics, magnetic materials and metallurgy. In particular, the contrast variation methods allow to extract some informations that cannot be obtained by any other experimental techniques. This course is divided in two parts. The first one is devoted to the description of the principle of SANS: basics (formalism, coherent scattering/incoherent scattering, notion of elementary scatterer), form factor analysis (I(q→0), Guinier regime, intermediate regime, Porod regime, polydisperse system), structure factor analysis (2nd Virial coefficient, integral equations, characterization of aggregates), and contrast variation methods (how to create contrast in an homogeneous system, matching in ternary systems, extrapolation to zero concentration, Zero Averaged Contrast). It is illustrated by some representative examples. The second one describes the experimental aspects of SANS to guide user in its future experiments: description of SANS spectrometer, resolution of the spectrometer, optimization of spectrometer
NASA Astrophysics Data System (ADS)
Ardie, Wan Ahmad; Sow, Khai Shen; Tangang, Fredolin T.; Hussin, Abdul Ghapor; Mahmud, Mastura; Juneng, Liew
2012-04-01
The performance of four different cumulus parameterization schemes (CPS) in the Weather Research and Forecasting (WRF) model for simulating three heavy rainfall episodes over the southern peninsular Malaysia during the winter monsoon of 2006/2007 were examined. The modelled rainfall was compared with the 3-hourly satellite observation and objectively scored using a verification technique called the acuity-fidelity. The technique is based on minimization of a cost function that is calculated from four parameters taking into account errors in distance, time, intensity, and missed events. All simulations were made for 72 hours for the three episodes starting at 1200 UTC 17 December 2006, 1200 UTC 24 December 2006 and 1200 UTC 11 January 2007, respectively. The four different CPSs used are the new Kain-Fritsch scheme (KF2), the Betts-Miller-Janjic scheme (BMJ), the Grell-Devenyi ensemble scheme (GD) and the older Kain-Fritsch scheme (KF1). While the BMJ scheme shows some success in the second and third episodes, it shows high location errors in the first episode, leading to high acuity errors. The GD, KF2 and KF1 schemes performed poorly, although both the BMJ and GD schemes simulated the observed drastic increase of rainfall at 2100 UTC 18 December 2006 during the first episode. Overall, the KF1 and KF2 schemes produced positive biases in terms of coverage, while the GD scheme showed persistent location bias, producing a scattered line of precipitation over the eastern coastline of peninsular Malaysia. Although the BMJ scheme has better results, its poor performance for the first episode suggests that suitability of CPS may be case dependent.
A Discrete Scatterer Technique for Evaluating Electromagnetic Scattering from Trees
2016-09-01
ARL-TR-7799 ● SEP 2016 US Army Research Laboratory A Discrete Scatterer Technique for Evaluating Electromagnetic Scattering from...longer needed. Do not return it to the originator. ARL-TR-7799 ● SEP 2016 US Army Research Laboratory A Discrete Scatterer Technique ... Technique for Evaluating Electromagnetic Scattering from Trees 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S
The Classical Scattering of Waves: Some Analogies with Quantum Scattering
1992-01-01
Code . Approved for public release; distribution is unlimited. 13. Abstract (Maximum 200 words). The scattering of waves in classical physics and quantum...both areas. 92-235222’ 14. Subject Terms. IS. Number of Page. Acoustic scattering , shallow water, waveguide propagation . 27 16. Price Code . 17. Security...Numbers. The Classical Scattering of Waves: Some Analogies with Quantum Scattering Contract ,~~ ~ -V ,~Pom Element NO- 0601153N 6. Author(s). t
Quantitative coherent-scatter-computed tomography
NASA Astrophysics Data System (ADS)
Batchelar, Deidre L.; Westmore, Michael S.; Lai, Hao; Cunningham, Ian A.
1998-07-01
Conventional means of diagnosiing and assessing the progression of osteoporosis, including radiographic absorptiometry and quantitative CT, are directly or indirectly dependent upon bone density. This is, how ever, not always a reliable indicator of fracture risk. Changes in the trabecular structure and bone mineral content (BMC) are thought to provide a better indication of the change of spontaneous fractures occurring. Coherent-scatter CT (CSCT) is a technique which produces images based on the low angle (0 - 10 degrees) x-ray diffraction properties of tissue. Diffraction patterns from an object are acquired using first-generation CT geometry with a diagnostic x-ray image intensifier based system. These patterns are used to reconstruct a series of maps of the angle dependent coherent scatter cross section in a tomographic slice which are dependent upon the molecular structure of the scatterer. Hydroxyapatite has a very different cross section to that of soft tissue, and the CSCT method may, therefore, form the basis for a more direct measure of BMC. Our original CSCT images suffered from a 'cupping' artifact, resulting in increased intensities for pixels at the periphery of the object. This artifact, which is due to self-attenuation of scattered x rays, caused a systematic error of up to 20% in cross-sections measured from a CT image. This effect has been removed by monitoring the transmitted intensity using a photodiode mounted on the primary beam stop, and normalizing the scatter intensity to that of the transmitted beam for each projection. Images reconstructed from data normalized in this way do not exhibit observable attenuation artifacts. Elimination of this artifact enables the determination of accurate quantitative measures of BMC at each pixel in a tomograph.
Integrated Raman and angular scattering of single biological cells
NASA Astrophysics Data System (ADS)
Smith, Zachary J.
2009-12-01
Raman, or inelastic, scattering and angle-resolved elastic scattering are two optical processes that have found wide use in the study of biological systems. Raman scattering quantitatively reports on the chemical composition of a sample by probing molecular vibrations, while elastic scattering reports on the morphology of a sample by detecting structure-induced coherent interference between incident and scattered light. We present the construction of a multimodal microscope platform capable of gathering both elastically and inelastically scattered light from a 38 mum2 region in both epi- and trans-illumination geometries. Simultaneous monitoring of elastic and inelastic scattering from a microscopic region allows noninvasive characterization of a living sample without the need for exogenous dyes or labels. A sample is illuminated either from above or below with a focused 785 nm TEM00 mode laser beam, with elastic and inelastic scattering collected by two separate measurement arms. The measurements may be made either simultaneously, if identical illumination geometries are used, or sequentially, if the two modalities utilize opposing illumination paths. In the inelastic arm, Stokes-shifted light is dispersed by a spectrograph onto a CCD array. In the elastic scattering collection arm, a relay system images the microscope's back aperture onto a CCD detector array to yield an angle-resolved elastic scattering pattern. Post-processing of the inelastic scattering to remove fluorescence signals yields high quality Raman spectra that report on the sample's chemical makeup. Comparison of the elastically scattered pupil images to generalized Lorenz-Mie theory yields estimated size distributions of scatterers within the sample. In this thesis we will present validations of the IRAM instrument through measurements performed on single beads of a few microns in size, as well as on ensembles of sub-micron particles of known size distributions. The benefits and drawbacks of the
Berg, L. K.; Shrivastava, M.; Easter, R. C.; ...
2015-02-24
A new treatment of cloud effects on aerosol and trace gases within parameterized shallow and deep convection, and aerosol effects on cloud droplet number, has been implemented in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) version 3.2.1 that can be used to better understand the aerosol life cycle over regional to synoptic scales. The modifications to the model include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convectivemore » cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain–Fritsch (KF) cumulus parameterization that has been modified to better represent shallow convective clouds. Testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS). The simulation results are used to investigate the impact of cloud–aerosol interactions on regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as –50% when cloud–aerosol interactions are considered (due largely to wet removal), or as large as +40% for sulfate under non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem are found to account for changes in the cloud droplet number concentration (CDNC) and changes in the chemical composition of cloud droplet residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to the latest version of WRF-Chem, and it
Berg, L. K.; Shrivastava, M.; Easter, R. C.; Fast, J. D.; Chapman, E. G.; Liu, Y.; Ferrare, R. A.
2015-02-24
A new treatment of cloud effects on aerosol and trace gases within parameterized shallow and deep convection, and aerosol effects on cloud droplet number, has been implemented in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) version 3.2.1 that can be used to better understand the aerosol life cycle over regional to synoptic scales. The modifications to the model include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain–Fritsch (KF) cumulus parameterization that has been modified to better represent shallow convective clouds. Testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS). The simulation results are used to investigate the impact of cloud–aerosol interactions on regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as –50% when cloud–aerosol interactions are considered (due largely to wet removal), or as large as +40% for sulfate under non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem are found to account for changes in the cloud droplet number concentration (CDNC) and changes in the chemical composition of cloud droplet residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to the latest version of WRF-Chem, and it is
FDTD scattered field formulation for scatterers in stratified dispersive media.
Olkkonen, Juuso
2010-03-01
We introduce a simple scattered field (SF) technique that enables finite difference time domain (FDTD) modeling of light scattering from dispersive objects residing in stratified dispersive media. The introduced SF technique is verified against the total field scattered field (TFSF) technique. As an application example, we study surface plasmon polariton enhanced light transmission through a 100 nm wide slit in a silver film.
Angle resolved scatter measurement of bulk scattering in transparent ceramics
NASA Astrophysics Data System (ADS)
Sharma, Saurabh; Miller, J. Keith; Shori, Ramesh K.; Goorsky, Mark S.
2015-02-01
Bulk scattering in polycrystalline laser materials (PLM), due to non-uniform refractive index across the bulk, is regarded as the primary loss mechanism leading to degradation of laser performance with higher threshold and lower output power. The need for characterization techniques towards identifying bulk scatter and assessing the quality. Assessment of optical quality and the identification of bulk scatter have been by simple visual inspection of thin samples of PLMs, thus making the measurements highly subjective and inaccurate. Angle Resolved Scatter (ARS) measurement allows for the spatial mapping of scattered light at all possible angles about a sample, mapping the intensity for both forward scatter and back-scatter regions. The cumulative scattered light intensity, in the forward scatter direction, away from the specular beam is used for the comparison of bulk scattering between samples. This technique employ the detection of scattered light at all angles away from the specular beam directions and represented as a 2-D polar map. The high sensitivity of the ARS technique allows us to compare bulk scattering in different PLM samples which otherwise had similar transmitted beam wavefront distortions.
Laser multi-spectral polarimetric diffuse-scatter imaging
NASA Astrophysics Data System (ADS)
Wang, Yang
Laser multi-spectral polarimetric diffuse scatter (LAMPODS) imaging is an approach that maps an object intrinsic optical scattering properties rather than the scattered light intensity like in conventional imaging. The technique involves comprehensive measurements of the object scattering response function that is to be parameterized with respect to wavelength, polarization, and angular scattering distribution. The LAMPODS images are mappings of the derived parameters, which are more fundamental than conventional images. The LAMPODS imaging system was built based on a system architecture design configured similarly to an optical wireless network that allows multiple communication connections simultaneously among any number of transmitters and receivers. The imaging system was implemented into several sets of experimental apparatuses that can detect Stokes vectors of backward and forward scattered light with laser sources at seven near infrared (NIR) wavelengths and a continuous mid-infrared (mid-IR) spectral range for both macroscopic and microscopic scan imaging applications. The system components, such as transmitters, receivers, image scan unit, and the data acquisition module, were built and/or tested to match the system-design requirements, which involved many optical, opto-mechanical, electronic, and computer programming/interfacing techniques and skills. The experiments performed include the study on the LAMPODS capability with isolated aspects of scattering response, and the test of LAMPODS on uncontrolled subjects. With special-made targets, the results indicate that the LAMPODS system can distinguish consistently the four produced random surface roughnesses, regardless of the subjects? Spectroscopic signature, and can separate the spectroscopic features independently. Various natural and man-made targets were tested to challenge the LAMPODS system capability and found many interesting features regarding spectral response, polarimetric response, and
Varela, Solmar; Medina, Ernesto; López, Floralba; Mujica, Vladimiro
2014-01-08
We analyze single scattering of unpolarized photoelectrons through a monolayer of chiral molecules modeled by a continuous hardcore helix and spin-orbit coupling. The molecular helix is represented by an optical contact potential containing a non-hermitian component describing inelastic events. Transmitted photoelectrons are transversely polarized at optimal angles, and separated into up and down spin with up to 20% efficiency. Such a process involves the interference of both spin-orbit and inelastic strengths, that are parameterized quantitatively to recent experiments in chiral self-assembled monolayers (SAMs). The structure factor of the model chiral molecule shows the energy dependence of the differential cross section which decays strongly as energy increases. Larger incident momenta reduce axial deviations from the forward direction and the spin-orbit interaction becomes less effective. Transverse electron polarization is then restricted to a characteristic energy window.
Impact of model structure and parameterization on Penman-Monteith type evaporation models
NASA Astrophysics Data System (ADS)
Ershadi, A.; McCabe, M. F.; Evans, J. P.; Wood, E. F.
2015-06-01
The impact of model structure and parameterization on the estimation of evaporation is investigated across a range of Penman-Monteith type models. To examine the role of model structure on flux retrievals, three different retrieval schemes are compared. The schemes include a traditional single-source Penman-Monteith model (Monteith, 1965), a two-layer model based on Shuttleworth and Wallace (1985) and a three-source model based on Mu et al. (2011). To assess the impact of parameterization choice on model performance, a number of commonly used formulations for aerodynamic and surface resistances were substituted into the different formulations. Model response to these changes was evaluated against data from twenty globally distributed FLUXNET towers, representing a cross-section of biomes that include grassland, cropland, shrubland, evergreen needleleaf forest and deciduous broadleaf forest. Scenarios based on 14 different combinations of model structure and parameterization were ranked based on their mean value of Nash-Sutcliffe Efficiency. Results illustrated considerable variability in model performance both within and between biome types. Indeed, no single model consistently outperformed any other when considered across all biomes. For instance, in grassland and shrubland sites, the single-source Penman-Monteith model performed the best. In croplands it was the three-source Mu model, while for evergreen needleleaf and deciduous broadleaf forests, the Shuttleworth-Wallace model rated highest. Interestingly, these top ranked scenarios all shared the simple lookup-table based surface resistance parameterization of Mu et al. (2011), while a more complex Jarvis multiplicative method for surface resistance produced lower ranked simulations. The highly ranked scenarios mostly employed a version of the Thom (1975) formulation for aerodynamic resistance that incorporated dynamic values of roughness parameters. This was true for all cases except over deciduous broadleaf
NASA Astrophysics Data System (ADS)
Li, F.; Zeng, X. D.; Levis, S.
2012-07-01
A process-based fire parameterization of intermediate complexity has been developed for global simulations in the framework of a Dynamic Global Vegetation Model (DGVM) in an Earth System Model (ESM). Burned area in a grid cell is estimated by the product of fire counts and average burned area of a fire. The scheme comprises three parts: fire occurrence, fire spread, and fire impact. In the fire occurrence part, fire counts rather than fire occurrence probability are calculated in order to capture the observed high burned area fraction in areas of high fire frequency and realize parameter calibration based on MODIS fire counts product. In the fire spread part, post-fire region of a fire is assumed to be elliptical in shape. Mathematical properties of ellipses and some mathematical derivations are applied to improve the equation and assumptions of an existing fire spread parameterization. In the fire impact part, trace gas and aerosol emissions due to biomass burning are estimated, which offers an interface with atmospheric chemistry and aerosol models in ESMs. In addition, flexible time-step length makes the new fire parameterization easily applied to various DGVMs. Global performance of the new fire parameterization is assessed by using an improved version of the Community Land Model version 3 with the Dynamic Global Vegetation Model (CLM-DGVM). Simulations are compared against the latest satellite-based Global Fire Emission Database version 3 (GFED3) for 1997-2004. Results show that simulated global totals and spatial patterns of burned area and fire carbon emissions, regional totals and spreads of burned area, global annual burned area fractions for various vegetation types, and interannual variability of burned area are reasonable, and closer to GFED3 than CLM-DGVM simulations with the commonly used Glob-FIRM fire parameterization and the old fire module of CLM-DGVM. Furthermore, average error of simulated trace gas and aerosol emissions due to biomass burning
Sensitivity of the recent methane budget to LMDz sub-grid scale physical parameterizations
NASA Astrophysics Data System (ADS)
Locatelli, R.; Bousquet, P.; Saunois, M.; Chevallier, F.; Cressot, C.
2015-04-01
With the densification of surface observing networks and the development of remote sensing of greenhouse gases from space, estimations of methane (CH4) sources and sinks by inverse modelling face new challenges. Indeed, the chemical transport model used to link the flux space with the mixing ratio space must be able to represent these different types of constraints for providing consistent flux estimations. Here we quantify the impact of sub-grid scale physical parameterization errors on the global methane budget inferred by inverse modelling using the same inversion set-up but different physical parameterizations within one chemical-transport model. Two different schemes for vertical diffusion, two others for deep convection, and one additional for thermals in the planetary boundary layer are tested. Different atmospheric methane datasets are used as constraints (surface observations or satellite retrievals). At the global scale, methane emissions differ, on average, from 4.1 Tg CH4 per year due to the use of different sub-grid scale parameterizations. Inversions using satellite total-column retrieved by GOSAT satellite are less impacted, at the global scale, by errors in physical parameterizations. Focusing on large-scale atmospheric transport, we show that inversions using the deep convection scheme of Emanuel (1991) derive smaller interhemispheric gradient in methane emissions. At regional scale, the use of different sub-grid scale parameterizations induces uncertainties ranging from 1.2 (2.7%) to 9.4% (14.2%) of methane emissions in Africa and Eurasia Boreal respectively when using only surface measurements from the background (extended) surface network. When using only satellite data, we show that the small biases found in inversions using GOSAT-CH4 data and a coarser version of the transport model were actually masking a poor representation of the stratosphere-troposphere gradient in the model. Improving the stratosphere-troposphere gradient reveals a larger
Scattering of fermions by gravitons
NASA Astrophysics Data System (ADS)
Ulhoa, S. C.; Santos, A. F.; Khanna, Faqir C.
2017-04-01
The interaction between gravitons and fermions is investigated in the teleparallel gravity. The scattering of fermions and gravitons in the weak field approximation is analyzed. The transition amplitudes of M\\varnothing ller, Compton and new gravitational scattering are calculated.
Interface scattering in polycrystalline thermoelectrics
Popescu, Adrian; Haney, Paul M.
2014-03-28
We study the effect of electron and phonon interface scattering on the thermoelectric properties of disordered, polycrystalline materials (with grain sizes larger than electron and phonons' mean free path). Interface scattering of electrons is treated with a Landauer approach, while that of phonons is treated with the diffuse mismatch model. The interface scattering is embedded within a diffusive model of bulk transport, and we show that, for randomly arranged interfaces, the overall system is well described by effective medium theory. Using bulk parameters similar to those of PbTe and a square barrier potential for the interface electron scattering, we identify the interface scattering parameters for which the figure of merit ZT is increased. We find the electronic scattering is generally detrimental due to a reduction in electrical conductivity; however, for sufficiently weak electronic interface scattering, ZT is enhanced due to phonon interface scattering.
Acoustic bubble removal method
NASA Technical Reports Server (NTRS)
Trinh, E. H.; Elleman, D. D.; Wang, T. G. (Inventor)
1983-01-01
A method is described for removing bubbles from a liquid bath such as a bath of molten glass to be used for optical elements. Larger bubbles are first removed by applying acoustic energy resonant to a bath dimension to drive the larger bubbles toward a pressure well where the bubbles can coalesce and then be more easily removed. Thereafter, submillimeter bubbles are removed by applying acoustic energy of frequencies resonant to the small bubbles to oscillate them and thereby stir liquid immediately about the bubbles to facilitate their breakup and absorption into the liquid.
FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data
Richard C. J. Somerville
2009-02-27
Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
NASA Technical Reports Server (NTRS)
Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.
2000-01-01
The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSUINCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study and the latter two have been improved significantly to extend their capabilities.
Energy dependence of scatter components in multispectral PET imaging.
Bentourkia, M; Msaki, P; Cadorette, J; Lecomte, R
1995-01-01
High resolution images in PET based on small individual detectors are obtained at the cost of low sensitivity and increased detector scatter. These limitations can be partially overcome by enlarging discrimination windows to include more low-energy events and by developing more efficient energy-dependent methods to correct for scatter radiation from all sources. The feasibility of multispectral scatter correction was assessed by decomposing response functions acquired in multiple energy windows into four basic components: object, collimator and detector scatter, and trues. The shape and intensity of these components are different and energy-dependent. They are shown to contribute to image formation in three ways: useful (true), potentially useful (detector scatter), and undesirable (object and collimator scatter) information to the image over the entire energy range. With the Sherbrooke animal PET system, restoration of detector scatter in every energy window would allow nearly 90% of all detected events to participate in image formation. These observations suggest that multispectral acquisition is a promising solution for increasing sensitivity in high resolution PET. This can be achieved without loss of image quality if energy-dependent methods are made available to preserve useful events as potentially useful events are restored and undesirable events removed.
Review and current status of SPECT scatter correction
NASA Astrophysics Data System (ADS)
Hutton, Brian F.; Buvat, Irène; Beekman, Freek J.
2011-07-01
Detection of scattered gamma quanta degrades image contrast and quantitative accuracy of single-photon emission computed tomography (SPECT) imaging. This paper reviews methods to characterize and model scatter in SPECT and correct for its image degrading effects, both for clinical and small animal SPECT. Traditionally scatter correction methods were limited in accuracy, noise properties and/or generality and were not very widely applied. For small animal SPECT, these approximate methods of correction are often sufficient since the fraction of detected scattered photons is small. This contrasts with patient imaging where better accuracy can lead to significant improvement of image quality. As a result, over the last two decades, several new and improved scatter correction methods have been developed, although often at the cost of increased complexity and computation time. In concert with (i) the increasing number of energy windows on modern SPECT systems and (ii) excellent attenuation maps provided in SPECT/CT, some of these methods give new opportunities to remove degrading effects of scatter in both standard and complex situations and therefore are a gateway to highly quantitative single- and multi-tracer molecular imaging with improved noise properties. Widespread implementation of such scatter correction methods, however, still requires significant effort.
Coherent Scatter Imaging Measurements
NASA Astrophysics Data System (ADS)
Ur Rehman, Mahboob
In conventional radiography, anatomical information of the patients can be obtained, distinguishing different tissue types, e.g. bone and soft tissue. However, it is difficult to obtain appreciable contrast between two different types of soft tissues. Instead, coherent x-ray scattering can be utilized to obtain images which can differentiate between normal and cancerous cells of breast. An x-ray system using a conventional source and simple slot apertures was tested. Materials with scatter signatures that mimic breast cancer were buried in layers of fat of increasing thickness and imaged. The result showed that the contrast and signal to noise ratio (SNR) remained high even with added fat layers and short scan times.
Syzygies probing scattering amplitudes
NASA Astrophysics Data System (ADS)
Chen, Gang; Liu, Junyu; Xie, Ruofei; Zhang, Hao; Zhou, Yehao
2016-09-01
We propose a new efficient algorithm to obtain the locally minimal generating set of the syzygies for an ideal, i.e. a generating set whose proper subsets cannot be generating sets. Syzygy is a concept widely used in the current study of scattering amplitudes. This new algorithm can deal with more syzygies effectively because a new generation of syzygies is obtained in each step and the irreducibility of this generation is also verified in the process. This efficient algorithm can also be applied in getting the syzygies for the modules. We also show a typical example to illustrate the potential application of this method in scattering amplitudes, especially the Integral-By-Part(IBP) relations of the characteristic two-loop diagrams in the Yang-Mills theory.
Cable, J.W.
1987-01-01
The diffuse scattering of neutrons from magnetic materials provides unique and important information regarding the spatial correlations of the atoms and the spins. Such measurements have been extensively applied to magnetically ordered systems, such as the ferromagnetic binary alloys, for which the observed correlations describe the magnetic moment fluctuations associated with local environment effects. With the advent of polarization analysis, these techniques are increasingly being applied to study disordered paramagnetic systems such as the spin-glasses and the diluted magnetic semiconductors. The spin-pair correlations obtained are essential in understanding the exchange interactions of such systems. In this paper, we describe recent neutron diffuse scattering results on the atom-pair and spin-pair correlations in some of these disordered magnetic systems. 56 refs.
Vernon, M.F.
1983-07-01
The molecular-beam technique has been used in three different experimental arrangements to study a wide range of inter-atomic and molecular forces. Chapter 1 reports results of a low-energy (0.2 kcal/mole) elastic-scattering study of the He-Ar pair potential. The purpose of the study was to accurately characterize the shape of the potential in the well region, by scattering slow He atoms produced by expanding a mixture of He in N/sub 2/ from a cooled nozzle. Chapter 2 contains measurements of the vibrational predissociation spectra and product translational energy for clusters of water, benzene, and ammonia. The experiments show that most of the product energy remains in the internal molecular motions. Chapter 3 presents measurements of the reaction Na + HCl ..-->.. NaCl + H at collision energies of 5.38 and 19.4 kcal/mole. This is the first study to resolve both scattering angle and velocity for the reaction of a short lived (16 nsec) electronic excited state. Descriptions are given of computer programs written to analyze molecular-beam expansions to extract information characterizing their velocity distributions, and to calculate accurate laboratory elastic-scattering differential cross sections accounting for the finite apparatus resolution. Experimental results which attempted to determine the efficiency of optically pumping the Li(2/sup 2/P/sub 3/2/) and Na(3/sup 2/P/sub 3/2/) excited states are given. A simple three-level model for predicting the steady-state fraction of atoms in the excited state is included.
2010-09-01
we refer to the linear polarization as parallel if the polarization vector is in the scattering plane or perpendicular if the polarization vector is...obvious that the different polarization states can all be represented as linear combinations of any of the independent pairs of polarization states...J.C. (1976) “Improvement of underwater visibility by reduction of backscatter with a circular polarization technique, Applied Optics, 6, 321-330
NASA Astrophysics Data System (ADS)
Bonelli, G.; Bonora, L.; Nesti, F.; Tomasiello, A.; Terna, S.
This is a review of some recent developments in the study of classical solutions of Yang-Mills theories in various dimensions and their significance in the path integral of the corresponding theories. These particular solutions are called instantons because of their kinship with ordinary instantons. Just as ordinary instantons interpolate between different vacua, the new instantons interpolate between different asymptotic states. Therefore they represent scattering phenomena. Here we review the two dimensional and four dimensional Yang-Mills case.
Inverse Scattering and Tomography
1989-11-27
404. [3] J. Duchon, Interpolation des Fonctions de Deux Variables Suivant le Principe de la Flexion des Plaques Minces, RAIRO Analyse Numerique, 10...d’Interpolation des Fonctions de Pusleurs Variables par les D M-splines, RAIRO Analyse Numerigue 12 (1978), 325 - 334. [6] R. Franke, Scattered data...splines, RAIRO Analyse Numerigue 12 (1978), 325-334. [7] I. M. Gelfand and N. Ya. Vilenkin, Generalized Functions, Vol. 4, Academic Press, New York
Predicting X-ray diffuse scattering from translation-libration-screw structural ensembles.
Van Benschoten, Andrew H; Afonine, Pavel V; Terwilliger, Thomas C; Wall, Michael E; Jackson, Colin J; Sauter, Nicholas K; Adams, Paul D; Urzhumtsev, Alexandre; Fraser, James S
2015-08-01
Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier's equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation-libration-screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls_as_xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. These methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis.
Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles
Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.; Wall, Michael E.; Jackson, Colin J.; Sauter, Nicholas K.; Adams, Paul D.; Urzhumtsev, Alexandre; Fraser, James S.
2015-07-28
Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier's equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls_as_xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. These methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis.
Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles
Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.; ...
2015-07-28
Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier'smore » equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls_as_xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. These methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis.« less
Neutron scattering in Australia
Knott, R.B.
1994-12-31
Neutron scattering techniques have been part of the Australian scientific research community for the past three decades. The High Flux Australian Reactor (HIFAR) is a multi-use facility of modest performance that provides the only neutron source in the country suitable for neutron scattering. The limitations of HIFAR have been recognized and recently a Government initiated inquiry sought to evaluate the future needs of a neutron source. In essence, the inquiry suggested that a delay of several years would enable a number of key issues to be resolved, and therefore a more appropriate decision made. In the meantime, use of the present source is being optimized, and where necessary research is being undertaken at major overseas neutron facilities either on a formal or informal basis. Australia has, at present, a formal agreement with the Rutherford Appleton Laboratory (UK) for access to the spallation source ISIS. Various aspects of neutron scattering have been implemented on HIFAR, including investigations of the structure of biological relevant molecules. One aspect of these investigations will be presented. Preliminary results from a study of the interaction of the immunosuppressant drug, cyclosporin-A, with reconstituted membranes suggest that the hydrophobic drug interdigitated with lipid chains.
NASA Astrophysics Data System (ADS)
Xie, Ya-Ming; Ji, Xia
Nowadays, with the development of technology, particles with size at nanoscale have been synthesized in experiments. It is noticed that anisotropy is an unavoidable problem in the production of nanospheres. Besides, nonspherical nanoparticles have also been extensively used in experiments. Comparing with spherical model, spheroidal model can give a better description for the characteristics of nonspherical particles. Thus the study of analytical solution for light scattering by spheroidal particles has practical implications. By expanding incident, scattered, and transmitted electromagnetic fields in terms of appropriate vector spheroidal wave functions, an analytic solution is obtained to the problem of light scattering by spheroids. Unknown field expansion coefficients can be determined with the combination of boundary conditions and rotational-translational addition theorems for vector spheroidal wave functions. Based on the theoretical derivation, a Fortran code has been developed to calculate the extinction cross section and field distribution, whose results agree well with those obtain by FDTD simulation. This research is supported by the National Natural Science Foundation of China No. 91230203.
Nanowire Electron Scattering Spectroscopy
NASA Technical Reports Server (NTRS)
Hunt, Brian; Bronikowsky, Michael; Wong, Eric; VonAllmen, Paul; Oyafuso, Fablano
2009-01-01
Nanowire electron scattering spectroscopy (NESS) has been proposed as the basis of a class of ultra-small, ultralow-power sensors that could be used to detect and identify chemical compounds present in extremely small quantities. State-of-the-art nanowire chemical sensors have already been demonstrated to be capable of detecting a variety of compounds in femtomolar quantities. However, to date, chemically specific sensing of molecules using these sensors has required the use of chemically functionalized nanowires with receptors tailored to individual molecules of interest. While potentially effective, this functionalization requires labor-intensive treatment of many nanowires to sense a broad spectrum of molecules. In contrast, NESS would eliminate the need for chemical functionalization of nanowires and would enable the use of the same sensor to detect and identify multiple compounds. NESS is analogous to Raman spectroscopy, the main difference being that in NESS, one would utilize inelastic scattering of electrons instead of photons to determine molecular vibrational energy levels. More specifically, in NESS, one would exploit inelastic scattering of electrons by low-lying vibrational quantum states of molecules attached to a nanowire or nanotube.
The interpretation of remotely sensed cloud properties from a model parameterization perspective
1995-09-01
The goals of ISCCP and FIRE are, broadly speaking, to provide methods for the retrieval of cloud properties from satellites, and to improve cloud radiation models and the parameterization of clouds in GCMs. This study suggests a direction for GCM cloud parameterizations based on analysis of Landsat and ISCCP satellite data. For low level single layer clouds it is found that the mean retrieved liquid water pathe in cloudy pixels is essentially invariant to the cloud fraction, at least in the range 0.2 - 0.8. This result is very important since it allows the cloud fraction to be estimated if the mean liquid water path of cloud in a general circulation model gridcell is known. 3 figs.
An efficient numerical model for hydrodynamic parameterization in 2D fractured dual-porosity media
NASA Astrophysics Data System (ADS)
Fahs, Hassane; Hayek, Mohamed; Fahs, Marwan; Younes, Anis
2014-01-01
This paper presents a robust and efficient numerical model for the parameterization of the hydrodynamic in fractured porous media. The developed model is based upon the refinement indicators algorithm for adaptive multi-scale parameterization. For each level of refinement, the Levenberg-Marquardt method is used to minimize the difference between the measured and predicted data that are obtained by solving the direct problem with the mixed finite element method. Sensitivities of state variables with respect to the parameters are calculated by the sensitivity method. The adjoint-state method is used to calculate the local gradients of the objective function necessary for the computation of the refinement indicators. Validity and efficiency of the proposed model are demonstrated by means of several numerical experiments. The developed numerical model provides encouraging results, even for noisy data and/or with a reduced number of measured heads.
Parameterization of a geometrical reaction time model for two beam nacelle lidars
NASA Astrophysics Data System (ADS)
Beuth, Thorsten; Fox, Maik; Stork, Wilhelm
2015-09-01
The reaction time model is briefly reintroduced as published in a previous publication to explain the restrictions of detecting a horizontal homogenous wind field by two beams of a LiDAR placed on a wind turbine's nacelle. The model is parameterized to get more general statements for a beneficial system design concept. This approach is based on a parameterization towards the rotor disc radius R. All other parameters, whether they are distances like the measuring length or velocities like the cut-out wind speed, can be expressed by the rotor disc radius R. A review of state-of-the-art commercially available wind turbines and their size and rotor diameter is given to estimate the minimum measuring distances that will benefit most wind turbine systems in present as well as in the near future. In the end, the requirements are matched against commercially available LiDARs to show the necessity to advance such systems.