Controllers, observers, and applications thereof
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)
2011-01-01
Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
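The bandwidth parameterization of the extended state observer (ESO) that underlies ADRC can be illustrated with a minimal sketch: a double-integrator plant with input gain b0 and an unknown constant disturbance, observed by a third-order linear ESO whose gains all derive from a single bandwidth ωo and which is discretized here with explicit Euler. This is an illustrative sketch under those assumptions, not the patented DESO/GESO formulations.

```python
import numpy as np

def eso_step(z, y, u, b0, wo, dt):
    """One explicit-Euler step of a linear 3rd-order extended state observer.
    z = [x1_hat, x2_hat, f_hat]; f_hat tracks the total disturbance.
    Gains come from the one-parameter bandwidth tuning: [3wo, 3wo^2, wo^3]."""
    l1, l2, l3 = 3.0*wo, 3.0*wo**2, wo**3
    e = y - z[0]
    dz = np.array([z[1] + l1*e,
                   z[2] + b0*u + l2*e,
                   l3*e])
    return z + dt*dz

# toy double-integrator plant x'' = b0*u + f with unknown constant disturbance f
dt, b0, wo = 1e-3, 1.0, 50.0
f_true = 2.0
x = np.zeros(2)           # true plant state [position, velocity]
z = np.zeros(3)           # observer state
for _ in range(5000):     # 5 s of simulated time
    u = 0.0               # open loop: we only watch the observer converge
    z = eso_step(z, x[0], u, b0, wo, dt)
    x = x + dt*np.array([x[1], b0*u + f_true])
print(z[2])               # f_hat converges toward f_true = 2.0
```

With all three observer poles placed at -ωo, tuning reduces to choosing one number, which is the simplification the abstract refers to.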
NASA Astrophysics Data System (ADS)
Fangohr, Susanne; Woolf, David K.
2007-06-01
One of the dominant sources of uncertainty in the calculation of air-sea flux of carbon dioxide on a global scale originates from the various parameterizations of the gas transfer velocity, k, that are in use. Whilst it is undisputed that most of these parameterizations have shortcomings and neglect processes which influence air-sea gas exchange but do not scale with wind speed alone, there is no general agreement about their relative accuracy. The most widely used parameterizations are based on non-linear functions of wind speed and, to a lesser extent, on sea surface temperature and salinity. Processes such as surface film damping and whitecapping are known to have an effect on air-sea exchange. More recently published parameterizations use friction velocity, sea surface roughness, and significant wave height. These new parameters can account to some extent for processes such as film damping and whitecapping and could potentially explain the spread of wind-speed-based transfer velocities published in the literature. We combine some of the principles of two recently published k parameterizations [Glover, D.M., Frew, N.M., McCue, S.J. and Bock, E.J., 2002. A multiyear time series of global gas transfer velocity from the TOPEX dual frequency, normalized radar backscatter algorithm. In: Donelan, M.A., Drennan, W.M., Saltzman, E.S., and Wanninkhof, R. (Eds.), Gas Transfer at Water Surfaces, Geophys. Monograph 127. AGU, Washington, DC, 325-331; Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. Tellus, 57B: 87-94] to calculate k as the sum of a linear function of total mean square slope of the sea surface and a wave breaking parameter. This separates contributions from direct and bubble-mediated gas transfer as suggested by Woolf [Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. 
Tellus, 57B: 87-94] and allows us to quantify contributions from these two processes independently. We then apply our parameterization to a monthly TOPEX altimeter gridded 1.5° × 1.5° data set and compare our results to transfer velocities calculated using the popular wind-based k parameterizations by Wanninkhof [Wanninkhof, R., 1992. Relationship between wind speed and gas exchange over the ocean. J. Geophys. Res., 97: 7373-7382.] and Wanninkhof and McGillis [Wanninkhof, R. and McGillis, W., 1999. A cubic relationship between air-sea CO2 exchange and wind speed. Geophys. Res. Lett., 26(13): 1889-1892]. We show that despite good agreement of the globally averaged transfer velocities, global and regional fluxes differ by up to 100%. These discrepancies are a result of different spatio-temporal distributions of the processes involved in the parameterizations of k, indicating the importance of wave field parameters and a need for further validation.
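The wind-speed parameterizations compared here have simple closed forms, and the hybrid approach amounts to summing a direct (sea-surface slope) term and a bubble-mediated (wave breaking) term. A sketch follows: the quadratic and cubic coefficients are the commonly quoted values from the cited Wanninkhof formulas, while the hybrid coefficients a and b are purely hypothetical placeholders, not values from the paper.

```python
def k_wanninkhof92(u10, sc=660.0):
    """Quadratic wind-speed parameterization (Wanninkhof, 1992), cm/h."""
    return 0.31 * u10**2 * (sc/660.0)**-0.5

def k_wanninkhof_mcgillis99(u10, sc=660.0):
    """Cubic wind-speed parameterization (Wanninkhof and McGillis, 1999), cm/h."""
    return 0.0283 * u10**3 * (sc/660.0)**-0.5

def k_hybrid(mss, whitecap_frac, a=500.0, b=850.0):
    """Hybrid form: direct transfer linear in total mean square slope plus a
    bubble-mediated wave-breaking term. Coefficients a, b are hypothetical."""
    return a*mss + b*whitecap_frac

for u in (5.0, 10.0, 15.0):
    print(u, k_wanninkhof92(u), k_wanninkhof_mcgillis99(u))
```

The quadratic and cubic forms agree near moderate winds but diverge strongly at high wind speed, which is one source of the flux discrepancies the abstract quantifies.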
Development of the PCAD Model to Assess Biological Significance of Acoustic Disturbance
2015-09-30
We identified northern elephant seals and Atlantic bottlenose dolphins as the best species to parameterize the PCAD model. These species represent ... transfer functions described above for southern elephant seals, our goals are to parameterize these models to make them applicable to other species and ... northern elephant seal demographic data to estimate adult female survival, reproduction, and pup survival as a function of maternal condition. As a major
Matrix Transfer Function Design for Flexible Structures: An Application
NASA Technical Reports Server (NTRS)
Brennan, T. J.; Compito, A. V.; Doran, A. L.; Gustafson, C. L.; Wong, C. L.
1985-01-01
The application of matrix transfer function design techniques to the problem of disturbance rejection on a flexible space structure is demonstrated. The design approach is based on parameterizing a class of stabilizing compensators for the plant and formulating the design specifications as a constrained minimization problem in terms of these parameters. The solution yields a matrix transfer function representation of the compensator. A state space realization of the compensator is constructed to investigate performance and stability on the nominal and perturbed models. The application is made to the ACOSSA (Active Control of Space Structures) optical structure.
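The design approach, parameterizing a stabilizing compensator and minimizing a cost over its parameters, can be illustrated on a toy analogue: a PD compensator for a double-integrator plant under a step disturbance, with gains chosen by a crude grid search over the parameter space. This sketch stands in for the paper's matrix transfer function machinery; plant, cost, and all numbers are invented for illustration.

```python
import numpy as np

def cost(k1, k2, dt=0.01, T=10.0, d=1.0):
    """Disturbance-rejection cost (integral of x^2) for a PD compensator
    u = -k1*x - k2*v on a double integrator x'' = u + d, step disturbance d."""
    x, v, J = 0.0, 0.0, 0.0
    for _ in range(int(T/dt)):
        u = -k1*x - k2*v
        x, v = x + dt*v, v + dt*(u + d)
        J += dt*x*x
    return J

# crude constrained minimization over the compensator parameters
grid = np.linspace(1.0, 20.0, 20)
k1, k2 = min(((a, b) for a in grid for b in grid), key=lambda kk: cost(*kk))
print(k1, k2)
```

Real designs replace the grid search with a proper constrained optimizer and the scalar gains with the parameters of a stabilizing matrix transfer function, but the formulation (specifications as a cost over compensator parameters) is the same.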
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, Kuo-Nan
2016-02-09
Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) development of an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions; and (2) a stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains, providing understanding and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and the coupled mountain-mountain flux. “Exact” 3D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer program readily available in climate models. Subsequently, parameterizations of the deviations of 3D from PP results for the five flux components are carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for the flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme included in the WRF physics package. Incorporating this 3D parameterization program, we conducted WRF and CCSM4 simulations to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transition, and the interannual variability of snowmelt, cloud cover, and precipitation over the Western United States, as presented in the final report. 
With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the computation of light absorption and scattering by complex and inhomogeneous particles, for application to aggregates and snow grains with external and internal mixing structures. We demonstrated that a small black carbon (BC) particle on the order of 1 μm internally mixed with snow grains could effectively reduce visible snow albedo by as much as 5-10%. Following this work and within the context of DOE support, we have made two key accomplishments, presented in the attached final report.
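The regression step in item (1) can be sketched with synthetic data: fit 3D-minus-PP flux deviations to topographic predictors by ordinary least squares. The predictor values and coefficients below are invented; only the method (multiple linear regression against topographic information) follows the report.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# hypothetical topographic predictors: elevation, cos(solar incidence),
# sky view factor, terrain configuration factor (all synthetic)
X = rng.random((n, 4))
true_coef = np.array([0.8, -1.2, 0.5, 0.3])
dev = X @ true_coef + 0.01*rng.standard_normal(n)   # synthetic 3D-minus-PP deviation

A = np.column_stack([np.ones(n), X])                # add an intercept column
coef, *_ = np.linalg.lstsq(A, dev, rcond=None)
print(coef)   # intercept near 0, slopes near true_coef
```

One such regression per flux component yields the five equations the report describes, cheap enough to evaluate inside a climate model in place of Monte Carlo photon tracing.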
Parameterization of single-scattering properties of snow
NASA Astrophysics Data System (ADS)
Räisänen, P.; Kokhanovsky, A.; Guyot, G.; Jourdan, O.; Nousiainen, T.
2015-02-01
Snow consists of non-spherical grains of various shapes and sizes. Still, in many radiative transfer applications, single-scattering properties of snow have been based on the assumption of spherical grains. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat phase function typical of deformed non-spherical particles, this is still a rather ad-hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ = 0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function P11 as functions of the size parameter and the real and imaginary parts of the refractive index. The parameterizations are analytic and simple to use in radiative transfer models. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons to spheres and distorted Koch fractals.
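The structure of such a parameterization, expressing the co-albedo β through the size parameter and the imaginary part of the refractive index, can be sketched as follows. The saturating functional form and the coefficient c are illustrative assumptions only, not the paper's fitted equations.

```python
import math

def co_albedo(r_vp_um, wavelength_um, m_imag, c=0.47):
    """Illustrative single-scattering co-albedo as a saturating function of the
    absorption size parameter chi = 4*pi*m_imag*r_vp/lambda.
    The functional form and coefficient c are hypothetical, not the fitted values."""
    chi = 4.0*math.pi*m_imag*r_vp_um/wavelength_um
    return 1.0 - math.exp(-c*chi)

# weak absorption in the visible gives a tiny co-albedo;
# stronger absorption in the near-infrared gives a much larger one
print(co_albedo(100.0, 0.5, 2.0e-9))
print(co_albedo(100.0, 1.5, 4.0e-4))
```

The point of such analytic fits is that a radiative transfer model can evaluate them directly from λ and rvp without storing precomputed single-scattering tables.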
Parameterization of single-scattering properties of snow
NASA Astrophysics Data System (ADS)
Räisänen, P.; Kokhanovsky, A.; Guyot, G.; Jourdan, O.; Nousiainen, T.
2015-06-01
Snow consists of non-spherical grains of various shapes and sizes. Still, in many radiative transfer applications, single-scattering properties of snow have been based on the assumption of spherical grains. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat phase function typical of deformed non-spherical particles, this is still a rather ad hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ = 0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function P11 as functions of the size parameter and the real and imaginary parts of the refractive index. The parameterizations are analytic and simple to use in radiative transfer models. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons to spheres and distorted Koch fractals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
Solar radiation can be computed using radiative transfer models, such as the Rapid Radiative Transfer Model (RRTM) and its general circulation model applications, and used for various energy applications. Due to the complexity of computing radiation fields in aerosol and cloudy atmospheres, simulating solar radiation can be extremely time-consuming, but many approximations--e.g., the two-stream approach and the delta-M truncation scheme--can be utilized. To provide a new fast option for computing solar radiation, we developed the Fast All-sky Radiation Model for Solar applications (FARMS) by parameterizing the simulated diffuse horizontal irradiance and direct normal irradiance for cloudy conditions from RRTM runs using a 16-stream discrete ordinates radiative transfer method. The solar irradiance at the surface was simulated by combining the cloud irradiance parameterizations with a fast clear-sky model, REST2. To understand the accuracy and efficiency of the newly developed fast model, we analyzed FARMS runs using cloud optical and microphysical properties retrieved from GOES data from 2009-2012. The global horizontal irradiance for cloudy conditions was simulated using FARMS and RRTM for global circulation modeling with a two-stream approximation and compared to measurements taken from the U.S. Department of Energy's Atmospheric Radiation Measurement Climate Research Facility Southern Great Plains site. Our results indicate that the accuracy of FARMS is comparable to or better than the two-stream approach; however, FARMS is approximately 400 times more efficient because it does not explicitly solve the radiative transfer equation for each individual cloud condition. Radiative transfer model runs are computationally expensive, but this model is promising for broad applications in solar resource assessment and forecasting. 
It is currently being used in the National Solar Radiation Database, which is publicly available from the National Renewable Energy Laboratory at http://nsrdb.nrel.gov.
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Chou, Ming-Dah
1994-01-01
A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the schemes compute very accurately the clear-sky fluxes and cooling rates from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.
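Given the clear-sky fluxes such a parameterization produces, cooling rates follow from the vertical flux divergence; in pressure coordinates the heating rate is Q = (g/cp) dF/dp for the net upward flux F. A minimal sketch (sign convention: negative values are cooling; the flux and pressure profile values are invented):

```python
import numpy as np

g, cp = 9.81, 1004.0   # gravity (m s^-2), dry-air specific heat (J kg^-1 K^-1)

def cooling_rate(net_flux_Wm2, p_levels_Pa):
    """Layer heating rate in K/day from net upward IR flux at layer interfaces:
    Q = (g/cp) * dF/dp in pressure coordinates (negative = cooling)."""
    dF = np.diff(net_flux_Wm2)
    dp = np.diff(p_levels_Pa)
    return (g/cp) * (dF/dp) * 86400.0

# toy profile: net upward IR flux increasing with height (pressure decreasing)
p = np.array([100000.0, 80000.0, 60000.0])   # interface pressures, Pa
F = np.array([150.0, 170.0, 185.0])          # net upward flux, W m^-2
q = cooling_rate(F, p)
print(q)   # both layers cool
```

A scheme that computes F accurately from the surface to 0.01 mb therefore delivers accurate cooling rates throughout both the troposphere and the middle atmosphere.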
Albert, A; Mobley, C
2003-11-03
Subsurface remote sensing signals, represented by the irradiance reflectance and the remote sensing reflectance, were investigated. The present study is based on simulations with the radiative transfer program Hydrolight, using optical properties of Lake Constance (German: Bodensee) based on in-situ measurements of the water constituents and the bottom characteristics. Analytical equations are derived for the irradiance reflectance and remote sensing reflectance for deep and shallow water applications. The inputs of the parameterization are the inherent optical properties of the water: absorption a(λ) and backscattering bb(λ). Additionally, the solar zenith angle θs, the viewing angle θv, and the surface wind speed u are considered. For shallow water applications, the bottom albedo RB and the bottom depth zB are included in the parameterizations. The result is a complete set of analytical equations for the remote sensing signals R and Rrs in deep and shallow waters, with an accuracy better than 4%. In addition, parameterizations of apparent optical properties were derived for the upward and downward diffuse attenuation coefficients Ku and Kd.
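The deep- and shallow-water structure of such analytical equations can be sketched as a polynomial in ω_b = bb/(a+bb) plus a depth-attenuated bottom term. The polynomial coefficients below are patterned on the paper's published deep-water form but should be treated as placeholders to verify against the original; the shallow-water expression is a simplified two-flow sketch, not the full parameterization with angular and wind-speed factors.

```python
import math

def rrs_deep(a, bb, p=(0.0512, 4.6659, -7.8387, 5.4571)):
    """Deep-water remote sensing reflectance (1/sr) as a polynomial in
    omega_b = bb/(a+bb). Coefficients p are placeholders to verify."""
    wb = bb/(a + bb)
    return p[0]*wb*(1.0 + p[1]*wb + p[2]*wb**2 + p[3]*wb**3)

def rrs_shallow(a, bb, RB, zB, Kd, Ku):
    """Shallow-water sketch: water-column term attenuated over bottom depth zB
    plus a Lambertian bottom-reflectance term (simplified two-flow model)."""
    col = rrs_deep(a, bb)*(1.0 - math.exp(-(Kd + Ku)*zB))
    bot = (RB/math.pi)*math.exp(-(Kd + Ku)*zB)
    return col + bot

print(rrs_deep(0.5, 0.01))
```

As zB grows, the bottom term vanishes and the shallow-water expression reduces to the deep-water one, which is the consistency property such parameterizations are built around.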
NASA Technical Reports Server (NTRS)
Mihalas, D.; Kunasz, P. B.
1978-01-01
The coupled radiative transfer and statistical equilibrium equations for multilevel ionic structures in the atmospheres of early-type stars are solved. Both lines and continua are treated consistently; the treatment is applicable throughout a transonic wind, and allows for the presence of background continuum sources and sinks in the transfer. An equivalent-two-level-atom approach provides the solution of the equations. Calculations for simplified He+-like model atoms in parameterized isothermal wind models indicate that subordinate line profiles are sensitive to the assumed mass-loss rate and to the assumed structure of the velocity law in the atmospheres.
Betatron motion with coupling of horizontal and vertical degrees of freedom
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. A. Bogacz; V. A. Lebedev
2002-11-21
The Courant-Snyder parameterization of one-dimensional linear betatron motion is generalized to two-dimensional coupled linear motion. To represent the 4 x 4 symplectic transfer matrix, the following ten parameters were chosen: four beta-functions, four alpha-functions, and two betatron phase advances, which have a meaning similar to that in the Courant-Snyder parameterization. Such a parameterization works equally well for weak and strong coupling and can be useful for analysis of coupled betatron motion in circular accelerators as well as in transfer lines. Similarly, the transfer matrix, the bilinear form describing the phase space ellipsoid, and the second order moments are related to the eigenvectors. The corresponding equations can be useful in interpreting tracking results and experimental data.
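The one-dimensional Courant-Snyder building block being generalized here can be written down directly; a quick numerical check confirms the 2 x 2 one-period matrix is symplectic (unit determinant) and has trace 2 cos μ. The parameter values in the example are arbitrary.

```python
import numpy as np

def courant_snyder_matrix(beta, alpha, mu):
    """One-dimensional Courant-Snyder one-period transfer matrix:
    M = [[cos(mu) + alpha*sin(mu),  beta*sin(mu)],
         [-gamma*sin(mu),           cos(mu) - alpha*sin(mu)]]
    with gamma = (1 + alpha^2)/beta and betatron phase advance mu."""
    gamma = (1.0 + alpha**2)/beta
    c, s = np.cos(mu), np.sin(mu)
    return np.array([[c + alpha*s, beta*s],
                     [-gamma*s,    c - alpha*s]])

M = courant_snyder_matrix(beta=10.0, alpha=0.5, mu=0.3)
print(np.linalg.det(M))   # symplectic in 1D: determinant = 1
```

The coupled generalization replaces this 2 x 2 block with a 4 x 4 symplectic matrix built from the ten parameters listed above (four betas, four alphas, two phase advances).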
NASA Astrophysics Data System (ADS)
Pincus, R.; Mlawer, E. J.
2017-12-01
Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages, so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally-representative set of atmospheric profiles using a relatively high-resolution spectral discretization.
NASA Astrophysics Data System (ADS)
Schneider, F. D.; Leiterer, R.; Morsdorf, F.; Gastellu-Etchegorry, J.; Lauret, N.; Pfeifer, N.; Schaepman, M. E.
2013-12-01
Remote sensing offers unique potential to study forest ecosystems by providing spatially and temporally distributed information that can be linked with key biophysical and biochemical variables. The estimation of biochemical constituents of leaves from remotely sensed data is of high interest, revealing insight into photosynthetic processes, plant health, plant functional types, and speciation. However, the scaling of observations at the canopy level to the leaf level or vice versa is not trivial due to the structural complexity of forests. Thus, a common solution for scaling spectral information is the use of physically-based radiative transfer models. The discrete anisotropic radiative transfer model (DART), being one of the most complete coupled canopy-atmosphere 3D radiative transfer models, was parameterized based on airborne and in-situ measurements. At-sensor radiances were simulated and compared with measurements from an airborne imaging spectrometer. The study was performed on the Laegern site, a temperate mixed forest characterized by steep slopes, a heterogeneous spectral background, and deciduous and coniferous trees at different development stages (dominated by beech trees; 47°28'42.0" N, 8°21'51.8" E, 682 m asl, Switzerland). It is one of the few studies conducted on an old-growth forest. Particularly the 3D modeling of the complex canopy architecture is crucial to model the interaction of photons with the vegetation canopy and its background. Thus, we developed two forest reconstruction approaches: 1) based on a voxel grid, and 2) based on individual tree detection. Both methods are transferable to various forest ecosystems and applicable at scales between plot and landscape. Our results show that the newly developed voxel grid approach is favorable over a parameterization based on individual trees. 
In comparison to the actual imaging spectrometer data, the simulated images exhibit very similar spatial patterns, whereas absolute radiance values are partially differing depending on the respective wavelength. We conclude that our proposed method provides a representation of the 3D radiative regime within old-growth forests that is suitable for simulating most spectral and spatial features of imaging spectrometer data. It indicates the potential of simulating future Earth observation missions, such as ESA's Sentinel-2. However, the high spectral variability of leaf optical properties among species has to be addressed in future radiative transfer modeling. The results further reveal that research emphasis has to be put on the accurate parameterization of small-scale structures, such as the clumping of needles into shoots or the distribution of leaf angles.
The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)
2000-01-01
The microphysical parameterization of clouds and rain cells plays a central role in the atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm, using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the raindrop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that, in general, all but two parameterizations produce calculated TB's that fall within the observed clear-air minima and maxima. The exceptions are parameterizations that enhance the scattering characteristics of frozen hydrometeors.
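The basic warming sensitivity in the forward problem, in which added emitting/absorbing hydrometeor opacity raises the brightness temperature over a radiometrically cold ocean surface, can be sketched with a zeroth-order non-scattering slab model. This ignores the downwelling-reflection term and all scattering, and all numbers are invented; the study itself uses a planar-stratified scattering-based model.

```python
import math

def brightness_temperature(tau, T_atm, emis, T_surf):
    """Zeroth-order non-scattering slab: surface emission attenuated by an
    atmosphere of optical depth tau, plus the atmosphere's own emission.
    (Downwelling reflection at the surface is ignored in this sketch.)"""
    t = math.exp(-tau)
    return emis*T_surf*t + T_atm*(1.0 - t)

# over a low-emissivity (radiometrically cold) ocean surface,
# added hydrometeor opacity warms the observed brightness temperature
tb_thin  = brightness_temperature(0.1, 260.0, 0.5, 290.0)
tb_thick = brightness_temperature(1.0, 260.0, 0.5, 290.0)
print(tb_thin, tb_thick)
```

Scattering by frozen hydrometeors reverses this trend at high frequencies, producing the cooling signature the abstract describes, which is precisely what a pure-emission slab model cannot capture.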
Thermodynamic properties for applications in chemical industry via classical force fields.
Guevara-Carrion, Gabriela; Hasse, Hans; Vrabec, Jadran
2012-01-01
Thermodynamic properties of fluids are of key importance for the chemical industry. Presently, the fluid property models used in process design and optimization are mostly equations of state or G^E models, which are parameterized using experimental data. Molecular modeling and simulation based on classical force fields is a promising alternative route, which in many cases reasonably complements the well established methods. This chapter gives an introduction to the state of the art in this field regarding molecular models, simulation methods, and tools. Attention is given to the way modeling and simulation on the scale of molecular force fields interact with other scales, which is mainly by parameter inheritance. Parameters for molecular force fields are determined both bottom-up from quantum chemistry and top-down from experimental data. Commonly used functional forms for describing the intra- and intermolecular interactions are presented. Several approaches for ab initio to empirical force field parameterization are discussed. Some transferable force field families, which are frequently used in chemical engineering applications, are described. Furthermore, some examples of force fields that were parameterized for specific molecules are given. Molecular dynamics and Monte Carlo methods for the calculation of transport properties and vapor-liquid equilibria are introduced. Two case studies are presented. First, using liquid ammonia as an example, the capabilities of semi-empirical force fields, parameterized on the basis of quantum chemical information and experimental data, are discussed with respect to thermodynamic properties that are relevant for the chemical industry. Second, the ability of molecular simulation methods to accurately describe vapor-liquid equilibrium properties of binary mixtures containing CO2 is shown.
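A typical nonbonded functional form in such classical force fields is the 12-6 Lennard-Jones potential plus a Coulomb term, whose parameters (σ, ε, partial charges) are exactly what the parameterization procedures discussed here determine. A minimal sketch, with units as in common MD packages (nm, kJ/mol, elementary charges) and invented parameter values:

```python
import math

def pair_energy(r, sigma, epsilon, qi, qj):
    """Nonbonded pair energy: 12-6 Lennard-Jones plus Coulomb.
    r in nm, epsilon in kJ/mol, charges in elementary charge units."""
    ke = 138.935458   # Coulomb constant in kJ mol^-1 nm e^-2
    sr6 = (sigma/r)**6
    return 4.0*epsilon*(sr6**2 - sr6) + ke*qi*qj/r

# with charges off, the LJ well has its minimum at r = 2^(1/6)*sigma, depth -epsilon
r_min = 2**(1/6) * 0.34
print(pair_energy(r_min, 0.34, 0.65, 0.0, 0.0))   # = -epsilon = -0.65
```

Bottom-up parameterization fits such terms to quantum-chemical energy surfaces; top-down parameterization adjusts them until simulated bulk properties (densities, vapor pressures) match experiment.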
Parameterization of single-scattering properties of snow
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Kokhanovsky, Alexander; Guyot, Gwennole; Jourdan, Olivier; Nousiainen, Timo
2015-04-01
Snow consists of non-spherical ice grains of various shapes and sizes, which are surrounded by air and sometimes covered by films of liquid water. Still, in many studies, homogeneous spherical snow grains have been assumed in radiative transfer calculations, due to the convenience of using Mie theory. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat scattering phase function typical of deformed non-spherical particles, this is still a rather ad-hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ=0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function as functions of the size parameter and the real and imaginary parts of the refractive index. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons with spheres and distorted Koch fractals. Further evaluation and validation of the proposed approach against (e.g.) bidirectional reflectance and polarization measurements for snow is planned. 
At any rate, it seems safe to assume that the OHC selected here provides a substantially better basis for representing the single-scattering properties of snow than spheres do. Moreover, the parameterizations developed here are analytic and simple to use, and they can also be applied to the treatment of dirty snow following (e.g.) the approach of Kokhanovsky (The Cryosphere, 7, 1325-1331, doi:10.5194/tc-7-1325-2013, 2013). This should make them an attractive option for use in radiative transfer applications involving snow.
A transferable force field for CdS-CdSe-PbS-PbSe solid systems
NASA Astrophysics Data System (ADS)
Fan, Zhaochuan; Koster, Rik S.; Wang, Shuaiwei; Fang, Changming; Yalcin, Anil O.; Tichelaar, Frans D.; Zandbergen, Henny W.; van Huis, Marijn A.; Vlugt, Thijs J. H.
2014-12-01
A transferable force field for the PbSe-CdSe solid system using the partially charged rigid ion model has been successfully developed and was used to study the cation exchange in PbSe-CdSe heteronanocrystals [A. O. Yalcin et al., "Atomic resolution monitoring of cation exchange in CdSe-PbSe heteronanocrystals during epitaxial solid-solid-vapor growth," Nano Lett. 14, 3661-3667 (2014)]. In this work, we extend this force field by including another two important binary semiconductors, PbS and CdS, and provide detailed information on the validation of this force field. The parameterization combines Bader charge analysis, empirical fitting, and ab initio energy surface fitting. When compared with experimental data and density functional theory calculations, it is shown that a wide range of physical properties of bulk PbS, PbSe, CdS, CdSe, and their mixed phases can be accurately reproduced using this force field. The choice of functional forms and parameterization strategy is demonstrated to be rational and effective. This transferable force field can be used in various studies on II-VI and IV-VI semiconductor materials consisting of CdS, CdSe, PbS, and PbSe. Here, we demonstrate the applicability of the force field model by molecular dynamics simulations whereby transformations are initiated by cation exchange.
Parameterizing Coefficients of a POD-Based Dynamical System
NASA Technical Reports Server (NTRS)
Kalb, Virginia L.
2010-01-01
A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system, has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. 
Parameter-continuation software can be used on the parameterized dynamical system to derive a bifurcation diagram that accurately predicts the temporal flow behavior.
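The structure of such a Reynolds-number-parameterized Galerkin system can be sketched schematically. The split of the linear coefficients below into an inertial part and a viscous part scaling as 1/Re is a hypothetical illustration of coefficient parameterization, not Kalb's actual energy-transfer-based computation; all names and coefficient values are assumptions for the sketch:

```python
import numpy as np

def galerkin_rhs(a, Re, L_inert, L_visc, Q):
    """Right-hand side of a low-dimensional Galerkin system
    da_i/dt = sum_j L_ij(Re) a_j + sum_jk Q_ijk a_j a_k.

    Hypothetical parameterization: the linear coefficients are split into
    an inertial part and a viscous part that scales as 1/Re, so the same
    low-dimensional model can be evaluated across a range of Re.
    """
    linear = (L_inert + L_visc / Re) @ a
    quadratic = np.einsum('ijk,j,k->i', Q, a, a)
    return linear + quadratic

def integrate(a0, Re, L_inert, L_visc, Q, dt=1e-3, steps=1000):
    """Forward-Euler time stepping of the modal amplitudes (illustrative)."""
    a = np.array(a0, dtype=float)
    for _ in range(steps):
        a = a + dt * galerkin_rhs(a, Re, L_inert, L_visc, Q)
    return a
```

With the viscous part damping each mode, lowering Re strengthens the damping, which is the qualitative behavior a Reynolds-number continuation of the coefficients must reproduce.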
NASA Astrophysics Data System (ADS)
Lee, Jonghyun; Rolle, Massimo; Kitanidis, Peter K.
2018-05-01
Most recent research on hydrodynamic dispersion in porous media has focused on whole-domain dispersion, while other research is largely on laboratory-scale dispersion. This work focuses on the contribution of a single block in a numerical model to dispersion. Variability of fluid velocity and concentration within a block is not resolved, and the combined spreading effect is approximated using resolved quantities and macroscopic parameters. This applies whether the formation is modeled as homogeneous or discretized into homogeneous blocks, with the emphasis here on the latter. The process of dispersion is typically described through the Fickian model, i.e., the dispersive flux is proportional to the gradient of the resolved concentration, commonly with the Scheidegger parameterization, a particular way to compute the dispersion coefficients from dispersivity coefficients. Although this parameterization is by far the most commonly used in solute transport applications, its validity has been questioned. Here, our goal is to investigate the effects of heterogeneity and mass transfer limitations on block-scale longitudinal dispersion and to evaluate under which conditions the Scheidegger parameterization is valid. We compute the relaxation time, or memory, of the system; changes in time with periods longer than the relaxation time gradually lead to a condition of local equilibrium under which dispersion is Fickian. The method we use requires only the solution of a steady-state advection-dispersion equation, and thus is computationally efficient and applicable to any heterogeneous hydraulic conductivity K field without requiring statistical or structural assumptions. The method was validated by comparison with other approaches, such as moment analysis and the first-order perturbation method.
We investigate the impact of heterogeneity, both in degree and structure, on the longitudinal dispersion coefficient and then discuss the role of local dispersion and mass transfer limitations, i.e., the exchange of mass between the permeable matrix and the low permeability inclusions. We illustrate the physical meaning of the method and we show how the block longitudinal dispersivity approaches, under certain conditions, the Scheidegger limit at large Péclet numbers. Lastly, we discuss the potential and limitations of the method to accurately describe dispersion in solute transport applications in heterogeneous aquifers.
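The Scheidegger parameterization questioned above has a simple closed form: the dispersion coefficients are molecular diffusion plus a dispersivity times the velocity magnitude. A minimal sketch of the standard isotropic version (the parameter values in the usage note are illustrative, not from the study):

```python
def scheidegger_dispersion(v, alpha_L, alpha_T, D_m):
    """Scheidegger parameterization of hydrodynamic dispersion.

    D_L = D_m + alpha_L * |v|   (longitudinal)
    D_T = D_m + alpha_T * |v|   (transverse)

    v       : seepage velocity magnitude [m/s]
    alpha_L : longitudinal dispersivity [m]
    alpha_T : transverse dispersivity [m]
    D_m     : effective molecular diffusion coefficient [m^2/s]
    """
    speed = abs(v)
    return D_m + alpha_L * speed, D_m + alpha_T * speed
```

For example, with v = 1e-5 m/s, alpha_L = 0.1 m, and D_m = 1e-9 m^2/s, the mechanical term dominates and D_L is about 1e-6 m^2/s, which is the high-Péclet regime where the abstract reports the block dispersivity approaching the Scheidegger limit.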
Extensions and applications of a second-order landsurface parameterization
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1983-01-01
Extensions and applications of a second-order land surface parameterization, proposed by Andreou and Eagleson, are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested using the model. A sensitivity analysis with respect to the key soil parameters is performed. A case study is also included, comparing the model with an "exact" numerical model and with another simplified parameterization under very dry climatic conditions and for two different soil types.
NASA Technical Reports Server (NTRS)
Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew
2014-01-01
The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional-atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of September simulations. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. The 64 combinations pair WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in the second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year, while a parallel benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations but less realistic spatiotemporal variability. The largest favorable impact on WRF vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, which yields higher correlations against reanalysis than simulations using the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact, although WRF configurations incorporating one particular surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.
Advances in quantifying air-sea gas exchange and environmental forcing.
Wanninkhof, Rik; Asher, William E; Ho, David T; Sweeney, Colm; McGillis, Wade R
2009-01-01
The past decade has seen a substantial amount of research on air-sea gas exchange and its environmental controls. These studies have significantly advanced the understanding of processes that control gas transfer, led to higher quality field measurements, and improved estimates of the flux of climate-relevant gases between the ocean and atmosphere. This review discusses the fundamental principles of air-sea gas transfer and recent developments in gas transfer theory, parameterizations, and measurement techniques in the context of the exchange of carbon dioxide. However, much of this discussion is applicable to any sparingly soluble, non-reactive gas. We show how the use of global variables of environmental forcing that have recently become available and gas exchange relationships that incorporate the main forcing factors will lead to improved estimates of global and regional air-sea gas fluxes based on better fundamental physical, chemical, and biological foundations.
Bridging the Radiative Transfer Models for Meteorology and Solar Energy Applications
NASA Astrophysics Data System (ADS)
Xie, Y.; Sengupta, M.
2017-12-01
Radiative transfer models are used to compute the solar radiation reaching the Earth's surface and play an important role in both meteorology and solar energy studies. They are therefore designed to meet the needs of specialized applications. For instance, radiative transfer models for meteorology seek to provide more accurate cloudy-sky radiation, while models used in solar energy are geared towards accuracy in the clear-sky conditions associated with the maximum solar resource. However, models for solar energy applications are often computationally faster, as the complex solution of the radiative transfer equation is parameterized by atmospheric properties that can be acquired from surface- or satellite-based observations. This study introduces the National Renewable Energy Laboratory's (NREL's) recent efforts to combine the advantages of radiative transfer models designed for meteorology and solar energy applications. A fast all-sky radiation model, FARMS-NIT, was developed to efficiently compute narrowband all-sky irradiances over inclined photovoltaic (PV) panels. This new model utilizes the optical properties from a solar energy model, SMARTS, to compute surface radiation by considering all possible paths of photon transmission and the relevant scattering and absorption attenuation. For cloudy-sky conditions, cloud bidirectional transmittance distribution functions (BTDFs) are provided by a lookup table (LUT) precomputed with LibRadtran. Our initial results indicate that FARMS-NIT has an accuracy similar to that of LibRadtran, a highly accurate multi-stream model, but is significantly more efficient. The development and validation of this model will be presented.
NASA Astrophysics Data System (ADS)
Soloviev, Alexander; Schluessel, Peter
The model presented contains interfacial, bubble-mediated, ocean mixed layer, and remote sensing components. The interfacial (direct) gas transfer dominates under conditions of low and—for quite soluble gases like CO2—moderate wind speeds. Due to the similarity between the gas and heat transfer, the temperature difference, ΔT, across the thermal molecular boundary layer (cool skin of the ocean) and the interfacial gas transfer coefficient, Kint, are presumably interrelated. A coupled parameterization for ΔT and Kint has been derived in the context of a surface renewal model [Soloviev and Schluessel, 1994]. In addition to the Schmidt, Sc, and Prandtl, Pr, numbers, the important parameters are the surface Richardson number, Rf0, and the Keulegan number, Ke. The more readily available cool skin data are used to determine the coefficients that enter into both parameterizations. At high wind speeds, the Ke-number dependence is further verified with the formula for transformation of the surface wind stress to form drag and white capping, which follows from the renewal model. A further extension of the renewal model includes effects of solar radiation and rainfall. The bubble-mediated component incorporates the Merlivat et al. [1993] parameterization with the empirical coefficients estimated by Asher and Wanninkhof [1998]. The oceanic mixed layer component accounts for stratification effects on the air-sea gas exchange. Based on the example of GasEx-98, we demonstrate how the results of parameterization and modeling of the air-sea gas exchange can be extended to the global scale, using remote sensing techniques.
NASA Astrophysics Data System (ADS)
Alipour, Mojtaba; Karimi, Niloofar
2017-06-01
Organic light emitting diodes (OLEDs) based on thermally activated delayed fluorescence (TADF) emitters are an attractive category of materials that have witnessed booming development in recent years. In the present contribution, we scrutinize the reliability of parameterized and parameter-free single-hybrid (SH) and double-hybrid (DH) functionals through two formalisms, full time-dependent density functional theory (TD-DFT) and the Tamm-Dancoff approximation (TDA), for the estimation of photophysical properties such as absorption energy, emission energy, zero-zero transition energy, and singlet-triplet energy splitting of TADF molecules. According to our detailed analyses of the performance of SHs based on TD-DFT and TDA, the TDA-based parameter-free SH functionals PBE0 and TPSS0, with one-third exact-like exchange, turned out to be the best performers among functionals from various rungs at reproducing the experimental data of the benchmark set. Such affordable SH approximations can thus be employed to predict and design TADF molecules with low singlet-triplet energy gaps for OLED applications. From another perspective, considering that both nonlocal exchange and correlation are essential for a more reliable description of large charge-transfer excited states, the applicability of functionals incorporating these terms, namely parameterized and parameter-free DHs, has also been evaluated. Examining the roles of exact-like exchange, perturbative-like correlation, solvent effects, and other related factors, we find that the parameterized functionals B2π-PLYP and B2GP-PLYP and the parameter-free models PBE-CIDH and PBE-QIDH perform respectably with respect to the others. Lastly, besides recommending reliable computational protocols for this purpose, this study can hopefully pave the way toward further development of other SHs and DHs for theoretical explorations in the field of OLED technology.
NASA Astrophysics Data System (ADS)
Menzel, R.; Paynter, D.; Jones, A. L.
2017-12-01
Due to their relatively low computational cost, radiative transfer models in global climate models (GCMs) run on traditional CPU architectures generally consist of shortwave and longwave parameterizations over a small number of wavelength bands. With the rise of newer GPU and MIC architectures, however, the performance of high resolution line-by-line radiative transfer models may soon approach those of the physical parameterizations currently employed in GCMs. Here we present an analysis of the current performance of a new line-by-line radiative transfer model currently under development at GFDL. Although originally designed to specifically exploit GPU architectures through the use of CUDA, the radiative transfer model has recently been extended to include OpenMP in an effort to also effectively target MIC architectures such as Intel's Xeon Phi. Using input data provided by the upcoming Radiative Forcing Model Intercomparison Project (RFMIP, as part of CMIP 6), we compare model results and performance data for various model configurations and spectral resolutions run on both GPU and Intel Knights Landing architectures to analogous runs of the standard Oxford Reference Forward Model on traditional CPUs.
Should I use that model? Assessing the transferability of ecological models to new settings
Analysts and scientists frequently apply existing models that estimate ecological endpoints or simulate ecological processes to settings where the models have not been used previously, and where data to parameterize and validate the model may be sparse. Prior to transferring an ...
NASA Astrophysics Data System (ADS)
Jørgensen, E. T.; Sørensen, L. L.; Jensen, B.; Sejr, M. K.
2012-04-01
The air-sea exchange of CO2, or CO2 flux, is driven by the difference in the partial pressure of CO2 between the water and the atmosphere (ΔpCO2), the solubility of CO2 (K0), and the gas transfer velocity (k) (Wanninkhof et al., 2009; Weiss, 1974). ΔpCO2 and K0 are determined with relatively high precision, and it is estimated that the biggest uncertainty when modelling the air-sea flux is the parameterization of k. As an example, the estimated global air-sea flux increases by 70% when using the parameterization by Wanninkhof and McGillis (1999) instead of Wanninkhof (1992) (Rutgersson et al., 2008). In coastal areas the uncertainty is even higher; only a few studies have focused on determining the transfer velocity for coastal waters, and even fewer on estuaries (Borges et al., 2004; Rutgersson et al., 2008). The transfer velocity (k600) of CO2 in the inner estuary of Roskilde Fjord, Denmark was investigated using eddy covariance CO2 fluxes (ECM) and directly measured ΔpCO2 during May and June 2010. The data were strictly filtered to increase confidence in the results, yielding two data sets: DS1, using only the ECM, and DS2, also including the inertial dissipation method (IDM). The inner part of Roskilde Fjord proved to be a biologically very active CO2 sink, and preliminary results showed that the average k600 was more than 10 times higher than transfer velocities from similar studies of other coastal areas. The much higher transfer velocities were attributed to the greater fetch and shallower water in Roskilde Fjord, which indicates that turbulence in both air and water influences k600. The wind speed parameterization of k600 using DS1 showed some scatter, but when the IDM was included the r2 of DS2 reached 0.93 with an exponential parameterization, where U10 was based on the Businger-Dyer relationships using friction velocity and atmospheric stability. This indicates that some of the uncertainties associated with CO2 fluxes calculated by the ECM are removed when the IDM is included.
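The flux relation and the two parameterizations whose 70% discrepancy is cited above can be sketched directly. The quadratic and cubic coefficients below are the published short-term forms of Wanninkhof (1992) and Wanninkhof and McGillis (1999); the flux helper simply multiplies the three driving factors, with unit bookkeeping left to the caller:

```python
def k_wanninkhof92(u10, Sc=660.0):
    """Wanninkhof (1992) quadratic parameterization, k in cm/h.

    u10 : wind speed at 10 m [m/s]; Sc : Schmidt number of the gas.
    """
    return 0.31 * u10**2 * (Sc / 660.0) ** -0.5

def k_wanninkhof_mcgillis99(u10, Sc=660.0):
    """Wanninkhof and McGillis (1999) cubic parameterization, k in cm/h."""
    return 0.0283 * u10**3 * (Sc / 660.0) ** -0.5

def co2_flux(k_cm_per_h, K0, dpCO2):
    """Air-sea CO2 flux F = k * K0 * dpCO2.

    Converts k from cm/h to m/s; the units of K0 (solubility) and
    dpCO2 (partial pressure difference) determine the units of F.
    """
    k_m_per_s = k_cm_per_h / 100.0 / 3600.0
    return k_m_per_s * K0 * dpCO2
```

At a 10 m/s wind, the quadratic form gives 31 cm/h and the cubic form 28.3 cm/h for CO2 at Sc = 660; the gap between them grows rapidly at the higher wind speeds typical of open-ocean conditions, which is the source of the flux uncertainty discussed in the abstract.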
NASA Astrophysics Data System (ADS)
Hall, Carlton Raden
A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulation, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge of the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(lambda, m-1), diffuse backscatter b(lambda, m-1), beam attenuation alpha(lambda, m-1), and beam-to-diffuse conversion c(lambda, m-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius), and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical, and chemical measurements of leaf thickness, leaf area, leaf moisture, and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization.
The LVCI is based on angle-adjusted leaf thickness Ltadj, LAI, and h (m). Its function is to translate leaf-level estimates of diffuse absorption and backscatter to the canopy scale, allowing the leaf optical properties to directly influence above-canopy estimates of reflectance. The model was successfully modified and parameterized to operate in a canopy-scale and a leaf-scale mode. Canopy-scale model simulations produced the best results. Simulations based on leaf-derived coefficients produced calculated above-canopy reflectance errors of 15% to 18%. A comprehensive sensitivity analysis indicated the most important parameters were beam-to-diffuse conversion c(lambda, m-1), diffuse absorption a(lambda, m-1), diffuse backscatter b(lambda, m-1), h (m), Q, and direct and diffuse irradiance. Sources of error include the estimation procedure for the direct beam-to-diffuse conversion and attenuation coefficients and other field and laboratory measurement and analysis errors. Applications of the model include creation of synthetic reflectance data sets for remote sensing algorithm development, simulation of the effects of stress and drought on vegetation reflectance signatures, and the potential to estimate leaf moisture and chemical status.
Lyapustin, Alexei
2002-09-20
Results of an extensive validation study of the new radiative transfer code SHARM-3D are described. The code is designed for modeling of unpolarized monochromatic radiative transfer in the visible and near-IR spectra in the laterally uniform atmosphere over an arbitrarily inhomogeneous anisotropic surface. The surface boundary condition is periodic. The algorithm is based on an exact solution derived with the Green's function method. Several parameterizations were introduced into the algorithm to achieve superior performance. As a result, SHARM-3D is 2-3 orders of magnitude faster than the rigorous code SHDOM. It can model radiances over large surface scenes for a number of incidence-view geometries simultaneously. Extensive comparisons against SHDOM indicate that SHARM-3D has an average accuracy of better than 1%, which along with the high speed of calculations makes it a unique tool for remote-sensing applications in land surface and related atmospheric radiation studies.
NASA Astrophysics Data System (ADS)
Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.
2014-06-01
A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), the action of internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusion of BC/dust significantly reduces snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, K. N.; Takano, Y.; He, Cenlin
2014-06-27
A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo reduces more in the case of multiple inclusion of BC/dust compared to that of an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
NASA Astrophysics Data System (ADS)
Fisher, A. W.; Sanford, L. P.; Scully, M. E.; Suttles, S. E.
2016-02-01
Enhancement of wind-driven mixing by Langmuir turbulence (LT) may have important implications for exchanges of mass and momentum in estuarine and coastal waters, but the transient nature of LT and observational constraints make quantifying its impact on vertical exchange difficult. Recent studies have shown that wind events can be of first-order importance to circulation and mixing in estuaries, prompting this investigation into the ability of second-moment turbulence closure schemes to model wind-wave-enhanced mixing in an estuarine environment. An instrumented turbulence tower was deployed in the middle reaches of Chesapeake Bay in 2013 and collected observations of coherent structures, consistent with LT, that occurred under regions of breaking waves. Wave and turbulence measurements collected from a vertical array of Acoustic Doppler Velocimeters (ADVs) provided direct estimates of TKE, dissipation, turbulent length scale, and the surface wave field. Direct measurements of air-sea momentum and sensible heat fluxes were collected by a co-located ultrasonic anemometer deployed 3 m above the water surface. Analyses of the data indicate that the combined presence of breaking waves and LT significantly influences air-sea momentum transfer, enhancing vertical mixing and acting to align stress in the surface mixed layer with the direction of Lagrangian shear. Here these observations are compared to the predictions of commonly used second-moment turbulence closure schemes, modified to account for the influence of wave breaking and LT. LT parameterizations are evaluated under neutrally stratified conditions, and buoyancy damping parameterizations are evaluated under stably stratified conditions. We compare predicted turbulent quantities to observations for a variety of wind, wave, and stratification conditions. The effects of fetch-limited wave growth, surface buoyancy flux, and tidal distortion on wave mixing parameterizations will also be discussed.
NASA Astrophysics Data System (ADS)
Piskozub, Jacek; Wróbel, Iwona
2016-04-01
The North Atlantic is a crucial region for both ocean circulation and the carbon cycle. Most of the ocean's deep water is produced in this basin, making it a large CO2 sink. The region, close to major oceanographic centres, has been well covered by cruises. This is why we have performed a study of the dependence of net CO2 flux upon the choice of gas transfer velocity (k) parameterization for this very region: the North Atlantic, including the European Arctic Seas. The study has been part of the ESA-funded OceanFlux GHG Evolution project and, at the same time, a PhD thesis (of I.W.) funded by the Centre of Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). Early results were presented last year at EGU 2015 as PICO presentation EGU2015-11206-1. We have used FluxEngine, a tool created within an earlier ESA-funded project (OceanFlux Greenhouse Gases), to calculate the North Atlantic and global fluxes with different gas transfer velocity formulas. During the processing of the data, we noticed that the North Atlantic results for different k formulas are more similar (in the sense of relative error) than the global ones. This was true both for parameterizations using the same power of wind speed and when comparing wind-squared and wind-cubed parameterizations. This result was interesting because North Atlantic winds are stronger than the global average. Was the similarity of the flux results caused by the parameterizations having been tuned to the North Atlantic, where many of the early cruises measuring CO2 fugacities were performed? A closer look at the parameterizations and their history showed that not all of them were based on North Atlantic data. Some were tuned to the Southern Ocean, with even stronger winds, while some were based on global budgets of 14C. However, we have found two reasons, not reported before in the literature, for North Atlantic fluxes being more similar than global ones for different gas transfer velocity parameterizations.
The first is the fact that most of the k functions intersect close to 9 m/s, a typical North Atlantic wind speed. The squared and cubed functions need to intersect in order to have similar global averages: the higher values of the cubic functions at strong winds are offset by the higher values of the squared functions at weak ones. The wind speed at the intersection has to be higher than the global average wind speed because discrepancies between different parameterizations increase with wind speed. The North Atlantic region seems, by chance, to have just the right average wind speeds for all the parameterizations to yield similar annual fluxes. However, there is a second reason for smaller inter-parameterization discrepancies in the North Atlantic than in many other ocean basins. The North Atlantic CO2 fluxes are downward in every month. In many regions of the world, the direction of the flux changes between winter and summer, with wind speeds much stronger in the cold season. We show, using the actual formulas, that in such a case the differences between the parameterizations partly cancel out, which is not the case when the flux never changes direction. Both mechanisms coincidentally make the North Atlantic an area where the choice of k parameterization causes very small uncertainty in annual fluxes. On the other hand, this makes North Atlantic data not very useful for choosing the parameterization that most closely represents real fluxes.
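The intersection argument can be made concrete: a quadratic k = a·u² and a cubic k = b·u³ cross at the nonzero wind speed u* = a/b, and near that speed the two forms give nearly identical transfer velocities. A minimal sketch (the coefficient pair used in the usage note is the published Wanninkhof 1992 / Wanninkhof-McGillis 1999 pair, chosen for illustration; the study compares a wider set of formulas):

```python
def quadratic_cubic_intersection(a, b):
    """Nonzero wind speed u* where a*u**2 equals b*u**3, i.e. u* = a/b."""
    return a / b

def relative_difference(a, b, u):
    """Relative difference between the quadratic and cubic forms at wind u."""
    k_quad = a * u**2
    k_cubic = b * u**3
    return abs(k_quad - k_cubic) / k_quad
```

With a = 0.31 and b = 0.0283 the crossing falls near 11 m/s; for a basin whose winds cluster around the crossing speed, annual-mean fluxes from the two forms are nearly the same, while at much weaker or stronger winds the relative difference grows, as described above.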
Application of the CERES Flux-by-Cloud Type Simulator to GCM Output
NASA Technical Reports Server (NTRS)
Eitzen, Zachary; Su, Wenying; Xu, Kuan-Man; Loeb, Norman G.; Sun, Moguo; Doelling, David R.; Bodas-Salcedo, Alejandro
2016-01-01
The CERES Flux By Cloud Type data product provides CERES top-of-atmosphere (TOA) fluxes by region and cloud type. Here, the cloud types are defined by cloud optical depth (τ) and cloud-top pressure (pc), with bins similar to those used by ISCCP (the International Satellite Cloud Climatology Project). This data product has the potential to be a powerful tool for evaluating the clouds produced by climate models, by helping to identify which physical parameterizations have problems (e.g., boundary-layer parameterizations, convective clouds, processes that affect surface albedo). Also, when the flux by cloud type and the frequency of cloud types are used together to evaluate a model, the results can determine whether an unrealistically large or small occurrence of a given cloud type has an important radiative impact for a given region. A simulator of the flux-by-cloud-type product has been applied to three-hourly data from the year 2008 from the UK Met Office HadGEM2-A model, using the Langley Fu-Liou radiative transfer model to obtain TOA SW and LW fluxes.
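The core bookkeeping of a flux-by-cloud-type product can be sketched as a 2D binning: classify each cloudy scene by (τ, pc) on an ISCCP-like grid, then average the TOA flux within each bin. The bin edges and sample values below are illustrative, not the CERES product's:

```python
import numpy as np

# ISCCP-like bin edges for optical depth (tau) and cloud-top pressure (hPa).
tau_edges = np.array([0.0, 1.3, 3.6, 9.4, 23.0, 60.0, 380.0])
pc_edges = np.array([50.0, 180.0, 310.0, 440.0, 560.0, 680.0, 800.0, 1000.0])

def flux_by_cloud_type(tau, pc, flux):
    """Mean flux in each (tau, pc) cloud-type bin; NaN where a bin is empty."""
    i = np.digitize(tau, tau_edges) - 1
    j = np.digitize(pc, pc_edges) - 1
    mean = np.full((len(tau_edges) - 1, len(pc_edges) - 1), np.nan)
    for a in range(mean.shape[0]):
        for b in range(mean.shape[1]):
            sel = (i == a) & (j == b)
            if sel.any():
                mean[a, b] = flux[sel].mean()
    return mean

# Two thin low clouds and one thick high cloud (invented numbers):
tau = np.array([0.5, 0.9, 50.0])
pc = np.array([900.0, 950.0, 200.0])
sw = np.array([100.0, 120.0, 400.0])
m = flux_by_cloud_type(tau, pc, sw)
print(m[0, 6])  # 110.0, the mean SW flux of the two thin low clouds
```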
Tuning a physically-based model of the air-sea gas transfer velocity
NASA Astrophysics Data System (ADS)
Jeffery, C. D.; Robinson, I. S.; Woolf, D. K.
Air-sea gas transfer velocities are estimated for one year using a 1-D upper-ocean model (GOTM) and a modified version of the NOAA-COARE transfer velocity parameterization. Tuning parameters are evaluated with the aim of bringing the physically based NOAA-COARE parameterization in line with current estimates based on simple wind-speed-dependent models derived from bomb-radiocarbon inventories and deliberate tracer release experiments. We suggest that A = 1.3 and B = 1.0, for the sub-layer scaling parameter and the bubble-mediated exchange respectively, are consistent with the global average CO2 transfer velocity k. Using these parameters and a simple 2nd-order polynomial approximation with respect to wind speed, we estimate a global annual average k for CO2 of 16.4 ± 5.6 cm h^-1 when using global mean winds of 6.89 m s^-1 from the NCEP/NCAR Reanalysis 1 (1954-2000). The tuned model can be used to predict the transfer velocity of any gas, with appropriate treatment of the dependence on molecular properties, including the strong solubility dependence of bubble-mediated transfer. For example, an initial estimate of the global average transfer velocity of DMS (a relatively soluble gas) is only 11.9 cm h^-1, whilst for less soluble methane the estimate is 18.0 cm h^-1.
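The qualitative structure behind the DMS/CH4 contrast above is a direct-plus-bubble split of the transfer velocity, with the bubble term suppressed for soluble gases. A minimal sketch in that spirit; the functional form and constants are illustrative assumptions, not the tuned NOAA-COARE model:

```python
# Sketch: k = A*k_direct + B*k_bubble, where bubble-mediated exchange weakens
# as dimensionless solubility grows (a very soluble gas equilibrates inside
# the bubble, capping the bubble flux). The 1/(1+solubility) damping is an
# assumed illustration; A = 1.3 and B = 1.0 follow the abstract.
def transfer_velocity(k_direct, k_bubble_insoluble, solubility, A=1.3, B=1.0):
    k_bubble = k_bubble_insoluble / (1.0 + solubility)
    return A * k_direct + B * k_bubble

k_d, k_b = 9.0, 8.0  # cm/h, invented inputs
k_dms = transfer_velocity(k_d, k_b, solubility=10.0)  # soluble: bubbles ~off
k_ch4 = transfer_velocity(k_d, k_b, solubility=0.03)  # insoluble: bubbles on
print(k_dms < k_ch4)  # True: reproduces the DMS < CH4 ordering qualitatively
```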
NASA Astrophysics Data System (ADS)
Huang, Dong; Liu, Yangang
2014-12-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
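The statistical ingredient named above, the spatial autocorrelation function of a cloud field, can be estimated cheaply via the FFT (Wiener-Khinchin relation). A self-contained sketch on a synthetic field; the abstract gives no specific formula, so this only illustrates the quantity itself:

```python
import numpy as np

def autocorrelation_2d(field):
    """Circular spatial autocorrelation of a 2D field, normalized to 1 at lag 0."""
    f = field - field.mean()
    spec = np.abs(np.fft.fft2(f)) ** 2   # power spectrum
    ac = np.fft.ifft2(spec).real         # Wiener-Khinchin: ifft of the spectrum
    return ac / ac[0, 0]

rng = np.random.default_rng(0)
field = rng.normal(size=(64, 64))        # stand-in for a cloud water field
ac = autocorrelation_2d(field)
print(ac[0, 0])  # 1.0 by construction; off-lag values encode spatial structure
```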
Application of Bayesian Networks to hindcast barrier island morphodynamics
Wilson, Kathleen E.; Adams, Peter N.; Hapke, Cheryl J.; Lentz, Erika E.; Brenner, Owen T.
2015-01-01
We refine a preliminary Bayesian Network by 1) increasing model experience through additional observations, 2) including anthropogenic modification history, and 3) replacing parameterized wave impact values with maximum run-up elevation. Further, we develop and train a pair of generalized models with an additional dataset encompassing a different storm event, which expands the observations beyond our hindcast objective. We compare the skill of the generalized models against the Nor'Ida-specific model formulation, balancing the reduced skill against an expectation of increased transferability. Nor'Ida hindcasts ranged in skill from 0.37 to 0.51 and in accuracy from 65.0 to 81.9%.
How to assess the impact of a physical parameterization in simulations of moist convection?
NASA Astrophysics Data System (ADS)
Grabowski, Wojciech
2017-04-01
A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or a large-eddy simulation model) consists of a fluid flow solver combined with the required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and the dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation describes a novel modeling methodology, piggybacking, that allows the impact of a physical parameterization on cloud dynamics to be studied with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized invigoration of deep convection in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
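The piggybacking idea can be caricatured in a few lines: two schemes see the identical flow, but only the driving scheme feeds back on the dynamics, so the difference between the two condensate fields isolates the microphysical effect from the flow realization. The "dynamics" and "schemes" below are toy placeholders, not a cloud model:

```python
# Toy piggybacking: driver scheme D alters the flow; rider scheme P is
# purely diagnostic on the same flow. Swapping the roles changes the flow
# realization itself, which is exactly what parallel simulations conflate.
def run(piggyback_swapped=False):
    w = 1.0            # toy updraft speed
    qd = qp = 0.0      # condensate tallied by driver and piggybacker
    history = []
    scheme_fast = lambda w: 0.10 * w   # condensation rate, scheme 1
    scheme_slow = lambda w: 0.05 * w   # condensation rate, scheme 2
    driver, rider = ((scheme_slow, scheme_fast) if piggyback_swapped
                     else (scheme_fast, scheme_slow))
    for _ in range(10):
        qd += driver(w)
        qp += rider(w)
        w += 0.3 * driver(w)  # ONLY the driver's latent heating alters the flow
        history.append(w)
    return qd, qp, history

qd1, qp1, h1 = run()
qd2, qp2, h2 = run(piggyback_swapped=True)
print(h1 != h2)  # True: which scheme drives changes the flow trajectory
```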
A Method to Analyze How Various Parts of Clouds Influence Each Other's Brightness
NASA Technical Reports Server (NTRS)
Varnai, Tamas; Marshak, Alexander; Lau, William (Technical Monitor)
2001-01-01
This paper proposes a method for obtaining new information on 3D radiative effects that arise from horizontal radiative interactions in heterogeneous clouds. Unlike current radiative transfer models, it can not only calculate how 3D effects change radiative quantities at any given point, but can also determine which areas contribute to these 3D effects, to what degree, and through what mechanisms. After describing the proposed method, the paper illustrates its new capabilities both for detailed case studies and for the statistical processing of large datasets. Because the proposed method makes it possible, for the first time, to link a particular change in cloud properties to the resulting 3D effect, in future studies it can be used to develop new radiative transfer parameterizations that would consider 3D effects in practical applications currently limited to 1D theory, such as remote sensing of cloud properties and dynamical cloud modeling.
The application of depletion curves for parameterization of subgrid variability of snow
C. H. Luce; D. G. Tarboton
2004-01-01
Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snow-covered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...
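A depletion curve of the kind described maps element-average snow water equivalent (SWE), scaled by its seasonal maximum, to fractional snow-covered area. The concave power-law shape below is an assumed illustration; Luce and Tarboton derive the curve from subgrid SWE variability rather than prescribing it:

```python
# Hypothetical depletion curve: snow cover persists (fraction stays high)
# even as element-average SWE declines, because deep drifts melt out last.
def snow_covered_fraction(swe, swe_max, shape=2.0):
    if swe <= 0.0:
        return 0.0
    ratio = min(swe / swe_max, 1.0)
    return ratio ** (1.0 / shape)  # concave for shape > 1

print(snow_covered_fraction(0.0, 100.0))    # 0.0
print(snow_covered_fraction(25.0, 100.0))   # 0.5: quarter of peak SWE, half cover
print(snow_covered_fraction(100.0, 100.0))  # 1.0
```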
Heat transfer and vascular cambium necrosis in the boles of trees during surface fires
M. B. Dickinson
2002-01-01
Heat-transfer and cell-survival models are used to link surface fire behavior with vascular cambium necrosis from heating by flames. Vascular cambium cell survival was predicted with a numerical model based on the kinetics of protein denaturation and parameterized with data from the literature. Cell survival was predicted for vascular cambium temperature regimes...
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Liou, Kuo-Nan; Takano, Yoshihide
1993-01-01
The impact of using phase functions for spherical droplets and hexagonal ice crystals to analyze radiances from cirrus is examined. Adding-doubling radiative transfer calculations are employed to compute radiances for different cloud thicknesses and heights over various backgrounds. These radiances are used to develop parameterizations of top-of-the-atmosphere visible reflectance and IR emittance using tables of reflectances as a function of cloud optical depth, viewing and illumination angles, and microphysics. This parameterization, which includes Rayleigh scattering, ozone absorption, variable cloud height, and an anisotropic surface reflectance, reproduces the computed top-of-the-atmosphere reflectances with an accuracy of +/- 6 percent for four microphysical models: 10-micron water droplet, small symmetric crystal, cirrostratus, and cirrus uncinus. The accuracy is twice that of previous models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Liu, Yangang
2014-12-18
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1990-01-01
Broad-band parameterizations for atmospheric radiative transfer were developed for clear and cloudy skies. These were in the shortwave and longwave regions of the spectrum. These models were compared with other models in an international effort called ICRCCM (Intercomparison of Radiation Codes for Climate Models). The radiation package developed was used for simulations of a General Circulation Model (GCM). A synopsis is provided of the research accomplishments in the two areas separately. Details are available in the published literature.
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and the western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for its good management. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and the modelling and application issues of three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce - Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criterion and the estimated variance. Model parameterization had more influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA with the optimal approach (335% increase in the variance), and GADA/g-GADA with the GADA parameterization (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use, as the best-fitting methods have the most barriers to application in terms of data and software requirements. PMID:25853472
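The selection criterion named above, the finite-sample corrected Akaike Information Criterion (AICc), is simple to compute and makes the fit-versus-complexity trade-off concrete. A sketch with invented log-likelihoods (not the paper's fits), on the paper's sample size of n = 84 trees:

```python
# AICc = AIC + small-sample correction; lower is better. A richer
# parameterization must gain enough likelihood to pay for its parameters.
def aicc(log_likelihood, n_params, n_obs):
    aic = 2 * n_params - 2 * log_likelihood
    correction = 2 * n_params * (n_params + 1) / (n_obs - n_params - 1)
    return aic + correction

# Hypothetical fits: 5 extra parameters buy only 1.5 log-likelihood units.
simple = aicc(log_likelihood=-210.0, n_params=4, n_obs=84)
rich = aicc(log_likelihood=-208.5, n_params=9, n_obs=84)
print(simple < rich)  # True: the simpler parameterization is preferred here
```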
NASA Technical Reports Server (NTRS)
Liu, Xu; Smith, William L.; Zhou, Daniel K.; Larar, Allen
2005-01-01
Modern infrared satellite sensors such as the Atmospheric Infrared Sounder (AIRS), the Cross-track Infrared Sounder (CrIS), the Tropospheric Emission Spectrometer (TES), the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and the Infrared Atmospheric Sounding Interferometer (IASI) are capable of providing high spatial and spectral resolution infrared spectra. To fully exploit the vast amount of spectral information from these instruments, super fast radiative transfer models are needed. This paper presents a novel radiative transfer model based on principal component analysis. Instead of predicting channel radiance or transmittance spectra directly, the Principal Component-based Radiative Transfer Model (PCRTM) predicts the Principal Component (PC) scores of these quantities. This prediction ability leads to significant savings in computational time. The parameterization of the PCRTM model is derived from properties of PC scores and instrument line shape functions. The PCRTM is very accurate and flexible. Due to its high speed and compressed spectral information format, it has great potential for super fast one-dimensional physical retrievals and for Numerical Weather Prediction (NWP) large-volume radiance data assimilation applications. The model has been successfully developed for the National Polar-orbiting Operational Environmental Satellite System Airborne Sounder Testbed - Interferometer (NAST-I) and AIRS instruments. The PCRTM model performs monochromatic radiative transfer calculations and is able to include multiple scattering calculations to account for clouds and aerosols.
NASA Astrophysics Data System (ADS)
Roy, M.; Rios, D.; Cosburn, K.
2017-12-01
Shear between the moving lithosphere and the underlying asthenospheric mantle can produce dynamic pressure gradients that control patterns of melt migration by percolative flow. Within continental interiors these pressure gradients may be large enough to focus melt migration into zones of low dynamic pressure and thus influence the surface distribution of magmatism. We build upon previous work to show that for a lithospheric keel that protrudes into the "mantle wind," spatially-variable melt migration can lead to spatially-variable thermal weakening of the lithosphere. Our models treat advective heat transfer in porous flow in the limit that heat transfer between the melt and surrounding matrix dominates over conductive heat transfer within either the melt or the solid alone. The models are parameterized by a heat transfer coefficient that we interpret to be related to the efficiency of heat transfer across the fluid-rock interface, related to the geometry and distribution of porosity. Our models quantitatively assess the viability of spatially variable thermal-weakening caused by melt-migration through continental regions that are characterized by variations in lithospheric thickness. We speculate upon the relevance of this process in producing surface patterns of Cenozoic magmatism and heatflow at the Colorado Plateau in the western US.
Objective calibration of regional climate models
NASA Astrophysics Data System (ADS)
Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.
2012-12-01
Climate models are subject to high parametric uncertainty induced by poorly confined parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often obscure model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, owing to the computational constraints posed by the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for optimization according to their influence on model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation, and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations, while leading to an additional reduction of the model error.
The performance range captured is much wider than that sampled with the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool that could become standard procedure after introducing new model implementations, or after a spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving the parameterization packages of global climate models.
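The quadratic-metamodel step above can be sketched end to end: fit a second-order response surface to a small ensemble of runs, then minimize the cheap surrogate instead of the full model. The "model error" function below is a hypothetical stand-in for the expensive climate model score; Neelin et al. fit real RCM output:

```python
import numpy as np

rng = np.random.default_rng(1)

def model_error(p):  # hypothetical expensive model score, 2 parameters
    return (p[0] - 0.3) ** 2 + 2.0 * (p[1] + 0.1) ** 2 + 0.5

# Small ensemble of parameter perturbations (cf. the 20-50 runs above).
P = rng.uniform(-1, 1, size=(25, 2))
y = np.array([model_error(p) for p in P])

# Quadratic basis: 1, p1, p2, p1^2, p2^2, p1*p2 -> least-squares metamodel.
X = np.column_stack([np.ones(len(P)), P[:, 0], P[:, 1],
                     P[:, 0] ** 2, P[:, 1] ** 2, P[:, 0] * P[:, 1]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Minimize the fitted quadratic analytically: set its gradient to zero.
b = coef[1:3]
H = np.array([[2 * coef[3], coef[5]], [coef[5], 2 * coef[4]]])
p_opt = np.linalg.solve(H, -b)
print(np.round(p_opt, 3))  # recovers the optimum of the toy error surface
```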
NASA Astrophysics Data System (ADS)
Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna
2018-01-01
We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by the Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used: this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.
Relativistic three-dimensional Lippmann-Schwinger cross sections for space radiation applications
NASA Astrophysics Data System (ADS)
Werneth, C. M.; Xu, X.; Norman, R. B.; Maung, K. M.
2017-12-01
Radiation transport codes require accurate nuclear cross sections to compute particle fluences inside shielding materials. The Tripathi semi-empirical reaction cross section, which includes over 60 parameters tuned to nucleon-nucleus (NA) and nucleus-nucleus (AA) data, has been used in many of the world's best-known transport codes. Although this parameterization fits reaction cross section data well, the predictive capability of any parameterization is questionable when it is used beyond the range of the data to which it was tuned. Using uncertainty analysis, it is shown that a relativistic three-dimensional Lippmann-Schwinger (LS3D) equation model based on Multiple Scattering Theory (MST), which uses 5 parameterizations (3 fundamental parameterizations to nucleon-nucleon (NN) data and 2 nuclear charge density parameterizations), predicts NA and AA reaction cross sections as well as the Tripathi cross section parameterization does for reactions in which the kinetic energy of the projectile in the laboratory frame (T_Lab) is greater than 220 MeV/n. The relativistic LS3D model has the additional advantage of being able to predict highly accurate total and elastic cross sections. Consequently, it is recommended that the relativistic LS3D model be used for space radiation applications in which T_Lab > 220 MeV/n.
NASA Technical Reports Server (NTRS)
Betancourt, R. Morales; Lee, D.; Oreopoulos, L.; Sud, Y. C.; Barahona, D.; Nenes, A.
2012-01-01
The salient features of mixed-phase and ice clouds in a GCM cloud scheme are examined using the ice formation parameterizations of Liu and Penner (LP) and Barahona and Nenes (BN). The performance of the LP and BN ice nucleation parameterizations was assessed in the GEOS-5 AGCM using the McRAS-AC cloud microphysics framework in single-column mode. Four-dimensional assimilated data from the intensive observation period of the ARM TWP-ICE campaign were used to drive the fluxes and lateral forcing. Simulation experiments were established to test the impact of each parameterization on the resulting cloud fields. Three commonly used IN spectra were utilized in the BN parameterization to describe the availability of IN for heterogeneous ice nucleation. The results show large similarities in the cirrus cloud regime between all the schemes tested, in which ice crystal concentrations were within a factor of 10 regardless of the parameterization used. In mixed-phase clouds there are some persistent differences in cloud particle number concentration and size, as well as in cloud fraction, ice water mixing ratio, and ice water path. Contact freezing in the simulated mixed-phase clouds contributed to transferring liquid to ice efficiently, so that on average the clouds were fully glaciated at T ~ 260 K, irrespective of the ice nucleation parameterization used. Comparisons of simulated ice water path to available satellite-derived observations were also performed, finding that all the schemes tested with the BN parameterization predicted average values of IWP within ±15% of the observations.
NASA Astrophysics Data System (ADS)
Sanyal, Tanmoy; Shell, M. Scott
2016-07-01
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
Parameterized Cross Sections for Pion Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.
2000-01-01
An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and the output particle spectrum is required for a given input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of different cross section parameterizations. Parameterizations of neutral and charged pion cross sections are then provided that give a very accurate description of the experimental data. Lorentz-invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.
Modeling the risk of water pollution by pesticides from imbalanced data.
Trajanov, Aneta; Kuzmanovski, Vladimir; Real, Benoit; Perreau, Jonathan Marks; Džeroski, Sašo; Debeljak, Marko
2018-04-30
The pollution of ground and surface waters with pesticides is a serious ecological issue that requires adequate treatment. Most of the existing water pollution models are mechanistic mathematical models. While they have made a significant contribution to understanding the transfer processes, they face the problem of validation because of their complexity, the user subjectivity in their parameterization, and the lack of empirical data for validation. In addition, the data describing water pollution with pesticides are, in most cases, very imbalanced. This is due to strict regulations for pesticide applications, which lead to only a few pollution events. In this study, we propose the use of data mining to build models for assessing the risk of water pollution by pesticides in field-drained outflow water. Unlike the mechanistic models, the models generated by data mining are based on easily obtainable empirical data, while the parameterization of the models is not influenced by the subjectivity of ecological modelers. We used empirical data from field trials at the La Jaillière experimental site in France and applied the random forests algorithm to build predictive models that predict "risky" and "not-risky" pesticide application events. To address the problems of the imbalanced classes in the data, cost-sensitive learning and different measures of predictive performance were used. Despite the high imbalance between risky and not-risky application events, we managed to build predictive models that make reliable predictions. The proposed modeling approach can be easily applied to other ecological modeling problems where we encounter empirical data with highly imbalanced classes.
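The imbalanced-class issue described above is why the study pairs cost-sensitive learning with class-aware performance measures: with only a few "risky" events, raw accuracy rewards a model that never predicts pollution. A minimal sketch of that pitfall using balanced accuracy (one of several suitable measures; the data and labels here are invented, and the paper's actual classifier is a random forest):

```python
# Balanced accuracy = mean of per-class recalls; immune to class imbalance.
def balanced_accuracy(y_true, y_pred, positive="risky"):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 0.5 * (sensitivity + specificity)

# 2 risky events among 10: always predicting "not-risky" looks good on raw
# accuracy but is useless for catching pollution events.
y_true = ["risky", "risky"] + ["not-risky"] * 8
always_safe = ["not-risky"] * 10
raw = sum(t == p for t, p in zip(y_true, always_safe)) / len(y_true)
print(raw)                                     # 0.8
print(balanced_accuracy(y_true, always_safe))  # 0.5
```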
Parameterization guidelines and considerations for hydrologic models
R. W. Malone; G. Yagow; C. Baffaut; M.W Gitau; Z. Qi; Devendra Amatya; P.B. Parajuli; J.V. Bonta; T.R. Green
2015-01-01
 Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) are important and difficult tasks. An exponential...
Alternative Parameterizations for Cluster Editing
NASA Astrophysics Data System (ADS)
Komusiewicz, Christian; Uhlmann, Johannes
Given an undirected graph G and a nonnegative integer k, the NP-hard Cluster Editing problem asks whether G can be transformed into a disjoint union of cliques by applying at most k edge modifications. In the field of parameterized algorithmics, Cluster Editing has almost exclusively been studied parameterized by the solution size k. Contrastingly, in many real-world instances it can be observed that the parameter k is not really small. This observation motivates our investigation of parameterizations of Cluster Editing different from the solution size k. Our results are as follows. Cluster Editing is fixed-parameter tractable with respect to the parameter "size of a minimum cluster vertex deletion set of G", a typically much smaller parameter than k. Cluster Editing remains NP-hard on graphs with maximum degree six. A restricted but practically relevant version of Cluster Editing is fixed-parameter tractable with respect to the combined parameter "number of clusters in the target graph" and "maximum number of modified edges incident to any vertex in G". Many of our results also transfer to the NP-hard Cluster Deletion problem, where only edge deletions are allowed.
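The target object of Cluster Editing, a disjoint union of cliques, has a handy characterization: a graph is a cluster graph iff it contains no induced path on three vertices (P3). A brute-force check of that condition, fine for toy instances (solving Cluster Editing itself is NP-hard, as the abstract notes):

```python
from itertools import combinations

def is_cluster_graph(vertices, edges):
    """True iff every connected component is a clique (no induced P3)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # An induced P3 is u-v-w with u,w both adjacent to v but not to each other.
    for v in vertices:
        for u, w in combinations(adj[v], 2):
            if w not in adj[u]:
                return False
    return True

print(is_cluster_graph("abcd", [("a", "b"), ("c", "d")]))  # True: two cliques
print(is_cluster_graph("abc", [("a", "b"), ("b", "c")]))   # False: a P3
# One edit (adding edge a-c) turns the P3 into a triangle, a cluster graph:
print(is_cluster_graph("abc", [("a", "b"), ("b", "c"), ("a", "c")]))  # True
```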
NASA Technical Reports Server (NTRS)
Molthan, A. L.; Haynes, J. A.; Jedlovec, G. L.; Lapenta, W. M.
2009-01-01
As operational numerical weather prediction is performed at increasingly finer spatial resolution, precipitation traditionally represented by sub-grid scale parameterization schemes is now being calculated explicitly through the use of single- or multi-moment, bulk water microphysics schemes. As computational resources grow, the real-time application of these schemes is becoming available to a broader audience, ranging from national meteorological centers to their component forecast offices. A need for improved quantitative precipitation forecasts has been highlighted by the United States Weather Research Program, which advised that gains in forecasting skill will draw upon improved simulations of clouds and cloud microphysical processes. Investments in space-borne remote sensing have produced the NASA A-Train of polar orbiting satellites, specially equipped to observe and catalog cloud properties. The NASA CloudSat instrument, a recent addition to the A-Train and the first 94 GHz radar system operated in space, provides a unique opportunity to compare observed cloud profiles to their modeled counterparts. Comparisons are available through the use of a radiative transfer model (QuickBeam), which simulates 94 GHz radar returns based on the microphysics of cloudy model profiles and the prescribed characteristics of their constituent hydrometeor classes. CloudSat observations of snowfall are presented for a case in the central United States, with comparisons made to precipitating clouds as simulated by the Weather Research and Forecasting Model and the Goddard single-moment microphysics scheme. An additional forecast cycle is performed with a temperature-based parameterization of the snow distribution slope parameter, with comparisons to CloudSat observations provided through the QuickBeam simulator.
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
Quality by design: scale-up of freeze-drying cycles in pharmaceutical industry.
Pisano, Roberto; Fissore, Davide; Barresi, Antonello A; Rastelli, Massimo
2013-09-01
This paper shows the application of mathematical modeling to scale up a cycle, developed with lab-scale equipment, to two different production units. The method is based on a simplified model of the process, parameterized with experimentally determined heat and mass transfer coefficients. In this study, the overall heat transfer coefficient between product and shelf was determined using a gravimetric procedure, while the dried product resistance to vapor flow was determined through the pressure rise test technique. Once the model parameters were determined, the freeze-drying cycle of a parenteral product was developed via dynamic design space for a lab-scale unit. Then, mathematical modeling was used to scale up the above cycle to the production equipment. In this way, appropriate values were determined for the processing conditions, which allow the replication, in the industrial unit, of the product dynamics observed in the small-scale freeze-dryer. This study also showed how inter-vial variability, as well as model parameter uncertainty, can be taken into account during scale-up calculations.
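A minimal pseudo-steady sketch of the kind of parameterized model described, with Kv the vial heat transfer coefficient and Rp the dried-layer resistance; the functional form and the numbers below are assumptions for illustration, not the authors' calibrated model.

```python
def sublimation_rates(Kv, Rp, T_shelf, T_product, p_ice, p_chamber):
    """Pseudo-steady freeze-drying balance (assumed form):
    heat flux from shelf  q  = Kv * (T_shelf - T_product)    [W/m^2]
    vapor mass flux       Jw = (p_ice - p_chamber) / Rp      [kg/(m^2 s)]
    At steady state q ~ dHs * Jw (dHs = heat of sublimation), which is
    the coupling that lets shelf temperature and chamber pressure be
    translated between lab-scale and production units.
    """
    q = Kv * (T_shelf - T_product)
    Jw = (p_ice - p_chamber) / Rp
    return q, Jw

# hypothetical values: Kv in W/(m^2 K), Rp in m/s, temperatures in K,
# pressures in Pa
q, Jw = sublimation_rates(20.0, 1e5, 263.15, 243.15, 100.0, 10.0)
```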
Simple liquid models with corrected dielectric constants
Fennell, Christopher J.; Li, Libo; Dill, Ken A.
2012-01-01
Molecular simulations often use explicit-solvent models. Sometimes explicit-solvent models can give inaccurate values for basic liquid properties, such as the density, heat capacity, and permittivity, as well as inaccurate values for molecular transfer free energies. Such errors have motivated the development of more complex solvents, such as polarizable models. We describe an alternative here. We give new fixed-charge models of solvents for molecular simulations – water, carbon tetrachloride, chloroform and dichloromethane. Normally, such solvent models are parameterized to agree with experimental values of the neat liquid density and enthalpy of vaporization. Here, in addition to those properties, our parameters are chosen to give the correct dielectric constant. We find that these new parameterizations also happen to give better values for other properties, such as the self-diffusion coefficient. We believe that parameterizing fixed-charge solvent models to fit experimental dielectric constants may provide better and more efficient ways to treat solvents in computer simulations. PMID:22397577
Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shepard, Ron; Brozell, Scott R.; Gidofalvi, Gergely
2014-08-14
Practical algorithms are presented for the parameterization of orthogonal matrices Q ∈ R^(m×n) in terms of the minimal number of essential parameters (φ). Both square n = m and rectangular n < m situations are examined. Two separate kinds of parameterizations are considered, one in which the individual columns of Q are distinct, and the other in which only Span(Q) is significant. The latter is relevant to chemical applications such as the representation of the arc factors in the multifacet graphically contracted function method and the representation of orbital coefficients in SCF and DFT methods. The parameterizations are represented formally using products of elementary Householder reflector matrices. Standard mathematical libraries, such as LAPACK, may be used to perform the basic low-level factorization, reduction, and other algebraic operations. Some care must be taken with the choice of phase factors in order to ensure stability and continuity. The transformation of gradient arrays between the Q and (φ) parameterizations is also considered. Operation counts for all factorizations and transformations are determined. Numerical results are presented which demonstrate the robustness, stability, and accuracy of these algorithms.
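The counting and the Householder construction can be sketched as follows; this is a simplified illustration, not the paper's LAPACK-based implementation. An m×n matrix Q with distinct orthonormal columns has φ = nm − n(n+1)/2 essential parameters, and any such Q can be written as a product of elementary reflectors applied to the first n columns of the identity.

```python
import numpy as np

def essential_params(m, n):
    # nm - n(n+1)/2 free parameters for distinct orthonormal columns
    return m * n - n * (n + 1) // 2

def q_from_reflectors(vs, m, n):
    # Q = H(v1) H(v2) ... H(vk) E, where E is the first n columns of I
    # and H(v) = I - 2 v v^T for a unit vector v. Since each reflector
    # is orthogonal, the columns of Q are orthonormal by construction.
    Q = np.eye(m)[:, :n]
    for v in reversed(vs):
        Q = Q - 2.0 * np.outer(v, v @ Q)
    return Q

rng = np.random.default_rng(0)
vs = [u / np.linalg.norm(u) for u in rng.standard_normal((3, 5))]
Q = q_from_reflectors(vs, 5, 2)
# Q.T @ Q is the 2x2 identity regardless of the reflector vectors
```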
Visualization in hydrological and atmospheric modeling and observation
NASA Astrophysics Data System (ADS)
Helbig, C.; Rink, K.; Kolditz, O.
2013-12-01
In recent years, visualization of geoscientific and climate data has become increasingly important due to challenges such as climate change, flood prediction or the development of water management schemes for arid and semi-arid regions. Models for simulations based on such data often have a large number of heterogeneous input data sets, ranging from remote sensing data and geometric information (such as GPS data) to sensor data from specific observation sites. Data integration using such information is not straightforward, and a large number of potential problems may occur due to artifacts, inconsistencies between data sets, or errors from incorrectly calibrated or stained measurement devices. Algorithms to automatically detect such problems are often numerically expensive or difficult to parameterize. In contrast, combined visualization of various data sets is often a surprisingly efficient means for an expert to detect artifacts or inconsistencies as well as to discuss properties of the data. Therefore, the development of general visualization strategies for atmospheric or hydrological data will often support researchers during assessment and preprocessing of the data for model setup. When investigating specific phenomena, visualization is vital for assessing the progress of the ongoing simulation during runtime as well as for evaluating the plausibility of the results. We propose a number of such strategies based on established visualization methods that (1) are applicable to a large range of different types of data sets, (2) are computationally inexpensive enough to allow application to time-dependent data, and (3) can be easily parameterized based on the specific focus of the research. Examples include the highlighting of certain aspects of complex data sets using, for example, an application-dependent parameterization of glyphs, iso-surfaces or streamlines.
In addition, we employ basic rendering techniques allowing affine transformations, changes in opacity, and variation of transfer functions. We found that similar strategies can be applied to hydrological and atmospheric data, such as the use of streamlines for the visualization of wind or fluid flow, or iso-surfaces as indicators of groundwater recharge levels in the subsurface or levels of humidity in the atmosphere. We applied these strategies to a wide range of hydrological and climate applications, such as groundwater flow, the distribution of chemicals in water bodies, the development of convection cells in the atmosphere, or heat flux at the earth's surface. The results have been evaluated in discussions with experts from hydrogeology and meteorology.
Agishev, Ravil; Comerón, Adolfo; Rodriguez, Alejandro; Sicard, Michaël
2014-05-20
In this paper, we show a renewed approach to the generalized methodology for atmospheric lidar assessment, which uses dimensionless parameterization as a core component. It is based on a series of our previous works in which the problem of universal parameterization across many lidar technologies was described and analyzed from different points of view. The modernized dimensionless parameterization concept, applied to relatively new silicon photomultiplier detectors (SiPMs) and traditional photomultiplier (PMT) detectors for remote-sensing instruments, allows predicting lidar receiver performance in the presence of sky background. The renewed approach can be widely used to evaluate a broad range of lidar system capabilities for a variety of lidar remote-sensing applications, as well as to serve as a basis for selecting appropriate lidar system parameters for a specific application. Such a modernized methodology provides a generalized, uniform, and objective approach for evaluating a broad range of lidar types and systems (aerosol, Raman, DIAL) operating on different targets (backscatter or topographic) and under intense sky background conditions. It can be used within the lidar community to compare different lidar instruments.
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
Hindcasting the Madden‐Julian Oscillation With a New Parameterization of Surface Heat Fluxes
Wang, Jingfeng; Lin, Wenshi
2017-01-01
Abstract The recently developed maximum entropy production (MEP) model, an alternative parameterization of surface heat fluxes, is incorporated into the Weather Research and Forecasting (WRF) model. A pair of WRF cloud‐resolving experiments (5 km grids) using the bulk transfer model (WRF default) and the MEP model of surface heat fluxes are performed to hindcast the October Madden‐Julian oscillation (MJO) event observed during the 2011 Dynamics of the MJO (DYNAMO) field campaign. The simulated surface latent and sensible heat fluxes in the MEP and bulk transfer model runs are in general consistent with in situ observations from two research vessels. Compared to the bulk transfer model, the convection envelope is strengthened in the MEP run and shows a more coherent propagation over the Maritime Continent. The simulated precipitable water in the MEP run is in closer agreement with the observations. Precipitation in the MEP run is enhanced during the active phase of the MJO with significantly reduced regional dry and wet biases. Large‐scale ocean evaporation is stronger in the MEP run leading to stronger boundary layer moistening to the east of the convection center, which facilitates the eastward propagation of the MJO. PMID:29399269
NASA Astrophysics Data System (ADS)
Loose, B.; Kelly, R. P.; Bigdeli, A.; Williams, W.; Krishfield, R.; Rutgers van der Loeff, M.; Moran, S. B.
2017-05-01
We present 34 profiles of radon-deficit from the ice-ocean boundary layer of the Beaufort Sea. Including these 34, there are presently 58 published radon-deficit estimates of air-sea gas transfer velocity (k) in the Arctic Ocean; 52 of these estimates were derived from water covered by 10% sea ice or more. The average value of k collected since 2011 is 4.0 ± 1.2 m d-1. This exceeds the quadratic wind speed prediction of weighted kws = 2.85 m d-1 with mean-weighted wind speed of 6.4 m s-1. We show how ice cover changes the mixed-layer radon budget, and yields an "effective gas transfer velocity." We use these 58 estimates to statistically evaluate the suitability of a wind speed parameterization for k, when the ocean surface is ice covered. Whereas the six profiles taken from the open ocean indicate a statistically good fit to wind speed parameterizations, the same parameterizations could not reproduce k from the sea ice zone. We conclude that techniques for estimating k in the open ocean cannot be similarly applied to determine k in the presence of sea ice. The magnitude of k through gaps in the ice may reach high values as ice cover increases, possibly as a result of focused turbulence dissipation at openings in the free surface. These 58 profiles are presently the most complete set of estimates of k across seasons and variable ice cover; as dissolved tracer budgets they reflect air-sea gas exchange with no impact from air-ice gas exchange.
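For reference, the quadratic wind speed parameterization that the radon profiles are tested against has the generic form sketched below; the coefficient and the Schmidt-number exponent are typical published values (assumptions here), not the exact weighting used in the study.

```python
def k_wind_quadratic(u10, sc=660.0, a=0.251):
    # Wanninkhof-style quadratic: k660 = a * U10^2 in cm/h, with
    # Schmidt-number scaling k = k660 * (Sc/660)^(-1/2).
    # a = 0.251 (cm/h per (m/s)^2) is an assumed typical coefficient.
    return a * u10**2 * (sc / 660.0) ** -0.5

def cm_per_h_to_m_per_d(k):
    # unit conversion to match the m/d values quoted in the abstract
    return k * 24.0 / 100.0

k = cm_per_h_to_m_per_d(k_wind_quadratic(6.4))
# roughly 2.5 m/d at 6.4 m/s, the same order as the open-ocean kws
# quoted above; the point of the study is that k in the sea ice zone
# does not follow this wind-only curve
```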
Parameterization guidelines and considerations for hydrologic models
USDA-ARS?s Scientific Manuscript database
Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) is an important and difficult task. An exponential increase in literature has been devoted to the use and develo...
A Fast Vector Radiative Transfer Model for Atmospheric and Oceanic Remote Sensing
NASA Astrophysics Data System (ADS)
Ding, J.; Yang, P.; King, M. D.; Platnick, S. E.; Meyer, K.
2017-12-01
A fast vector radiative transfer model is developed in support of atmospheric and oceanic remote sensing. This model is capable of simulating the Stokes vector observed at the top of the atmosphere (TOA) and the terrestrial surface by considering absorption, scattering, and emission. The gas absorption is parameterized in terms of atmospheric gas concentrations, temperature, and pressure. The parameterization scheme combines a regression method and the correlated-K distribution method, and integrates easily with multiple scattering computations. The approach is more than four orders of magnitude faster than a line-by-line radiative transfer model, with errors of less than 0.5% in terms of transmissivity. A two-component approach is utilized to solve the vector radiative transfer equation (VRTE). The VRTE solver separates the phase matrices of aerosol and cloud into forward and diffuse parts, and thus the solution is also separated. The forward solution can be expressed by a semi-analytical equation based on the small-angle approximation, and serves as the source of the diffuse part. The diffuse part is solved by the adding-doubling method. The adding-doubling implementation is computationally efficient because the diffuse component needs far fewer spherical function expansion terms. The simulated Stokes vectors at both the TOA and the surface have accuracy comparable to counterparts based on numerically rigorous methods.
Toward computational models of magma genesis and geochemical transport in subduction zones
NASA Astrophysics Data System (ADS)
Katz, R.; Spiegelman, M.
2003-04-01
The chemistry of material erupted from subduction-related volcanoes records important information about the processes that lead to its formation at depth in the Earth. Self-consistent numerical simulations provide a useful tool for interpreting these data, as they can explore the non-linear feedbacks between processes that control the generation and transport of magma. A model capable of addressing such issues should include three critical components: (1) a variable-viscosity solid flow solver with smooth and accurate pressure and velocity fields, (2) a parameterization of mass transfer reactions between the solid and fluid phases, and (3) a consistent fluid flow and reactive transport code. We report on progress on each of these parts. To handle variable-viscosity solid flow in the mantle wedge, we are adapting a Patankar-based FAS multigrid scheme developed by Albers (2000, J. Comp. Phys.). The pressure field in this scheme is the solution to an elliptic equation on a staggered grid. Thus we expect computed pressure fields to have smooth gradient fields suitable for porous flow calculations, unlike those of commonly used penalty-method schemes. Use of a temperature- and strain-rate-dependent mantle rheology has been shown to have important consequences for the pattern of flow and the temperature structure in the wedge. For computing thermal structure we present a novel scheme that is a hybrid of Crank-Nicolson (CN) and Semi-Lagrangian (SL) methods. We have tested the SLCN scheme on advection across a broad range of Peclet numbers and show the results. This scheme is also useful for low-diffusivity chemical transport. We also describe our parameterization of hydrous mantle melting [Katz et al., G3, 2002, in review]. This parameterization is designed to capture the melting behavior of peridotite-water systems over parameter ranges relevant to subduction.
The parameterization incorporates data and intuition gained from laboratory experiments and thermodynamic calculations yet it remains flexible and computationally efficient. Given accurate solid-flow fields, a parameterization of hydrous melting and a method for calculating thermal structure (enforcing energy conservation), the final step is to integrate these components into a consistent framework for reactive-flow and chemical transport in deformable porous media. We present preliminary results for reactive flow in 2-D static and upwelling columns and discuss possible mechanical and chemical consequences of open system reactive melting with application to arcs.
A simple method to predict body temperature of small reptiles from environmental temperature.
Vickers, Mathew; Schwarzkopf, Lin
2016-05-01
To study behavioral thermoregulation, it is useful to use thermal sensors and physical models to collect environmental temperatures that are used to predict organism body temperature. Many techniques involve expensive or numerous types of sensors (cast copper models, or temperature, humidity, radiation, and wind speed sensors) to collect the microhabitat data necessary to predict body temperatures. The expense and diversity of requisite sensors can limit sampling resolution and the accessibility of these methods. We compare body temperature predictions of small lizards from iButtons, DS18B20 sensors, and simple copper models, in both laboratory and natural conditions. Our aim was to develop an inexpensive yet accurate method for body temperature prediction. Each method was applicable given appropriate parameterization of the heat transfer equation used. The simplest and cheapest method was DS18B20 sensors attached to a small recording computer. There was little if any deficit in precision or accuracy compared to other published methods. We show how the heat transfer equation can be parameterized, and how it can also be used to predict body temperature from historically collected data, allowing strong comparisons between current and previous environmental temperatures using the most modern techniques. Our simple method uses very cheap sensors and loggers to extensively sample habitat temperature, improving our understanding of microhabitat structure and thermal variability with respect to small ectotherms. While our method was quite precise, we feel any potential loss in accuracy is offset by the increase in sample resolution, important as it is increasingly apparent that, particularly for small ectotherms, habitat thermal heterogeneity is the strongest influence on transient body temperature.
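Heat transfer equations for small ectotherms are commonly first-order Newtonian exchange models; the sketch below shows that generic form with an assumed time constant, not the authors' fitted parameterization.

```python
import math

def next_body_temp(t_body, t_env, dt, tau):
    # Newtonian heat exchange: dTb/dt = (Te - Tb)/tau, with the exact
    # update over a step dt:
    #   Tb(t+dt) = Te + (Tb - Te) * exp(-dt/tau)
    # tau (s) lumps convection, conduction, and radiation into a single
    # constant fitted for the animal or physical model in question.
    return t_env + (t_body - t_env) * math.exp(-dt / tau)

# drive a 20 C body toward a 35 C environment with an assumed
# tau = 300 s, sampling every 60 s as a logger might
tb = 20.0
for _ in range(100):
    tb = next_body_temp(tb, 35.0, 60.0, 300.0)
# tb relaxes toward the environmental temperature
```

Because the update is exact for constant Te, the same function can be run over historically logged environmental temperatures to reconstruct past body temperatures, as the abstract describes.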
Modelling storm development and the impact when introducing waves, sea spray and heat fluxes
NASA Astrophysics Data System (ADS)
Wu, Lichuan; Rutgersson, Anna; Sahlée, Erik
2015-04-01
In high wind speed conditions, sea spray generated by intense wave breaking has a large influence on the wind stress and heat fluxes. Measurements show that the drag coefficient decreases at high wind speeds. The sea spray generation function (SSGF), an important term in wind stress parameterizations at high wind speeds, is usually treated as a function of wind speed or friction velocity. In this study, we introduce a wave-state-dependent SSGF and a wave-age-dependent Charnock number into a high-wind-speed wind stress parameterization (Kudryavtsev et al., 2011; 2012). The proposed wind stress parameterization and the sea spray heat flux parameterization of Andreas et al. (2014) were applied in an atmosphere-wave coupled model and tested on four storm cases. Compared with measurements from the FINO1 platform in the North Sea, the new wind stress parameterization reduces wind forecast errors in the high wind speed range, but not at low wind speeds. When sea spray affects only the wind stress, it intensifies the storms (deeper minimum sea level pressure and higher maximum wind speed) and lowers the air temperature (increasing the errors). When sea spray affects only the heat fluxes, it improves the model performance for storm tracks and air temperature, but changes the storm intensity little. When sea spray effects on both the wind stress and the heat fluxes are taken into account, the model performs best across all experiments for minimum sea level pressure, maximum wind speed, and air temperature. Andreas, E. L., Mahrt, L., and Vickers, D. (2014). An improved bulk air-sea surface flux algorithm, including spray-mediated transfer. Quarterly Journal of the Royal Meteorological Society. Kudryavtsev, V. and Makin, V. (2011). Impact of ocean spray on the dynamics of the marine atmospheric boundary layer. Boundary-Layer Meteorology, 140(3):383-410. Kudryavtsev, V., Makin, V., and S, Z. (2012). On the sea-surface drag and heat/mass transfer at strong winds.
Technical report, Royal Netherlands Meteorological Institute.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Tanmoy; Shell, M. Scott, E-mail: shell@engineering.ucsb.edu
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
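The extra nonbonded term has a generic embedded-atom-like form, sketched below; the cubic switching function and the density function f are placeholder assumptions, not the relative-entropy-optimized potentials of the paper.

```python
import numpy as np

def local_density_energy(pos, cutoff, f):
    # U_LD = sum_i f(rho_i), with rho_i = sum_{j != i} w(r_ij).
    # w is a smooth indicator going from 1 at r = 0 to 0 at the cutoff
    # (a cubic switch here, an assumed form). f captures the multibody,
    # environment-dependent energy that a pure pair potential misses.
    n = len(pos)
    total = 0.0
    for i in range(n):
        rho = 0.0
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(pos[i] - pos[j])
            if r < cutoff:
                x = r / cutoff
                rho += 1.0 - 3.0 * x**2 + 2.0 * x**3  # w(0)=1, w(c)=0
        total += f(rho)
    return total
```

In a production force field this term is added on top of the usual bonded and pair interactions, and f is what the relative entropy framework parameterizes against all-atom reference data.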
Optimal Recursive Digital Filters for Active Bending Stabilization
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2013-01-01
In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
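A toy version of the z-plane restriction described above: parameterize a biquad by a pole radius r < 1 and angle θ, which guarantees stability by construction, so an optimizer can search (r, θ) freely. The zero placement and gain normalization here are illustrative choices, not the paper's optimized bending filters.

```python
import numpy as np

def biquad_lowpass(r, theta):
    # poles at r*exp(+/- j*theta); constraining r < 1 keeps them inside
    # the unit circle, so every point in the search space is stable
    a = np.array([1.0, -2.0 * r * np.cos(theta), r * r])  # denominator
    b = np.array([1.0, 2.0, 1.0])                          # zeros at z = -1
    b = b * (a.sum() / b.sum())                            # unit DC gain
    return b, a

def magnitude(b, a, w):
    # frequency response magnitude at normalized frequency w (rad/sample)
    z = np.exp(1j * w)
    return abs((b[0] * z**2 + b[1] * z + b[2]) /
               (a[0] * z**2 + a[1] * z + a[2]))

b, a = biquad_lowpass(0.9, 0.3)
# unity gain at DC; the double zero at z = -1 forces zero gain at Nyquist
```

The design payoff is that attenuation and phase constraints can be evaluated directly from (r, θ) without ever producing an unstable candidate.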
Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs
NASA Astrophysics Data System (ADS)
Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Pincus, R.
2016-12-01
A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecasting System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation and cloudiness. Unlike other similar methods, only one new prognostic variable, turbulent kinetic energy (TKE), needs to be introduced, making the technique computationally efficient. SHOC is now incorporated into a version of GFS, as well as into the next generation of the NCEP global model, the NOAA Environmental Modeling System (NEMS). Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or large-scale condensation/deposition; instead, SHOC provides these variables. The radiative transfer parameterization uses cloudiness computed by SHOC. Outstanding problems include high-level tropical cloud fraction being too high in SHOC runs, possibly related to the interaction of SHOC with condensate detrained from deep convection. Future work will consist of evaluating model performance and tuning the physics if necessary, by performing medium-range NWP forecasts with prescribed initial conditions and AMIP-type climate tests with prescribed SSTs. Depending on the results, the model will be tuned or the parameterizations modified. Next, SHOC will be implemented in the NCEP CFS, and tuned and evaluated for climate applications: seasonal prediction and long coupled climate runs. The impact of the new physics on ENSO, MJO, ISO, monsoon variability, etc. will be examined.
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshot or video exporting, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing images with the parameters already applied, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering, and the important parameters of each task are determined. Special focus is given to the compression of segmented data, the parameterization of the rendering process, and the DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which required multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
NASA Technical Reports Server (NTRS)
Freitas, Saulo R.; Grell, Georg; Molod, Andrea; Thompson, Matthew A.
2017-01-01
We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, mid, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over the land. Here, we briefly introduce the recent developments, implementation, and preliminary results of this parameterization in the NASA GEOS modeling system.
Evaluation of surface layer flux parameterizations using in-situ observations
NASA Astrophysics Data System (ADS)
Katz, Jeremy; Zhu, Ping
2017-09-01
Appropriate calculation of surface turbulent fluxes between the atmosphere and the underlying ocean/land surface is one of the major challenges in geosciences. In practice, the surface turbulent fluxes are estimated from mean surface meteorological variables using the bulk transfer model combined with Monin-Obukhov Similarity (MOS) theory. Few studies have examined the extent to which such a flux parameterization can be applied to different weather and surface conditions. A novel validation method is developed in this study to evaluate the surface flux parameterization using in-situ observations collected at a station off the coast of the Gulf of Mexico. The main findings are: (a) theoretical predictions based on MOS theory do not match well with fluxes computed directly from the observations; (b) the largest spread in exchange coefficients occurs in strongly stable conditions with calm winds; (c) large turbulent eddies, which depend strongly on the mean flow pattern and surface conditions, tend to break the constant-flux assumption in the surface layer.
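The bulk transfer model being evaluated has the standard form sketched below; the fixed exchange coefficients are assumed typical neutral values, whereas real applications adjust them with MOS stability functions.

```python
def bulk_fluxes(rho, cp, Lv, ch, ce, u, ts, ta, qs, qa):
    # bulk aerodynamic formulas:
    #   sensible heat  H  = rho * cp * C_H * U * (Ts - Ta)   [W/m^2]
    #   latent heat    LE = rho * Lv * C_E * U * (qs - qa)   [W/m^2]
    # C_H, C_E normally come from MOS similarity theory; here they are
    # fixed constants (an assumption for illustration).
    H = rho * cp * ch * u * (ts - ta)
    LE = rho * Lv * ce * u * (qs - qa)
    return H, LE

# hypothetical marine surface-layer values: air density kg/m^3, specific
# heat J/(kg K), latent heat J/kg, neutral coefficients ~1.2e-3,
# wind m/s, temperatures K, specific humidities kg/kg
H, LE = bulk_fluxes(1.2, 1004.0, 2.5e6, 1.2e-3, 1.2e-3,
                    8.0, 302.0, 300.0, 0.022, 0.017)
```

The study's point is precisely that comparing these formulas against eddy-covariance fluxes exposes conditions (stable, calm) where fixed or MOS-derived coefficients fail.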
Lightning Scaling Laws Revisited
NASA Technical Reports Server (NTRS)
Boccippio, D. J.; Arnold, James E. (Technical Monitor)
2000-01-01
Scaling laws relating storm electrical generator power (and hence lightning flash rate) to charge transport velocity and storm geometry were originally posed by Vonnegut (1963). These laws were later simplified to yield simple parameterizations for lightning based upon cloud top height, with separate parameterizations derived over land and ocean. It is demonstrated that the most recent ocean parameterization: (1) yields predictions of storm updraft velocity which appear inconsistent with observation, and (2) is formally inconsistent with the theory from which it purports to derive. Revised formulations consistent with Vonnegut's original framework are presented. These demonstrate that Vonnegut's theory is, to first order, consistent with observation. The implications of assuming that flash rate is set by the electrical generator power, rather than the electrical generator current, are examined. The two approaches yield significantly different predictions about the dependence of charge transfer per flash on storm dimensions, which should be empirically testable. The two approaches also differ significantly in their explanation of regional variability in lightning observations.
Sniffle: a step forward to measure in situ CO2 fluxes with the floating chamber technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ribas-Ribas, Mariana; Kilcher, Levi F.; Wurl, Oliver
2018-01-09
Understanding how the ocean absorbs anthropogenic CO2 is critical for predicting climate change. We designed Sniffle, a new autonomous drifting buoy with a floating chamber, to measure gas transfer velocities and air-sea CO2 fluxes with high spatiotemporal resolution. Currently, insufficient in situ data exist to verify gas transfer parameterizations at low wind speeds (<4 m s-1), which leads to underestimation of gas transfer velocities and, therefore, of air-sea CO2 fluxes. The Sniffle is equipped with a sensor to consecutively measure aqueous and atmospheric pCO2 and to monitor increases or decreases of CO2 inside the chamber. During autonomous operation, a complete cycle lasts 40 minutes, with a new cycle initiated after flushing the chamber. The Sniffle can be deployed for up to 15 hours at wind speeds of up to 10 m s-1. Floating chambers often overestimate fluxes because they create additional turbulence at the water surface. We correct fluxes by measuring turbulence with two acoustic Doppler velocimeters, one positioned directly under the floating chamber and the other positioned sideways, to compare the artificial disturbance caused by the chamber with natural turbulence. The first results of deployment in the North Sea during the summer of 2016 demonstrate that the new drifting buoy is a useful tool that can improve our understanding of gas transfer velocity with in situ measurements. At low and moderate wind speeds and under different conditions, the results indicate that the observed tidal basin was acting as a source of atmospheric CO2. Wind speed and turbulence alone could not fully explain the variance in gas transfer velocity. We therefore suggest that other factors, such as surfactants, rain or tidal currents, have an impact on gas transfer parameterizations.
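The chamber reduction behind an instrument like the Sniffle can be illustrated with the ideal-gas bookkeeping common to floating chambers: the headspace pCO2 trend gives a flux, and dividing by solubility times the air-sea pCO2 difference gives the transfer velocity. A hedged sketch; the function names and numbers are illustrative, not the instrument's actual processing code:

```python
R_GAS = 8.314  # J mol^-1 K^-1

def chamber_flux(dpco2_dt_pa_s: float, volume_m3: float, area_m2: float,
                 temp_k: float) -> float:
    """CO2 flux (mol m^-2 s^-1) from the rate of pCO2 increase in the
    chamber headspace, via the ideal gas law n = pV/(RT)."""
    return dpco2_dt_pa_s * volume_m3 / (R_GAS * temp_k * area_m2)

def transfer_velocity(flux: float, k0_mol_m3_pa: float,
                      delta_pco2_pa: float) -> float:
    """Bulk relation k = F / (K0 * dpCO2), the quantity compared against
    wind-speed-based gas transfer parameterizations."""
    return flux / (k0_mol_m3_pa * delta_pco2_pa)

# Illustrative chamber geometry, solubility, and air-sea pCO2 difference.
f = chamber_flux(dpco2_dt_pa_s=2e-4, volume_m3=0.02, area_m2=0.1,
                 temp_k=288.0)
k = transfer_velocity(f, k0_mol_m3_pa=4e-4, delta_pco2_pa=5.0)
```

The paper's contribution is precisely the correction step this sketch omits: using the two velocimeters to remove the chamber's artificial turbulence contribution before interpreting k.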
NASA Technical Reports Server (NTRS)
Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.
2000-01-01
The objective of this investigation was to study the role of shallow convection in the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study, and the latter two have been improved significantly to extend their capabilities.
Surface shear stress dependence of gas transfer velocity parameterizations using DNS
NASA Astrophysics Data System (ADS)
Fredriksson, S. T.; Arneborg, L.; Nilsson, H.; Handler, R. A.
2016-10-01
Air-water gas exchange is studied in direct numerical simulations (DNS) of free-surface flows driven by natural convection and weak winds. The wind is modeled as a constant surface shear stress and the gas transfer is modeled via a passive scalar. The simulations are characterized by a Richardson number Ri = Bν/u*^4, where B, ν, and u* are the buoyancy flux, kinematic viscosity, and friction velocity, respectively. The simulations comprise 0
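The governing parameter follows directly from the abstract's definition. A one-line helper (SI units; the example values are illustrative):

```python
def richardson_number(buoyancy_flux: float, kinematic_viscosity: float,
                      friction_velocity: float) -> float:
    """Ri = B * nu / u*^4, the surface Richardson number used in the
    abstract to characterize convection- vs shear-dominated regimes."""
    return buoyancy_flux * kinematic_viscosity / friction_velocity ** 4

# Weak wind (small u*) drives Ri up: the buoyancy-dominated regime.
ri_weak_wind = richardson_number(1e-7, 1e-6, 5e-3)
ri_strong_wind = richardson_number(1e-7, 1e-6, 1e-2)
```

Because u* enters to the fourth power, even a modest increase in wind stress moves the flow sharply toward the shear-dominated end of the parameter range.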
Comparison of different objective functions for parameterization of simple respiration models
M.T. van Wijk; B. van Putten; D.Y. Hollinger; A.D. Richardson
2008-01-01
The eddy covariance measurements of carbon dioxide fluxes collected around the world offer a rich source for detailed data analysis. Simple, aggregated models are attractive tools for gap filling, budget calculation, and upscaling in space and time. Key in the application of these models is their parameterization and a robust estimate of the uncertainty and reliability...
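The comparison of objective functions can be made concrete with one common simple respiration model (a Q10 form, chosen here for illustration) fitted under two objectives, squared error versus absolute error, by brute-force grid search. With real, noisy flux data the two objectives can select different parameters; with the noise-free synthetic data below, both recover the truth:

```python
def q10_respiration(t_c: float, r_ref: float, q10: float,
                    t_ref: float = 15.0) -> float:
    """Simple respiration model R(T) = R_ref * Q10**((T - T_ref)/10)."""
    return r_ref * q10 ** ((t_c - t_ref) / 10.0)

def fit(data, loss):
    """Parameterize by minimizing the chosen objective over a coarse
    (R_ref, Q10) grid; `loss` maps a residual to a cost."""
    best = None
    for i in range(1, 51):          # R_ref in 0.1 .. 5.0
        for k in range(1, 61):      # Q10 in 1.05 .. 4.0
            r_ref, q10 = 0.1 * i, 1.0 + 0.05 * k
            cost = sum(loss(q10_respiration(t, r_ref, q10) - r)
                       for t, r in data)
            if best is None or cost < best[0]:
                best = (cost, r_ref, q10)
    return best[1], best[2]

# Synthetic (noise-free) fluxes generated from known parameters.
data = [(t, q10_respiration(t, 2.0, 2.5)) for t in range(0, 31, 5)]
sse_fit = fit(data, lambda e: e * e)   # least squares
lad_fit = fit(data, abs)               # least absolute deviations
```

Swapping the `loss` argument is the whole experiment: the uncertainty and reliability questions the abstract raises amount to how sensitive the recovered (R_ref, Q10) pair is to this choice once observation noise is present.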
NASA Astrophysics Data System (ADS)
Sol Galligani, Victoria; Wang, Die; Alvarez Imaz, Milagros; Salio, Paola; Prigent, Catherine
2017-10-01
In the present study, three meteorological events of extreme deep moist convection, characteristic of south-eastern South America, are considered to conduct a systematic evaluation of the microphysical parameterizations available in the Weather Research and Forecasting (WRF) model by undertaking a direct comparison between satellite-based simulated and observed microwave radiances. A research radiative transfer model, the Atmospheric Radiative Transfer Simulator (ARTS), is coupled with the WRF model under three different microphysical parameterizations (WSM6, WDM6 and Thompson schemes). Microwave radiometry has shown a promising ability in the characterization of frozen hydrometeors. At high microwave frequencies, however, frozen hydrometeors significantly scatter radiation, and the relationship between radiation and hydrometeor populations becomes very complex. The main difficulty in microwave remote sensing of frozen hydrometeors is correctly characterizing this scattering signal, owing to the complex and variable nature of the size, composition and shape of frozen hydrometeors. The present study further aims at improving the understanding of frozen hydrometeor optical properties characteristic of deep moist convection events in south-eastern South America. Here, bulk optical properties are computed by integrating the single-scattering properties of the Liu (2008) discrete dipole approximation (DDA) single-scattering database across the particle size distributions parameterized by the different WRF schemes in a consistent manner, introducing the equal mass approach. The equal mass approach consists of describing the optical properties of the WRF snow and graupel hydrometeors with the optical properties of habits in the DDA database whose dimensions might be different (D
Application of the Tauc-Lorentz formulation to the interband absorption of optical coating materials
NASA Astrophysics Data System (ADS)
von Blanckenhagen, Bernhard; Tonova, Diana; Ullmann, Jens
2002-06-01
Recent progress in ellipsometry instrumentation permits precise measurement and characterization of optical coating materials in the deep-UV wavelength range. Dielectric coating materials exhibit their first electronic interband transition in this spectral range. The Tauc-Lorentz model is a powerful tool with which to parameterize interband absorption above the band edge. The application of this model for the parameterization of the optical absorption of TiO2, Ta2O5, HfO2, Al2O3, and LaF3 thin-film materials is described.
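The model's imaginary dielectric function has a compact closed form (the Jellison-Modine expression): zero below the band gap, and a Lorentz oscillator weighted by the Tauc joint density of states above it. A sketch with illustrative, unfitted parameters:

```python
def tauc_lorentz_eps2(e_ev: float, a: float, e0: float, c: float,
                      eg: float) -> float:
    """Imaginary part of the Tauc-Lorentz dielectric function:
    eps2(E) = A*E0*C*(E - Eg)^2 / (((E^2 - E0^2)^2 + C^2*E^2) * E)
    for E > Eg, and 0 otherwise. Energies in eV."""
    if e_ev <= eg:
        return 0.0
    num = a * e0 * c * (e_ev - eg) ** 2
    den = ((e_ev ** 2 - e0 ** 2) ** 2 + (c * e_ev) ** 2) * e_ev
    return num / den

# Illustrative parameters loosely in the range of a wide-gap oxide film;
# a fit to ellipsometric data would determine A, E0, C, and Eg.
eps2_below_gap = tauc_lorentz_eps2(3.0, a=100.0, e0=6.0, c=1.0, eg=4.0)
eps2_at_resonance = tauc_lorentz_eps2(6.0, a=100.0, e0=6.0, c=1.0, eg=4.0)
```

The real part follows from Kramers-Kronig integration of this eps2, which is what makes the four-parameter form so convenient for fitting coating materials near the band edge.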
Lomize, Andrei L; Pogozheva, Irina D; Mosberg, Henry I
2011-04-25
A new implicit solvation model was developed for calculating free energies of transfer of molecules from water to any solvent with defined bulk properties. The transfer energy was calculated as a sum of the first solvation shell energy and the long-range electrostatic contribution. The first term was proportional to solvent accessible surface area and solvation parameters (σ(i)) for different atom types. The electrostatic term was computed as a product of group dipole moments and dipolar solvation parameter (η) for neutral molecules or using a modified Born equation for ions. The regression coefficients in linear dependencies of solvation parameters σ(i) and η on dielectric constant, solvatochromic polarizability parameter π*, and hydrogen-bonding donor and acceptor capacities of solvents were optimized using 1269 experimental transfer energies from 19 organic solvents to water. The root-mean-square errors for neutral compounds and ions were 0.82 and 1.61 kcal/mol, respectively. Quantification of energy components demonstrates the dominant roles of hydrophobic effect for nonpolar atoms and of hydrogen-bonding for polar atoms. The estimated first solvation shell energy outweighs the long-range electrostatics for most compounds including ions. The simplicity and computational efficiency of the model allows its application for modeling of macromolecules in anisotropic environments, such as biological membranes.
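The two energy terms described can be sketched directly. The Born term below is the textbook form (the paper uses a modified Born equation), and the atom types, sigma values, areas, and radii in the example are illustrative, not the paper's fitted parameters:

```python
COULOMB_KCAL_A = 332.06  # e^2 / (4*pi*eps0) in kcal*Angstrom/mol

def first_shell_energy(asa_by_type: dict, sigma_by_type: dict) -> float:
    """First-solvation-shell term: solvent-accessible surface area per
    atom type times the corresponding solvation parameter sigma_i
    (kcal/mol per Angstrom^2)."""
    return sum(asa * sigma_by_type[t] for t, asa in asa_by_type.items())

def born_ion_energy(charge_e: float, radius_a: float, eps: float) -> float:
    """Long-range electrostatics of an ion via the Born equation:
    dG = -(332.06/2) * q^2 / r * (1 - 1/eps), kcal/mol, r in Angstrom."""
    return (-(COULOMB_KCAL_A / 2.0) * charge_e ** 2 / radius_a
            * (1.0 - 1.0 / eps))

# Transfer of a small ion into water (eps ~ 78.5) is strongly favorable.
dg_born = born_ion_energy(charge_e=1.0, radius_a=2.0, eps=78.5)
# Hypothetical two-type molecule: nonpolar area penalized, polar rewarded.
dg_shell = first_shell_energy({"C_aliphatic": 120.0, "O_polar": 40.0},
                              {"C_aliphatic": 0.012, "O_polar": -0.17})
```

The abstract's finding that the first-shell term usually outweighs the long-range electrostatics corresponds, in this sketch, to `dg_shell` dominating `dg_born` for most neutral compounds.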
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu
2018-03-01
Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I_P′), and velocity-impedance-II (α″, β″ and I_S′). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations.
Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
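The constraint machinery can be illustrated with a minimal random-walk Metropolis sampler for the parameters of an assumed power-law process rate. The functional form, the Gaussian noise model, and the flat priors are all illustrative stand-ins; BOSS itself samples over structural choices (moment sets, rate formulations) as well as parameters:

```python
import math
import random

random.seed(0)

def log_posterior(a, b, obs, noise=0.05):
    """Gaussian log-likelihood (flat priors) for an assumed power-law
    process rate R(m) = a * m**b; the form and noise level are
    illustrative stand-ins for a flexible process-rate term."""
    return sum(-0.5 * ((r - a * m ** b) / noise) ** 2 for m, r in obs)

def metropolis(obs, steps=5000, step_size=0.05):
    """Plain random-walk Metropolis over (a, b): propose a jittered pair,
    accept with probability min(1, exp(lp_new - lp_old))."""
    a, b = 1.0, 1.0
    lp = log_posterior(a, b, obs)
    chain = []
    for _ in range(steps):
        a_new = a + random.gauss(0.0, step_size)
        b_new = b + random.gauss(0.0, step_size)
        lp_new = log_posterior(a_new, b_new, obs)
        if math.log(random.random()) < lp_new - lp:
            a, b, lp = a_new, b_new, lp_new
        chain.append((a, b))
    return chain

# Synthetic observations generated from known parameters (a=0.7, b=1.8).
obs = [(m / 10.0, 0.7 * (m / 10.0) ** 1.8) for m in range(1, 11)]
chain = metropolis(obs)
a_mean = sum(s[0] for s in chain[2500:]) / len(chain[2500:])
b_mean = sum(s[1] for s in chain[2500:]) / len(chain[2500:])
```

Discarding the first half of the chain as burn-in, the posterior means recover the generating parameters; the spread of the retained samples is the parametric-uncertainty estimate the abstract refers to.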
Description and availability of the SMARTS spectral model for photovoltaic applications
NASA Astrophysics Data System (ADS)
Myers, Daryl R.; Gueymard, Christian A.
2004-11-01
The limited spectral response range of photovoltaic (PV) devices requires that device performance be characterized with respect to widely varying terrestrial solar spectra. The FORTRAN code "Simple Model for Atmospheric Transmission of Sunshine" (SMARTS) was developed for various clear-sky solar renewable energy applications. The model is partly based on parameterizations of transmittance functions in the MODTRAN/LOWTRAN band model family of radiative transfer codes. SMARTS computes spectra with a resolution of 0.5 nanometers (nm) below 400 nm, 1.0 nm from 400 nm to 1700 nm, and 5 nm from 1700 nm to 4000 nm. Fewer than 20 input parameters are required to compute spectral irradiance distributions, including spectral direct beam, total, and diffuse hemispherical radiation, and up to 30 other spectral parameters. A spreadsheet-based graphical user interface can be used to simplify the construction of input files for the model. The model is the basis for new terrestrial reference spectra developed by the American Society for Testing and Materials (ASTM) for photovoltaic and materials degradation applications. We describe the model's accuracy, functionality, and the availability of source and executable code. Applications to PV rating and efficiency and the combined effects of spectral selectivity and varying atmospheric conditions are briefly discussed.
Simulation of charge transfer and orbital rehybridization in molecular and condensed matter systems
NASA Astrophysics Data System (ADS)
Nistor, Razvan A.
The mixing and shifting of electronic orbitals in molecules, or between atoms in bulk systems, is crucially important to the overall structure and physical properties of materials. Understanding and accurately modeling these orbital interactions is of both scientific and industrial relevance. Electronic orbitals can be perturbed in several ways. Doping, adding or removing electrons from systems, can change the bond order and the physical properties of certain materials. Orbital rehybridization, driven by either thermal or pressure excitation, alters the short-range structure of materials and changes their long-range transport properties. Macroscopically, during bond formation, the shifting of electronic orbitals can be interpreted as a charge transfer phenomenon, as electron density may pile up around, and hence alter the effective charge of, a given atom in the changing chemical environment. Several levels of theory exist to elucidate the mechanisms behind these orbital interactions. Electronic structure calculations solve the time-independent Schrodinger equation to high chemical accuracy, but are computationally expensive and limited to small system sizes and simulation times. Less fundamental atomistic calculations use simpler parameterized functional expressions called force fields to model atomic interactions. Atomistic simulations can describe systems and time scales larger and longer than electronic-structure methods, but at the cost of chemical accuracy. In this thesis, both first-principles and phenomenological methods are addressed in the study of several encompassing problems dealing with charge transfer and orbital rehybridization. Firstly, a new charge-equilibration method is developed that improves upon existing models to allow next-generation force fields to describe the electrostatics of changing chemical environments.
Secondly, electronic structure calculations are used to investigate the doping dependent energy landscapes of several high-temperature superconducting materials in order to parameterize the apparently large nonlinear electron-phonon coupling. Thirdly, ab initio simulations are used to investigate the role of pressure-driven structural re-organization in the crystalline-to-amorphous (or, metallic-to-insulating) transition of a common binary phase-change material composed of Ge and Sb. Practical applications of each topic will be discussed. Keywords. Charge-equilibration methods, molecular dynamics, electronic structure calculations, ab initio simulations, high-temperature superconductors, phase-change materials.
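The electronegativity-equalization idea underlying charge-equilibration force fields reduces to a linear solve: minimize a quadratic charge energy subject to total-charge conservation. A generic sketch (not the thesis's improved method; the chi, hardness, and coupling values below are illustrative, not a fitted force field):

```python
def solve_linear(a, b):
    """Gaussian elimination with partial pivoting (stdlib-only helper)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

def charge_equilibrate(chi, hardness, coulomb, total_charge=0.0):
    """QEq-style charges: minimize E = sum(chi_i*q_i + 0.5*J_ii*q_i^2)
    + sum_{i<j} J_ij*q_i*q_j subject to sum(q_i) = Q, by solving the
    stationarity conditions chi_i + sum_j J_ij*q_j = mu (a common
    chemical potential) together with charge conservation."""
    n = len(chi)
    a = [[0.0] * (n + 1) for _ in range(n + 1)]
    b = [0.0] * (n + 1)
    for i in range(n):
        for k in range(n):
            a[i][k] = hardness[i] if i == k else coulomb[i][k]
        a[i][n] = -1.0          # column for -mu
        b[i] = -chi[i]
        a[n][i] = 1.0           # charge-conservation row
    b[n] = total_charge
    return solve_linear(a, b)[:n]

# Two-atom example: the more electronegative atom acquires negative charge.
q = charge_equilibrate(chi=[4.0, 6.0], hardness=[10.0, 12.0],
                       coulomb=[[0.0, 2.0], [2.0, 0.0]], total_charge=0.0)
```

Because the coupling matrix depends on geometry, re-solving this system as atoms move is what lets such force fields respond to changing chemical environments, which is the behavior the thesis's improved method targets.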
NASA Astrophysics Data System (ADS)
Johnson, M. T.
2010-10-01
The ocean-atmosphere flux of a gas can be calculated from its measured or estimated concentration gradient across the air-sea interface and the transfer velocity (a term representing the conductivity of the layers on either side of the interface with respect to the gas of interest). Traditionally, the transfer velocity has been estimated from empirical relationships with wind speed and then scaled by the Schmidt number of the gas being transferred. Complex, physically based models of transfer velocity (based on more physical forcings than wind speed alone), such as the NOAA COARE algorithm, have more recently been applied to well-studied gases such as carbon dioxide and DMS (although many studies still use the simpler approach for these gases), but there is a lack of validation of such schemes for other, more poorly studied gases. The aim of this paper is to provide a flexible numerical scheme which allows the estimation of transfer velocity for any gas as a function of wind speed, temperature and salinity, given data on the solubility and liquid molar volume of the particular gas. New and existing parameterizations (including a novel empirical parameterization of the salinity dependence of Henry's law solubility) are brought together into a scheme implemented as a modular, extensible program in the R computing environment, which is available in the supplementary online material accompanying this paper, along with input files containing solubility and structural data for ~90 gases of general interest, enabling the calculation of their total transfer velocities and component parameters. Comparison of the scheme presented here with alternative schemes and methods for calculating air-sea flux parameters shows good agreement in general. It is intended that the various components of this numerical scheme should be applied only in the absence of experimental data providing robust values for parameters for a particular gas of interest.
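The traditional route the abstract describes, an empirical wind-speed relation scaled by Schmidt number, fits in a few lines. The quadratic Wanninkhof (1992) relation is used here as an illustrative choice of the empirical k660 function, and the DMS Schmidt number is only indicative:

```python
def transfer_velocity_cm_hr(u10_m_s: float, schmidt: float) -> float:
    """k = k660 * (Sc / 660)**-0.5, with the quadratic wind-speed relation
    k660 = 0.31 * U10**2 (cm/hr) of Wanninkhof (1992) as the empirical
    base; the -1/2 Schmidt exponent is the usual wavy-surface choice."""
    k660 = 0.31 * u10_m_s ** 2
    return k660 * (schmidt / 660.0) ** -0.5

# Sc = 660 is CO2 in seawater at 20 C by construction, so k = k660 there;
# a gas with a larger Schmidt number transfers more slowly.
k_co2 = transfer_velocity_cm_hr(7.0, 660.0)
k_slower_gas = transfer_velocity_cm_hr(7.0, 940.0)  # illustrative Sc
```

The paper's scheme generalizes exactly the two inputs this sketch hard-codes: it computes the Schmidt number and solubility for an arbitrary gas from temperature, salinity, and molecular properties instead of requiring them to be supplied.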
Remote sensing of oligotrophic waters: model divergence at low chlorophyll concentrations.
Mehrtens, Hela; Martin, Thomas
2002-11-20
The performance of the OC2 Sea-viewing Wide Field-of-view Sensor (SeaWiFS) algorithm, based on 490- and 555-nm water-leaving radiances, at low chlorophyll contents is compared with those of semianalytical models and a Monte Carlo radiative transfer model. We introduce our model, which uses two particle phase functions and scattering coefficient parameterizations to achieve a backscattering ratio that varies with chlorophyll concentration. We discuss the various parameterizations and compare them with existing measurements. The SeaWiFS algorithm could be confirmed within an accuracy of 35% over a chlorophyll range from 0.1 to 1 mg m(-3), whereas for lower chlorophyll concentrations we found a significant overestimation by the OC2 algorithm.
NASA Astrophysics Data System (ADS)
Sommer, Philipp; Kaplan, Jed
2016-04-01
Accurate modelling of large-scale vegetation dynamics, hydrology, and other environmental processes requires meteorological forcing on daily timescales. While meteorological data with high temporal resolution are becoming increasingly available, simulations for the future or distant past are limited by lack of data and poor performance of climate models, e.g., in simulating daily precipitation. To overcome these limitations, we may temporally downscale monthly summary data to a daily time step using a weather generator. Parameterization of such statistical models has traditionally been based on a limited number of observations. Recent developments in the archiving, distribution, and analysis of "big data" datasets provide new opportunities for the parameterization of a temporal downscaling model that is applicable over a wide range of climates. Here we parameterize a WGEN-type weather generator using more than 50 million individual daily meteorological observations from over 10,000 stations covering all continents, based on the Global Historical Climatology Network (GHCN) and Synoptic Cloud Reports (EECRA) databases. Using the resulting "universal" parameterization and driven by monthly summaries, we downscale mean temperature (minimum and maximum), cloud cover, and total precipitation to daily estimates. We apply a hybrid gamma-generalized Pareto distribution to calculate daily precipitation amounts, which overcomes much of the inability of earlier weather generators to simulate high amounts of daily precipitation. Our globally parameterized weather generator has numerous applications, including vegetation and crop modelling for paleoenvironmental studies.
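The hybrid precipitation idea can be sketched as a spliced sampler: a gamma body for ordinary wet days, with a generalized Pareto excess for the heavy tail. The authors' exact splicing rule and parameter estimates may differ; the threshold, tail probability, and distribution parameters below are illustrative:

```python
import random

random.seed(1)

def hybrid_precip_mm(shape: float, scale: float, threshold: float,
                     xi: float, gpd_scale: float,
                     p_tail: float = 0.05) -> float:
    """One wet-day precipitation draw from a hybrid distribution: a gamma
    body below `threshold`, and with probability `p_tail` a generalized
    Pareto excess above it (sampled by inverting the GPD CDF)."""
    if random.random() < p_tail:
        u = random.random()
        # GPD inverse CDF for the excess over the threshold (xi != 0).
        return threshold + gpd_scale * ((1.0 - u) ** -xi - 1.0) / xi
    x = random.gammavariate(shape, scale)
    while x > threshold:        # keep the gamma body below the threshold
        x = random.gammavariate(shape, scale)
    return x

draws = [hybrid_precip_mm(0.8, 4.0, 15.0, 0.3, 5.0) for _ in range(2000)]
heavy_days = sum(1 for d in draws if d > 15.0)  # ~ p_tail * 2000 events
```

A pure gamma fit to daily totals underweights extremes; routing a small tail probability through the heavy-tailed Pareto excess is what lets the generator produce realistically large daily amounts.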
Summary of Cumulus Parameterization Workshop
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Starr, David OC.; Hou, Arthur; Newman, Paul; Sud, Yogesh
2002-01-01
A workshop on cumulus parameterization took place at the NASA Goddard Space Flight Center from December 3-5, 2001. The major objectives of this workshop were (1) to review the problem of representation of moist processes in large-scale models (mesoscale models, Numerical Weather Prediction models and Atmospheric General Circulation Models), (2) to review the state-of-the-art in cumulus parameterization schemes, and (3) to discuss the need for future research and applications. There were a total of 31 presentations and about 100 participants from the United States, Japan, the United Kingdom, France and South Korea. The specific presentations and discussions during the workshop are summarized in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, W. -L.; Gu, Y.; Liou, K. N.
2015-05-19
We investigate 3-D mountain effects on solar flux distributions and their impact on surface hydrology over the western United States, specifically the Rocky Mountains and the Sierra Nevada, using the global CCSM4 (Community Climate System Model version 4; Community Atmosphere Model/Community Land Model – CAM4/CLM4) with a 0.23° × 0.31° resolution for simulations over 6 years. In a 3-D radiative transfer parameterization, we have updated surface topography data from a resolution of 1 km to 90 m to improve parameterization accuracy. In addition, we have also modified the upward-flux deviation (3-D–PP (plane-parallel)) adjustment to ensure that the energy balance at the surface is conserved in global climate simulations based on the 3-D radiation parameterization. We show that deviations in the net surface fluxes are not only affected by 3-D mountains but also influenced by feedbacks of cloud and snow in association with the long-term simulations. Deviations in sensible heat and surface temperature generally follow the patterns of net surface solar flux. The monthly snow water equivalent (SWE) deviations show an increase at lower elevations due to reduced snowmelt, leading to a reduction in cumulative runoff. Over higher-elevation areas, negative SWE deviations are found because of increased solar radiation available at the surface. Simulated precipitation increases for lower elevations, while it decreases for higher elevations, with a minimum in April. Liquid runoff significantly decreases at higher elevations after April due to reduced SWE and precipitation.
NASA Astrophysics Data System (ADS)
Riddick, Stuart; Ward, Daniel; Hess, Peter; Mahowald, Natalie; Massad, Raia; Holland, Elisabeth
2016-06-01
Nitrogen applied to the surface of the land for agricultural purposes represents a significant source of reactive nitrogen (Nr) that can be emitted as a gaseous Nr species, be denitrified to atmospheric nitrogen (N2), run off during rain events or form plant-useable nitrogen in the soil. To investigate the magnitude, temporal variability and spatial heterogeneity of nitrogen pathways on a global scale from sources of animal manure and synthetic fertilizer, we developed a mechanistic parameterization of these pathways within a global terrestrial land model, the Community Land Model (CLM). In this first model version the parameterization emphasizes an explicit climate-dependent approach while using highly simplified representations of agricultural practices, including manure management and fertilizer application. The climate-dependent approach explicitly simulates the relationship between meteorological variables and biogeochemical processes to calculate the volatilization of ammonia (NH3), nitrification and runoff of Nr following manure or synthetic fertilizer application. For the year 2000, approximately 125 Tg N yr-1 is applied as manure and 62 Tg N yr-1 is applied as synthetic fertilizer. We estimate the resulting global NH3 emissions are 21 Tg N yr-1 from manure (17 % of manure production) and 12 Tg N yr-1 from fertilizer (19 % of fertilizer application); reactive nitrogen runoff during rain events is calculated as 11 Tg N yr-1 from manure and 5 Tg N yr-1 from fertilizer. The remaining nitrogen from manure (93 Tg N yr-1) and synthetic fertilizer (45 Tg N yr-1) is captured by the canopy or transferred to the soil nitrogen pools. 
The parameterization was implemented in the CLM from 1850 to 2000 using a transient simulation which predicted that, even though absolute values of all nitrogen pathways are increasing with increased manure and synthetic fertilizer application, partitioning of nitrogen to NH3 emissions from manure is increasing on a percentage basis, from 14 % of nitrogen applied in 1850 (3 Tg NH3 yr-1) to 17 % of nitrogen applied in 2000 (21 Tg NH3 yr-1). Under current manure and synthetic fertilizer application rates we find a global sensitivity of an additional 1 Tg NH3 (approximately 3 % of manure and fertilizer) emitted per year per °C of warming. While the model confirms earlier estimates of nitrogen fluxes made in a range of studies, its key purpose is to provide a theoretical framework that can be employed within a biogeochemical model, that can explicitly respond to climate and that can evolve and improve with further observation.
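The global partitioning described above amounts to simple mass-balance bookkeeping. The sketch below reproduces it using the year-2000 fractions quoted in the text; the function name and the fixed fractions are illustrative simplifications, since the actual parameterization computes these fluxes from meteorology rather than constant fractions.

```python
# Illustrative mass-balance bookkeeping for applied nitrogen, using the
# globally averaged year-2000 fractions quoted in the abstract. This is
# an assumption-laden sketch, not CLM code.

def partition_nitrogen(applied_tg, f_nh3, f_runoff):
    """Split applied N (Tg N/yr) into NH3 emission, Nr runoff, and the
    remainder captured by the canopy or soil nitrogen pools."""
    nh3 = applied_tg * f_nh3
    runoff = applied_tg * f_runoff
    to_soil = applied_tg - nh3 - runoff
    return nh3, runoff, to_soil

# Manure: 125 Tg N/yr, ~17% volatilized as NH3, ~11 Tg N/yr lost to runoff
nh3, runoff, soil = partition_nitrogen(125.0, 0.17, 11.0 / 125.0)
```

By construction the three pathways always sum back to the applied nitrogen, mirroring the budget closure in the text (21 + 11 + 93 ≈ 125 Tg N/yr for manure).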
Betatron motion with coupling of horizontal and vertical degrees of freedom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebedev, V.A. (Fermilab); Bogacz, S.A.
Two parameterizations of linear x-y coupled betatron motion are most frequently used in accelerator physics: the Edwards-Teng and Mais-Ripken parameterizations. This article analyzes the close relationship between the two representations, adding clarity to their physical meaning. It also discusses the relationship between the eigenvectors, the beta-functions, the second-order moments, and the bilinear form representing the particle ellipsoid in the 4-D phase space. It then considers a further development of the Mais-Ripken parameterization in which the particle motion is described by 10 parameters: four beta-functions, four alpha-functions, and two betatron phase advances. In comparison with the Edwards-Teng parameterization, the chosen parameterization has the advantage that it works equally well for the analysis of coupled betatron motion in circular accelerators and in transfer lines. The derived relationships between second-order moments, eigenvectors, and beta-functions can be useful in interpreting tracking results and experimental data. As an example, the developed formalism is applied to the FNAL electron cooler and Derbenev's vertex-to-plane adapter.
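In the uncoupled limit, both parameterizations reduce to the familiar one-mode Courant-Snyder form, which the following sketch illustrates; this minimal example is standard accelerator optics, not code from the article.

```python
import numpy as np

def courant_snyder_matrix(beta, alpha, mu):
    """One-turn 2x2 transfer matrix for uncoupled betatron motion,
    parameterized by the beta- and alpha-functions and the betatron
    phase advance mu."""
    gamma = (1.0 + alpha**2) / beta   # Twiss gamma
    c, s = np.cos(mu), np.sin(mu)
    return np.array([[c + alpha * s, beta * s],
                     [-gamma * s,    c - alpha * s]])

M = courant_snyder_matrix(beta=10.0, alpha=0.5, mu=0.3)
# symplecticity: det M = 1; stability: |trace M| = |2 cos(mu)| < 2
```

The 4-D coupled parameterizations discussed in the article generalize exactly these invariants (symplecticity and phase advances) to two coupled modes.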
Modeling of Heat Transfer in Rooms in the Modelica "Buildings" Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Zuo, Wangda; Nouidui, Thierry Stephane
This paper describes the implementation of the room heat transfer model in the free open-source Modelica "Buildings" library. The model can be used as a single room or to compose a multizone building model. We discuss how the model is decomposed into submodels for the individual heat transfer phenomena. We also discuss the main physical assumptions. The room model can be parameterized to use different modeling assumptions, leading to linear or non-linear differential algebraic systems of equations. We present numerical experiments that show how these assumptions affect computing time and accuracy for selected cases of the ANSI/ASHRAE Standard 140-2007 envelope validation tests.
NASA Astrophysics Data System (ADS)
Qin, Zilong; Chen, Mingli; Zhu, Baoyou; Du, Ya-ping
2017-01-01
An improved ray theory and transfer matrix method-based model for a lightning electromagnetic pulse (LEMP) propagating in the Earth-ionosphere waveguide (EIWG) is proposed and tested. The model involves the representation of a lightning source, parameterization of the lower ionosphere, derivation of a transfer function representing all effects of the EIWG on the LEMP sky wave, and determination of the attenuation mode of the LEMP ground wave. The lightning source is simplified as an electric point dipole standing on an Earth surface of finite conductance. The transfer function for the sky wave is derived based on ray theory and the transfer matrix method. The attenuation mode for the ground wave is solved from Fock's diffraction equations. The model is then applied to several lightning sferics observed in central China during day and night within 1000 km. The results show that the model can precisely predict the time-domain sky wave for all these observed lightning sferics. Both simulations and observations show that nighttime lightning sferics have more complicated waveforms than daytime ones. In particular, when a LEMP propagates from east to west (Φ = 270°) at nighttime, its sky wave tends to show a double-peak waveform (dispersed sky wave) rather than a single-peak one. Such a dispersed nighttime sky wave may be attributed to magneto-ionic splitting in the lower ionosphere. The model provides an efficient way to retrieve the electron density profile of the lower ionosphere and hence to monitor its spatial and temporal variations via lightning sferics.
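The transfer-matrix idea at the core of the model can be illustrated with a generic cascade of slab matrices for a stratified medium. The layer thicknesses and complex refractive indices below are invented for illustration and are not the paper's ionosphere profile.

```python
import numpy as np

def layered_transfer_matrix(layers, omega):
    """Cascade 2x2 transfer matrices for plane-wave propagation through
    horizontally stratified layers, each given as (thickness_m, n).
    A generic transfer-matrix sketch, not the paper's EIWG formulation."""
    c = 3.0e8                                 # speed of light [m/s]
    M = np.eye(2, dtype=complex)
    for d, n in layers:
        kz = n * omega / c                    # vertical wavenumber in layer
        phase = kz * d
        # transfer matrix of a uniform slab in (field, field-derivative) form
        Ml = np.array([[np.cos(phase), np.sin(phase) / kz],
                       [-kz * np.sin(phase), np.cos(phase)]], dtype=complex)
        M = Ml @ M
    return M

# three-layer sketch of a lossy lower-ionosphere profile at VLF (20 kHz)
layers = [(2.0e4, 1.0), (1.0e4, 1.0 - 0.05j), (1.0e4, 1.0 - 0.2j)]
M = layered_transfer_matrix(layers, omega=2 * np.pi * 2.0e4)
# each slab matrix has unit determinant, hence so does the cascade
```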
NASA Astrophysics Data System (ADS)
Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.
2016-12-01
Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice nucleation ability of different aerosol species (e.g., desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.
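A common way to express such laboratory data is an ice-nucleation-active-site (INAS) density n_s(T). The sketch below uses an exponential fit of the kind derived from AIDA desert-dust experiments; the coefficients should be treated as illustrative rather than as the scheme's fitted values.

```python
import math

def frozen_fraction(temp_c, area_m2, a=-0.517, b=8.934):
    """Frozen fraction of an aerosol population from an INAS-density fit
    n_s(T) = exp(a*T + b) [sites per m^2], T in deg C. The coefficients
    are illustrative desert-dust immersion-freezing values; the singular
    (Poisson) assumption gives f = 1 - exp(-n_s * A)."""
    n_s = math.exp(a * temp_c + b)            # active sites per m^2
    return 1.0 - math.exp(-n_s * area_m2)     # Poisson singular model

# colder particles of the same surface area freeze more readily
f_warm = frozen_fraction(-20.0, 1e-9)
f_cold = frozen_fraction(-30.0, 1e-9)
```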
NASA Astrophysics Data System (ADS)
Ma, Leiming
2015-04-01
The planetary boundary layer (PBL) plays an important role in transferring energy and moisture from the ocean to a tropical cyclone (TC). Thus, the accuracy of the PBL parameterization largely determines the performance of a numerical model in TC prediction. Among the various components of a PBL parameterization, the definition of the PBL height is the first concern, as it determines the vertical scale of the PBL and the associated turbulence processes at different scales. However, there is as yet no consensus in the TC research community on how to define the PBL height. The PBL heights represented by current numerical models usually exhibit significant differences from TC observations (e.g., Zhang et al., 2011; Storm et al., 2008), leading to rapid error growth in TC prediction. In an effort to narrow the gap between PBL parameterization and reality, this study presents a new parameterization scheme for the definition of the PBL height. Instead of using the traditional Richardson-number definition of the PBL height, which recent observational studies have shown to be inappropriate for the strongly sheared structure of the TC PBL, the new scheme employs a dynamical definition based on the concept of helicity. In this way, the spiral structures associated with the inflow layer and rolls can be represented in the PBL parameterization. By defining the PBL height at each grid point, the new scheme also avoids assuming the symmetric inflow layer usually adopted in observational studies. The new scheme is applied to the Yonsei University (YSU) scheme in the Weather Research and Forecasting (WRF) model of the US National Center for Atmospheric Research (NCAR) and verified with numerical experiments on TC Morakot (2009), which brought torrential rainfall and disaster to Taiwan and the Chinese mainland during landfall.
The Morakot case is selected in this study to examine the performance of the new scheme in representing various structures of the PBL over land and ocean. The simulations show that, in addition to raising the PBL height in situations of intense convection, the new scheme also significantly reduces the PBL height and 2-m temperature over land during nighttime, a well-known problem for the YSU scheme according to previous studies. The activity of PBL processes is modulated by the improved PBL height, which ultimately improves the prediction of TC Morakot. Key Words: PBL; Parameterization; Numerical Prediction; Tropical Cyclone. Acknowledgements: This study was jointly supported by the Chinese National 973 Project (No. 2013CB430300 and No. 2009CB421500) and a grant from the National Natural Science Foundation (No. 41475059). References: Zhang, J. A., R. F. Rogers, D. S. Nolan, and F. D. Marks Jr., 2011: On the characteristic height scales of the hurricane boundary layer. Mon. Weather Rev., 139, 2523-2535. Storm B., J. Dudhia, S. Basu, et al., 2008: Evaluation of the Weather Research and Forecasting Model on forecasting low-level jets: Implications for wind energy. Wind Energ., DOI: 10.1002/we.
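The helicity-based idea can be sketched as a vertical integral over the wind profile; the diagnostic and the idealized hodograph below are illustrative, not the scheme's actual formulation.

```python
import numpy as np

def integrated_helicity(z, u, v):
    """Vertically integrated helicity H = ∫ (u dv/dz - v du/dz) dz of a
    wind profile, one candidate dynamical ingredient for a helicity-based
    PBL-height definition. This diagnostic (and the profile below) is an
    illustrative assumption, not the study's scheme."""
    du = np.gradient(u, z)
    dv = np.gradient(v, z)
    integrand = u * dv - v * du
    # trapezoidal integration over height
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))

z = np.linspace(0.0, 2000.0, 201)             # height [m]
theta = np.pi * z / 4000.0                    # turning angle of the wind
u = 20.0 * np.sin(theta)                      # idealized veering profile
v = 20.0 * (1.0 - np.cos(theta))
h = integrated_helicity(z, u, v)              # positive for a veering inflow
```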
Atmospheric solar heating rate in the water vapor bands
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah
1986-01-01
The total absorption of solar radiation by water vapor in clear atmospheres is parameterized as a simple function of the scaled water vapor amount. For applications to cloudy and hazy atmospheres, the flux-weighted k-distribution functions are computed for individual absorption bands and for the total near-infrared region. The parameterization is based upon monochromatic calculations and follows essentially the scaling approximation of Chou and Arking, but the effect of temperature variation with height is taken into account in order to enhance the accuracy. Furthermore, the spectral range is extended to cover the two weak bands centered at 0.72 and 0.82 micron. Comparisons with monochromatic calculations show that the atmospheric heating rate and the surface radiation can be accurately computed from the parameterization. Comparisons are also made with other parameterizations. It is found that the absorption of solar radiation can be computed reasonably well using the Goody band model and the Curtis-Godson approximation.
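The k-distribution approach replaces a costly monochromatic integration by a short weighted sum of exponentials. A minimal sketch, with hypothetical absorption coefficients and weights (not Chou's published values):

```python
import numpy as np

def band_transmission(u, k, w):
    """Band-mean transmission from a k-distribution:
    T(u) = sum_i w_i exp(-k_i u), where u is the scaled absorber amount.
    The k_i and flux weights w_i below are hypothetical placeholders."""
    return np.sum(w * np.exp(-np.outer(u, k)), axis=1)

k = np.array([0.01, 0.1, 1.0, 10.0])    # absorption coefficients
w = np.array([0.4, 0.3, 0.2, 0.1])      # flux weights, summing to 1
u = np.array([0.0, 1.0, 5.0])           # scaled water vapor amounts
T = band_transmission(u, k, w)
# T decreases monotonically from T(0) = 1 as the absorber amount grows
```

Heating rates then follow from vertical differences of the band-summed fluxes, which is what makes the few-term sum so much cheaper than line-by-line integration.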
Parameterization of Mixed Layer and Deep-Ocean Mesoscales Including Nonlinearity
NASA Technical Reports Server (NTRS)
Canuto, V. M.; Cheng, Y.; Dubovikov, M. S.; Howard, A. M.; Leboissetier, A.
2018-01-01
In 2011, Chelton et al. carried out a comprehensive census of mesoscales using altimetry data and reached the following conclusions: "essentially all of the observed mesoscale features are nonlinear" and "mesoscales do not move with the mean velocity but with their own drift velocity," which is "the most germane of all the nonlinear metrics." Accounting for these results in a mesoscale parameterization presents conceptual and practical challenges since linear analysis is no longer usable and one needs a model of nonlinearity. A mesoscale parameterization is presented that has the following features: 1) it is based on the solutions of the nonlinear mesoscale dynamical equations, 2) it describes arbitrary tracers, 3) it includes adiabatic (A) and diabatic (D) regimes, 4) the eddy-induced velocity is the sum of a Gent and McWilliams (GM) term plus a new term representing the difference between drift and mean velocities, 5) the new term lowers the transfer of mean potential energy to mesoscales, 6) the isopycnal slopes are not as flat as in the GM case, 7) deep-ocean stratification is enhanced compared to previous parameterizations where being more weakly stratified allowed a large heat uptake that is not observed, 8) the strength of the Deacon cell is reduced. The numerical results are from a stand-alone ocean code with Coordinated Ocean-Ice Reference Experiment I (CORE-I) normal-year forcing.
Explicit Global Simulation of Gravity Waves up to the Lower Thermosphere
NASA Astrophysics Data System (ADS)
Becker, E.
2016-12-01
At least for short-term simulations, middle atmosphere general circulation models (GCMs) can be run with sufficiently high resolution to describe a good part of the gravity wave spectrum explicitly. Nevertheless, the parameterization of unresolved dynamical scales remains an issue, especially when the scales of parameterized gravity waves (GWs) and resolved GWs become comparable. In addition, turbulent diffusion must always be parameterized along with other subgrid-scale dynamics. A practical solution to the combined closure problem for GWs and turbulent diffusion is to dispense with a parameterization of GWs, apply a high spatial resolution, and represent the unresolved scales by a macro-turbulent diffusion scheme that gives rise to wave damping in a self-consistent fashion. This is the approach of a few GCMs that extend from the surface to the lower thermosphere and simulate a realistic GW drag and summer-to-winter-pole residual circulation in the upper mesosphere. In this study we describe a new version of the Kuehlungsborn Mechanistic general Circulation Model (KMCM), which includes explicit (though idealized) computations of radiative transfer and the tropospheric moisture cycle. Particular emphasis is placed on 1) the turbulent diffusion scheme, 2) the attenuation of resolved GWs at critical levels, 3) the generation of GWs in the middle atmosphere from body forces, and 4) GW-tidal interactions (including the energy deposition of GWs and tides).
NASA Astrophysics Data System (ADS)
Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.
A systematic improvement of parametric quantum methods (PQM) is performed by considering: (a) a new application of the parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPF) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized, and a comparison with MOPAC-2007 (PM6) and MINDO/SR was performed for a set of molecules with C-C, C-H, and H-H bonds. Results showed better accuracy than MINDO/SR and MOPAC-2007 for the selected trial set of molecules.
NASA Astrophysics Data System (ADS)
He, C.; Liou, K. N.; Takano, Y.; Yang, P.; Li, Q.; Chen, F.
2017-12-01
A set of parameterizations is developed for spectral single-scattering properties of clean and black carbon (BC)-contaminated snow based on geometric-optics surface-wave (GOS) computations, which explicitly resolve BC-snow internal mixing and various snow grain shapes. GOS calculations show that, compared with nonspherical grains, volume-equivalent snow spheres show up to 20% larger asymmetry factors and hence stronger forward scattering, particularly at wavelengths <1 μm. In contrast, snow grain sizes have a rather small impact on the asymmetry factor at wavelengths <1 μm, whereas size effects are important at longer wavelengths. The snow asymmetry factor is parameterized as a function of effective size, aspect ratio, and shape factor, and shows excellent agreement with GOS calculations. According to GOS calculations, the single-scattering coalbedo of pure snow is predominantly affected by grain sizes, rather than grain shapes, with higher values for larger grains. The snow single-scattering coalbedo is parameterized in terms of the effective size that combines shape and size effects, with an accuracy of >99%. Based on GOS calculations, BC-snow internal mixing enhances the snow single-scattering coalbedo at wavelengths <1 μm, but it does not alter the snow asymmetry factor. The BC-induced enhancement ratio of the snow single-scattering coalbedo, independent of snow grain size and shape, is parameterized as a function of BC concentration with an accuracy of >99%. Overall, in addition to snow grain size, both BC-snow internal mixing and snow grain shape play critical roles in quantifying BC effects on snow optical properties. The present parameterizations can be conveniently applied to snow, land surface, and climate models including snowpack radiative transfer processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Samuel S. P.
2013-09-01
The long-range goal of several past and current projects in our DOE-supported research has been the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data, and the implementation and testing of these parameterizations in global models. The main objective of the present project being reported on here has been to develop and apply advanced statistical techniques, including Bayesian posterior estimates, to diagnose and evaluate features of both observed and simulated clouds. The research carried out under this project has been novel in two important ways. The first is that it is a key step in the development of practical stochastic cloud-radiation parameterizations, a new category of parameterizations that offers great promise for overcoming many shortcomings of conventional schemes. The second is that this work has brought powerful new tools to bear on the problem, because it has been an interdisciplinary collaboration between a meteorologist with long experience in ARM research (Somerville) and a mathematician who is an expert on a class of advanced statistical techniques that are well-suited for diagnosing model cloud simulations using ARM observations (Shen). The motivation and long-term goal underlying this work is the utilization of stochastic radiative transfer theory (Lane-Veron and Somerville, 2004; Lane et al., 2002) to develop a new class of parametric representations of cloud-radiation interactions and closely related processes for atmospheric models. The theoretical advantage of the stochastic approach is that it can accurately calculate the radiative heating rates through a broken cloud layer without requiring an exact description of the cloud geometry.
Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
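The core of Tikhonov-regularized inversion can be sketched in a few lines of linear algebra. PEST itself is far more elaborate (Jacobians from model runs, Marquardt damping, subspace methods); this toy example only illustrates the stacked least-squares formulation.

```python
import numpy as np

def tikhonov_solve(J, d, L, lam):
    """Solve min ||J p - d||^2 + lam^2 ||L p||^2 by stacking the
    regularization rows onto the least-squares system. J is the
    sensitivity (Jacobian) matrix, d the observations, L the
    regularization operator, lam the regularization weight."""
    A = np.vstack([J, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Ill-posed toy problem: 3 observations constraining 5 parameters.
rng = np.random.default_rng(0)
J = rng.standard_normal((3, 5))
p_true = np.ones(5)
d = J @ p_true
L = np.eye(5)                     # zeroth-order Tikhonov: prefer small p
p = tikhonov_solve(J, d, L, lam=1e-3)
```

With more parameters than observations the unregularized problem has infinitely many solutions; the L term selects a stable one while keeping the data misfit small, which is the trade-off the text describes.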
Gas transfer under high wind and its dependence on wave breaking and sea state
NASA Astrophysics Data System (ADS)
Brumer, Sophia; Zappa, Christopher; Fairall, Christopher; Blomquist, Byron; Brooks, Ian; Yang, Mingxi
2016-04-01
Quantifying greenhouse gas fluxes on regional and global scales relies on parameterizations of the gas transfer velocity K. To first order, K is dictated by wind speed (U) and is typically parameterized as a non-linear function of U. There is, however, a large spread in the K predicted by traditional parameterizations at high wind speed. This is because a large variety of environmental forcings and processes (wind, currents, rain, waves, breaking, surfactants, fetch) influence K, and wind speed alone cannot capture the variability of air-water gas exchange. At high wind speed especially, breaking waves become a key factor to take into account when estimating gas fluxes. The High Wind Gas exchange Study (HiWinGS) presents a unique opportunity to gain new insights into these poorly understood aspects of air-sea interaction under high winds. The HiWinGS cruise took place in the North Atlantic during October and November 2013. Wind speeds exceeded 15 m s-1 25% of the time, including 48 hrs with U10 > 20 m s-1. Continuous measurements of turbulent fluxes of heat, momentum, and gas (CO2, DMS, acetone and methanol) were taken from the bow of the R/V Knorr. The wave field was sampled by a wave rider buoy, and breaking events were tracked in visible imagery acquired from the port and starboard sides of the flying bridge during daylight hours at 20 Hz. Taking advantage of the range of physical forcing and wave conditions sampled during HiWinGS, we test existing parameterizations and explore ways of better constraining K based on whitecap coverage, sea state and breaking statistics, contrasting pure wind seas with swell-dominated periods. We distinguish between wind seas and swell based on a separation algorithm applied to directional wave spectra; for mixed seas, system alignment is considered when interpreting results. The four gases sampled during HiWinGS ranged from being mostly waterside controlled to almost entirely airside controlled.
While bubble-mediated transfer appears to be small for moderately soluble gases like DMS, the importance of wave-breaking turbulence transport has yet to be determined for all gases regardless of their solubility. This will be addressed by correlating measured K with estimates of active whitecap fraction (WA) and turbulent kinetic energy dissipation rate (ɛ). WA and ɛ are estimated from moments of the breaking crest length distribution derived from the imagery, focusing on young seas, when large-scale breaking waves (i.e., whitecapping) are likely to dominate ɛ.
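A hybrid parameterization of the kind being tested separates a directly wind-driven term from a bubble-mediated term scaled by whitecap fraction. The sketch below uses illustrative constants, not values fitted to HiWinGS data.

```python
def transfer_velocity(u10, sc, whitecap_frac,
                      a=0.24, k_bubble_660=2450.0):
    """Hybrid gas transfer velocity [cm/h]: a quadratic wind-speed direct
    term with Schmidt-number scaling plus a bubble-mediated term
    proportional to the whitecap fraction W. The constants a and
    k_bubble_660 are illustrative placeholders; real schemes also make
    the bubble term depend on gas solubility."""
    k_direct = a * u10**2 * (sc / 660.0) ** -0.5
    k_bubble = k_bubble_660 * whitecap_frac   # per unit W, at Sc = 660
    return k_direct + k_bubble

k_low = transfer_velocity(7.0, 660.0, 0.001)    # moderate wind, few whitecaps
k_high = transfer_velocity(20.0, 660.0, 0.04)   # high wind, heavy breaking
```

Separating the two terms is what lets sea-state information (whitecap coverage, breaking statistics) tighten the large spread among wind-speed-only parameterizations at high wind.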
A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals
NASA Technical Reports Server (NTRS)
VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.
2014-01-01
A parameterization is presented that provides the extinction cross section σe, single-scattering albedo ω, and asymmetry parameter g of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave. Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion levels. In the standard geometric optics implementation used here, σe is always twice the particle projected area. It is shown that ω is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of g with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and ω are investigated, and factors are determined to scale the parameterized g to values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 micron, revealing absolute differences with reference calculations of ω and g that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences with reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.
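The structure of such a scheme can be sketched as follows. Only the relation σe = 2A is the standard geometric-optics result quoted above; the exponential form for ω and its coefficient are invented placeholders standing in for the paper's fitted exponential/lognormal/polynomial functions.

```python
import math

def ice_optics_sketch(volume, proj_area, m_imag, wavelength):
    """Geometric-optics sketch of shortwave ice-crystal properties:
    extinction cross section is twice the projected area, and the
    single-scattering albedo is tied to an absorption size parameter
    built from the volume-to-area ratio. The exponential form and the
    0.47 coefficient are illustrative assumptions, not the paper's fit."""
    sigma_e = 2.0 * proj_area
    x_abs = 4.0 * math.pi * m_imag * (volume / proj_area) / wavelength
    omega = 1.0 - 0.47 * (1.0 - math.exp(-x_abs))
    return sigma_e, omega

# weakly absorbing visible wavelength vs. absorbing near-infrared
sig_vis, w_vis = ice_optics_sketch(1e-14, 1e-9, 2e-9, 0.5e-6)
sig_nir, w_nir = ice_optics_sketch(1e-14, 1e-9, 5e-3, 2.2e-6)
```

The sketch reproduces the qualitative behavior described above: ω is near unity where ice barely absorbs and drops as the absorption size parameter grows.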
Domain-averaged snow depth over complex terrain from flat field measurements
NASA Astrophysics Data System (ADS)
Helbig, Nora; van Herwijnen, Alec
2017-04-01
Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Snow depth measured at single ground stations offers an opportunity for data assimilation to improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these ground measurement networks provide a wealth of information, various studies have questioned the representativeness of such flat field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easy-to-derive topographic parameters. To derive the two parameterizations we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km using highly resolved snow depth maps at the peak of winter from two distinct climatic regions, in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor. We evaluated the domain-averaged snow depth derived with both parameterizations from nearby flat field measurements against the domain-averaged highly resolved snow depth. This revealed an overall improved performance for the parameterization combining a power-law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest the parameterization could be used to assimilate flat field snow depth into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.
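The shape of the second parameterization can be sketched as a flat-field depth modified by a power-law elevation trend and scaled by the sky view factor. All coefficients below are hypothetical placeholders for the study's fitted values.

```python
def domain_mean_snow_depth(hs_flat, z_domain, z_flat, svf,
                           lapse=0.0005, p=1.0):
    """Estimate domain-averaged snow depth [m] from a flat-field
    measurement hs_flat: a power-law elevation adjustment scaled by the
    subgrid sky view factor svf (0..1, lower = more terrain shading).
    The lapse and exponent values are hypothetical placeholders."""
    elev_factor = max(1.0 + lapse * (z_domain - z_flat), 0.0) ** p
    return hs_flat * elev_factor * svf

# a domain 400 m above the flat-field station, with moderate terrain shading
hs = domain_mean_snow_depth(1.2, 2200.0, 1800.0, svf=0.9)
```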
A ray tracing model of gravity wave propagation and breakdown in the middle atmosphere
NASA Technical Reports Server (NTRS)
Schoeberl, M. R.
1985-01-01
Gravity wave ray tracing and wave packet theory are used to parameterize wave breaking in the mesosphere. Rays are tracked by solving the group velocity equations, and the interaction with the basic state is determined by considering the evolution of the packet wave action density. The ray tracing approach has a number of advantages over the steady-state parameterization, as the effects of gravity wave focusing and refraction, local dissipation, and wave response to rapid changes in the mean flow are more realistically considered; if steady-state conditions prevail, however, the method gives identical results. The ray tracing algorithm is tested using both interactive and noninteractive models of the basic state. In the interactive model, gravity wave interaction with the polar night jet on a beta-plane is considered. The algorithm produces realistic polar night jet closure for weak topographic forcing of gravity waves. Planetary-scale waves forced by local transfer of wave action into the basic flow in turn transfer their wave action into the zonal mean flow. Highly refracted rays are also found not to contribute greatly to the climatology of the mesosphere, as their wave action is severely reduced by dissipation during their lateral travel.
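Tracking a ray by solving the group-velocity equations can be sketched for the simplest case of a uniform, motionless basic state, where the nonrotating dispersion relation ω = Nk/√(k² + m²) makes the rays straight lines. This idealized example is not the interactive algorithm of the paper, which lets the background flow vary along the ray.

```python
import numpy as np

def trace_ray(k, m, N, dt, nsteps):
    """Integrate dx/dt = ∂ω/∂k, dz/dt = ∂ω/∂m for a gravity wave with
    intrinsic frequency ω = N k / sqrt(k^2 + m^2) in a uniform,
    motionless basic state, so k, m, and ω stay constant along the ray."""
    x = z = 0.0
    kappa = np.hypot(k, m)
    cgx = N * m**2 / kappa**3           # horizontal group velocity
    cgz = -N * k * m / kappa**3         # vertical group velocity
    path = []
    for _ in range(nsteps):
        x += cgx * dt
        z += cgz * dt
        path.append((x, z))
    return np.array(path)

# upward-propagating packet: with k > 0, choosing m < 0 gives cgz > 0
path = trace_ray(k=2e-5, m=-2e-3, N=0.02, dt=60.0, nsteps=100)
```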
NASA Astrophysics Data System (ADS)
Monicke, A.; Katajisto, H.; Leroy, M.; Petermann, N.; Kere, P.; Perillo, M.
2012-07-01
For many years, layered composites have proven essential for the successful design of high-performance space structures, such as launchers or satellites. A generic cylindrical composite structure for a launcher application was optimized with respect to objectives and constraints typical for space applications. The studies included the structural stability, laminate load response and failure analyses. Several types of cylinders (with and without stiffeners) were considered and optimized using different lay-up parameterizations. Results for the best designs are presented and discussed. The simulation tools, ESAComp [1] and modeFRONTIER [2], employed in the optimization loop are elucidated and their value for the optimization process is explained.
Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.
2013-01-01
The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory, in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed parameter applications.
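The geostatistical prior can be sketched as a covariance matrix built from an autocorrelation function; nearby parameters are strongly correlated, which is what discourages rough, overfitted solutions. This minimal example is not bgaPEST's variogram machinery.

```python
import numpy as np

def exponential_covariance(x, sill, corr_len):
    """Prior covariance Q_ij = sill * exp(-|x_i - x_j| / corr_len)
    enforcing spatial correlation among distributed parameters at
    locations x; a minimal sketch of the geostatistical prior idea."""
    dist = np.abs(x[:, None] - x[None, :])
    return sill * np.exp(-dist / corr_len)

x = np.linspace(0.0, 100.0, 11)        # parameter locations [m]
Q = exponential_covariance(x, sill=1.0, corr_len=30.0)
# Q is symmetric positive definite with the sill on the diagonal
```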
Low-Thrust Transfers from Distant Retrograde Orbits to L2 Halo Orbits in the Earth-Moon System
NASA Technical Reports Server (NTRS)
Parrish, Nathan L.; Parker, Jeffrey S.; Hughes, Steven P.; Heiligers, Jeannette
2016-01-01
This paper presents a study of transfers between distant retrograde orbits (DROs) and L2 halo orbits in the Earth-Moon system that could be flown by a spacecraft with solar electric propulsion (SEP). Two collocation-based optimal control methods are used to optimize these highly nonlinear transfers: Legendre pseudospectral and Hermite-Simpson. Transfers between DROs and halo orbits using low-thrust propulsion have not been studied previously. This paper offers a study of several families of trajectories, parameterized by the number of orbital revolutions in a synodic frame. A method is described that reliably generates families of solutions even from a poor initial guess. The circular restricted 3-body problem (CRTBP) is used throughout the paper so that the results are autonomous and simpler to understand.
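The Hermite-Simpson collocation mentioned above enforces, on each mesh interval, a defect constraint built from a Hermite interpolant and Simpson quadrature. A minimal scalar sketch of that defect (not the paper's implementation) is:

```python
import math

def hermite_simpson_defect(f, xk, xk1, h):
    """Hermite-Simpson collocation defect for dx/dt = f(x) (scalar sketch).

    The midpoint state is interpolated from the endpoint states and slopes;
    the defect vanishes (to high order) when (xk, xk1) lie on a true trajectory.
    """
    fk, fk1 = f(xk), f(xk1)
    xm = 0.5 * (xk + xk1) + (h / 8.0) * (fk - fk1)       # Hermite midpoint
    fm = f(xm)
    return xk1 - xk - (h / 6.0) * (fk + 4.0 * fm + fk1)  # Simpson defect

# Exponential decay dx/dt = -x has exact solution x(t) = exp(-t); across one
# step of a true trajectory the defect is O(h^5), i.e. tiny for small h.
f = lambda x: -x
h = 0.1
d_good = hermite_simpson_defect(f, 1.0, math.exp(-h), h)  # endpoints on solution
d_bad = hermite_simpson_defect(f, 1.0, 1.0, h)            # endpoints not on it
```

A transcription-based optimizer drives all such defects to zero across the mesh while minimizing an objective such as propellant use.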
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing occurs on scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In the oceanic general circulation models typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly resolve the small-scale mixing processes and must therefore parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible for use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes; in fact, mixing is one of the least known and least understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section.
We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
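The simplest form of the vertical mixing parameterization discussed in this record represents unresolved turbulence as vertical diffusion of tracers with an eddy diffusivity. The column step below is a deliberately minimal illustration; a constant diffusivity and explicit time stepping are sketching choices, not what operational ocean models do.

```python
# Minimal sketch of parameterized vertical mixing in a coarse ocean model:
# one explicit diffusion step dT/dt = d/dz(kappa dT/dz) on a temperature
# column, with no-flux boundary conditions. Values are illustrative.

def mix_column(T, kappa, dz, dt):
    """Advance a temperature profile one diffusion step (flux form)."""
    n = len(T)
    flux = [0.0] * (n + 1)                       # fluxes at layer interfaces
    for i in range(1, n):
        flux[i] = -kappa * (T[i] - T[i - 1]) / dz
    return [T[i] - dt * (flux[i + 1] - flux[i]) / dz for i in range(n)]

T = [20.0, 18.0, 10.0, 8.0, 7.0]                 # warm surface over cold deep
T1 = mix_column(T, kappa=1e-4, dz=10.0, dt=3600.0)
```

The flux form conserves column heat content exactly while relaxing the profile toward a mixed state, which is the essential behavior any such scheme must reproduce; real schemes make kappa depend on shear, stratification, and surface forcing.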
NASA Astrophysics Data System (ADS)
Cipriani, L.; Fantini, F.; Bertacchi, S.
2014-06-01
Image-based modelling tools based on SfM algorithms have gained great popularity since several software houses provided applications able to produce 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve better-quality textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for achieving a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without native parameterization have to be "flattened" or "unwrapped" into the (u,v) parameter space, with the main objective of mapping them with a single image. This result can be obtained using two different strategies: the former automatic and fast, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, producing a sort of "atlas" of the original model in the parameter space that is in many instances inadequate and negatively affects the overall quality of representation. By using different solutions in synergy, ranging from semantic-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson-Sellers, A.
Land-surface schemes developed for incorporation into global climate models include parameterizations that are not yet fully validated and depend upon the specification of a large (20-50) number of ecological and soil parameters, the values of which are not yet well known. There are two methods of investigating the sensitivity of a land-surface scheme to prescribed values: simple one-at-a-time changes or factorial experiments. Factorial experiments offer information about interactions between parameters and are thus a more powerful tool. Here the results of a suite of factorial experiments are reported. These are designed (i) to illustrate the usefulness of this methodology and (ii) to identify factors important to the performance of complex land-surface schemes. The Biosphere-Atmosphere Transfer Scheme (BATS) is used and its sensitivity is considered (a) to prescribed ecological and soil parameters and (b) to atmospheric forcing used in the off-line tests undertaken. Results indicate that the most important atmospheric forcings are mean monthly temperature and the interaction between mean monthly temperature and total monthly precipitation, although fractional cloudiness and other parameters are also important. The most important ecological parameters are vegetation roughness length, soil porosity, and a factor describing the sensitivity of the stomatal resistance of vegetation to the amount of photosynthetically active solar radiation and, to a lesser extent, soil and vegetation albedos. Two-factor interactions including vegetation roughness length are more important than many of the 23 specified single factors. The results of factorial sensitivity experiments such as these could form the basis for intercomparison of land-surface parameterization schemes and for field experiments and satellite-based observation programs aimed at improving evaluation of important parameters.
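The advantage of factorial experiments over one-at-a-time changes can be shown with a tiny 2^2 design: main effects and the two-factor interaction all fall out of the same four runs. The response values below are made up for illustration; they stand in for, say, evaporation at low (-1) and high (+1) levels of two factors.

```python
# Sketch of a 2^2 factorial analysis in the spirit of the BATS sensitivity
# study. Factors A (e.g. roughness length) and B (e.g. soil porosity) are set
# to coded levels -1/+1; y is a hypothetical model response for each run.

runs = {(-1, -1): 3.0, (+1, -1): 5.0, (-1, +1): 4.0, (+1, +1): 9.0}

def effect(contrast):
    """Factorial effect: contrast-weighted sum of responses over 2 runs/level."""
    return sum(contrast(a, b) * y for (a, b), y in runs.items()) / 2.0

main_A = effect(lambda a, b: a)        # main effect of A
main_B = effect(lambda a, b: b)        # main effect of B
inter_AB = effect(lambda a, b: a * b)  # two-factor interaction A x B
```

A one-at-a-time study varying A at low B would estimate its effect as 5.0 - 3.0 = 2.0 and would never see the nonzero A x B interaction, which is exactly the information the abstract says factorial designs add.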
NASA Astrophysics Data System (ADS)
Vihma, T.; Pirazzini, R.; Fer, I.; Renfrew, I. A.; Sedlar, J.; Tjernström, M.; Lüpkes, C.; Nygård, T.; Notz, D.; Weiss, J.; Marsan, D.; Cheng, B.; Birnbaum, G.; Gerland, S.; Chechin, D.; Gascard, J. C.
2014-09-01
The Arctic climate system includes numerous highly interactive small-scale physical processes in the atmosphere, sea ice, and ocean. During and since the International Polar Year 2007-2009, significant advances have been made in understanding these processes. Here, these recent advances are reviewed, synthesized, and discussed. In atmospheric physics, the primary advances have been in cloud physics, radiative transfer, mesoscale cyclones, coastal, and fjordic processes as well as in boundary layer processes and surface fluxes. In sea ice and its snow cover, advances have been made in understanding of the surface albedo and its relationships with snow properties, the internal structure of sea ice, the heat and salt transfer in ice, the formation of superimposed ice and snow ice, and the small-scale dynamics of sea ice. For the ocean, significant advances have been related to exchange processes at the ice-ocean interface, diapycnal mixing, double-diffusive convection, tidal currents and diurnal resonance. Despite this recent progress, some of these small-scale physical processes are still not sufficiently understood: these include wave-turbulence interactions in the atmosphere and ocean, the exchange of heat and salt at the ice-ocean interface, and the mechanical weakening of sea ice. Many other processes are reasonably well understood as stand-alone processes but the challenge is to understand their interactions with and impacts and feedbacks on other processes. Uncertainty in the parameterization of small-scale processes continues to be among the greatest challenges facing climate modelling, particularly in high latitudes. Further improvements in parameterization require new year-round field campaigns on the Arctic sea ice, closely combined with satellite remote sensing studies and numerical model experiments.
NASA Astrophysics Data System (ADS)
Vihma, T.; Pirazzini, R.; Renfrew, I. A.; Sedlar, J.; Tjernström, M.; Nygård, T.; Fer, I.; Lüpkes, C.; Notz, D.; Weiss, J.; Marsan, D.; Cheng, B.; Birnbaum, G.; Gerland, S.; Chechin, D.; Gascard, J. C.
2013-12-01
The Arctic climate system includes numerous highly interactive small-scale physical processes in the atmosphere, sea ice, and ocean. During and since the International Polar Year 2007-2008, significant advances have been made in understanding these processes. Here these advances are reviewed, synthesized and discussed. In atmospheric physics, the primary advances have been in cloud physics, radiative transfer, mesoscale cyclones, coastal and fjordic processes, as well as in boundary-layer processes and surface fluxes. In sea ice and its snow cover, advances have been made in understanding of the surface albedo and its relationships with snow properties, the internal structure of sea ice, the heat and salt transfer in ice, the formation of super-imposed ice and snow ice, and the small-scale dynamics of sea ice. In the ocean, significant advances have been related to exchange processes at the ice-ocean interface, diapycnal mixing, tidal currents and diurnal resonance. Despite this recent progress, some of these small-scale physical processes are still not sufficiently understood: these include wave-turbulence interactions in the atmosphere and ocean, the exchange of heat and salt at the ice-ocean interface, and the mechanical weakening of sea ice. Many other processes are reasonably well understood as stand-alone processes but the challenge is to understand their interactions with, and impacts and feedbacks on, other processes. Uncertainty in the parameterization of small-scale processes continues to be among the largest challenges facing climate modeling, and nowhere is this more true than in the Arctic. Further improvements in parameterization require new year-round field campaigns on the Arctic sea ice, closely combined with satellite remote sensing studies and numerical model experiments.
Improving the Predictability of Severe Water Levels along the Coasts of Marginal Seas
NASA Astrophysics Data System (ADS)
Ridder, N. N.; de Vries, H.; van den Brink, H.; De Vries, H.
2016-12-01
Extreme water levels can lead to catastrophic consequences with severe societal and economic repercussions. Particularly vulnerable are countries that are largely situated below sea level. To support and optimize forecast models, as well as future adaptation efforts, this study assesses the modeled contribution of storm surges and astronomical tides to total water levels under different air-sea momentum transfer parameterizations in a numerical surge model (WAQUA/DCSMv5) of the North Sea. It particularly focuses on the implications for the representation of extreme and rapidly recurring severe water levels over the past decades based on the example of the Netherlands. For this, WAQUA/DCSMv5, which is currently used to forecast coastal water levels in the Netherlands, is forced with ERA Interim reanalysis data. Model results are obtained from two different methodologies to parameterize air-sea momentum transfer. The first calculates the governing wind stress forcing using a drag coefficient derived from the conventional approach of wind speed dependent Charnock constants. The other uses instantaneous wind stress from the parameterization of the quasi-linear theory applied within the ECMWF wave model which is expected to deliver a more realistic forcing. The performance of both methods is tested by validating the model output with observations, paying particular attention to their ability to reproduce rapidly succeeding high water levels and extreme events. In a second step, the common features of and connections between these events are analyzed. The results of this study will allow recommendations for the improvement of water level forecasts within marginal seas and support decisions by policy makers. Furthermore, they will strengthen the general understanding of severe and extreme water levels as a whole and help to extend the currently limited knowledge about clustering events.
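The first of the two momentum-transfer routes described above, a drag coefficient from wind-speed-dependent Charnock roughness, can be sketched as a fixed-point iteration on the neutral logarithmic wind profile. The Charnock constant of 0.011, neutral stratification, and the simple iteration below are illustrative assumptions, not the WAQUA/DCSMv5 configuration.

```python
import math

# Sketch of the conventional wind-stress route: a neutral 10-m drag
# coefficient Cd from the logarithmic profile with a Charnock roughness
# length z0 = alpha * ustar^2 / g, solved by fixed-point iteration.

def neutral_drag(U10, alpha=0.011, g=9.81, kappa=0.4, z=10.0):
    Cd = 1.2e-3                          # first guess
    for _ in range(50):
        ustar = math.sqrt(Cd) * U10      # friction velocity from current Cd
        z0 = alpha * ustar**2 / g        # Charnock roughness length
        Cd = (kappa / math.log(z / z0)) ** 2
    return Cd

def wind_stress(U10, rho_air=1.25):
    """Surface wind stress (N m^-2) forcing the surge model."""
    return rho_air * neutral_drag(U10) * U10**2
```

The resulting drag coefficient grows with wind speed, so the stress forcing increases faster than quadratically; the abstract's alternative route instead takes instantaneous stress directly from the ECMWF wave model.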
ARM - Midlatitude Continental Convective Clouds
Jensen, Mike; Bartholomew, Mary Jane; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-19
Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital to improving current and future simulations of Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and application to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have been traditionally used by modelers for evaluating and improving parameterization schemes.
ARM - Midlatitude Continental Convective Clouds (comstock-hvps)
Jensen, Mike; Comstock, Jennifer; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-06
Tokaya, Janot P; Raaijmakers, Alexander J E; Luijten, Peter R; van den Berg, Cornelis A T
2018-04-24
We introduce the transfer matrix (TM) that makes MR-based wireless determination of transfer functions (TFs) possible. TFs are implant-specific measures for RF-safety assessment of linear implants. The TF relates an incident tangential electric field on an implant to a scattered electric field at its tip that generally governs local heating. The TM extends this concept and relates an incident tangential electric field to a current distribution in the implant, thereby characterizing the RF response along the entire implant. The TM is exploited to measure TFs with MRI without hardware alterations. A model of rightward and leftward propagating attenuated waves undergoing multiple reflections is used to derive an analytical expression for the TM. This allows parameterization of the TM of generic implants, e.g., (partially) insulated single wires, in a homogeneous medium in a few unknowns that simultaneously describe the TF. These unknowns can be determined with MRI, making it possible to measure the TM and, therefore, also the TF. The TM is able to predict an induced current due to an incident electric field and can be accurately parameterized with a limited number of unknowns. Using this description the TF is determined accurately (with a Pearson correlation coefficient R ≥ 0.9 between measurements and simulations) from MRI acquisitions. The TM enables measurement of TFs with MRI of the tested generic implant models. The MR-based method does not need hardware alterations and is wireless, making TF determination in more realistic scenarios conceivable. © 2018 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.
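In discretized form, the transfer-matrix idea above is just a linear map: on an n-point grid along the implant, the induced current is I = T @ E, and the transfer function is the row of T at the tip. The exponentially attenuated kernel below is a stand-in assumption for the paper's analytical multiple-reflection model, with made-up dimensions and propagation constant.

```python
import numpy as np

# Discretized sketch of the transfer matrix: the induced current I(z) along an
# implant is a linear response to the incident tangential field E(z'), so
# I = T @ E with T an n x n matrix. The kernel below (attenuated waves with an
# illustrative complex propagation constant) is a placeholder model.

n, L = 100, 0.4                          # grid points, implant length (m)
z = np.linspace(0.0, L, n)
gamma = 20.0 + 60.0j                     # illustrative propagation constant (1/m)

dz = z[1] - z[0]
T = dz * np.exp(-gamma * np.abs(z[:, None] - z[None, :]))  # response kernel

E = np.ones(n)                           # uniform incident tangential field
I = T @ E                                # induced current distribution
TF = T[-1, :]                            # transfer function: response at the tip
```

Parameterizing T with a few physical unknowns (attenuation, wavelength, reflection coefficients), as the abstract describes, reduces fitting the full n x n matrix to estimating a handful of scalars from MRI data.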
Garcia, L; Bedos, C; Génermont, S; Braud, I; Cellier, P
2011-09-01
Ammonia and pesticide volatilization in the field is a surface phenomenon involving physical and chemical processes that depend on the soil surface temperature and water content. The water transfer, heat transfer and energy budget submodels of volatilization models are adapted from the most commonly accepted formalisms and parameterizations. They are less detailed than the dedicated models describing water and heat transfers and surface status. The aim of this work was to assess the ability of one of the available mechanistic volatilization models, Volt'Air, to accurately describe the pedo-climatic conditions of a soil surface at the required time and space resolution. The assessment involves: (i) a sensitivity analysis, (ii) an evaluation of Volt'Air outputs in the light of outputs from a reference Soil-Vegetation-Atmosphere Transfer model (SiSPAT) and three experimental datasets, and (iii) the study of three tests based on modifications of SiSPAT to establish the potential impact of the simplifying assumptions used in Volt'Air. The analysis confirmed that a 5 mm surface layer was well suited, and that Volt'Air surface temperature correlated well with the experimental measurements as well as with SiSPAT outputs. In terms of liquid water transfers, Volt'Air was overall consistent with SiSPAT, with discrepancies only during major rainfall events and dry weather conditions. The tests enabled us to identify the main source of the discrepancies between Volt'Air and SiSPAT: the lack of gaseous water transfer description in Volt'Air. They also helped to explain why neither Volt'Air nor SiSPAT was able to represent lower values of surface water content: current classical water retention and hydraulic conductivity models are not yet adapted to cases of very dry conditions. Given the outcomes of this study, we discuss to what extent the volatilization models can be improved and the questions they pose for current research in water transfer modeling and parameterization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saenz, Juan A.; Chen, Qingshan; Ringler, Todd
Recent work has shown that taking the thickness-weighted average (TWA) of the Boussinesq equations in buoyancy coordinates results in exact equations governing the prognostic residual mean flow where eddy–mean flow interactions appear in the horizontal momentum equations as the divergence of the Eliassen–Palm flux tensor (EPFT). It has been proposed that, given the mathematical tractability of the TWA equations, the physical interpretation of the EPFT, and its relation to potential vorticity fluxes, the TWA is an appropriate framework for modeling ocean circulation with parameterized eddies. The authors test the feasibility of this proposition and investigate the connections between the TWA framework and the conventional framework used in models, where Eulerian mean flow prognostic variables are solved for. Using the TWA framework as a starting point, this study explores the well-known connections between vertical transfer of horizontal momentum by eddy form drag and eddy overturning by the bolus velocity, used by Greatbatch and Lamb and Gent and McWilliams to parameterize eddies. After implementing the TWA framework in an ocean general circulation model, we verify our analysis by comparing the flows in an idealized Southern Ocean configuration simulated using the TWA and conventional frameworks with the same mesoscale eddy parameterization.
Zerara, Mohamed; Brickmann, Jürgen; Kretschmer, Robert; Exner, Thomas E
2009-02-01
Quantitative information on solvation and transfer free energies is often needed for understanding many physicochemical processes, e.g., molecular recognition phenomena, transport and diffusion processes through biological membranes, and the tertiary structure of proteins. Recently, a concept for the localization and quantification of hydrophobicity was introduced (Jäger et al. J Chem Inf Comput Sci 43:237-247, 2003). This model is based on the assumption that the overall hydrophobicity can be obtained as a superposition of fragment contributions. To date, all predictive models for logP have been parameterized for the n-octanol/water system (logP(oct)), while very few models, with poor predictive abilities, are available for other solvents. In this work, we propose a parameterization of an empirical model for the n-octanol/water, alkane/water (logP(alk)) and cyclohexane/water (logP(cyc)) systems. Comparison of both logP(alk) and logP(cyc) with the logarithms of brain/blood ratios (logBB) for a set of structurally diverse compounds revealed a high correlation, showing their superiority over the logP(oct) measure in this context.
Measured and parameterized energy fluxes estimated for Atlantic transects of RV Polarstern
NASA Astrophysics Data System (ADS)
Bumke, Karl; Macke, Andreas; Kalisch, John; Kleta, Henry
2013-04-01
Even today, energy fluxes over the oceans are difficult to assess. For example, the relative paucity of evaporation observations and the uncertainties of currently employed empirical approaches lead to large uncertainties in evaporation products over the ocean (e.g. Large and Yeager, 2009). Within the frame of OCEANET (Macke et al., 2010) we performed such measurements on Atlantic transects between Bremerhaven (Germany) and Cape Town (South Africa) or Punta Arenas (Chile) onboard RV Polarstern in recent years. The basic measurements of sensible and latent heat fluxes are inertial-dissipation flux estimates (e.g. Dupuis et al., 1997) and measurements of the bulk variables. Turbulence measurements included a sonic anemometer and an infrared hygrometer, both mounted on the crow's nest. Mean meteorological sensors were those of the ship's operational measurement system. The global radiation and the downwelling terrestrial radiation were measured on the OCEANET container placed on the monkey island. About 1000 time series of 1 h length were analyzed to derive bulk transfer coefficients for the fluxes of sensible and latent heat. The bulk transfer coefficients were applied to the ship's meteorological data to derive the heat fluxes at the sea surface. The reflected solar radiation was estimated from the measured global radiation. The upwelling terrestrial radiation was derived from the skin temperature according to the Stefan-Boltzmann law. Parameterized heat fluxes were compared to the widely used COARE parameterization (Fairall et al., 2003); the agreement is excellent. Measured and parameterized heat and radiation fluxes gave the total energy budget at the air-sea interface. As expected, the mean total flux is positive, but there are also areas where it is negative, indicating an energy loss of the ocean. It could be shown that the variations in the energy budget are mainly due to insolation and evaporation.
A comparison between the mean values of measured and parameterized sensible and latent heat fluxes shows that the data are suitable to validate satellite derived fluxes at the sea surface and re-analysis data. References Dupuis, H., P. K. Taylor, A. Weill, and K. Katsaros, 1997: Inertial dissipation method applied to derive turbulent fluxes over the ocean during the surface of the ocean. J. Geophys. Res., 102 (C9), 21,115-21,129. Fairall, C. W., E. F. Bradley, J. E. Hare, A. A. Grachev, J. B. Edson, 2003: Bulk Parameterization of Air-Sea Fluxes: Updates and Verification for the COARE Algorithm. J. Climate, 16, 571-591. Large, W.G., and S.G. Yeager, 2009: The global climatology of an interannually varying air-sea flux data set. Climate Dynamics 33, 341-364. Macke, A., Kalisch, J., Zoll, Y., and Bumke, K., 2010: Radiative effects of the cloudy atmosphere from ground and satellite based observations, EPJ Web of Conferences, 5 9, 83-94
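The upwelling terrestrial radiation step described in the record above, skin temperature through the Stefan-Boltzmann law, is simple enough to write out; the grey-body emissivity of 0.98 is an illustrative assumption, as is the sign convention in the budget sketch (positive into the ocean).

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def lw_up(T_skin, emissivity=0.98):
    """Upwelling longwave from the sea skin temperature in kelvin
    (grey-body sketch; the 0.98 emissivity is an assumed value)."""
    return emissivity * SIGMA * T_skin**4

def net_budget(sw_down, albedo, lw_down, T_skin, Q_sens, Q_lat):
    """Total surface energy flux, positive into the ocean: reflected solar is
    removed via the albedo, longwave up via the skin temperature, and the
    turbulent heat fluxes (from the bulk coefficients) are subtracted."""
    return sw_down * (1.0 - albedo) + lw_down - lw_up(T_skin) - Q_sens - Q_lat

lw = lw_up(288.15)   # ~15 degC skin temperature
```

With positive mean insolation dominating, the budget comes out positive on average, while strong evaporation (large Q_lat) can flip it negative regionally, matching the abstract's finding.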
The potential role of sea spray droplets in facilitating air-sea gas transfer
NASA Astrophysics Data System (ADS)
Andreas, E. L.; Vlahos, P.; Monahan, E. C.
2016-05-01
For over 30 years, air-sea interaction specialists have been evaluating and parameterizing the role of whitecap bubbles in air-sea gas exchange. To our knowledge, no one, however, has studied the mirror-image process of whether sea spray droplets can facilitate air-sea gas exchange. We are therefore using theory, data analysis, and numerical modeling to quantify the role of spray on air-sea gas transfer. In this, our first formal work on this subject, we seek the rate-limiting step in spray-mediated gas transfer by evaluating the three time scales that govern the exchange: τ_air, which quantifies the rate of transfer between the atmospheric gas reservoir and the surface of the droplet; τ_int, which quantifies the exchange rate across the air-droplet interface; and τ_aq, which quantifies gas mixing within the aqueous solution droplet.
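The rate-limiting-step comparison described above amounts to asking which of the three time scales is longest, since the slowest stage controls the overall exchange. The numerical values below are placeholders for illustration, not results from the study.

```python
# Sketch of the rate-limiting-step comparison for spray-mediated gas
# transfer: the slowest (longest) of the three time scales limits the
# overall exchange. All values here are hypothetical placeholders.

timescales = {
    "tau_air": 1e-3,   # atmospheric reservoir <-> droplet surface (s)
    "tau_int": 1e-4,   # across the air-droplet interface (s)
    "tau_aq":  1e-1,   # gas mixing within the aqueous droplet (s)
}

rate_limiting = max(timescales, key=timescales.get)
tau_total = timescales[rate_limiting]   # exchange proceeds on this time scale
```

In practice each time scale depends on droplet size, gas solubility, and diffusivity, so the identity of the rate-limiting step can change from gas to gas and droplet to droplet.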
A scaling theory for linear systems
NASA Technical Reports Server (NTRS)
Brockett, R. W.; Krishnaprasad, P. S.
1980-01-01
A theory of scaling for rational (transfer) functions in terms of transformation groups is developed. Two different four-parameter scaling groups which play natural roles in studying linear systems are identified, and the effect of scaling on Fisher information and related statistical measures in system identification is studied. The scalings considered include change of time scale, feedback, exponential scaling, magnitude scaling, etc. The scaling action of the groups studied is tied to the geometry of transfer functions in a rather strong way as becomes apparent in the examination of the invariants of scaling. As a result, the scaling process also provides new insight into the parameterization question for rational functions.
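One of the scaling actions listed above, change of time scale, acts on a rational transfer function G(s) = num(s)/den(s) through the substitution s -> a*s, which multiplies the coefficient of s^k by a^k. A minimal sketch of that coefficient action:

```python
# Change of time scale on a polynomial in s: substituting s -> a*s multiplies
# the coefficient of s^k by a**k (coefficients in ascending powers of s).

def time_scale(coeffs, a):
    """Coefficients of p(a*s) given those of p(s), ascending order."""
    return [c * a**k for k, c in enumerate(coeffs)]

# Example: G(s) = 1 / (s^2 + 2s + 1); den coefficients [1, 2, 1] ascending.
den = [1.0, 2.0, 1.0]
den_scaled = time_scale(den, 2.0)   # denominator of G(2s): 1 + 4s + 4s^2
```

Applying the same map to numerator and denominator realizes the group action on G itself, and quantities unchanged by it (such as the DC gain here) are examples of the scaling invariants the abstract refers to.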
Miner, Nadine E.; Caudell, Thomas P.
2004-06-08
A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.
Impact of Physics Parameterization Ordering in a Global Atmosphere Model
Donahue, Aaron S.; Caldwell, Peter M.
2018-02-02
Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role on the model solution.
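The noncommutativity of sequential splitting described above can be demonstrated with a toy state and two "processes" that each see the other's update: whenever the operators do not commute, the order of application changes the answer. The matrices below are illustrative, not E3SM physics.

```python
# Toy illustration of why sequentially split parameterizations are
# noncommutative: two linear processes applied one after the other give
# different results in different orders when their matrices do not commute.

def apply(M, x):
    """Apply a 2x2 matrix M to a 2-component state x."""
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

A = [[1.0, 0.1], [0.0, 1.0]]   # process 1 (stand-in for, e.g., convection)
B = [[1.0, 0.0], [0.2, 1.0]]   # process 2 (stand-in for, e.g., radiation)

x0 = [1.0, 1.0]
x_ab = apply(B, apply(A, x0))  # call A first, then B
x_ba = apply(A, apply(B, x0))  # call B first, then A
```

Because each process acts on the state already modified by its predecessor, A-then-B and B-then-A diverge; that is the mechanism the abstract identifies behind the order-dependent climates.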
Empirical parameterization of setup, swash, and runup
Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.
2006-01-01
Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a -17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
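The general form of this runup parameterization, as it is commonly quoted from this work, combines the setup and swash terms described above into a single 2% exceedence estimate from deep-water wave height H0, peak period T, and foreshore slope beta_f. The sketch below states that commonly quoted form; treat it as illustrative rather than a substitute for the paper's full set of fits.

```python
import math

# Commonly quoted general form of the empirical 2% runup parameterization:
# R2 = 1.1 * (setup + swash/2), with an Iribarren-type setup term and a
# combined incident + infragravity swash term.

def runup_2pct(H0, T, beta_f, g=9.81):
    L0 = g * T**2 / (2.0 * math.pi)                           # deep-water wavelength
    setup = 0.35 * beta_f * math.sqrt(H0 * L0)                # wave setup term
    swash = math.sqrt(H0 * L0 * (0.563 * beta_f**2 + 0.004))  # total swash term
    return 1.1 * (setup + swash / 2.0)

R2 = runup_2pct(H0=2.0, T=10.0, beta_f=0.05)   # ~1 m for these conditions
```

Note how the infragravity part of the swash term (the 0.004 coefficient) carries no slope dependence, consistent with the abstract's finding that infragravity swash depends only on offshore wave height and wavelength.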
Impact of Physics Parameterization Ordering in a Global Atmosphere Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donahue, Aaron S.; Caldwell, Peter M.
Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role on the model solution.
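The noncommutativity of sequential splitting is easy to demonstrate with a toy example. The two "processes" below are hypothetical stand-ins (a condensation-like relaxation and a convection-like rainout with made-up coefficients), not E3SM parameterizations; the point is only that applying them in different orders yields different states.

```python
# Toy sequential splitting: each process updates the state left by the last.
def process_a(q):
    """Condensation-like: relax humidity q halfway toward saturation (q_sat = 1)."""
    return q + 0.5 * (1.0 - q)

def process_b(q):
    """Convection-like: rain out 80% of any humidity above a threshold of 1.2."""
    return q - 0.8 * max(q - 1.2, 0.0)

q0 = 2.0
q_ab = process_b(process_a(q0))  # call A, then B
q_ba = process_a(process_b(q0))  # call B, then A
print(q_ab, q_ba)                # the two orderings disagree
```

Because process B sees the state already modified by A (or vice versa), the final answer depends on call order, which is exactly the sensitivity the 24-member reordering suite probes.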
Subgrid-scale parameterization and low-frequency variability: a response theory approach
NASA Astrophysics Data System (ADS)
Demaeyer, Jonathan; Vannitsem, Stéphane
2016-04-01
Weather and climate models are limited in the range of spatial and temporal scales they can resolve. However, due to the huge span of space and time scales involved in Earth system dynamics, the effects of many sub-grid processes must be parameterized. These parameterizations have an impact on forecasts and projections, and can also affect the low-frequency variability present in the system (such as that associated with ENSO or the NAO). An important question is therefore what impact stochastic parameterizations have on the low-frequency variability generated by the system and its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on Ruelle's response theory, proposed in Wouters and Lucarini (2012). We test this approach in a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), in which part of the atmospheric modes are considered unresolved. A natural separation of the phase space into a slow invariant set and its fast complement allows an analytical derivation of the different terms involved in the parameterization, namely the average, fluctuation, and long-memory terms. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach to scale separation opens new avenues for subgrid-scale parameterizations in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03003.
DEVELOPMENT OF A LAND-SURFACE MODEL PART I: APPLICATION IN A MESOSCALE METEOROLOGY MODEL
Parameterization of land-surface processes and consideration of surface inhomogeneities are very important to mesoscale meteorological modeling applications, especially those that provide information for air quality modeling. To provide crucial, reliable information on the diurn...
2015-08-20
evapotranspiration (ET) over oceans may be significantly lower than previously thought. The MEP model parameterized turbulent transfer coefficients...fluxes, ocean freshwater fluxes, regional crop yield among others. An on-going study suggests that the global annual evapotranspiration (ET) over...Bras, Jingfeng Wang. A model of evapotranspiration based on the theory of maximum entropy production, Water Resources Research, (03 2011): 0. doi
Parameterized code SHARM-3D for radiative transfer over inhomogeneous surfaces.
Lyapustin, Alexei; Wang, Yujie
2005-12-10
The code SHARM-3D, developed for fast and accurate simulations of the monochromatic radiance at the top of the atmosphere over spatially variable surfaces with Lambertian or anisotropic reflectance, is described. The atmosphere is assumed to be laterally uniform across the image and to consist of two layers with aerosols contained in the bottom layer. The SHARM-3D code performs simultaneous calculations for all specified incidence-view geometries and multiple wavelengths in one run. The numerical efficiency of the current version of the code is close to its potential limit and is achieved by means of two innovations. The first is the development of a comprehensive precomputed lookup table of the three-dimensional atmospheric optical transfer function for various atmospheric conditions. The second is the use of a linear kernel model of the land surface bidirectional reflectance factor (BRF) in our algorithm that has led to a fully parameterized solution in terms of the surface BRF parameters. The code is also able to model inland lakes and rivers. The water pixels are described with the Nakajima-Tanaka BRF model of wind-roughened water surface with a Lambertian offset, which is designed to model approximately the reflectance of suspended matter and of a shallow lake or river bottom.
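The key property of a linear kernel BRF model is that the reflectance is a linear combination of fixed, angle-dependent kernels weighted by surface parameters, which is what makes a fully parameterized solution possible. The sketch below is not SHARM-3D code: it evaluates one widely used kernel (RossThick) plus an isotropic term, while a full Ross-Li-style model would add a geometric-optics kernel as a third linear term.

```python
import math

def ross_thick(theta_s, theta_v, phi):
    """RossThick volumetric-scattering kernel (all angles in radians)."""
    cos_xi = (math.cos(theta_s) * math.cos(theta_v)
              + math.sin(theta_s) * math.sin(theta_v) * math.cos(phi))
    xi = math.acos(max(-1.0, min(1.0, cos_xi)))  # scattering phase angle
    return (((math.pi / 2 - xi) * math.cos(xi) + math.sin(xi))
            / (math.cos(theta_s) + math.cos(theta_v)) - math.pi / 4)

def brf_linear(k_iso, k_vol, theta_s, theta_v, phi):
    """Linear kernel BRF: isotropic weight plus weighted volumetric kernel."""
    return k_iso + k_vol * ross_thick(theta_s, theta_v, phi)
```

Because the kernel values depend only on geometry, they can be precomputed for all incidence-view geometries in one pass, with the surface parameters (k_iso, k_vol, ...) entering only as linear weights.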
Radiatively driven stratosphere-troposphere interactions near the tops of tropical cloud clusters
NASA Technical Reports Server (NTRS)
Churchill, Dean D.; Houze, Robert A., Jr.
1990-01-01
Results are presented from two numerical simulations of the mechanism involved in the dehydration of air, using the model of Churchill (1988) and Churchill and Houze (1990), which combines water and ice physics parameterizations and IR and solar-radiation parameterizations with a convective adjustment scheme in a kinematic, nondynamic framework. One simulation, of a thin cirrus cloud, tested the Danielsen (1982) hypothesis of a dehydration mechanism for the stratosphere; the other simulated a mesoscale updraft to test an alternative mechanism for 'freeze-drying' the air. The results show that the physical processes simulated in the mesoscale updraft differ from those in the thin-cirrus simulation. In the thin-cirrus case, eddy fluxes occur in response to IR radiative destabilization and hence no net transfer occurs between troposphere and stratosphere, whereas the mesoscale updraft case has net upward mass transport into the lower stratosphere.
Diagnosing the impact of alternative calibration strategies on coupled hydrologic models
NASA Astrophysics Data System (ADS)
Smith, T. J.; Perera, C.; Corrigan, C.
2017-12-01
Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and of society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models is imperative. While extensive attention has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity/variability of parameterizations and its impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness/fidelity.
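A minimal calibration experiment of the kind described can be sketched with a hypothetical degree-day snowmelt model and a brute-force parameter search. The model form, coefficients, and objective function here are illustrative assumptions, not the study's actual models or calibration strategies.

```python
def simulate(temps, precip, ddf, t_base=0.0):
    """Daily water input (melt + rain, mm) from a degree-day snow model.

    ddf: degree-day factor (mm per degC per day), the parameter to calibrate.
    """
    swe, out = 0.0, []
    for t, p in zip(temps, precip):
        if t <= t_base:
            swe += p                               # accumulate snowfall
            out.append(0.0)
        else:
            melt = min(swe, ddf * (t - t_base))    # temperature-index melt
            swe -= melt
            out.append(melt + p)                   # melt plus rain
    return out

def calibrate(temps, precip, observed, candidates):
    """Pick the candidate ddf minimizing the sum of squared errors."""
    def sse(ddf):
        sim = simulate(temps, precip, ddf)
        return sum((s - o) ** 2 for s, o in zip(sim, observed))
    return min(candidates, key=sse)
```

Swapping the objective function, calibration period, or observed variable in this loop is the simplest analogue of the "alternative calibration strategies" whose parameter sensitivity the study diagnoses.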
Explaining the convector effect in canopy turbulence by means of large-eddy simulation
Banerjee, Tirtha; De Roo, Frederik; Mauder, Matthias
2017-06-20
Semi-arid forests are found to sustain a massive sensible heat flux in spite of having a low surface to air temperature difference by lowering the aerodynamic resistance to heat transfer (r_H) - a property called the canopy convector effect (CCE). In this work large-eddy simulations are used to demonstrate that the CCE appears more generally in canopy turbulence. It is indeed a generic feature of canopy turbulence: r_H of a canopy is found to reduce with increasing unstable stratification, which effectively increases the aerodynamic roughness for the same physical roughness of the canopy. This relation offers a sufficient condition to construct a general description of the CCE. In addition, we review existing parameterizations for r_H from the evapotranspiration literature and test to what extent they are able to capture the CCE, thereby exploring the possibility of an improved parameterization.
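The definition of r_H behind this argument can be made concrete by inverting the standard bulk sensible-heat-flux relation H = rho * c_p * (T_s - T_a) / r_H. This sketch is not from the paper; the air density and heat capacity defaults are ordinary textbook values.

```python
def aerodynamic_resistance(H, t_surf, t_air, rho=1.2, cp=1004.0):
    """Bulk aerodynamic resistance to heat transfer r_H (s m-1),
    inverted from H = rho * cp * (t_surf - t_air) / r_H.

    H: sensible heat flux (W m-2); t_surf, t_air: temperatures (K or degC).
    """
    return rho * cp * (t_surf - t_air) / H
```

The convector effect is visible directly in this relation: sustaining the same large H with a small surface-air temperature difference is only possible if r_H is small.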
Numerical Evaluation of Parameter Correlation in the Hartmann-Tran Line Profile
NASA Astrophysics Data System (ADS)
Adkins, Erin M.; Reed, Zachary; Hodges, Joseph T.
2017-06-01
The partially correlated quadratic, speed-dependent hard-collision profile (pCqSDHCP), for simplicity referred to as the Hartmann-Tran profile (HTP), has been recommended as a generalized lineshape for high resolution spectroscopy. The HTP parameterizes complex collisional effects such as Dicke narrowing, speed-dependent narrowing, and correlations between velocity-changing and dephasing collisions, while also reducing to simpler, widely used profiles such as the Voigt profile. As advanced lineshape profiles are adopted by more researchers, it is important to understand the limitations that data quality places on the ability to retrieve physically meaningful parameters when sophisticated lineshapes are fit to spectra of finite signal-to-noise ratio. In this work, spectra were simulated using the HITRAN Application Programming Interface (HAPI) across a full range of line parameters. Simulated spectra were evaluated to quantify the precision with which fitted lineshape parameters can be determined at a given signal-to-noise ratio, focusing on the numerical correlation between the retrieved Dicke narrowing frequency and the velocity-changing and dephasing collisions correlation parameter. Tran, H., N. Ngo, and J.-M. Hartmann, Journal of Quantitative Spectroscopy and Radiative Transfer 2013. 129: p. 89-100. Tennyson, et al., Pure Appl. Chem. 2014, 86: p. 1931-1943. Kochanov, R.V., et al., Journal of Quantitative Spectroscopy and Radiative Transfer 2016. 177: p. 15-30. Tran, H., N. Ngo, and J.-M. Hartmann, Journal of Quantitative Spectroscopy and Radiative Transfer 2013. 129: p. 199-203.
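The simplest limiting case of the HTP, obtained when the narrowing and correlation parameters vanish, is the Voigt profile mentioned above. The sketch below is not the authors' code: it evaluates the Voigt profile by brute-force numerical convolution of its Gaussian and Lorentzian components, with arbitrary quadrature settings, purely to illustrate what the limiting lineshape is.

```python
import math

def voigt(x, sigma, gamma, n=2001, span=8.0):
    """Voigt profile: Gaussian (std sigma) convolved with Lorentzian (HWHM gamma).

    Evaluated by direct numerical quadrature over +/- span*sigma with n points;
    adequate for illustration when gamma is not much smaller than sigma.
    """
    lo = -span * sigma
    dt = 2.0 * span * sigma / (n - 1)
    total = 0.0
    for i in range(n):
        t = lo + i * dt
        g = math.exp(-t * t / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))
        l = gamma / (math.pi * ((x - t) ** 2 + gamma * gamma))
        total += g * l * dt
    return total
```

In practice the Voigt (and HTP) are computed from the complex Faddeeva function rather than by quadrature; this form is only meant to make the convolution structure explicit.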
Data-driven RBE parameterization for helium ion beams
NASA Astrophysics Data System (ADS)
Mairani, A.; Magro, G.; Dokic, I.; Valle, S. M.; Tessonnier, T.; Galm, R.; Ciocca, M.; Parodi, K.; Ferrari, A.; Jäkel, O.; Haberer, T.; Pedroni, P.; Böhlen, T. T.
2016-01-01
Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose, and the tissue-specific parameter (α/β)_ph of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the ratios RBE_α = α_He/α_ph and R_β = β_He/β_ph as functions of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival (RBE_10) are compared with the experimental ones. Pearson's correlation coefficients were, respectively, 0.85 and 0.84, confirming the soundness of the introduced approach. Moreover, due to the lack of experimental data at low LET, clonogenic experiments have been performed irradiating the A549 cell line, with (α/β)_ph = 5.4 Gy, at the entrance of a 56.4 MeV u^-1 helium beam at the Heidelberg Ion Beam Therapy Center. The proposed parameterization reproduces the measured cell survival within the experimental uncertainties. An RBE formula which depends only on dose, LET, and (α/β)_ph as input parameters is proposed, allowing a straightforward implementation in a TP system.
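The LQ bookkeeping behind an RBE_10 comparison can be sketched numerically. This is not the authors' parameterization: it only assumes the standard LQ survival model S = exp(-aD - bD^2) and the ratio definitions RBE_α = α_He/α_ph and R_β = β_He/β_ph given above, with the LET-dependent fits for those ratios left out.

```python
import math

def dose_at_survival(alpha, beta, survival=0.1):
    """Dose D (Gy) giving the target survival in the LQ model S = exp(-aD - bD^2),
    i.e. the positive root of beta*D^2 + alpha*D + ln(S) = 0."""
    ln_s = math.log(survival)
    return (-alpha + math.sqrt(alpha ** 2 - 4.0 * beta * ln_s)) / (2.0 * beta)

def rbe10(alpha_ph, beta_ph, rbe_alpha, r_beta):
    """RBE at 10% survival: ratio of photon to helium dose at S = 0.1,
    with helium LQ parameters given by the ratios RBE_alpha and R_beta."""
    d_ph = dose_at_survival(alpha_ph, beta_ph)
    d_he = dose_at_survival(rbe_alpha * alpha_ph, r_beta * beta_ph)
    return d_ph / d_he
```

Given fitted expressions for RBE_α(LET) and R_β(LET), this is all that is needed to turn photon LQ parameters into an ion RBE at any survival level.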
Dependence of marine stratocumulus reflectivities on liquid water paths
NASA Technical Reports Server (NTRS)
Coakley, James A., Jr.; Snider, Jack B.
1990-01-01
Simple parameterizations that relate cloud liquid water content to cloud reflectivity are often used in general circulation climate models to calculate the effect of clouds in the earth's energy budget. Such parameterizations have been developed by Stephens (1978), by Slingo and Schrecker (1982), and others. Here the researchers seek to verify the parametric relationship through the use of simultaneous observations of cloud liquid water content and cloud reflectivity. The column amount of cloud liquid was measured using a microwave radiometer on San Nicolas Island following techniques described by Hogg et al. (1983). Cloud reflectivity was obtained through spatial coherence analysis of Advanced Very High Resolution Radiometer (AVHRR) imagery data (Coakley and Beckner, 1988). They present the dependence of the observed reflectivity on the observed liquid water path and compare this empirical relationship with that proposed by Stephens (1978). The researchers found that, by taking clouds to be isotropic reflectors, the observed reflectivities and observed column amounts of cloud liquid water are related in a manner that is consistent with simple parameterizations often used in general circulation climate models to determine the effect of clouds on the earth's radiation budget. Attempts to use the results of radiative transfer calculations to correct for the anisotropy of the AVHRR-derived reflectivities resulted in a greater scatter of the points about the relationship expected between liquid water path and reflectivity. The anisotropy of the observed reflectivities proved to be small, much smaller than indicated by theory. To critically assess parameterizations, more simultaneous observations of cloud liquid water and cloud reflectivities and better calibration of the AVHRR sensors are needed.
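The kind of simple liquid-water-path-to-reflectivity parameterization being tested can be sketched as follows. This is not Stephens' (1978) exact scheme: it combines the standard optical-depth relation tau = 3*LWP/(2*rho_w*r_e) with one common conservative two-stream reflectance form, and the effective radius and asymmetry parameter defaults are illustrative values.

```python
def cloud_reflectivity(lwp_gm2, r_e_um=10.0, g=0.85):
    """Albedo of a nonabsorbing water cloud from its liquid water path.

    lwp_gm2: liquid water path (g m-2); r_e_um: droplet effective radius (um);
    g: scattering asymmetry parameter. Uses tau = 3*LWP / (2*rho_w*r_e) and
    the conservative two-stream reflectance R = (1-g)*tau / (2 + (1-g)*tau).
    """
    rho_w = 1.0e6                                   # liquid water density, g m-3
    tau = 3.0 * lwp_gm2 / (2.0 * rho_w * r_e_um * 1e-6)
    return (1.0 - g) * tau / (2.0 + (1.0 - g) * tau)
```

The saturating form of R(tau) is why reflectivity grows quickly at small liquid water paths and flattens for thick clouds, which is the qualitative shape compared against the island and satellite observations.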
Pedotransfer functions in Earth system science: challenges and perspectives
NASA Astrophysics Data System (ADS)
Van Looy, K.; Minasny, B.; Nemes, A.; Verhoef, A.; Weihermueller, L.; Vereecken, H.
2017-12-01
We make a strong case for a new generation of pedotransfer functions (PTFs) currently being developed across the disciplines of Earth system science, offering strong prospects for improving integrated process-based models from local to global scale applications. PTFs are simple to complex knowledge rules that relate available soil information to the soil properties and variables needed to parameterize soil processes. To meet the methodological challenges for successful application in Earth system modeling, we highlight how PTF development needs to go hand in hand with suitable extrapolation and upscaling techniques so that the PTFs correctly capture the spatial heterogeneity of soils. The most actively pursued recent developments are related to parameterizations of solute transport, heat exchange, soil respiration and organic carbon content, root density, and vegetation water uptake. We present an outlook and a stepwise approach to the development of a comprehensive set of PTFs that can be applied throughout a wide range of disciplines of Earth system science, with emphasis on land surface models. Novel sensing techniques and soil information availability provide a true breakthrough for this, yet further improvements are necessary in three domains: 1) determining unknown relationships and dealing with uncertainty in Earth system modeling; 2) spatially deploying this knowledge, with PTF validation at regional to global scales; and 3) integrating and linking the complex model parameterizations (coupled parameterization). We will show that such integration is an achievable goal.
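In its simplest form a PTF is just a regression from widely available soil data to a model parameter. The function below is a purely hypothetical example with placeholder coefficients, not any published PTF; it only illustrates the "knowledge rule" structure described above.

```python
def theta_s_ptf(sand_pct, clay_pct, oc_pct):
    """Hypothetical pedotransfer function: estimate saturated water content
    theta_s (m3 m-3) from sand %, clay %, and organic carbon %.

    Coefficients are illustrative placeholders, not fitted values.
    """
    return 0.30 - 0.0010 * sand_pct + 0.0015 * clay_pct + 0.005 * oc_pct
```

Real PTFs range from such linear rules to machine-learned models, but all share this shape: cheap inputs in, hard-to-measure parameters out, which is what makes their extrapolation and upscaling behavior so critical.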
A Thermal Infrared Radiation Parameterization for Atmospheric Studies
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)
2001-01-01
This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes absorption by the major gases (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed using either the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of the high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
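The core idea of the k-distribution method is that a band-averaged transmittance over thousands of irregular lines can be replaced by a short weighted sum of exponentials in sorted absorption-coefficient space. The sketch below shows only that sum; the weights and k-values here are illustrative, not those of the memorandum's nine bands.

```python
import math

def band_transmittance(weights, k_values, u):
    """Band-averaged gaseous transmittance via the k-distribution method:
    T(u) = sum_i w_i * exp(-k_i * u), with the weights w_i summing to 1.

    u: absorber amount along the path; k_values: representative absorption
    coefficients for each quadrature interval of the k distribution.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * math.exp(-k * u) for w, k in zip(weights, k_values))
```

A handful of (w_i, k_i) pairs per band replaces a line-by-line integration over the full spectrum, which is where the large speedup over LBL codes comes from.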
NASA Astrophysics Data System (ADS)
Charles, T. K.; Paganin, D. M.; Dowd, R. T.
2016-08-01
Intrinsic emittance is often the limiting factor for brightness in fourth-generation light sources and, as such, a good understanding of the factors affecting intrinsic emittance is essential in order to be able to decrease it. Here we present a parameterization model describing the proportional increase in emittance induced by cathode surface roughness. One major benefit of the parameterization approach presented here is that it takes the complexity of a Monte Carlo model and reduces the results to a straightforward empirical model. The resulting models describe the proportional increase in transverse momentum introduced by surface roughness, and are applicable to various metal types, photon wavelengths, applied electric fields, and cathode surface terrains. The analysis includes the increase in emittance due to changes in the electric field induced by roughness as well as the increase in transverse momentum resulting from the spatially varying surface normal. We also compare the results of the Parameterization Model to an Analytical Model which employs various approximations to produce a more compact expression at the cost of a reduction in accuracy.
A satellite observation test bed for cloud parameterization development
NASA Astrophysics Data System (ADS)
Lebsock, M. D.; Suselj, K.
2015-12-01
We present an observational test bed of cloud and precipitation properties derived from CloudSat, CALIPSO, and the A-Train. The focus of the test bed is on marine boundary layer clouds, including stratocumulus and cumulus and the transition between these cloud regimes. Test-bed properties include the cloud cover and three-dimensional cloud fraction along with the cloud water path and precipitation water content, and associated radiative fluxes. We also include the subgrid-scale distribution of cloud, precipitation, and radiative quantities, which must be diagnosed by a model parameterization. The test bed further includes meteorological variables from the Modern Era Retrospective-analysis for Research and Applications (MERRA). MERRA variables provide the initialization and forcing datasets needed to run a parameterization in Single Column Model (SCM) mode. We show comparisons of an Eddy-Diffusivity/Mass-Flux (EDMF) parameterization coupled to microphysics and macrophysics packages run in SCM mode with observed clouds. Comparisons are performed regionally in areas of climatological subsidence as well as stratified by dynamical and thermodynamical variables. Comparisons demonstrate the ability of the EDMF model to capture the observed transitions between subtropical stratocumulus and cumulus cloud regimes.
NASA Astrophysics Data System (ADS)
Mogensen, Ditte; Aaltonen, Hermanni; Aalto, Juho; Bäck, Jaana; Kieloaho, Antti-Jussi; Gierens, Rosa; Smolander, Sampo; Kulmala, Markku; Boy, Michael
2015-04-01
Volatile organic compounds (VOCs) are emitted from the biosphere and can act as precursor gases for aerosol particles that can affect the climate (e.g. Makkonen et al., ACP, 2012). VOC emissions from needles and leaves have gained the most attention; however, other parts of the ecosystem also have the ability to emit a vast amount of VOCs. This often-neglected source can be important, e.g., in periods when leaves are absent. Knowledge of both the sources and the drivers of forest floor VOC emissions is currently limited. The sources are thought to be mainly degradation of organic matter (Isidorov and Jdanova, Chemosphere, 2002), living roots (Asensio et al., Soil Biol. Biochem., 2008), and ground vegetation. The drivers are biotic (e.g. microbes) and abiotic (e.g. temperature and moisture). However, the relative importance of the individual sources and drivers is currently poorly understood, and it is highly dependent on the tree species occupying the area of interest. The emissions of isoprene and monoterpenes were measured from the boreal forest floor at the SMEAR II station in Southern Finland (Hari and Kulmala, Boreal Env. Res., 2005) during the snow-free periods of 2010-2012. We used a dynamic method with 3 automated chambers analyzed by a Proton Transfer Reaction Mass Spectrometer (Aaltonen et al., Plant Soil, 2013). Using these data, we have developed empirical parameterizations for the emission of isoprene and monoterpenes from the forest floor. These parameterizations depend on abiotic factors; however, since they are based on field measurements, biotic features are captured implicitly. Further, we have used the 1D chemistry-transport model SOSAA (Boy et al., ACP, 2011) to test the seasonal relative importance of these forest floor emissions, compared to canopy crown emissions, for the atmospheric reactivity throughout the canopy.
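Empirical VOC emission parameterizations of this kind are often built on an exponential temperature response. The sketch below uses one common form, E = E0 * exp(beta * (T - T0)); the coefficients are illustrative textbook-style values, not the forest-floor parameterizations developed in this work.

```python
import math

def monoterpene_emission(temp_c, e0=1.0, beta=0.09, t0=30.0):
    """Monoterpene emission rate via a common exponential temperature response:
    E = E0 * exp(beta * (T - T0)).

    e0: emission rate at the reference temperature t0 (arbitrary flux units);
    beta: temperature sensitivity (1/degC). Values here are illustrative.
    """
    return e0 * math.exp(beta * (temp_c - t0))
```

Moisture and biotic drivers would enter as additional multiplicative factors in a fuller parameterization; the exponential factor alone already captures the strong seasonality that motivates the comparison against canopy crown emissions.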
Contributions of the ARM Program to Radiative Transfer Modeling for Climate and Weather Applications
NASA Technical Reports Server (NTRS)
Mlawer, Eli J.; Iacono, Michael J.; Pincus, Robert; Barker, Howard W.; Oreopoulos, Lazaros; Mitchell, David L.
2016-01-01
Accurate climate and weather simulations must account for all relevant physical processes and their complex interactions. Each of these atmospheric, ocean, and land processes must be considered on an appropriate spatial and temporal scale, which imposes a substantial computational burden on these simulations. One especially critical physical process is the flow of solar and thermal radiant energy through the atmosphere, which controls planetary heating and cooling and drives the large-scale dynamics that moves energy from the tropics toward the poles. Radiation calculations are therefore essential for climate and weather simulations, but are themselves quite complex even without considering the effects of variable and inhomogeneous clouds. Clear-sky radiative transfer calculations have to account for thousands of absorption lines due to water vapor, carbon dioxide, and other gases, which are irregularly distributed across the spectrum and have shapes dependent on pressure and temperature. The line-by-line (LBL) codes that treat these details have a far greater computational cost than can be afforded by global models. Therefore, the crucial requirement for accurate radiation calculations in climate and weather prediction models must be satisfied by fast solar and thermal radiation parameterizations with a high level of accuracy that has been demonstrated through extensive comparisons with LBL codes. See attachment for continuation.
Modeling radiative transfer with the doubling and adding approach in a climate GCM setting
NASA Astrophysics Data System (ADS)
Lacis, A. A.
2017-12-01
The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle makes accurate treatment of multiple scattering in the climate GCM setting problematic, due primarily to computational cost. The accurate multiple-scattering methods that are available are computationally far too expensive for climate GCM applications, while two-stream-type radiative transfer approximations may be fast enough but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method that is being used in the GISS climate GCM, which is an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid-box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle dependence.
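The adding step at the heart of doubling/adding can be shown in its simplest scalar form. This sketch is not the GISS parameterization: it combines two identical nonabsorbing layers characterized only by reflectance r and transmittance t, summing the geometric series of inter-layer reflections; the full method carries angular dependence and absorption as well.

```python
def double_layer(r, t):
    """Combine two identical layers (reflectance r, transmittance t) by the
    adding equations. The factor 1/(1 - r*r) sums the infinite series of
    back-and-forth reflections between the two layers."""
    denom = 1.0 - r * r
    r2 = r + t * r * t / denom
    t2 = t * t / denom
    return r2, t2

# Starting from a thin layer, n doublings give a layer 2**n times thicker,
# which is why the method reaches large optical depths in few steps.
r, t = 0.01, 0.99
for _ in range(6):
    r, t = double_layer(r, t)
```

For a conservative (nonabsorbing) layer with r + t = 1, the doubled layer also satisfies r2 + t2 = 1, a useful sanity check on the implementation.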
Saenz, Juan A.; Chen, Qingshan; Ringler, Todd
2015-05-19
Recent work has shown that taking the thickness-weighted average (TWA) of the Boussinesq equations in buoyancy coordinates results in exact equations governing the prognostic residual mean flow where eddy-mean flow interactions appear in the horizontal momentum equations as the divergence of the Eliassen-Palm flux tensor (EPFT). It has been proposed that, given the mathematical tractability of the TWA equations, the physical interpretation of the EPFT, and its relation to potential vorticity fluxes, the TWA is an appropriate framework for modeling ocean circulation with parameterized eddies. The authors test the feasibility of this proposition and investigate the connections between the TWA framework and the conventional framework used in models, where Eulerian mean flow prognostic variables are solved for. Using the TWA framework as a starting point, this study explores the well-known connections between vertical transfer of horizontal momentum by eddy form drag and eddy overturning by the bolus velocity, used by Greatbatch and Lamb and Gent and McWilliams to parameterize eddies. After implementing the TWA framework in an ocean general circulation model, we verify our analysis by comparing the flows in an idealized Southern Ocean configuration simulated using the TWA and conventional frameworks with the same mesoscale eddy parameterization.
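The Gent-McWilliams-style bolus velocity referenced above can be sketched in its simplest layer form, where the eddy-induced streamfunction is an eddy diffusivity times the isopycnal slope and the bolus velocity is its vertical derivative. This is a schematic finite-difference illustration, not the paper's implementation.

```python
def gm_bolus_velocity(kappa, slope_top, slope_bot, dz):
    """Eddy-induced (bolus) horizontal velocity for one layer, Gent-McWilliams
    style: v* = d(psi)/dz with streamfunction psi = kappa * isopycnal slope.

    kappa: eddy thickness diffusivity (m2 s-1); slopes are dimensionless;
    dz: layer thickness (m).
    """
    return kappa * (slope_top - slope_bot) / dz
```

In the TWA framework the same eddy effect instead appears as the divergence of the EPFT in the momentum equations, which is the correspondence the study examines.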
NASA Technical Reports Server (NTRS)
Schwemmer, Geary K.; Miller, David O.
2005-01-01
Clouds have a powerful influence on atmospheric radiative transfer and hence are crucial to understanding and interpreting the exchange of radiation between the Earth's surface, the atmosphere, and space. Because clouds are highly variable in space, time, and physical makeup, it is important to be able to observe them in three dimensions (3-D) with sufficient resolution that the data can be used to generate and validate parameterizations of cloud fields at the resolution scale of global climate models (GCMs). Simulations of photon transport in three-dimensionally inhomogeneous cloud fields show that spatial inhomogeneities tend to decrease cloud reflection and absorption and increase direct and diffuse transmission. Therefore it is an important task to characterize cloud spatial structures in three dimensions on the scale of GCM grid elements. In order to validate cloud parameterizations that represent the ensemble, or mean and variance, of cloud properties within a GCM grid element, measurements of the parameters must be obtained on a much finer scale so that the statistics on those measurements are truly representative. High spatial sampling resolution is required, on the order of 1 km or less. Since the radiation fields respond almost instantaneously to changes in the cloud field, and cloud changes occur on scales of seconds and less when viewed on scales of approximately 100 m, the temporal resolution of cloud properties should be measured and characterized on second time scales. GCM time steps are typically on the order of an hour, but in order to obtain sufficient statistical representations of cloud properties in the parameterizations that are used as model inputs, averaged values of cloud properties should be calculated on time scales on the order of 10-100 s.
The Holographic Airborne Rotating Lidar Instrument Experiment (HARLIE) provides exceptional temporal (100 ms) and spatial (30 m) resolution measurements of aerosol and cloud backscatter in three dimensions. HARLIE was used in a ground-based configuration in several recent field campaigns. Principal data products include aerosol backscatter profiles, boundary layer heights, entrainment zone thickness, cloud fraction as a function of altitude and horizontal wind vector profiles based on correlating the motions of clouds and aerosol structures across portions of the scan. Comparisons will be made between various cloud detecting instruments to develop a baseline performance metric.
Influence of current velocity and wind speed on air-water gas exchange in a mangrove estuary
NASA Astrophysics Data System (ADS)
Ho, David T.; Coffineau, Nathalie; Hickman, Benjamin; Chow, Nicholas; Koffman, Tobias; Schlosser, Peter
2016-04-01
Knowledge of air-water gas transfer velocities and water residence times is necessary to study the fate of mangrove-derived carbon exported into surrounding estuaries and ultimately to determine carbon balances in mangrove ecosystems. For the first time, the 3He/SF6 dual tracer technique, which has proven to be a powerful tool for determining gas transfer velocities in the ocean, is applied to Shark River, an estuary situated in the largest contiguous mangrove forest in North America. The mean gas transfer velocity was 3.3 ± 0.2 cm h-1 during the experiment, with a water residence time of 16.5 ± 2.0 days. We propose a gas exchange parameterization that takes into account the major sources of turbulence in the estuary (i.e., bottom-generated shear and wind stress).
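A parameterization of this form, a bottom-shear term plus a wind-stress term, can be sketched as below. The functional shape follows the abstract's description, but the coefficient values and the sqrt(current/depth) scaling chosen here are illustrative placeholders, not the fitted values from the study.

```python
import math

def gas_transfer_velocity(v_current, depth, u10, a=0.77, b=0.266):
    """Toy estuarine gas transfer velocity k (cm/h): a bottom-shear term
    scaling with current speed and water depth plus a wind-stress term
    scaling with wind speed squared. Coefficients a and b are placeholders."""
    return a * math.sqrt(v_current / depth) + b * u10 ** 2

# e.g. a 0.5 m/s tidal current in 2 m of water with a 3 m/s wind
k = gas_transfer_velocity(v_current=0.5, depth=2.0, u10=3.0)
```

With the wind term switched off (u10 = 0) only the current-driven contribution remains, which is the regime the residence-time estimate above constrains.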
MATCH: An Atom- Typing Toolset for Molecular Mechanics Force Fields
Yesselman, Joseph D.; Price, Daniel J.; Knight, Jennifer L.; Brooks, Charles L.
2011-01-01
We introduce a toolset of program libraries collectively titled MATCH (Multipurpose Atom-Typer for CHARMM) for the automated assignment of atom types and force field parameters for molecular mechanics simulation of organic molecules. The toolset includes utilities for the conversion from multiple chemical structure file formats into a molecular graph. A general chemical pattern-matching engine using this graph has been implemented whereby assignment of molecular mechanics atom types, charges and force field parameters is achieved by comparison against a customizable list of chemical fragments. While initially designed to complement the CHARMM simulation package and force fields by generating the necessary input topology and atom-type data files, MATCH can be expanded to any force field and program, and has core functionality that makes it extendable to other applications such as fragment-based property prediction. In the present work, we demonstrate the accurate construction of atomic parameters of molecules within each force field included in CHARMM36 through exhaustive cross-validation studies illustrating that bond increment rules derived from one force field can be transferred to another. In addition, using leave-one-out substitution it is shown that it is also possible to substitute missing intra- and intermolecular parameters with ones included in a force field to complete the parameterization of novel molecules. Finally, to demonstrate the robustness of MATCH and the coverage of chemical space offered by the recent CHARMM CGenFF force field (Vanommeslaeghe et al., JCC, 2010, 31, 671–690), one million molecules from the PubChem database of small molecules are typed, parameterized and minimized. PMID:22042689
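The fragment-matching idea, assigning an atom type by comparing each atom's local graph environment against an ordered rule list, can be caricatured in a few lines. This is a generic illustration of graph-based atom typing, not MATCH code; the rules and type names ("OT", "HT") are hypothetical.

```python
def assign_atom_types(atoms, bonds, rules):
    """atoms: {index: element}; bonds: set of frozenset atom-index pairs;
    rules: list of (element, sorted tuple of neighbor elements or None, type),
    ordered from most to least specific. Returns {index: type}."""
    neighbors = {i: [] for i in atoms}
    for bond in bonds:
        i, j = tuple(bond)
        neighbors[i].append(atoms[j])
        neighbors[j].append(atoms[i])
    types = {}
    for i, elem in atoms.items():
        env = tuple(sorted(neighbors[i]))
        for rule_elem, rule_env, atype in rules:
            # None acts as a wildcard environment (least specific rule)
            if rule_elem == elem and (rule_env is None or rule_env == env):
                types[i] = atype
                break
    return types

# Water: an O bonded to two H atoms
atoms = {0: "O", 1: "H", 2: "H"}
bonds = {frozenset({0, 1}), frozenset({0, 2})}
rules = [("O", ("H", "H"), "OT"), ("H", None, "HT")]
print(assign_atom_types(atoms, bonds, rules))  # {0: 'OT', 1: 'HT', 2: 'HT'}
```

Ordering rules from specific to general is what lets a customizable fragment list grow without breaking earlier assignments.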
NASA Technical Reports Server (NTRS)
Steffen, K.; Schweiger, A.; Maslanik, J.; Key, J.; Weaver, R.; Barry, R.
1990-01-01
The application of multi-spectral satellite data to estimate polar surface energy fluxes is addressed. To what accuracy and over which geographic areas large-scale energy budgets can be estimated are investigated based upon a combination of available remote sensing and climatological data sets. The general approach was to: (1) formulate parameterization schemes for the appropriate sea ice energy budget terms based upon the remotely sensed and/or in-situ data sets; (2) conduct sensitivity analyses using as input both natural variability (observed data in regional case studies) and theoretical variability based upon energy flux model concepts; (3) assess the applicability of these parameterization schemes to both regional and basin-wide energy balance estimates using remote sensing data sets; and (4) assemble multi-spectral, multi-sensor data sets for at least two regions of the Arctic Basin and possibly one region of the Antarctic. The type of data needed for a basin-wide assessment is described, and the temporal coverage of these data sets is determined by data availability and by need as defined by the parameterization schemes. The titles of the subjects are as follows: (1) Heat flux calculations from SSM/I and LANDSAT data in the Bering Sea; (2) Energy flux estimation using passive microwave data; (3) Fetch and stability sensitivity estimates of turbulent heat flux; and (4) Surface temperature algorithm.
MULTI-SCALE MODELING AND APPROXIMATION ASSISTED OPTIMIZATION OF BARE TUBE HEAT EXCHANGERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacellar, Daniel; Ling, Jiazhen; Aute, Vikrant
2014-01-01
Air-to-refrigerant heat exchangers are very common in air-conditioning, heat pump and refrigeration applications. In these heat exchangers, there is a great benefit in terms of size, weight, refrigerant charge and heat transfer coefficient in moving from conventional channel sizes (~9 mm) to smaller channel sizes (<5 mm). This work investigates new designs for air-to-refrigerant heat exchangers with tube outer diameters ranging from 0.5 to 2.0 mm. The goal of this research is to develop and optimize the design of these heat exchangers and compare their performance with existing state-of-the-art designs. The air-side performance of various tube bundle configurations is analyzed using a Parallel Parameterized CFD (PPCFD) technique. PPCFD allows for fast parametric CFD analyses of various geometries with topology change. Approximation techniques drastically reduce the number of CFD evaluations required during optimization. The Maximum Entropy Design method is used for sampling and the Kriging method is used for metamodeling. Metamodels are developed for the air-side heat transfer coefficients and pressure drop as a function of tube-bundle dimensions and air velocity. The metamodels are then integrated with an air-to-refrigerant heat exchanger design code. This integration allows a multi-scale analysis of the air-side performance of heat exchangers, including air-to-refrigerant heat transfer and phase change. Overall optimization is carried out using a multi-objective genetic algorithm. The optimal designs found can exhibit 50 percent size reduction, 75 percent decrease in air-side pressure drop and doubled air heat transfer coefficients compared to a high-performance compact microchannel heat exchanger with the same capacity and flow rates.
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Tian, Yudong; Harrison, Kenneth; Kumar, Sujay
2011-01-01
Land surface modeling and data assimilation can provide dynamic land surface state variables necessary to support physical precipitation retrieval algorithms over land. It is well known that surface emission, particularly over the range of frequencies to be included in the Global Precipitation Measurement Mission (GPM), is sensitive to land surface states, including soil properties, vegetation type and greenness, soil moisture, surface temperature, and snow cover, density, and grain size. In order to investigate the robustness of both the land surface model states and the microwave emissivity and forward radiative transfer models, we have undertaken a multi-site investigation as part of the NASA Precipitation Measurement Missions (PMM) Land Surface Characterization Working Group. Specifically, we will demonstrate the performance of the Land Information System (LIS; http://lis.gsfc.nasa.gov; Peters-Lidard et al., 2007; Kumar et al., 2006) coupled to the Joint Center for Satellite Data Assimilation's (JCSDA's) Community Radiative Transfer Model (CRTM; Weng, 2007; van Delst, 2009). The land surface is characterized by complex physical/chemical constituents and creates temporally and spatially heterogeneous surface properties in response to microwave radiation scattering. The uncertainties in surface microwave emission (both surface radiative temperature and emissivity) and the very low polarization ratio are linked to difficulties in rainfall detection using low-frequency passive microwave sensors (e.g., Kummerow et al. 2001). Therefore, addressing these issues is of utmost importance for the GPM mission. There are many approaches to parameterizing land surface emission and radiative transfer, some of which have been customized for snow (e.g., the Helsinki University of Technology or HUT radiative transfer model) and soil moisture (e.g., the Land Surface Microwave Emission Model or LSMEM).
NASA Astrophysics Data System (ADS)
Holway, Kevin; Thaxton, Christopher S.; Calantoni, Joseph
2012-11-01
Morphodynamic models of coastal evolution require relatively simple parameterizations of sediment transport for application over larger scales. Calantoni and Thaxton (2008) [6] presented a transport parameterization for bimodal distributions of coarse quartz grains derived from detailed boundary layer simulations for sheet flow and near sheet flow conditions. The simulation results, valid over a range of wave forcing conditions and large- to small-grain diameter ratios, were successfully parameterized with a simple power law that allows for the prediction of the transport rates of each size fraction. Here, we have applied the simple power law to a two-dimensional cellular automaton to simulate sheet flow transport. Model results are validated with experiments performed in the small oscillating flow tunnel (S-OFT) at the Naval Research Laboratory at Stennis Space Center, MS, in which sheet flow transport was generated with a bed composed of a bimodal distribution of non-cohesive grains. The work presented suggests that, under the conditions specified, algorithms that incorporate the power law may correctly reproduce laboratory bed surface measurements of bimodal sheet flow transport while inherently incorporating vertical mixing by size.
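A cellular-automaton transport step of the kind described, where each cell passes sediment downstream at a rate given by a power law of the local forcing, can be sketched in a few lines. The power-law coefficients below are placeholders, not the fitted values of Calantoni and Thaxton (2008), and a single size fraction on a periodic 1-D domain stands in for the bimodal 2-D model.

```python
import numpy as np

def ca_step(bed, forcing, a=1e-3, b=1.5):
    """One explicit step of a toy 1-D transport cellular automaton: each
    cell exports sediment downstream at rate a*|forcing|^b, capped by the
    sediment available, on a periodic domain (hence mass-conserving)."""
    q = a * np.abs(forcing) ** b       # power-law transport rate per cell
    q = np.minimum(q, bed)             # cannot export more than is present
    return bed - q + np.roll(q, 1)     # export downstream, import from upstream

bed = np.ones(8)
bed_next = ca_step(bed, forcing=np.linspace(0.0, 1.0, 8))
```

In the full model one such rule per size fraction, with fraction-dependent coefficients, is what lets the automaton reproduce differential transport and vertical mixing by size.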
Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics
Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul
2015-03-11
Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's "Lagrangian ingredients" (the Riemannian metric, the potential-energy function, the dissipation function, and the external force) and subsequently derives reduced-order equations of motion by applying the (forced) Euler-Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.
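The structure preservation hinges on the forced Euler-Lagrange equation with Rayleigh dissipation; in standard notation (a sketch, with q the generalized coordinates, M(μ) the parameterized metric/mass matrix, V the potential energy, F the Rayleigh dissipation function, and f_ext the external force):

```latex
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}}\right)
  - \frac{\partial L}{\partial q}
  + \frac{\partial F}{\partial \dot{q}} = f_{\mathrm{ext}}(t),
\qquad
L(q,\dot{q};\mu) = \tfrac{1}{2}\,\dot{q}^{\mathsf{T}} M(\mu)\,\dot{q} - V(q;\mu).
```

Approximating M, V, F, and f_ext separately and only then applying this equation is what keeps the reduced model Lagrangian, rather than approximating the equations of motion directly.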
NASA Astrophysics Data System (ADS)
Beall, Charlotte M.; Stokes, M. Dale; Hill, Thomas C.; DeMott, Paul J.; DeWald, Jesse T.; Prather, Kimberly A.
2017-07-01
Ice nucleating particles (INPs) influence cloud properties and can affect the overall precipitation efficiency. Developing a parameterization of INPs in global climate models has proven challenging. More INP measurements - including studies of their spatial distribution, sources and sinks, and fundamental freezing mechanisms - must be conducted in order to further improve INP parameterizations. In this paper, an immersion mode INP measurement technique is modified and automated using a software-controlled, real-time image stream designed to leverage optical changes of water droplets to detect freezing events. For the first time, heat transfer properties of the INP measurement technique are characterized using a finite-element-analysis-based heat transfer simulation to improve accuracy of INP freezing temperature measurement. The heat transfer simulation is proposed as a tool that could be used to explain the sources of bias in temperature measurements in INP measurement techniques and ultimately explain the observed discrepancies in measured INP freezing temperatures between different instruments. The simulation results show that a difference of +8.4 °C between the well base temperature and the headspace gas results in an up to 0.6 °C stratification of the aliquot, whereas a difference of +4.2 °C or less results in a thermally homogenous water volume within the error of the thermal probe, ±0.2 °C. The results also show that there is a strong temperature gradient in the immediate vicinity of the aliquot, such that without careful placement of temperature probes, or characterization of heat transfer properties of the water and cooling environment, INP measurements can be biased toward colder temperatures. Based on a modified immersion mode technique, the Automated Ice Spectrometer (AIS), measurements of the standard test dust illite NX are reported and compared against six other immersion mode droplet assay techniques featured in Hiranuma et al. 
(2015) that used wet suspensions. AIS measurements of illite NX INP freezing temperatures compare reasonably with others, falling within the 5 °C spread in reported spectra. The AIS as well as its characterization of heat transfer properties allows higher confidence in accuracy of freezing temperature measurement, allows higher throughput of sample analysis, and enables disentanglement of the effects of heat transfer rates on sample volumes from time dependence of ice nucleation.
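Droplet-assay freezing spectra like these are commonly reduced to a cumulative INP concentration via the standard Vali (1971) relation; a minimal sketch (the well counts and aliquot volume below are made up for illustration):

```python
import math

def inp_per_volume(n_frozen, n_total, drop_volume_L):
    """Cumulative INP concentration (per liter of suspension) at a given
    temperature from the frozen fraction f of droplets in an immersion-
    freezing assay, using the Vali relation K = -ln(1 - f) / V."""
    f = n_frozen / n_total
    return -math.log(1.0 - f) / drop_volume_L

# e.g. 32 of 96 wells frozen, 50 uL (5e-5 L) aliquots
K = inp_per_volume(32, 96, 50e-6)
```

The logarithm accounts for wells that contain more than one INP, which a naive frozen-fraction count would undercount.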
NASA Astrophysics Data System (ADS)
Zakšek, Klemen; Schroedter-Homscheidt, Marion
Some applications, e.g. from traffic or energy management, require air temperature data in high spatial and temporal resolution at two metres height above the ground (T2m), sometimes in near-real-time. Thus, a parameterization based on boundary layer physical principles was developed that determines the air temperature from remote sensing data (SEVIRI data aboard MSG and MODIS data aboard the Terra and Aqua satellites). The method consists of two parts. First, a downscaling procedure from the SEVIRI pixel resolution of several kilometres to a one-kilometre spatial resolution is performed using a regression analysis between the land surface temperature (LST) and the normalized differential vegetation index (NDVI) acquired by the MODIS instrument. Second, the lapse rate between the LST and T2m is removed using an empirical parameterization that requires albedo, down-welling surface short-wave flux, relief characteristics and NDVI data. The method was successfully tested for Slovenia, the French region Franche-Comté and southern Germany for the period from May to December 2005, indicating that the parameterization is valid for Central Europe. This parameterization results in a root mean square deviation (RMSD) of 2.0 K during the daytime with a bias of -0.01 K and a correlation coefficient of 0.95. This is promising, especially considering the high temporal (30 min) and spatial (1000 m) resolution of the results.
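The first (downscaling) step admits a compact sketch: regress coarse-scale LST on coarse-scale NDVI and evaluate the fit at the finer NDVI grid. This illustrates the regression idea only; the second step, the empirical lapse-rate removal using albedo, flux, and relief, is not reproduced here, and the synthetic numbers are made up.

```python
import numpy as np

def downscale_lst(lst_coarse, ndvi_coarse, ndvi_fine):
    """Fit a linear LST-NDVI regression at the coarse (SEVIRI-like) scale
    and apply it at the finer (MODIS-like) NDVI resolution."""
    slope, intercept = np.polyfit(ndvi_coarse, lst_coarse, 1)
    return intercept + slope * ndvi_fine

# Synthetic check: a perfectly linear LST-NDVI relation is recovered exactly
ndvi_c = np.array([0.2, 0.4, 0.6, 0.8])
lst_c = 320.0 - 25.0 * ndvi_c          # warmer where vegetation is sparse
lst_f = downscale_lst(lst_c, ndvi_c, np.array([0.3, 0.7]))
```

In practice the regression is refit per scene, since the LST-NDVI slope varies with surface moisture and season.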
NASA Astrophysics Data System (ADS)
Heeb, Peter; Tschanun, Wolfgang; Buser, Rudolf
2012-03-01
A comprehensive and completely parameterized model is proposed to determine the related electrical and mechanical dynamic system response of a voltage-driven capacitive coupled micromechanical switch. As an advantage over existing parameterized models, the model presented in this paper returns within a few seconds all relevant system quantities necessary to design the desired switching cycle. Moreover, a sophisticated and detailed guideline is given on how to engineer a MEMS switch. An analytical approach is used throughout the modelling, providing representative coefficients in a set of two coupled time-dependent differential equations. This paper uses an equivalent mass moving along the axis of acceleration and a momentum absorption coefficient. The model describes all the energies transferred: the energy dissipated in the series resistor that models the signal attenuation of the bias line, the energy dissipated in the squeezed film, the stored energy in the series capacitor that represents a fixed separation in the bias line and stops the dc power in the event of a short circuit between the RF and dc path, the energy stored in the spring mechanism, and the energy absorbed by mechanical interaction at the switch contacts. Further, the model determines the electrical power fed back to the bias line. The calculated switching dynamics are confirmed by the electrical characterization of the developed RF switch. The fabricated RF switch performs well, in good agreement with the modelled data, showing a transition time of 7 µs followed by a sequence of bounces. Moreover, the scattering parameters exhibit an isolation in the off-state of >8 dB and an insertion loss in the on-state of <0.6 dB up to frequencies of 50 GHz.
The presented model is intended to be integrated into standard circuit simulation software, allowing circuit engineers to design the switch bias line, to minimize induced currents and cross actuation, as well as to find the mechanical structure dimensions necessary for the desired switching time and actuation voltage waveform. Moreover, process related design rules can be automatically verified.
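In the simplest parallel-plate picture, the coupled electromechanical dynamics reduce to a mass-spring-damper driven by a gap-dependent electrostatic force. The toy integration below illustrates that structure only; all parameter values are made up and the bias-line resistor, series capacitor, and squeeze-film terms of the full model are omitted.

```python
# Toy pull-in dynamics for a parallel-plate electrostatic switch:
#   m x'' + c x' + k x = eps0 * A * V^2 / (2 (g - x)^2),
# integrated with semi-implicit Euler. Closure is declared when the plate
# has crossed 95% of the gap. Parameter values are illustrative only.
EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def simulate_switch(V, m=1e-9, c=2e-6, k=10.0, g=2e-6, A=1e-8,
                    dt=1e-8, steps=20000, contact_frac=0.95):
    x, v = 0.0, 0.0
    for n in range(steps):
        gap = g - x
        f_el = EPS0 * A * V**2 / (2.0 * gap**2)  # attractive electrostatic force
        a = (f_el - c * v - k * x) / m
        v += a * dt
        x += v * dt
        if x >= contact_frac * g:
            return n * dt            # switching (closure) time in seconds
    return None                      # actuation voltage too low: no closure

t_close = simulate_switch(V=40.0)    # well above pull-in for these values
```

Below the pull-in voltage the restoring spring balances the electrostatic force and the switch never closes, which the same routine reproduces by returning None.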
NASA Astrophysics Data System (ADS)
Asay-Davis, X.; Galton-Fenzi, B.; Gwyther, D.; Jourdain, N.; Martin, D. F.; Nakayama, Y.; Seroussi, H. L.
2016-12-01
MISMIP+ (the third Marine Ice Sheet MIP), ISOMIP+ (the second Ice Shelf-Ocean MIP) and MISOMIP1 (the first Marine Ice Sheet-Ocean MIP) prescribe a set of idealized experiments for marine ice-sheet models, ocean models with ice-shelf cavities, and coupled ice sheet-ocean models, respectively. Here, we present results from ISOMIP+ and MISOMIP1 experiments using several ocean-only and coupled ice sheet-ocean models. Among the ocean models, we show that differences in model behavior are significant enough that similar results can only be achieved by tuning model parameters (the heat- and salt-transfer coefficients across the sub-ice-shelf boundary layer) for each model. This tuning is constrained by a desired mean melt rate in quasi-steady state under specified forcing conditions, akin to tuning the models to match observed melt rates. We compare the evolution of ocean temperature transects, melt rate, friction velocity and thermal driving between ocean models for the five ISOMIP+ experiments (Ocean0-4), which have prescribed ice-shelf topography. We find that melt patterns differ between models based on the relative importance of overturning strength and vertical mixing of temperature even when the models have been tuned to achieve similar melt rates near the grounding line. For the two MISOMIP1 experiments (IceOcean1 without dynamic calving and IceOcean2 with a simple calving parameterization), we compare temperature transects, melt rate, ice-shelf topography and grounded area across models and for several model configurations. Consistent with preliminary results from MISMIP+, we find that for a given coupled model, the use of a Coulomb-limited basal friction parameterization below grounded ice and the application of dynamic calving both significantly increase the rate of grounding-line retreat, whereas the rate of retreat appears to be less sensitive to the ice stress approximation (shallow-shelf approximation, higher-order, etc.). 
We show that models with similar mean melt rates, stress approximations and basal friction parameterizations produce markedly different rates of grounding-line retreat, and we investigate possible sources of these disparities (e.g. differences in coupling strategy or melt distribution).
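The role of the tuned heat-transfer coefficient can be illustrated with a simplified (two-equation style) basal-melt sketch: melt scales with the thermal driving, the friction velocity, and a tunable exchange coefficient gamma_T. The linearized freezing-point constants are typical textbook values; gamma_T and u_star below are placeholders, not the tuned values of any participating model.

```python
RHO_W, C_W = 1028.0, 3974.0   # seawater density (kg/m^3), specific heat (J/kg/K)
RHO_I, L_I = 918.0, 3.34e5    # ice density (kg/m^3), latent heat of fusion (J/kg)

def freezing_point(S, z):
    """Linearized in-situ freezing point (deg C) at salinity S (psu) and
    depth z (m, negative below sea level); constants are typical values."""
    return -0.0573 * S + 0.0832 + 7.61e-4 * z

def melt_rate(T_w, S, z, gamma_T, u_star):
    """Ice melt rate (m of ice per second): heat delivered across the
    boundary layer at exchange velocity gamma_T * u_star, divided by the
    latent heat needed to melt ice."""
    thermal_driving = T_w - freezing_point(S, z)
    return RHO_W * C_W * gamma_T * u_star * thermal_driving / (RHO_I * L_I)

# Warm-cavity example near a deep grounding line (illustrative numbers)
m = melt_rate(T_w=1.0, S=34.5, z=-500.0, gamma_T=1e-2, u_star=0.01)
```

Because the melt rate is linear in gamma_T, tuning it to a target mean melt rate, as done per model in ISOMIP+, is straightforward once u_star and the thermal driving are diagnosed.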
NASA Technical Reports Server (NTRS)
Massman, William J.
1987-01-01
The semianalytical model outlined in a previous study (Massman, 1987) to describe momentum exchange between the atmosphere and vegetated surfaces is extended to include the exchange of heat. The methods employed are based on one-dimensional turbulent diffusivities, and use analytical solutions to the steady-state diffusion equation. The model is used to assess the influence that the canopy foliage structure and density, the wind profile structure within the canopy, and the shelter factor can have upon the inverse surface Stanton number (kB^-1), as well as to explore the consequences of introducing a scalar displacement height which can be different from the momentum displacement height. In general, the triangular foliage area density function gives results which agree more closely with observations than that for constant foliage area density. The intended application of this work is for parameterizing the bulk aerodynamic resistances for heat and momentum exchange for use within large-scale models of plant-atmosphere exchanges.
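A hedged sketch of the bulk-aerodynamic bookkeeping the abstract refers to: sensible heat crosses the aerodynamic resistance plus an excess resistance set by kB^-1 = ln(z0m/z0h). Neutral stability is assumed and all numerical values are illustrative, not from the model described.

```python
import math

RHO, CP, KAPPA = 1.2, 1005.0, 0.4   # air density, specific heat, von Karman

def aerodynamic_resistance(u_ref, z_ref, z0m, d=0.0):
    """Neutral-stability aerodynamic resistance for momentum (s/m)."""
    return math.log((z_ref - d) / z0m) ** 2 / (KAPPA ** 2 * u_ref)

def excess_resistance(u_star, kB_inv):
    """Excess resistance for heat (s/m) from kB^-1 = ln(z0m / z0h)."""
    return kB_inv / (KAPPA * u_star)

def sensible_heat_flux(T_s, T_a, r_a, r_b):
    """Bulk sensible heat flux (W/m^2) across the two resistances in series."""
    return RHO * CP * (T_s - T_a) / (r_a + r_b)

r_a = aerodynamic_resistance(u_ref=3.0, z_ref=10.0, z0m=0.1)
r_b = excess_resistance(u_star=0.3, kB_inv=2.0)
H = sensible_heat_flux(T_s=305.0, T_a=300.0, r_a=r_a, r_b=r_b)
```

The canopy-structure effects studied in the paper enter precisely through kB^-1, which is why its sensitivity to foliage density and the shelter factor matters for large-scale models.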
Correcting STIS CCD Point-Source Spectra for CTE Loss
NASA Technical Reports Server (NTRS)
Goudfrooij, Paul; Bohlin, Ralph C.; Maiz-Apellaniz, Jesus
2006-01-01
We review the on-orbit spectroscopic observations that are being used to characterize the Charge Transfer Efficiency (CTE) of the STIS CCD in spectroscopic mode. We parameterize the CTE-related loss for spectrophotometry of point sources in terms of dependencies on the brightness of the source, the background level, the signal in the PSF outside the standard extraction box, and the time of observation. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (images of a tungsten lamp taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual spectra of spectrophotometric standard stars in the first-order CCD modes. For point-source spectra at the standard reference position at the CCD center, CTE losses as large as 30% are corrected to within approximately 1% RMS after application of the algorithm presented here, rendering the Poisson noise associated with the source detection itself the dominant contributor to the total flux calibration uncertainty.
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Marchand, R.; Ackerman, T. P.
2016-12-01
Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity.
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. SAND2016-7485 A
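The maximum-random overlap assumption being tested above has a compact algorithmic form, following the common rank-based construction: vertically contiguous cloud overlaps maximally, while cloud layers separated by clear air overlap randomly. This sketch is a generic illustration of that baseline assumption, not the authors' improved generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def maximum_random_subcolumns(cloud_frac, n_sub):
    """Boolean array (n_sub x n_levels) of cloudy flags consistent with a
    profile of layer cloud fractions under maximum-random overlap. A
    subcolumn is cloudy at level k when its rank x falls below
    cloud_frac[k]; ranks are reused down contiguous cloud (max overlap)
    and redrawn across clear layers (random overlap)."""
    n_lev = len(cloud_frac)
    cols = np.zeros((n_sub, n_lev), dtype=bool)
    x = rng.random(n_sub)
    for k in range(n_lev):
        if k > 0:
            prev = cloud_frac[k - 1]
            if prev == 0.0:
                x = rng.random(n_sub)  # clear layer above: fully random overlap
            else:
                # keep ranks where the layer above was cloudy (max overlap),
                # redraw clear subcolumns uniformly above prev so the layer
                # mean still matches cloud_frac[k]
                redraw = ~cols[:, k - 1]
                x = np.where(redraw, prev + rng.random(n_sub) * (1.0 - prev), x)
        cols[:, k] = x < cloud_frac[k]
    return cols

cols = maximum_random_subcolumns(np.array([0.0, 0.3, 0.5, 0.0, 0.2]), 20000)
```

Generalized overlap replaces the hard keep/redraw split with a decorrelation length, which is the change the abstract reports as reducing simulator errors.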
NASA Astrophysics Data System (ADS)
Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.
2009-10-01
A method is presented to parameterize the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources into large-scale models. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on the atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, operating via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact on atmospheric ozone of aircraft NOx emissions are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the north Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization to transport emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during the plume dissipation.
Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.
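The bookkeeping can be caricatured in a single-box sketch: emissions enter a plume reservoir with a characteristic lifetime, and a fixed effective fraction of the NOx is treated as converted by in-plume chemistry before release to the grid-scale chemistry. The lifetime and conversion fraction below are placeholders, not the study's fitted parameters.

```python
def step_plume(plume, grid, dt, emission, tau=3600.0, beta=0.1):
    """One explicit step: emissions (mass/s) feed the plume tracer, which
    decays to the grid scale with timescale tau (s); a fraction beta of
    the released NOx is removed as already converted in-plume."""
    release = plume * dt / tau
    plume = plume + emission * dt - release
    grid = grid + (1.0 - beta) * release
    return plume, grid

plume, grid = 0.0, 0.0
for _ in range(1000):                       # 1000 steps of 60 s under steady emission
    plume, grid = step_plume(plume, grid, dt=60.0, emission=1.0)
```

The plume reservoir equilibrates at emission x tau, and every unit emitted ends up either still in plume form or delivered to the grid (minus the converted fraction), which is the mass-conservation property the abstract emphasizes.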
Application and evaluation of high-resolution WRF-CMAQ with simple urban parameterization.
The 2-way coupled WRF-CMAQ meteorology and air quality modeling system is evaluated for high-resolution applications by comparing to a regional air quality field study (Discover-AQ). The model was modified to better account for the effects of urban environments. High-resolution...
Application of a planetary wave breaking parameterization to stratospheric circulation statistics
NASA Technical Reports Server (NTRS)
Randel, William J.; Garcia, Rolando R.
1994-01-01
The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data a planetary wave breaking criterion (based on the ratio of the eddy to zonal mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.
Rapid Parameterization Schemes for Aircraft Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu
2012-01-01
A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
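Of the eight schemes, linear morphing is the simplest to sketch: the design variables are blend weights between a baseline geometry and one or more target geometries. The function below is a generic illustration of that idea, not PROTEUS code.

```python
import numpy as np

def linear_morph(baseline, targets, weights):
    """Blend a baseline surface grid toward target grids of the same shape:
    shape = baseline + sum_i w_i * (target_i - baseline). Each weight w_i
    is a design variable; w_i = 0 keeps the baseline, w_i = 1 reaches
    target i exactly."""
    shape = baseline.astype(float).copy()
    for w, target in zip(weights, targets):
        shape += w * (np.asarray(target, dtype=float) - baseline)
    return shape

base = np.zeros((2, 3))      # a tiny stand-in for a PLOT3D surface block
tgt = np.ones((2, 3))
half = linear_morph(base, [tgt], [0.5])   # halfway between base and target
```

Because the scheme is linear in the weights, it composes cleanly with the other parametric schemes and keeps gradients trivial for the optimizer.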
NASA Astrophysics Data System (ADS)
Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.
2014-12-01
A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is important to accurately simulate the ice nucleation processes in cirrus clouds. The ice nucleation active surface-site density (ns) of hematite particles, used as a proxy for atmospheric dust particles, was derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water-subsaturated conditions. These conditions were achieved by continuously changing the temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T lower than -60 °C revealed that higher RHice was necessary to maintain a constant ns, whereas T may have played a significant role in ice nucleation at T higher than -50 °C. We implemented the new hematite-derived ns parameterization, which agrees well with previous AIDA measurements of desert dust, into two conceptual cloud models to investigate their sensitivity to the new parameterization in comparison to existing ice nucleation schemes for simulating cirrus cloud properties. Our results show that the new AIDA-based parameterization leads to an order of magnitude higher ice crystal concentrations and to an inhibition of homogeneous nucleation in lower-temperature regions.
Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties, such as cloud longevity and initiation, compared to previous parameterizations.
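The ns quantity used above can be sketched with the standard singular-description relation n_s = −ln(1 − f_ice) / A, where f_ice is the frozen fraction of particles and A is the surface area per particle. This is a generic illustration of how ns is commonly derived from chamber data, not the paper's fitted T- and RHice-dependent parameterization.

```python
import math

def n_s(frozen_fraction, surface_area_per_particle_m2):
    """Ice nucleation active surface-site density (m^-2) from a measured
    frozen fraction, via n_s = -ln(1 - f_ice) / A (singular description)."""
    return -math.log(1.0 - frozen_fraction) / surface_area_per_particle_m2

# Example: 1% of particles frozen, 1e-12 m^2 of surface area per particle.
density = n_s(0.01, 1e-12)   # roughly 1e10 sites per m^2
```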
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, William I.; Ma, Po-Lun; Xiao, Heng
2013-08-29
The ability to use multi-resolution dynamical cores for weather and climate modeling is pushing the atmospheric community towards developing scale-aware or, more specifically, resolution-aware parameterizations that will function properly across a range of grid spacings. Determining the resolution dependence of specific model parameterizations is difficult due to strong resolution dependencies in many pieces of the model. This study presents the Separate Physics and Dynamics Experiment (SPADE) framework that can be used to isolate the resolution-dependent behavior of specific parameterizations without conflating resolution dependencies from other portions of the model. To demonstrate the SPADE framework, the resolution dependence of the Morrison microphysics from the Weather Research and Forecasting model and the Morrison-Gettelman microphysics from the Community Atmosphere Model are compared for grid spacings spanning the cloud modeling gray zone. It is shown that the Morrison scheme has stronger resolution dependence than Morrison-Gettelman, and that the ability of Morrison-Gettelman to use partial cloud fractions is not the primary reason for this difference. This study also discusses how to frame the issue of resolution dependence, the meaning of which has often been assumed, but not clearly expressed, in the atmospheric modeling community. It is proposed that parameterization resolution dependence can be expressed in terms of "resolution dependence of the first type," RA1, which implies that the parameterization behavior converges towards observations with increasing resolution, or as "resolution dependence of the second type," RA2, which requires that the parameterization reproduces the same behavior across a range of grid spacings when compared at a given coarser resolution. RA2 behavior is considered the ideal, but brings with it serious implications due to limitations of parameterizations to accurately estimate reality with coarse grid spacing.
The type of resolution awareness developers should target in their development depends upon the particular modeler's application.
Effects of multiple scattering and surface albedo on the photochemistry of the troposphere
NASA Technical Reports Server (NTRS)
Augustsson, T. R.; Tiwari, S. N.
1981-01-01
The effect of the treatment of incoming solar radiation on the photochemistry of the troposphere is discussed. A one-dimensional photochemical model of the troposphere containing species of the nitrogen, oxygen, carbon, hydrogen, and sulfur families was developed. The vertical flux is simulated by use of parameterized eddy diffusion coefficients. The photochemical model is coupled to a radiative transfer model that calculates the radiation field due to the incoming solar radiation, which initiates much of the photochemistry of the troposphere. Vertical profiles of tropospheric species computed with the Leighton approximation were compared with those from the radiative transfer, matrix inversion model. The radiative transfer code includes the effects of multiple scattering due to molecules and aerosols, pure absorption, and surface albedo on the transfer of incoming solar radiation. It is indicated that significant differences exist for several key photolysis frequencies and species number density profiles between the Leighton approximation and the profiles generated with the radiative transfer, matrix inversion technique. Most species show enhanced vertical profiles when the more realistic treatment of the incoming solar radiation field is included.
Variational objective analysis for cyclone studies
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.
1989-01-01
Significant accomplishments during 1987 to 1988 are summarized with regard to each of the major project components. Model 1 requires satisfaction of two nonlinear horizontal momentum equations, the integrated continuity equation, and the hydrostatic equation. Model 2 requires satisfaction of model 1 plus the thermodynamic equation for a dry atmosphere. Model 3 requires satisfaction of model 2 plus the radiative transfer equation. Model 4 requires satisfaction of model 3 plus a moisture conservation equation and a parameterization for moist processes.
Xia, Xiangao
2015-01-01
Aerosols impact clear-sky surface irradiance through the effects of scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and clear-sky surface irradiance have been established to describe the aerosol direct radiative effect on surface irradiance (ADRE). However, considerable uncertainties remain associated with ADRE due to the incorrect estimation of the clear-sky surface irradiance in the absence of aerosols. Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on clear-sky surface irradiance are thoroughly considered, leading to an effective parameterization of the irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate clear-sky surface irradiance with a mean bias error of 0.32 W m−2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from irradiance, or vice versa, show that the root-mean-square errors were 0.08 and 10.0 W m−2, respectively. Therefore, this study establishes a straightforward method to derive clear-sky surface irradiance from τa, or to estimate τa from irradiance measurements if water vapor measurements are available. PMID:26395310
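Fitting such a parameterization can be sketched as follows. The functional form F = μ(a0 + a1 τa + a2 ln w) is assumed purely for illustration and is not the paper's actual parameterization; the point is that a model nonlinear in (τa, w, μ) but linear in its coefficients can be fit by ordinary least squares on synthetic data.

```python
import numpy as np

# Synthetic "observations" from an illustrative irradiance model
# F = mu * (a0 + a1*tau + a2*ln(w)); coefficients are made up.
rng = np.random.default_rng(0)
tau = rng.uniform(0.05, 1.0, 200)      # aerosol optical depth
w = rng.uniform(0.5, 5.0, 200)         # water vapor content
mu = rng.uniform(0.3, 1.0, 200)        # cosine of solar zenith angle
true = np.array([1000.0, -300.0, -50.0])
F = mu * (true[0] + true[1] * tau + true[2] * np.log(w))

# Least-squares fit of the three coefficients from the design matrix.
A = np.column_stack([mu, mu * tau, mu * np.log(w)])
coef, *_ = np.linalg.lstsq(A, F, rcond=None)
```

With noise-free synthetic data the fit recovers the generating coefficients; with real measurements one would inspect residuals to judge whether the assumed form is adequate.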
Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.
Kamesh, Reddi; Rani, K Yamuna
2016-09-01
A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
A simplified scheme for computing radiation transfer in the troposphere
NASA Technical Reports Server (NTRS)
Katayama, A.
1973-01-01
A scheme is presented for the heating of clear and cloudy air by solar and infrared radiation transfer, designed for use in tropospheric general circulation models with coarse vertical resolution. A bulk transmission function is defined for the infrared transfer. The interpolation factors, required for computing the bulk transmission function, are parameterized as functions of such physical parameters as the thickness of the layer, the pressure, and the mixing ratio at a reference level. The computation procedure for solar radiation is significantly simplified by the introduction of two basic concepts. The first is that the solar radiation spectrum can be divided into a scattered part, for which Rayleigh scattering is significant but absorption by water vapor is negligible, and an absorbed part for which absorption by water vapor is significant but Rayleigh scattering is negligible. The second concept is that of an equivalent cloud water vapor amount which absorbs the same amount of radiation as the cloud.
Uncertainty quantification for optical model parameters
Lovell, A. E.; Nunes, F. M.; Sarich, J.; ...
2017-02-21
Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of our work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. Here, we study a number of reactions involving neutron and deuteron projectiles with energies in the range of 5-25 MeV/u, on targets with mass A = 12-208. We investigate the correlations between the parameters in the fit. The case of deuterons on 12C is discussed in detail: the elastic-scattering fit and the prediction of 12C(d,p)13C transfer angular distributions, using both uncorrelated and correlated χ2 minimization functions. The general features for all cases are compiled in a systematic manner to identify trends. This work shows that, in many cases, the correlated χ2 functions (in comparison to the uncorrelated χ2 functions) provide a more natural parameterization of the process. These correlated functions do, however, produce broader confidence bands. Further optimization may require improvement in the models themselves and/or more information included in the fit.
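The distinction between uncorrelated and correlated χ2 minimization can be sketched as follows; with a diagonal covariance matrix the expression reduces to the familiar sum of squared normalized residuals. This is a generic illustration of the statistic, not the authors' fitting code.

```python
import numpy as np

def chi2(residuals, covariance):
    """Correlated chi-square: r^T C^-1 r. With a diagonal C this reduces
    to the usual uncorrelated sum of (r_i / sigma_i)^2."""
    r = np.asarray(residuals, dtype=float)
    C = np.asarray(covariance, dtype=float)
    return float(r @ np.linalg.solve(C, r))

r = np.array([1.0, -2.0])                       # data - model residuals
C_uncorr = np.diag([1.0, 4.0])                  # sigmas of 1 and 2, no correlation
C_corr = np.array([[1.0, 0.5],
                   [0.5, 4.0]])                 # same sigmas plus a covariance term
chi2_u = chi2(r, C_uncorr)                      # = 1^2/1 + (-2)^2/4 = 2.0
chi2_c = chi2(r, C_corr)
```

Including off-diagonal covariance changes the weight each residual pair receives, which is why correlated and uncorrelated fits can prefer different parameter sets and yield different confidence bands.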
Impact of anthropogenic aerosols on regional climate change in Beijing, China
NASA Astrophysics Data System (ADS)
Zhao, B.; Liou, K. N.; He, C.; Lee, W. L.; Gu, Y.; Li, Q.; Leung, L. R.
2015-12-01
Anthropogenic aerosols affect regional climate significantly through radiative (direct and semi-direct) and indirect effects, but the magnitude of these effects over megacities is subject to large uncertainty. In this study, we evaluated the effects of anthropogenic aerosols on regional climate change in Beijing, China using the online-coupled Weather Research and Forecasting/Chemistry Model (WRF/Chem) with the Fu-Liou-Gu radiation scheme and a spatial resolution of 4 km. We further updated this radiation scheme with a geometric-optics surface-wave (GOS) approach for the computation of light absorption and scattering by black carbon (BC) particles, in which aggregation shape and internal mixing properties are accounted for. In addition, we incorporated in WRF/Chem a 3-D radiative transfer parameterization in conjunction with high-resolution digital data for city buildings and landscape to improve the simulation of the boundary layer, surface solar fluxes, and associated sensible/latent heat fluxes. Preliminary simulated meteorological parameters, fine particles (PM2.5), and their chemical components agree well with observational data in terms of both magnitude and spatio-temporal variations. The effects of anthropogenic aerosols, including BC, on radiative forcing, surface temperature, wind speed, humidity, cloud water path, and precipitation are quantified on the basis of simulation results. With several preliminary sensitivity runs, we found that meteorological parameters and aerosol radiative effects simulated with the incorporation of the improved BC absorption and 3-D radiation parameterizations deviate substantially from simulation results using the conventional homogeneous/core-shell configuration for BC and the plane-parallel model for radiative transfer. Understanding of the aerosol effects on regional climate change over megacities must consider the complex shape and mixing state of aerosol aggregates and 3-D radiative transfer effects over the city landscape.
Parameterized hardware description as object oriented hardware model implementation
NASA Astrophysics Data System (ADS)
Drabik, Pawel K.
2010-09-01
The paper introduces a novel model for the design, visualization, and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and the software application, and builds on parameterized hardware description research. The establishment of a stable link between hardware and software, the purpose of the designed and realized work, is presented. A novel programming framework model for the environment, named Graphic-Functional-Components, is presented. The purpose of the paper is to present object-oriented hardware modeling with the mentioned features. A possible model implementation in FPGA chips and its management by object-oriented software in Java is described.
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Blecic, Jasmina; Stemm, Madison M.; Lust, Nate B.; Foster, Andrew S.; Rojo, Patricio M.; Loredo, Thomas J.
2014-11-01
Multi-wavelength secondary-eclipse and transit depths probe the thermo-chemical properties of exoplanets. In recent years, several research groups have developed retrieval codes to analyze the existing data and study the prospects of future facilities. However, the scientific community has limited access to these packages. Here we premiere the open-source Bayesian Atmospheric Radiative Transfer (BART) code. We discuss the key aspects of the radiative-transfer algorithm and the statistical package. The radiation code includes line databases for all HITRAN molecules, high-temperature H2O, TiO, and VO, and includes a preprocessor for adding additional line databases without recompiling the radiation code. Collision-induced absorption lines are available for H2-H2 and H2-He. The parameterized thermal and molecular abundance profiles can be modified arbitrarily without recompilation. The generated spectra are integrated over arbitrary bandpasses for comparison to data. BART's statistical package, Multi-core Markov-chain Monte Carlo (MC3), is a general-purpose MCMC module. MC3 implements the Differential-Evolution Markov-chain Monte Carlo algorithm (ter Braak 2006, 2009). MC3 converges 20-400 times faster than the usual Metropolis-Hastings MCMC algorithm, and in addition uses the Message Passing Interface (MPI) to parallelize the MCMC chains. We apply the BART retrieval code to the HD 209458b data set to estimate the planet's temperature profile and molecular abundances. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
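The differential-evolution proposal step (ter Braak 2006) that underlies such samplers can be sketched in a few lines: a chain jumps along the difference of two other randomly chosen chains, scaled by a factor γ, plus small noise. This is a generic illustration of the algorithm, not MC3's actual API.

```python
import numpy as np

def demc_proposal(chains, i, gamma=None, eps=1e-6, rng=None):
    """Differential-evolution MCMC proposal for chain i: jump along the
    difference of two other randomly chosen chains, plus small jitter."""
    rng = rng if rng is not None else np.random.default_rng()
    n, d = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * d)   # standard DE-MC scale factor
    others = [j for j in range(n) if j != i]
    r1, r2 = rng.choice(others, size=2, replace=False)
    return chains[i] + gamma * (chains[r1] - chains[r2]) + eps * rng.standard_normal(d)

# Three chains exploring a 2-parameter space; propose a move for chain 0.
chains = np.array([[0.0, 0.0],
                   [1.0, 1.0],
                   [2.0, -1.0]])
prop = demc_proposal(chains, 0, rng=np.random.default_rng(1))
```

The proposal is then accepted or rejected with the usual Metropolis rule; because the jump scale adapts to the spread of the chain ensemble, hand-tuning of per-parameter step sizes is largely avoided.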
NASA Astrophysics Data System (ADS)
Park, Jun; Hwang, Seung-On
2017-11-01
The impact of a spectral nudging technique for the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulation sets of physical parameterization combinations of two shortwave radiation and four land surface model schemes of the model, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data by securing the consistency with large-scale forcing over a regional domain. This consequently indirectly helps two physical parameterizations to produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in a high-resolution downscaled climate.
Generalized ocean color inversion model for retrieving marine inherent optical properties.
Werdell, P Jeremy; Franz, Bryan A; Bailey, Sean W; Feldman, Gene C; Boss, Emmanuel; Brando, Vittorio E; Dowell, Mark; Hirata, Takafumi; Lavender, Samantha J; Lee, ZhongPing; Loisel, Hubert; Maritorena, Stéphane; Mélin, Fréderic; Moore, Timothy S; Smyth, Timothy J; Antoine, David; Devred, Emmanuel; d'Andon, Odile Hembise Fanton; Mangin, Antoine
2013-04-01
Ocean color measured from satellites provides daily, global estimates of marine inherent optical properties (IOPs). Semi-analytical algorithms (SAAs) provide one mechanism for inverting the color of the water observed by the satellite into IOPs. While numerous SAAs exist, most are similarly constructed and few are appropriately parameterized for all water masses for all seasons. To initiate community-wide discussion of these limitations, NASA organized two workshops that deconstructed SAAs to identify similarities and uniqueness and to progress toward consensus on a unified SAA. This effort resulted in the development of the generalized IOP (GIOP) model software that allows for the construction of different SAAs at runtime by selection from an assortment of model parameterizations. As such, GIOP permits isolation and evaluation of specific modeling assumptions, construction of SAAs, development of regionally tuned SAAs, and execution of ensemble inversion modeling. Working groups associated with the workshops proposed a preliminary default configuration for GIOP (GIOP-DC), with alternative model parameterizations and features defined for subsequent evaluation. In this paper, we: (1) describe the theoretical basis of GIOP; (2) present GIOP-DC and verify its comparable performance to other popular SAAs using both in situ and synthetic data sets; and, (3) quantify the sensitivities of their output to their parameterization. We use the latter to develop a hierarchical sensitivity of SAAs to various model parameterizations, to identify components of SAAs that merit focus in future research, and to provide material for discussion on algorithm uncertainties and future ensemble applications.
Offline GCSS Intercomparison of Cloud-Radiation Interaction and Surface Fluxes
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Johnson, D.; Krueger, S.; Zulauf, M.; Donner, L.; Seman, C.; Petch, J.; Gregory, J.
2004-01-01
Simulations of deep tropical clouds by both cloud-resolving models (CRMs) and single-column models (SCMs) in the GEWEX Cloud System Study (GCSS) Working Group 4 (WG4; Precipitating Convective Cloud Systems), Case 2 (19-27 December 1992, TOGA-COARE IFA) have produced large differences in the mean heating and moistening rates (-1 to -5 K and -2 to 2 grams per kilogram, respectively). Since the large-scale advective temperature and moisture "forcing" are prescribed for this case, a closer examination of two of the remaining external types of "forcing", namely radiative heating and air/sea heat and moisture transfer, is warranted. This paper examines the current radiation and surface flux parameterizations used in the cloud models participating in the GCSS WG4 by executing the models "offline" for one time step (12 s) for a prescribed atmospheric state, then examining the surface and radiation fluxes from each model. The dynamic, thermodynamic, and microphysical fields are provided by the GCE-derived model output for Case 2 during a period of very active deep convection (westerly wind burst). The surface and radiation fluxes produced from the models are then divided into prescribed convective, stratiform, and clear regions in order to examine the role that clouds play in the flux parameterizations. The results suggest that the differences between the models are attributed more to the surface flux parameterizations than the radiation schemes.
Longwave Radiative Flux Calculations in the TOVS Pathfinder Path A Data Set
NASA Technical Reports Server (NTRS)
Mehta, Amita; Susskind, Joel
1999-01-01
A radiative transfer model developed to calculate outgoing longwave radiation (OLR) and downwelling longwave surface flux (DSF) from the Television and Infrared Operational Satellite (TIROS) Operational Vertical Sounder (TOVS) Pathfinder Path A retrieval products is described. The model covers the spectral range of 2 to 2800 cm−1 in 14 medium spectral bands. For each band, transmittances are parameterized as a function of temperature, water vapor, and ozone profiles. The form of the band transmittance parameterization is a modified version of the approach we use to model channel transmittances for the High Resolution Infrared Sounder 2 (HIRS2) instrument. We separately derive an effective zenith angle for each spectral band such that the band-averaged radiance calculated at that angle best approximates the directionally integrated radiance for that band. We develop the transmittance parameterization at these band-dependent effective zenith angles to incorporate the directional integration of radiances required in the calculations of OLR and DSF. The model calculations of OLR and DSF are accurate and differ by less than 1% from our line-by-line calculations. Also, the model results are within 1% of other line-by-line calculations provided by the Intercomparison of Radiation Codes in Climate Models (ICRCCM) project for clear-sky and cloudy conditions. The model is currently used to calculate global, multiyear (1985-1998) OLR and DSF from the TOVS Pathfinder Path A retrievals.
NASA Astrophysics Data System (ADS)
Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.
2012-10-01
We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.
Deformable image registration using convolutional neural networks
NASA Astrophysics Data System (ADS)
Eppenhof, Koen A. J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P. W.
2018-03-01
Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.
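The synthetic-transformation training idea can be sketched in 2-D: generate a smooth random displacement field, warp an image with it, and the (warped, displacement) pair becomes a training example with known ground truth. The sketch below uses a Gaussian-smoothed random field rather than the paper's thin plate spline grid, purely for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_deformation(image, amplitude=2.0, smoothness=8.0, rng=None):
    """Warp `image` with a smooth random displacement field, returning
    (warped image, displacement field) as a synthetic training pair."""
    rng = rng if rng is not None else np.random.default_rng()
    # One smoothed random component per image axis, scaled to `amplitude`.
    disp = np.stack([gaussian_filter(rng.standard_normal(image.shape), smoothness)
                     for _ in range(image.ndim)])
    disp *= amplitude / (np.abs(disp).max() + 1e-12)
    # Sample the image at the displaced coordinates (linear interpolation).
    grid = np.meshgrid(*[np.arange(s) for s in image.shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, disp)]
    warped = map_coordinates(image, coords, order=1, mode="nearest")
    return warped, disp

img = np.random.default_rng(0).random((32, 32))
warped, disp = random_deformation(img, rng=np.random.default_rng(0))
```

A registration network would then be trained to predict `disp` from the pair (`img`, `warped`), so no manually annotated deformations are needed, which is the key practical advantage noted in the abstract.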
NASA Astrophysics Data System (ADS)
Miner, Nadine Elizabeth
1998-09-01
This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.
Simulation of the Atmospheric Boundary Layer for Wind Energy Applications
NASA Astrophysics Data System (ADS)
Marjanovic, Nikola
Energy production from wind is an increasingly important component of overall global power generation, and will likely continue to gain an even greater share of electricity production as world governments attempt to mitigate climate change and wind energy production costs decrease. Wind energy generation depends on wind speed, which is greatly influenced by local and synoptic environmental forcings. Synoptic forcing, such as a cold frontal passage, exists on a large spatial scale while local forcing manifests itself on a much smaller scale and could result from topographic effects or land-surface heat fluxes. Synoptic forcing, if strong enough, may suppress the effects of generally weaker local forcing. At the even smaller scale of a wind farm, upstream turbines generate wakes that decrease the wind speed and increase the atmospheric turbulence at the downwind turbines, thereby reducing power production and increasing fatigue loading that may damage turbine components, respectively. Simulation of atmospheric processes that span a considerable range of spatial and temporal scales is essential to improve wind energy forecasting, wind turbine siting, turbine maintenance scheduling, and wind turbine design. Mesoscale atmospheric models predict atmospheric conditions using observed data, for a wide range of meteorological applications across scales from thousands of kilometers to hundreds of meters. Mesoscale models include parameterizations for the major atmospheric physical processes that modulate wind speed and turbulence dynamics, such as cloud evolution and surface-atmosphere interactions. The Weather Research and Forecasting (WRF) model is used in this dissertation to investigate the effects of model parameters on wind energy forecasting. WRF is used for case study simulations at two West Coast North American wind farms, one with simple and one with complex terrain, during both synoptically and locally-driven weather events. 
The model's performance with different grid nesting configurations, turbulence closures, and grid resolutions is evaluated by comparison to observation data. Improvement to simulation results from the use of more computationally expensive high resolution simulations is only found for the complex terrain simulation during the locally-driven event. Physical parameters, such as soil moisture, have a large effect on locally-forced events, and prognostic turbulence kinetic energy (TKE) schemes are found to perform better than non-local eddy viscosity turbulence closure schemes. Mesoscale models, however, do not resolve turbulence directly, which is important at finer grid resolutions capable of resolving wind turbine components and their interactions with atmospheric turbulence. Large-eddy simulation (LES) is a numerical approach that resolves the largest scales of turbulence directly by separating large-scale, energetically important eddies from smaller scales with the application of a spatial filter. LES allows higher fidelity representation of the wind speed and turbulence intensity at the scale of a wind turbine which parameterizations have difficulty representing. Use of high-resolution LES enables the implementation of more sophisticated wind turbine parameterizations to create a robust model for wind energy applications using grid spacing small enough to resolve individual elements of a turbine such as its rotor blades or rotation area. Generalized actuator disk (GAD) and line (GAL) parameterizations are integrated into WRF to complement its real-world weather modeling capabilities and better represent wind turbine airflow interactions, including wake effects. The GAD parameterization represents the wind turbine as a two-dimensional disk resulting from the rotation of the turbine blades. Forces on the atmosphere are computed along each blade and distributed over rotating, annular rings intersecting the disk. 
While typical LES resolution (10-20 m) is normally sufficient to resolve the GAD, the GAL parameterization requires significantly higher resolution (1-3 m) as it does not distribute the forces from the blades over annular elements, but applies them along lines representing individual blades. In this dissertation, the GAL is implemented into WRF and evaluated against the GAD parameterization from two field campaigns that measured the inflow and near-wake regions of a single turbine. The data-sets are chosen to allow validation under the weakly convective and weakly stable conditions characterizing most turbine operations. The parameterizations are evaluated with respect to their ability to represent wake wind speed, variance, and vorticity by comparing fine-resolution GAD and GAL simulations along with coarse-resolution GAD simulations. Coarse-resolution GAD simulations produce aggregated wake characteristics similar to both GAD and GAL simulations (saving on computational cost), while the GAL parameterization enables resolution of near wake physics (such as vorticity shedding and wake expansion) for high fidelity applications. (Abstract shortened by ProQuest.).
Alternate methodologies to experimentally investigate shock initiation properties of explosives
NASA Astrophysics Data System (ADS)
Svingala, Forrest R.; Lee, Richard J.; Sutherland, Gerrit T.; Benjamin, Richard; Boyle, Vincent; Sickels, William; Thompson, Ronnie; Samuels, Phillip J.; Wrobel, Erik; Cornell, Rodger
2017-01-01
Reactive flow models are desired for new explosive formulations early in the development stage. Traditionally, these models are parameterized by carefully-controlled 1-D shock experiments, including gas-gun testing with embedded gauges and wedge testing with explosive plane wave lenses (PWL). These experiments are easy to interpret due to their 1-D nature, but are expensive to perform and cannot be performed at all explosive test facilities. This work investigates alternative methods to probe the shock-initiation behavior of new explosives using widely-available pentolite gap test donors and simple time-of-arrival diagnostics. These experiments can be performed at low cost at most explosives testing facilities, allowing the experimental data needed to parameterize reactive flow models to be collected much earlier in the development of an explosive formulation. However, the fundamentally 2-D nature of these tests may increase the modeling burden in parameterizing these models and reduce their general applicability. Several variations of the so-called modified gap test were investigated and evaluated for suitability as an alternative to established 1-D gas gun and PWL techniques. At least partial agreement with 1-D test methods was observed for the explosives tested, and future work is planned to scope the applicability and limitations of these experimental techniques.

The parameterization of microchannel-plate-based detection systems
NASA Astrophysics Data System (ADS)
Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Giles, Barbara L.; Pollock, Craig J.
2016-10-01
The most common instrument for low-energy plasmas consists of a top-hat electrostatic analyzer (ESA) geometry coupled with a microchannel-plate-based (MCP-based) detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Here we develop a comprehensive mathematical description of particle detection systems. As a function of instrument azimuthal angle, we parameterize (1) particle scattering within the ESA and at the surface of the MCP, (2) the probability distribution of MCP gain for an incident particle, (3) electron charge cloud spreading between the MCP and anode board, and (4) capacitive coupling between adjacent discrete anodes. Using the Dual Electron Spectrometers on the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission as an example, we demonstrate a method for extracting these fundamental detection system parameters from laboratory calibration. We further show that parameters that will evolve in flight, namely, MCP gain, can be determined through application of this model to specifically tailored in-flight calibration activities. This methodology provides a robust characterization of sensor suite performance throughout mission lifetime. The model developed in this work is not only applicable to existing sensors but also can be used as an analytical design tool for future particle instrumentation.
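The gain model in item (2) above can be illustrated with a common approximation: the pulse-height distribution of an MCP stack operated in saturation is often modeled with a gamma ("Polya") distribution. The sketch below, with purely illustrative numbers (not MMS/DES calibration values), estimates the counting efficiency above a discriminator threshold by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only, not actual calibration numbers:
mean_gain = 2.0e6        # mean MCP gain (electrons per incident particle)
polya_k = 4.0            # gamma shape parameter (larger = narrower distribution)
threshold = 5.0e5        # discriminator threshold in electrons

# Gamma ("Polya") model of the pulse-height distribution:
# shape * scale = mean_gain, so the sample mean is mean_gain
gains = rng.gamma(shape=polya_k, scale=mean_gain / polya_k, size=200_000)

# Counting efficiency = fraction of events whose charge clears the threshold
efficiency = np.mean(gains > threshold)
print(f"counting efficiency above threshold: {efficiency:.3f}")
```

As MCP gain degrades in flight, `mean_gain` drops and the efficiency falls, which is why tracking the gain through in-flight calibration (as described above) matters.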
Modeling of Cloud/Radiation Processes for Large-Scale Clouds and Tropical Anvils
1994-05-31
Bergeron-Findeisen process: the saturation vapor pressure over ice is less than that over water; as a result, ice crystals grow at the expense of supercooled liquid water. The scheme uses ice nucleation to generate ice crystals, depositional growth to simulate the Bergeron-Findeisen process, sublimation and melting of ice crystals, and gravitational settling to deplete the ice crystals; a radiative transfer parameterization is also described. [Eq. (3.7) and the associated temperature-threshold expressions are garbled in the source text and could not be recovered.]
NASA Astrophysics Data System (ADS)
Bertram, Sascha; Bechtold, Michel; Hendriks, Rob; Piayda, Arndt; Regina, Kristiina; Myllys, Merja; Tiemeyer, Bärbel
2017-04-01
Peat soils form a major share of the soil suitable for agriculture in northern Europe. Successful agricultural production depends on hydrological and pedological conditions, local climate and agricultural management. Climate change impact assessment on food production and the development of mitigation and adaptation strategies require reliable yield forecasts under given emission scenarios. Coupled soil hydrology - crop growth models, driven by regionalized future climate scenarios, are a valuable tool and widely used for this purpose. Parameterization for local peat soil conditions and crop breed or grassland species performance, however, remains a major challenge. The aim of this study is to evaluate the performance and sensitivity of the SWAP-WOFOST coupled soil hydrology and plant growth model with respect to its application on peat soils under different regional conditions across northern Europe. Further, the parameterization of region-specific crop and grass species is discussed. First results of the model application and parameterization at deep peat sites in southern Finland are presented. The model performed very well in reproducing two years of observed, daily ground water level data on four hydrologically contrasting sites. Naturally dry and wet sites could be modelled with the same performance as sites with active water table management by regulated drains intended to improve peat conservation. A simultaneous multi-site calibration scheme was used to estimate plant growth parameters of the local oat breed. Cross-site validation of the modelled yields against two years of observations proved the robustness of the chosen parameter set and gave no indication of possible overparameterization. This study demonstrates the suitability of the coupled SWAP-WOFOST model for the prediction of crop yields and water table dynamics of peat soils in agricultural use under given climate conditions.
This report describes the theoretical development, parameterization, and application software of a generalized, community-based bioaccumulation model called BASS (Bioaccumulation and Aquatic System Simulator).
Improving the realism of hydrologic model through multivariate parameter estimation
NASA Astrophysics Data System (ADS)
Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis
2017-04-01
Increased availability and quality of near real-time observations should improve understanding of the predictive skills of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed depending on the in- or exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of the complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of hydrologic models and their applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52, http://dx.doi.org/10.1002/2016WR019430
Vertical structure of mean cross-shore currents across a barred surf zone
Haines, John W.; Sallenger, Asbury H.
1994-01-01
Mean cross-shore currents observed across a barred surf zone are compared to model predictions. The model is based on a simplified momentum balance with a turbulent boundary layer at the bed. Turbulent exchange is parameterized by an eddy viscosity formulation, with the eddy viscosity Av independent of time and the vertical coordinate. Mean currents result from gradients due to wave breaking and shoaling, and the presence of a mean setup of the free surface. Descriptions of the wave field are provided by the wave transformation model of Thornton and Guza [1983]. The wave transformation model adequately reproduces the observed wave heights across the surf zone. The mean current model successfully reproduces the observed cross-shore flows. Both observations and predictions show predominantly offshore flow, with onshore flow restricted to a relatively thin surface layer. Successful application of the mean flow model requires an eddy viscosity which varies horizontally across the surf zone. Attempts are made to parameterize this variation with some success. The data do not discriminate between the alternative parameterizations proposed. The overall variability in eddy viscosity suggested by the model fitting should be resolvable by field measurements of the turbulent stresses. Consistent shortcomings of the parameterizations, and of the overall modeling effort, suggest avenues for further development and data collection.
Building integral projection models: a user's guide
Rees, Mark; Childs, Dylan Z; Ellner, Stephen P; Coulson, Tim
2014-01-01
In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses. PMID:24219157
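The paper's Supporting Information provides commented R code; as a language-neutral illustration of the same idea, a minimal size-structured IPM can be discretized with the midpoint rule and its asymptotic growth rate taken as the dominant eigenvalue of the kernel matrix. All vital-rate functions and constants below are made up for illustration, not taken from the Soay sheep study:

```python
import numpy as np

# Minimal size-structured IPM sketch with illustrative (made-up) vital rates.
# Kernel K(z', z) = s(z) G(z', z) + f(z) C(z'): survival-growth plus fecundity.

def survival(z):
    return 1.0 / (1.0 + np.exp(-(z - 2.0)))           # logistic in size z

def growth_pdf(z_new, z):
    mu = 0.8 * z + 0.8                                # mean next-year size
    return np.exp(-((z_new - mu) ** 2) / (2 * 0.3**2)) / np.sqrt(2 * np.pi * 0.3**2)

def fecundity(z):
    return 0.5 * np.exp(0.3 * z)                      # offspring per individual

def offspring_pdf(z_new):
    return np.exp(-((z_new - 1.0) ** 2) / (2 * 0.2**2)) / np.sqrt(2 * np.pi * 0.2**2)

# Midpoint-rule discretization of the kernel on [0, 10] with m mesh points
m, L, U = 100, 0.0, 10.0
h = (U - L) / m
z = L + h * (np.arange(m) + 0.5)

K = h * (survival(z)[None, :] * growth_pdf(z[:, None], z[None, :])
         + fecundity(z)[None, :] * offspring_pdf(z)[:, None])

lam = np.max(np.abs(np.linalg.eigvals(K)))            # dominant eigenvalue
print(f"asymptotic growth rate lambda = {lam:.3f}")
```

In practice the vital-rate functions are fitted to data on marked individuals (e.g. by regression), which is the statistical step the paper walks through in detail.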
Development of Surfaces Optically Suitable for Flat Solar Panels
NASA Technical Reports Server (NTRS)
Desmet, D.; Jason, A.; Parr, A.
1977-01-01
Innovations in reflectometry techniques are described; and the development of an absorbing selective coating is discussed along with details of surface properties. Conclusions as to the parameterization desired for practical applications of selective surfaces are provided.
NASA Astrophysics Data System (ADS)
Campoamor-Stursberg, R.
2018-03-01
A procedure for the construction of nonlinear realizations of Lie algebras in the context of Vessiot-Guldberg-Lie algebras of first-order systems of ordinary differential equations (ODEs) is proposed. The method is based on the reduction of invariants and projection of lowest-dimensional (irreducible) representations of Lie algebras. Applications to the description of parameterized first-order systems of ODEs related by contraction of Lie algebras are given. In particular, the kinematical Lie algebras in (2 + 1)- and (3 + 1)-dimensions are realized simultaneously as Vessiot-Guldberg-Lie algebras of parameterized nonlinear systems in R3 and R4, respectively.
Parameterized Facial Expression Synthesis Based on MPEG-4
NASA Astrophysics Data System (ADS)
Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos
2002-12-01
In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become accustomed to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human computer interaction, focusing on analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen (1978)). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions compatible with the MPEG-4 standard.
A volumetric conformal mapping approach for clustering white matter fibers in the brain
Gupta, Vikash; Prasad, Gautam; Thompson, Paul
2017-01-01
The human brain may be considered as a genus-0 shape, topologically equivalent to a sphere. Various methods have been used in the past to transform the brain surface to that of a sphere using harmonic energy minimization methods developed for cortical surface matching. However, very few methods have studied volumetric parameterization of the brain using a spherical embedding. Volumetric parameterization is typically used for complicated geometric problems like shape matching, morphing and isogeometric analysis. Using conformal mapping techniques, we can establish a bijective mapping between the brain and the topologically equivalent sphere. Our hypothesis is that shape analysis problems are simplified when the shape is defined in an intrinsic coordinate system. Our goal is to establish such a coordinate system for the brain. The efficacy of the method is demonstrated with a white matter clustering problem. Initial results show promise for future investigation of this parameterization technique and its application to other problems in computational anatomy, such as registration and segmentation. PMID:29177252
Separation of Intercepted Multi-Radar Signals Based on Parameterized Time-Frequency Analysis
NASA Astrophysics Data System (ADS)
Lu, W. L.; Xie, J. W.; Wang, H. M.; Sheng, C.
2016-09-01
Modern radars use complex waveforms to obtain high detection performance and low probabilities of interception and identification. Signals intercepted from multiple radars overlap considerably in both the time and frequency domains and are difficult to separate with primary time parameters. Time-frequency analysis (TFA), as a key signal-processing tool, can provide better insight into the signal than conventional methods. In particular, among the various types of TFA, parameterized time-frequency analysis (PTFA) has shown great potential to investigate the time-frequency features of such non-stationary signals. In this paper, we propose a procedure for PTFA to separate overlapped radar signals; it includes five steps: initiation, parameterized time-frequency analysis, demodulating the signal of interest, adaptive filtering and recovering the signal. The effectiveness of the method was verified with simulated data and an intercepted radar signal received in a microwave laboratory. The results show that the proposed method has good performance and has potential in electronic reconnaissance applications, such as electronic intelligence, electronic warfare support measures, and radar warning.
Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni
2018-06-01
Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skills of wave models. The established technique to deal with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms, and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skills comparable to, and sometimes better than, the established propagation-based technique.
Hierarchical atom type definitions and extensible all-atom force fields.
Jin, Zhao; Yang, Chunwei; Cao, Fenglei; Li, Feng; Jing, Zhifeng; Chen, Long; Shen, Zhe; Xin, Liang; Tong, Sijia; Sun, Huai
2016-03-15
The extensibility of a force field is key to solving the missing-parameter problem commonly found in force field applications. The extensibility of conventional force fields is traditionally managed in the parameterization procedure, which becomes impractical as the coverage of the force field increases above a threshold. A hierarchical atom-type definition (HAD) scheme is proposed to make atom type definitions extensible, which ensures that force fields developed on the basis of those definitions are extensible. To demonstrate how HAD works and to prepare a foundation for future developments, two general force fields based on AMBER and DFF functional forms are parameterized for common organic molecules. The force field parameters are derived from the same set of quantum mechanical data and experimental liquid data using an automated parameterization tool, and validated by calculating molecular and liquid properties. The hydration free energies are calculated successfully by introducing a polarization scaling factor to the dispersion term between the solvent and solute molecules. © 2015 Wiley Periodicals, Inc.
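The core idea of a hierarchical atom-type definition scheme, falling back from a specific type to its ancestors when a parameter entry is missing, can be sketched as a simple lookup. The type names, the hierarchy, and the Lennard-Jones values below are illustrative inventions, not taken from the paper:

```python
# Sketch of a hierarchical atom-type lookup: parameters are searched at the
# most specific type first, then up the ancestor chain, so the force field
# stays usable (extensible) when a specific type has no entry yet.
# Type names and parameter values are illustrative only.

PARENT = {
    "C.sp3.amine": "C.sp3",   # hypothetical: sp3 carbon alpha to an amine N
    "C.sp3": "C",
    "C.sp2": "C",
}

LJ_PARAMS = {                 # (sigma in angstrom, epsilon in kcal/mol)
    "C": (3.50, 0.066),
    "C.sp3": (3.40, 0.070),
}

def lookup(atom_type, table, parent=PARENT):
    """Walk up the type hierarchy until a parameter entry is found."""
    t = atom_type
    while t is not None:
        if t in table:
            return t, table[t]
        t = parent.get(t)
    raise KeyError(f"no parameters for {atom_type} or any ancestor")

print(lookup("C.sp3", LJ_PARAMS))        # specific entry used directly
print(lookup("C.sp3.amine", LJ_PARAMS))  # falls back to 'C.sp3'
print(lookup("C.sp2", LJ_PARAMS))        # falls back to generic 'C'
```

The point of the hierarchy is that the fallback is defined once, in the type tree, rather than being re-decided ad hoc during each parameterization round.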
Numerical Modeling of the Global Atmosphere
NASA Technical Reports Server (NTRS)
Arakawa, Akio; Mechoso, Carlos R.
1996-01-01
Under this grant, we continued development and evaluation of the updraft-downdraft model for cumulus parameterization. The model includes the mass, rainwater and vertical momentum budget equations for both updrafts and downdrafts. The rainwater generated in an updraft falls partly inside and partly outside the updraft. Two types of stationary solutions are identified for the coupled rainwater budget and vertical momentum equations: (1) solutions for small tilting angles, which are unstable; (2) solutions for large tilting angles, which are stable. In practical applications, we select the smallest stable tilting angle as an optimum value. The model has been incorporated into the Arakawa-Schubert (A-S) cumulus parameterization. The results of semi-prognostic and single-column prognostic tests of the revised A-S parameterization show drastic improvement in predicting the humidity field. Cheng and Arakawa present the rationale and basic design of the updraft-downdraft model, together with these test results. Cheng and Arakawa, on the other hand, give technical details of the model as implemented in the current version of the UCLA GCM.
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
NASA Astrophysics Data System (ADS)
Davis, A. B.; Xu, F.; Diner, D. J.
2017-12-01
Two perennial problems in applied theoretical and computational radiative transfer (RT) are: (1) the impact of unresolved spatial variability on large-scale fluxes (in climate models) or radiances (in remote sensing); and (2) efficient-yet-accurate estimation of broadband spectral integrals, in radiant energy budget estimation as well as in remote sensing, in particular of trace gases. Generalized RT (GRT) is a modification of classic RT in an optical medium with uniform extinction where Beer's exponential law for direct transmission is replaced by a monotonically decreasing function with a slower power-law decay. In a convenient parameterized version of GRT, mean extinction replaces the uniform value and just one new property is introduced. As a non-dimensional metric for the unresolved variability, we use the square of the mean extinction coefficient divided by its variance. This parameter is also the exponent of the power-law tail of the modified transmission law. This specific form of sub-exponential transmission has been explored for almost two decades in application to spatial variability in the presence of long-range correlations, much like in turbulent media such as clouds, with a focus on multiple scattering. It has also been proposed by Conley and Collins (JQSRT, 112, 1525-, 2011) to improve on the standard (weak-line) implementation of the correlated-k technique for efficient spectral integration. We have merged these two applications within a rigorous formulation of the combined problem, and solve the new integral RT equations in the single-scattering limit. The result is illustrated by addressing practical problems in multi-angle remote sensing of aerosols using the O2 A-band, an emerging methodology for passive profiling of coarse aerosols and clouds.
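One convenient one-parameter family consistent with the description above replaces Beer's law exp(-tau) with the power law T_a(tau) = (1 + tau/a)^(-a), where a is the squared mean extinction divided by its variance (and hence the power-law tail exponent); as a goes to infinity, Beer's exponential law is recovered. A hedged numerical sketch:

```python
import numpy as np

def transmission(tau, a=np.inf):
    """Generalized direct transmission T_a(tau) = (1 + tau/a)^(-a).
    a = <k>^2 / var(k) quantifies unresolved extinction variability;
    a -> infinity recovers Beer's exponential law exp(-tau)."""
    if np.isinf(a):
        return np.exp(-tau)
    return (1.0 + tau / a) ** (-a)

tau = np.linspace(0.0, 10.0, 6)
for a in (1.0, 4.0, np.inf):
    print(a, np.round(transmission(tau, a), 4))
# Variable media transmit more than a uniform medium of the same mean
# extinction at large optical depth: power-law tail vs exponential decay.
```

The weaker the variability (larger a), the closer the curve hugs the exponential; strong variability (small a) gives the heavy tail that motivates both applications discussed in the abstract.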
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models, and from parent models to child models, in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small-scale features influencing the larger-scale prediction are transferred back to the larger scale, this does not require the live coupling of models. The method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
NASA Astrophysics Data System (ADS)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-01-01
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
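The key guarantee of the spherical parameterization of Pinheiro and Bates (1996) can be shown in a few lines: any set of angles in (0, π) maps to a Cholesky factor whose product is a valid (symmetric, positive-definite, unit-diagonal) correlation matrix. The angle values below are hypothetical placeholders, standing in for whatever a "cSigma"-style closure would supply.

```python
import numpy as np

def corr_from_angles(theta):
    """Build a correlation matrix from spherical angles (Pinheiro &
    Bates 1996). theta is a list of rows; row i holds i angles in
    (0, pi). Each row of the Cholesky factor L is a point on the unit
    sphere, so C = L @ L.T automatically has ones on the diagonal and
    is positive definite -- correlations consistent by construction."""
    n = len(theta) + 1
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    for i in range(1, n):
        prod = 1.0
        for j in range(i):
            L[i, j] = np.cos(theta[i - 1][j]) * prod
            prod *= np.sin(theta[i - 1][j])
        L[i, i] = prod
    return L @ L.T

# Three hydrometeor species (e.g. cloud water, rain, snow):
C = corr_from_angles([[0.4], [1.1, 0.7]])
```

This is why the approach sidesteps the loose correlation bounds: instead of checking a proposed correlation matrix after the fact, every point in the angle space is admissible.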
Parameterizing the Transport Pathways for Cell Invasion in Complex Scaffold Architectures
Ashworth, Jennifer C.; Mehr, Marco; Buxton, Paul G.; Best, Serena M.
2016-01-01
Interconnecting pathways through porous tissue engineering scaffolds play a vital role in determining nutrient supply, cell invasion, and tissue ingrowth. However, the global use of the term “interconnectivity” often fails to describe the transport characteristics of these pathways, giving no clear indication of their potential to support tissue synthesis. This article uses new experimental data to provide a critical analysis of reported methods for the description of scaffold transport pathways, ranging from qualitative image analysis to thorough structural parameterization using X-ray Micro-Computed Tomography. In the collagen scaffolds tested in this study, it was found that the proportion of pore space perceived to be accessible dramatically changed depending on the chosen method of analysis. Measurements of % interconnectivity as defined in this manner varied as a function of direction and connection size, and also showed a dependence on measurement length scale. As an alternative, a method for transport pathway parameterization was investigated, using percolation theory to calculate the diameter of the largest sphere that can travel to infinite distance through a scaffold in a specified direction. As proof of principle, this approach was used to investigate the invasion behavior of primary fibroblasts in response to independent changes in pore wall alignment and pore space accessibility, parameterized using the percolation diameter. The result was that both properties played a distinct role in determining fibroblast invasion efficiency. This example therefore demonstrates the potential of the percolation diameter as a method of transport pathway parameterization, to provide key structural criteria for application-based scaffold design. PMID:26888449
A Framework and Toolkit for the Construction of Multimodal Learning Interfaces
1998-04-29
human communication modalities in the context of a broad class of applications, specifically those that support state manipulation via parameterized actions. The multimodal semantic model is also the basis for a flexible, domain independent, incrementally trainable multimodal interpretation algorithm based on a connectionist network. The second major contribution is an application framework consisting of reusable components and a modular, distributed system architecture. Multimodal application developers can assemble the components in the framework into a new application,
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustsson, T.R.; Tiwari, S.N.
The effect of the treatment of incoming solar radiation on the photochemistry of the troposphere is discussed. A one-dimensional photochemical model of the troposphere containing the species of the nitrogen, oxygen, carbon, hydrogen, and sulfur families was developed. The vertical flux is simulated by use of parameterized eddy diffusion coefficients. The photochemical model is coupled to a radiative transfer model that calculates the radiation field due to the incoming solar radiation which initiates much of the photochemistry of the troposphere. Vertical profiles of tropospheric species computed with the Leighton approximation were compared with those from the radiative transfer, matrix inversion model. The radiative transfer code includes the effects of multiple scattering due to molecules and aerosols, pure absorption, and surface albedo on the transfer of incoming solar radiation. It is indicated that significant differences exist for several key photolysis frequencies and species number density profiles between the Leighton approximation and the profiles generated with the radiative transfer, matrix inversion technique. Most species show enhanced vertical profiles when the more realistic treatment of the incoming solar radiation field is included.
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
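For optically thin layers, the Taylor-series route to the matrix exponential mentioned above is easy to sketch. The version below uses standard scaling and squaring with a truncated series; the Padé variant discussed in the paper would be analogous. The test matrix is a generic nilpotent example with a known closed form, not a radiative transfer operator.

```python
import numpy as np

def expm_taylor(A, order=12, squarings=8):
    """Matrix exponential via scaling and squaring with a truncated
    Taylor series: exp(A) = (exp(A / 2^s))^(2^s), with the inner
    exponential approximated by sum_{k<=order} (A/2^s)^k / k!."""
    A = np.asarray(A, dtype=float) / (2 ** squarings)
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):
        term = term @ A / k     # accumulate A^k / k!
        E = E + term
    for _ in range(squarings):  # undo the scaling by repeated squaring
        E = E @ E
    return E

# Sanity check against a case with a known closed form:
# exp([[0, t], [0, 0]]) = [[1, t], [0, 1]].
E = expm_taylor(np.array([[0.0, 2.0], [0.0, 0.0]]))
```

Scaling and squaring matters because the truncated series converges quickly only when the layer (matrix norm) is small, which is exactly the optically thin regime the abstract refers to.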
NASA Astrophysics Data System (ADS)
Vorobyov, E. I.
2010-01-01
We study numerically the applicability of the effective-viscosity approach for simulating the effect of gravitational instability (GI) in disks of young stellar objects with different disk-to-star mass ratios ξ. We adopt two α-parameterizations for the effective viscosity based on Lin and Pringle [Lin, D.N.C., Pringle, J.E., 1990. ApJ 358, 515] and Kratter et al. [Kratter, K.M., Matzner, Ch.D., Krumholz, M.R., 2008. ApJ 681, 375] and compare the resultant disk structure, disk and stellar masses, and mass accretion rates with those obtained directly from numerical simulations of self-gravitating disks around low-mass (M∗ ∼ 1.0 M⊙) protostars. We find that the effective viscosity can, in principle, simulate the effect of GI in stellar systems with ξ ≲ 0.2-0.3, thus corroborating a similar conclusion by Lodato and Rice [Lodato, G., Rice, W.K.M., 2004. MNRAS 351, 630] that was based on a different α-parameterization. In particular, the Kratter et al. α-parameterization has proven superior to that of Lin and Pringle, because the success of the latter depends crucially on the proper choice of the α-parameter. However, the α-parameterization generally fails in stellar systems with ξ ≳ 0.3, particularly in the Class 0 and Class I phases of stellar evolution, yielding too small stellar masses and too large disk-to-star mass ratios. In addition, the time-averaged mass accretion rates onto the star are underestimated in the early disk evolution and greatly overestimated in the late evolution. The failure of the α-parameterization in the case of large ξ is caused by the growing strength of low-order spiral modes in massive disks. Only in the late Class II phase, when the magnitude of the spiral modes diminishes and mode-to-mode interaction ensues, may the effective viscosity be used to simulate the effect of GI in stellar systems with ξ ≳ 0.3.
A simple modification of the effective viscosity that takes into account disk fragmentation can somewhat improve the performance of α-models in the case of large ξ and even approximately reproduce the mass accretion burst phenomenon, the latter being a signature of the early gravitationally unstable stage of stellar evolution [Vorobyov, E.I., Basu, S., 2006. ApJ 650, 956]. However, further numerical experiments are needed to explore this issue.
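For orientation, an effective α-viscosity of the Shakura-Sunyaev type can be written as ν = α c_s H, with H = c_s/Ω the scale height of a Keplerian disk; the parameterizations compared in the abstract differ chiefly in how α itself is prescribed. The numerical values below are illustrative, not taken from the paper.

```python
import numpy as np

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]

def alpha_viscosity(alpha, c_s, r, m_star=M_SUN):
    """Effective viscosity nu = alpha * c_s * H for a Keplerian disk,
    with scale height H = c_s / Omega. alpha encodes the strength of
    GI-driven transport in the alpha-parameterized models."""
    omega = np.sqrt(G * m_star / r**3)  # Keplerian angular velocity
    H = c_s / omega                     # vertical scale height
    return alpha * c_s * H

# Illustrative numbers: alpha = 0.01, sound speed 0.5 km/s at 50 au.
nu = alpha_viscosity(0.01, 500.0, 50 * 1.496e11)
```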
Meliga, Stefano C; Coffey, Jacob W; Crichton, Michael L; Flaim, Christopher; Veidt, Martin; Kendall, Mark A F
2017-01-15
In-depth understanding of skin elastic and rupture behavior is fundamental to enable next-generation biomedical devices to directly access areas rich in cells and biomolecules. However, the paucity of skin mechanical characterization and the lack of established fracture models limit their rational design. We present an experimental and numerical study of skin mechanics during dynamic interaction with individual and arrays of micro-penetrators. Initially, micro-indentation of individual skin strata revealed hyperelastic moduli were dramatically rate-dependent, enabling extrapolation of stiffness properties at high velocity regimes (>1 m s⁻¹). A layered finite-element model satisfactorily predicted the penetration of micro-penetrators using characteristic fracture energies (∼10 pJ μm⁻²) significantly lower than previously reported (≫100 pJ μm⁻²). Interestingly, with our standard application conditions (∼2 m s⁻¹, 35 g piston mass), ∼95% of the application kinetic energy was transferred to the backing support rather than the skin (∼5%; murine ear model). At higher velocities (∼10 m s⁻¹) strain energy accumulated in the top skin layers, initiating fracture before stress waves transmitted deformation to the backing material, increasing energy transfer efficiency to 55%. Thus, the tools developed provide guidelines to rationally engineer skin penetrators to increase depth targeting consistency and payload delivery across patients whilst minimizing penetration energy to control skin inflammation, tolerability and acceptability. The mechanics of skin penetration by dynamically-applied microscopic tips is investigated using a combined experimental-computational approach. A FE model of skin is parameterized using indentation tests and a ductile-failure implementation validated against penetration assays.
The simulations shed light on skin elastic and fracture properties, and elucidate the interaction with microprojection arrays for vaccine delivery, allowing rational design of next-generation devices.
NASA Astrophysics Data System (ADS)
Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.
2014-06-01
A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is critical in order to accurately simulate the ice nucleation processes in cirrus clouds. The surface-scaled ice nucleation efficiencies of hematite particles, inferred by ns, were derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water subsaturated conditions that were realized by continuously changing temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T colder than -60 °C revealed that higher RHice was necessary to maintain constant ns, whereas T may have played a significant role in ice nucleation at T warmer than -50 °C. We implemented the new ns parameterizations into two cloud models to investigate their sensitivity and compare them with existing ice nucleation schemes towards simulating cirrus cloud properties. Our results show that the new AIDA-based parameterizations lead to an order of magnitude higher ice crystal concentrations and inhibition of homogeneous nucleation in colder temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties such as cloud longevity and initiation when compared to previous parameterizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiranuma, Naruki; Paukert, Marco; Steinke, Isabelle
2014-12-10
A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 °C to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is critical in order to accurately simulate the ice nucleation processes in cirrus clouds. The surface-scaled ice nucleation efficiencies of hematite particles, inferred by ns, were derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water subsaturated conditions that were realized by continuously changing temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T colder than -60 °C revealed that higher RHice was necessary to maintain constant ns, whereas T may have played a significant role in ice nucleation at T warmer than -50 °C. We implemented the new ns parameterizations into two cloud models to investigate their sensitivity and compare them with existing ice nucleation schemes towards simulating cirrus cloud properties. Our results show that the new AIDA-based parameterizations lead to an order of magnitude higher ice crystal concentrations and inhibition of homogeneous nucleation in colder temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties such as cloud longevity and initiation when compared to previous parameterizations.
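The surface-scaled efficiency ns enters cloud models through the standard ice-active-site relation, in which the frozen fraction of an aerosol population is 1 - exp(-ns A) for particle surface area A. The sketch below uses that general framework with illustrative numbers; the actual ns(T, RHice) fits derived from the AIDA data are not reproduced here.

```python
import numpy as np

def frozen_fraction(n_s, surface_area):
    """Ice-active surface site density framework: fraction of
    particles nucleating ice is 1 - exp(-n_s * A), where n_s [m^-2]
    is the surface-scaled nucleation efficiency and A [m^2] is the
    surface area per particle."""
    return 1.0 - np.exp(-n_s * surface_area)

# Hypothetical values for sub-micron hematite (A ~ 1e-13 m^2), with
# n_s spanning the range typically retrieved from chamber data:
fractions = [frozen_fraction(n_s, 1e-13) for n_s in (1e10, 1e12, 1e14)]
```

An order-of-magnitude shift in ns translates directly into the order-of-magnitude differences in ice crystal number that the cloud simulations report.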
NASA Astrophysics Data System (ADS)
Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.
2017-12-01
At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity. This homogeneity assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated. Applying a one-dimensional PBL parameterization to high-resolution mesoscale simulations in complex terrain could result in significant error. For high-resolution mesoscale simulations of flows in complex terrain, we have therefore developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982). Our implementation in WRF uses a pure algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015 to 2017. We focus on selected cases when physical phenomena of significance for wind energy applications such as mountain waves, topographic wakes, and gap flows were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 × 3000 × 200 grid cells in the zonal, meridional, and vertical directions, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients.
The presentation will highlight the advantages of the 3D PBL scheme in regions of complex terrain.
Parameterizing time in electronic health record studies.
Hripcsak, George; Albers, David J; Perotte, Adler
2015-07-01
Fields like nonlinear physics offer methods for analyzing time series, but many methods require that the time series be stationary, i.e., show no change in properties over time. Medicine is far from stationary, but the challenge may be ameliorated by reparameterizing time, because clinicians tend to measure patients more frequently when they are ill and their values are more likely to vary. We compared time parameterizations, measuring variability of rate of change and magnitude of change, and looking for homogeneity of bins of temporal separation between pairs of time points. We studied four common laboratory tests drawn from 25 years of electronic health records on 4 million patients. We found that sequence time - that is, simply counting the number of measurements from some start - produced more stationary time series, better explained the variation in values, and had more homogeneous bins than either traditional clock time or a recently proposed intermediate parameterization. Sequence time also produced more accurate predictions in a single Gaussian process model experiment. Of the three parameterizations, sequence time appeared to produce the most stationary series, possibly because clinicians adjust their sampling to the acuity of the patient. Parameterizing by sequence time may be applicable to association and clustering experiments on electronic health record data. A limitation of this study is that the laboratory data were derived from only one institution. Sequence time appears to be an important potential parameterization. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
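The intuition behind sequence time can be reproduced with a toy simulation: if a lab value is sampled densely exactly when it changes fast, then re-indexing by measurement count damps the variability of the step-to-step changes. Everything below is synthetic and only illustrates the mechanism, not the paper's data or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# A lab value measured sparsely in general but densely during an
# acute episode (clinicians sample sicker patients more often).
t_sparse = np.sort(rng.uniform(0.0, 100.0, 20))
t_dense = np.sort(rng.uniform(40.0, 50.0, 80))     # acute episode
t = np.sort(np.concatenate([t_sparse, t_dense]))
value = 5.0 + 3.0 * np.exp(-((t - 45.0) / 3.0) ** 2)  # spike when ill

# Rate of change per unit clock time vs per measurement step
# ("sequence time": the index of the measurement, so dt == 1).
d_clock = np.diff(value) / np.diff(t)
d_seq = np.diff(value)

# Sequence time damps the variability of the rate of change because
# the sampling density adapts to the acuity of the patient.
var_ratio = np.var(d_seq) / np.var(d_clock)
```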
Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan
2012-05-15
Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features, are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation.
Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2× over the version that used default optimizations but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.
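The skeleton of a parameterized micro-benchmark is small: enumerate a handful of implementation variants over a reduced parameter space, time each, and record which choice the hardware favors. The Python sketch below stands in for the GPU kernel variants of the paper; the two array-traversal variants are hypothetical placeholders for, e.g., differing memory-access patterns.

```python
import time
import numpy as np

def microbenchmark(fn, *args, repeats=5):
    """Return the best wall-clock time over several runs of fn(*args)."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

# Two implementations of the same reduction, standing in for
# alternative kernel variants with different memory-access patterns.
a = np.random.default_rng(1).random((512, 512))
variants = {
    "row_sum_axis1": lambda x: x.sum(axis=1),
    "col_sum_axis0": lambda x: x.T.sum(axis=0),
}

# Explore the small variant space once, then reuse the winner for all
# kernel instantiations sharing the pattern -- instead of exhaustively
# auto-tuning every full-size kernel.
timings = {name: microbenchmark(fn, a) for name, fn in variants.items()}
best_variant = min(timings, key=timings.get)
```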
Wave modeling for the Beaufort and Chukchi Seas
NASA Astrophysics Data System (ADS)
Rogers, W.; Thomson, J.; Shen, H. H.; Posey, P. G.; Hebert, D. A.
2016-02-01
Authors: W. Erick Rogers (1), Jim Thomson (2), Hayley Shen (3), Pamela Posey (1), David Hebert (1); (1) Naval Research Laboratory, Stennis Space Center, Mississippi, USA; (2) Applied Physics Laboratory, University of Washington, Seattle, Washington, USA; (3) Clarkson University, Potsdam, New York, USA. Abstract: In this presentation, we will discuss the development and application of numerical models for prediction of wind-generated surface gravity waves to the Arctic Ocean, and specifically the Beaufort and Chukchi Seas, for which the Office of Naval Research (ONR) has supported two major field campaigns in 2014 and 2015. The modeling platform is the spectral wave model WAVEWATCH III® (WW3). We will begin by reviewing progress with the model numerics in 2007 and 2008, which permits efficient application at high latitudes. Then, we will discuss more recent progress (2012 to 2015) adding new physics to WW3 for ice effects. The latter include two parameterizations for dissipation by turbulence at the ice/water interface, and a more complex parameterization which treats the ice as a viscoelastic fluid. With these new physics, the primary challenge is to find observational data suitable for calibration of the parameterizations, and there are concerns about the validity of applying any calibration to the wide variety of ice types that exist in the Arctic (or Southern Ocean). Quality of input is another major challenge, for which some recent progress has been made (at least in the context of ice concentration and ice edge) with data-assimilative ice modeling at NRL. We will discuss our recent work to invert for dissipation rate using data from a 2012 mooring in the Beaufort Sea, how the results vary by season (ice retreat vs. advance), and what this tells us in the context of those complex physical parameterizations used by the model.
We will summarize plans for further development of the model, such as adding scattering by floes, through collaboration with IFREMER (France), and improving on the simple "proportional scaling" treatment of the open water source functions in presence of partial ice cover. Finally, we will discuss lessons learned for wave modeling from the autumn 2015 R/V Sikuliaq cruise supported by ONR.
Development of a Global Multilayered Cloud Retrieval System
NASA Technical Reports Server (NTRS)
Huang, J.; Minnis, P.; Lin, B.; Yi, Y.; Ayers, J. K.; Khaiyer, M. M.; Arduini, R.; Fan, T.-F
2004-01-01
A more rigorous multilayered cloud retrieval system (MCRS) has been developed to improve the determination of high cloud properties in multilayered clouds. The MCRS attempts a more realistic interpretation of the radiance field than earlier methods because it explicitly resolves the radiative transfer that would produce the observed radiances. A two-layer cloud model was used to simulate multilayered cloud radiative characteristics. Despite the use of a simplified two-layer cloud reflectance parameterization, the MCRS clearly produced a more accurate retrieval of ice water path than the simple differencing techniques used in the past. More satellite data and ground observations have to be used to test the MCRS. The MCRS methods are quite appropriate for interpreting the radiances when the high cloud has a relatively large optical depth (τ_I > 2). For thinner ice clouds, a more accurate retrieval might be possible using infrared methods. Selection of an ice cloud retrieval and a variety of other issues must be explored before a complete global application of this technique can be implemented. Nevertheless, the initial results look promising.
On the remote sensing of cloud properties from satellite infrared sounder data
NASA Technical Reports Server (NTRS)
Yeh, H. Y. M.
1984-01-01
A method for remote sensing of cloud parameters using infrared sounder data has been developed on the basis of the parameterized infrared transfer equation applicable to cloudy atmospheres. The method is utilized for the retrieval of cloud height, amount, and emissivity in the 11 μm region. Numerical analyses and retrieval experiments have been carried out by utilizing synthetic sounder data for the theoretical study. The sensitivity of the numerical procedures to measurement and instrument errors is also examined. The retrieved results are physically discussed and numerically compared with the model atmospheres. Comparisons reveal that the recovered cloud parameters agree reasonably well with the pre-assumed values. However, for cases when relatively thin clouds and/or small cloud fractional cover within a field of view are present, the recovered cloud parameters show considerable fluctuations. Experiments on the proposed algorithm are carried out utilizing High Resolution Infrared Sounder (HIRS/2) data from NOAA 6 and TIROS-N. Results of the experiments show reasonably good agreement with surface reports and GOES satellite images.
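The core of such a parameterized transfer equation for a partly cloudy field of view is a weighted sum of the clear-column radiance and the radiance of an opaque cloud at the retrieved height, with weight Nε (cloud fraction times emissivity); inverting it gives the effective cloud amount. The sketch below uses that standard relation with hypothetical radiance values, not the paper's retrieval chain.

```python
def cloudy_radiance(n_eps, i_clear, i_black_cloud):
    """Parameterized 11-um radiance for a partly cloudy field of view:
    I = (1 - N*eps) * I_clear + N*eps * I_cloud, where N*eps is the
    effective cloud amount (fraction times emissivity)."""
    return (1.0 - n_eps) * i_clear + n_eps * i_black_cloud

def retrieve_n_eps(i_obs, i_clear, i_black_cloud):
    """Invert the relation above for the effective cloud amount."""
    return (i_clear - i_obs) / (i_clear - i_black_cloud)

# Hypothetical radiances: clear sky 95, opaque cloud at the assumed
# height 40, observed 73 (arbitrary but consistent units).
n_eps = retrieve_n_eps(73.0, 95.0, 40.0)
```

The sensitivity noted in the abstract is visible here: when the cloud is thin or the fractional cover small, i_obs approaches i_clear and the numerator becomes the small difference of two noisy measurements, so the retrieval fluctuates.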
Statistics of surface divergence and their relation to air-water gas transfer velocity
NASA Astrophysics Data System (ADS)
Asher, William E.; Liang, Hanzhuang; Zappa, Christopher J.; Loewen, Mark R.; Mukto, Moniz A.; Litchendorf, Trina M.; Jessup, Andrew T.
2012-05-01
Air-sea gas fluxes are generally defined in terms of the air/water concentration difference of the gas and the gas transfer velocity, kL. Because it is difficult to measure kL in the ocean, it is often parameterized using more easily measured physical properties. Surface divergence theory suggests that infrared (IR) images of the water surface, which contain information concerning the movement of water very near the air-water interface, might be used to estimate kL. Therefore, a series of experiments testing whether IR imagery could provide a convenient means for estimating the surface divergence applicable to air-sea exchange were conducted in a synthetic jet array tank embedded in a wind tunnel. Gas transfer velocities were measured as a function of wind stress and mechanically generated turbulence; laser-induced fluorescence was used to measure the concentration of carbon dioxide in the top 300 μm of the water surface; IR imagery was used to measure the spatial and temporal distribution of the aqueous skin temperature; and particle image velocimetry (PIV) was used to measure turbulence at a depth of 1 cm below the air-water interface. It is shown that an estimate of the surface divergence for both wind-shear driven turbulence and mechanically generated turbulence can be derived from the surface skin temperature. The estimates derived from the IR images are compared to velocity field divergences measured by the PIV and to independent estimates of the divergence made using the laser-induced fluorescence data. Divergence is shown to scale with kL values measured using gaseous tracers as predicted by conceptual models for both wind-driven and mechanically generated turbulence.
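The conceptual model referred to above predicts that kL scales with the square root of molecular diffusivity times the rms surface divergence. A minimal sketch of that scaling follows; the order-one proportionality constant c and the divergence value are assumptions for illustration, not numbers from the study.

```python
import numpy as np

def transfer_velocity_from_divergence(diffusivity, beta_rms, c=0.5):
    """Surface-divergence model: k_L = c * sqrt(D * beta'), with D the
    molecular diffusivity of the gas [m^2/s] and beta' the rms surface
    divergence [1/s]. c is an empirical, order-one constant (assumed
    here, not taken from the paper)."""
    return c * np.sqrt(diffusivity * beta_rms)

# CO2 in water: D ~ 1.9e-9 m^2/s; an rms divergence of ~1 s^-1 is an
# illustrative value for moderately forced conditions.
k_L = transfer_velocity_from_divergence(1.9e-9, 1.0)
k_L_cm_per_hr = k_L * 100 * 3600   # common reporting units
```

The square-root dependence is why a surface divergence field estimated from IR skin-temperature imagery can substitute for a direct kL measurement.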
Rollover of Apparent Wave Attenuation in Ice Covered Seas
NASA Astrophysics Data System (ADS)
Li, Jingkai; Kohout, Alison L.; Doble, Martin J.; Wadhams, Peter; Guan, Changlong; Shen, Hayley H.
2017-11-01
Wave attenuation from two field experiments in the ice-covered Southern Ocean is examined. Instead of monotonically increasing with shorter waves, the measured apparent attenuation rate peaks at an intermediate wave period. This "rollover" phenomenon has been postulated as the result of wind input and nonlinear energy transfer between wave frequencies. Using WAVEWATCH III®, we first validate the model results with available buoy data, then use the model data to analyze the apparent wave attenuation. With the choice of source parameterizations used in this study, it is shown that rollover of the apparent attenuation exists when wind input and nonlinear transfer are present, independent of the different wave attenuation models used. The period of rollover increases with increasing distance between buoys. Furthermore, the apparent attenuation for shorter waves drops with increasing separation between buoys or increasing wind input. These phenomena are direct consequences of the wind input and nonlinear energy transfer, which offset the damping caused by the intervening ice.
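The rollover mechanism can be illustrated with a toy spectrum: true attenuation that grows monotonically toward short waves, plus a wind-maintained energy floor at the downwave buoy, yields an apparent attenuation that peaks at an intermediate frequency. All numbers below are synthetic, chosen only to make the effect visible; they are not the study's model settings.

```python
import numpy as np

f = np.linspace(0.05, 0.5, 200)       # wave frequency [Hz]
e_up = f ** -4.0                       # upwave spectral tail
alpha = (f / 0.2) ** 4 / 5.0e4        # true attenuation [1/m], monotonic
dx = 5.0e4                             # buoy separation [m]
e_floor = 1.0                          # wind-maintained energy floor

# Downwave spectrum: attenuated upwave energy plus the wind-input
# floor that regeneration maintains between the buoys.
e_down = e_up * np.exp(-alpha * dx) + e_floor

# Apparent attenuation inferred from the buoy pair.
alpha_apparent = -np.log(e_down / e_up) / dx
i_peak = int(np.argmax(alpha_apparent))
```

Because the apparent rate saturates once the downwave energy hits the wind-maintained floor, its maximum sits at an interior frequency even though the true attenuation never stops increasing, which is the "rollover" the buoy data show.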
NASA Astrophysics Data System (ADS)
Thiriet, M.; Plesa, A. C.; Breuer, D.; Michaut, C.
2017-12-01
To model the thermal evolution of terrestrial planets, 1D parameterized models are often used, as 2D or 3D mantle convection codes are very time-consuming. In these parameterized models, scaling laws that describe the convective heat transfer rate as a function of the convective parameters are derived from 2-3D steady state convection models. However, so far there has been no comprehensive comparison of whether they can be applied to model the thermal evolution of a cooling planet. Here we compare 2D and 3D thermal evolution models in the stagnant lid regime with 1D parameterized models and use parameters representing the cooling of the Martian mantle. For the 1D parameterized models, we use the approach of Grasset and Parmentier (1998) and treat the stagnant lid and the convecting layer separately. In the convecting layer, the scaling law for a fluid with constant viscosity is valid with Nu ∝ (Ra/Rac)^β, with Rac the critical Rayleigh number at which the thermal boundary layers (TBL) - top or bottom - destabilize. β varies between 1/3 and 1/4 depending on the heating mode, and previous studies have proposed intermediate values of β ≈ 0.28-0.32 according to their model set-up. The base of the stagnant lid is defined by the temperature at which the mantle viscosity has increased by a factor of 10; it thus depends on the rate of viscosity change with temperature multiplied by a factor Θ, whose value appears to vary depending on the geometry and convection conditions. In applying Monte Carlo simulations, we search for the best fit to temperature profiles and heat flux using three free parameters, i.e. β of the upper TBL, Θ, and the Rac of the lower TBL. We find that, depending on the definition of the stagnant lid thickness in the 2-3D models, several combinations of β and Θ for the upper TBL can retrieve suitable fits. E.g. combinations of β = 0.329 and Θ = 2.19 but also β = 0.295 and Θ = 2.97 are possible; Rac of the lower TBL is 10 for all best fits.
The results show that although the heating conditions change from bottom heating to mainly internal heating as a function of time, the thermal evolution can be represented by a single set of parameters.
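The scaling law and the quoted best-fit parameter pairs can be sketched numerically. This is a hedged illustration only: the (β, Θ) pairs are the best-fit combinations quoted in the abstract, while the Rayleigh numbers below are arbitrary illustrative values, not values from the study.

```python
# Sketch of the Nu-Ra scaling law used in the 1D parameterized models:
#   Nu = (Ra / Ra_c)**beta
# The (beta, Theta) pairs are the quoted best fits; Ra and Ra_c here
# are illustrative assumptions.

def nusselt(ra, ra_crit, beta):
    """Convective heat-transfer efficiency from the Rayleigh number."""
    return (ra / ra_crit) ** beta

fits = [(0.329, 2.19), (0.295, 2.97)]   # (beta, Theta) for the upper TBL

ra = 1e7          # illustrative mantle Rayleigh number
ra_crit = 450.0   # illustrative critical Rayleigh number

for beta, theta in fits:
    print(f"beta={beta}, Theta={theta}: Nu = {nusselt(ra, ra_crit, beta):.1f}")
```

Because the base (Ra/Rac) exceeds one, the larger exponent β = 0.329 yields the higher Nusselt number, which is why the fits trade a larger β against a smaller Θ.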
A Simple Lightning Assimilation Technique For Improving Retrospective WRF Simulations
Convective rainfall is often a large source of error in retrospective modeling applications. In particular, positive rainfall biases commonly exist during summer months due to overactive convective parameterizations. In this study, lightning assimilation was applied in the Kain...
Temporal variability of air-sea CO2 exchange in a low-emission estuary
NASA Astrophysics Data System (ADS)
Mørk, Eva Thorborg; Sejr, Mikael Kristian; Stæhr, Peter Anton; Sørensen, Lise Lotte
2016-07-01
Whether global estimates of air-sea CO2 exchange in estuarine systems capture the relevant temporal variability requires further study; the temporal variability of bulk-parameterized and directly measured CO2 fluxes was therefore investigated in the Danish estuary Roskilde Fjord. The air-sea CO2 fluxes showed large temporal variability across seasons and between days, and more than 30% of the net CO2 emission in 2013 resulted from two large fall and winter storms. The diurnal variability of ΔpCO2 was up to 400 during summer, changing the estuary from a source to a sink of CO2 within the day. Across seasons the system was suggested to change from a sink of atmospheric CO2 during spring to near neutral during summer and later to a source of atmospheric CO2 during fall. Results indicated that Roskilde Fjord was an annual low-emission estuary, with an estimated bulk-parameterized release of 3.9 ± 8.7 mol CO2 m-2 y-1 during 2012-2013. It was suggested that the production-respiration balance leading to the low annual emission in Roskilde Fjord was caused by the shallow depth, long residence time and high water quality of the estuary. In the data analysis the eddy covariance CO2 flux samples were filtered according to the H2O-CO2 cross-sensitivity assessment suggested by Landwehr et al. (2014). This filtering reduced episodes of contradicting directions between measured and bulk-parameterized air-sea CO2 exchanges and changed the net air-sea CO2 exchange from an uptake to a release. The CO2 gas transfer velocity was calculated from directly measured CO2 fluxes and ΔpCO2 and agreed with previous observations and parameterizations.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.
2009-01-01
Increases in computational resources have allowed operational forecast centers to pursue experimental, high resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. The combination of reliable cloud microphysics and radar reflectivity may constrain radiative transfer models used in satellite simulators during future missions, including EarthCARE and the NASA Global Precipitation Measurement. Aircraft, surface and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) model. Widespread snowfall developed across the region on January 22, 2007, forced by the passing of a midlatitude cyclone, and was observed by the dual-polarimetric C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present.
Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
On the design of decoupling controllers for advanced rotorcraft in the hover case
NASA Technical Reports Server (NTRS)
Fan, M. K. H.; Tits, A.; Barlow, J.; Tsing, N. K.; Tischler, M.; Takahashi, M.
1991-01-01
A methodology for design of helicopter control systems is proposed that can account for various types of concurrent specifications: stability, decoupling between longitudinal and lateral motions, handling qualities, and physical limitations of the swashplate motions. This is achieved by synergistic use of analytical techniques (Q-parameterization of all stabilizing controllers, transfer function interpolation) and advanced numerical optimization techniques. The methodology is used to design a controller for the UH-60 helicopter in hover. Good results are achieved for decoupling and handling quality specifications.
Observations and Thermochemical Calculations for Hot-Jupiter Atmospheres
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver; Cubillos, Patricio; Stemm, Madison
2015-01-01
I present Spitzer eclipse observations for WASP-14b and WASP-43b, an open source tool for thermochemical equilibrium calculations, and components of an open source tool for atmospheric parameter retrieval from spectroscopic data. WASP-14b is a planet that receives high irradiation from its host star, yet, although theory does not predict it, the planet hosts a thermal inversion. The WASP-43b eclipses have signal-to-noise ratios of ~25, one of the largest among exoplanets. To assess these planets' atmospheric composition and thermal structure, we developed an open-source Bayesian Atmospheric Radiative Transfer (BART) code. My dissertation tasks included developing a Thermochemical Equilibrium Abundances (TEA) code, implementing the eclipse geometry calculation in BART's radiative transfer module, and generating parameterized pressure and temperature profiles so the radiative-transfer module can be driven by the statistical module. To initialize the radiative-transfer calculation in BART, TEA calculates the equilibrium abundances of gaseous molecular species at a given temperature and pressure. It uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code.
TEA, written in Python, is modular, documented, and available to the community via the open-source development site GitHub.com. Support for this work was provided by NASA Headquarters under the NASA Earth and Space Science Fellowship Program, grant NNX12AL83H, by NASA through an award issued by JPL/Caltech, and through the Science Mission Directorate's Planetary Atmospheres Program, grant NNX12AI69G.
An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers
Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.
2016-01-01
Here we present a new empirical method to estimate the stress-coupling length (SCL) for marine-terminating glaciers using high-resolution observations. We use the empirically-determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.
Spatio-temporal Eigenvector Filtering: Application on Bioenergy Crop Impacts
NASA Astrophysics Data System (ADS)
Wang, M.; Kamarianakis, Y.; Georgescu, M.
2017-12-01
A suite of 10-year ensemble-based simulations was conducted to investigate the hydroclimatic impacts due to large-scale deployment of perennial bioenergy crops across the continental United States. Given the large size of the simulated dataset (about 60 TB), traditional hierarchical spatio-temporal statistical modelling cannot be implemented for the evaluation of physics parameterizations and biofuel impacts. In this work, we propose a filtering algorithm that takes into account the spatio-temporal autocorrelation structure of the data while avoiding spatial confounding. This method is used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations and observational datasets. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.
Effect of gravity waves on the North Atlantic circulation
NASA Astrophysics Data System (ADS)
Eden, Carsten
2017-04-01
The recently proposed IDEMIX (Internal wave Dissipation, Energy and MIXing) parameterisation for the effect of gravity waves offers the possibility to construct consistent ocean models with a closed energy cycle. This means that the energy available for interior mixing in the ocean is controlled only by external energy input from the atmosphere and the tidal system and by internal exchanges. A central difficulty is the unknown fate of meso-scale eddy energy. In different scenarios for that eddy dissipation, the parameterized internal wave field provides between 2 and 3 TW for interior mixing from the total external energy input of about 4 TW, such that a transfer of between 0.3 and 0.4 TW into mean potential energy contributes to driving the large-scale circulation in the model. The impact of the different mixing on the meridional overturning in the North Atlantic is discussed and compared to hydrographic observations. Furthermore, the direct energy exchange of the wave field with the geostrophic flow is parameterized in extended IDEMIX versions, and the sensitivity of the North Atlantic circulation to this gravity wave drag is discussed.
Haiduke, Roberto Luiz A; Bartlett, Rodney J
2018-05-14
Some of the exact conditions provided by the correlated orbital theory are employed to propose new non-empirical parameterizations for exchange-correlation functionals from Density Functional Theory (DFT). This reparameterization process is based on range-separated functionals with 100% exact exchange for long-range interelectronic interactions. The functionals developed here, CAM-QTP-02 and LC-QTP, show mitigated self-interaction error, correctly predict vertical ionization potentials as the negative of eigenvalues for occupied orbitals, and provide nice excitation energies, even for challenging charge-transfer excited states. Moreover, some improvements are observed for reaction barrier heights with respect to the other functionals belonging to the quantum theory project (QTP) family. Finally, the most important achievement of these new functionals is an excellent description of vertical electron affinities (EAs) of atoms and molecules as the negative of appropriate virtual orbital eigenvalues. In this case, the mean absolute deviations for EAs in molecules are smaller than 0.10 eV, showing that physical interpretation can indeed be ascribed to some unoccupied orbitals from DFT.
Reflectance from images: a model-based approach for human faces.
Fuchs, Martin; Blanz, Volker; Lensch, Hendrik; Seidel, Hans-Peter
2005-01-01
In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape and establishes point-to-point correspondence across images taken from different viewpoints and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and lighting conditions.
Biases in field measurements of ice nuclei concentrations
NASA Astrophysics Data System (ADS)
Garimella, S.; Voigtländer, J.; Kulkarni, G.; Stratmann, F.; Cziczo, D. J.
2015-12-01
Ice nuclei (IN) play an important role in the climate system by influencing cloud properties, precipitation, and radiative transfer. Despite their importance, there are significant uncertainties in estimating IN concentrations because of the complexities of atmospheric ice nucleation processes. Field measurements of IN concentrations with Continuous Flow Diffusion Chamber (CFDC) IN counters have been vital to constrain IN number concentrations and have led to various parameterizations of IN number vs. temperature and particle concentration. These parameterizations are used in many global climate models, which are very sensitive to the treatment of cloud microphysics. However, due to non-idealities in CFDC behavior, especially at high relative humidity, many of these measurements are likely biased too low. In this study, the extent of this low bias is examined with laboratory experiments at a variety of instrument conditions using the SPectrometer for Ice Nucleation, a commercially-available CFDC-style chamber. These laboratory results are compared to theoretical calculations and computational fluid dynamics models to map the variability of this bias as a function of chamber temperature and relative humidity.
Betatron motion with coupling of horizontal and vertical degrees of freedom
Lebedev, V. A.; Bogacz, S. A.
2010-10-21
Presently, there are two most frequently used parameterizations of linear x-y coupled motion in accelerator physics: the Edwards-Teng and Mais-Ripken parameterizations. The article is devoted to an analysis of the close relationship between the two representations, thus adding clarity to their physical meaning. It also discusses the relationship between the eigen-vectors, the beta-functions, second-order moments and the bilinear form representing the particle ellipsoid in the 4D phase space. It then considers a further development of the Mais-Ripken parameterization in which the particle motion is described by 10 parameters: four beta-functions, four alpha-functions and two betatron phase advances. In comparison with the Edwards-Teng parameterization, the chosen parameterization has the advantage that it works equally well for analysis of coupled betatron motion in circular accelerators and in transfer lines. In addition, the considered relationship between second-order moments, eigen-vectors and beta-functions can be useful in interpreting tracking results and experimental data. As an example, the developed formalism is applied to the FNAL electron cooler and Derbenev's vertex-to-plane adapter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mapping Global Ocean Surface Albedo from Satellite Observations: Models, Algorithms, and Datasets
NASA Astrophysics Data System (ADS)
Li, X.; Fan, X.; Yan, H.; Li, A.; Wang, M.; Qu, Y.
2018-04-01
Ocean surface albedo (OSA) is one of the important parameters in the surface radiation budget (SRB). It is usually considered a controlling factor of the heat exchange between the atmosphere and ocean. The temporal and spatial dynamics of OSA determine the energy absorption of upper-level ocean water and influence oceanic currents, atmospheric circulation, and the transport of material and energy in the hydrosphere. Therefore, various parameterizations and models have been developed for describing the dynamics of OSA. However, it has been demonstrated that the currently available OSA datasets cannot fulfill the requirements of global climate change studies. In this study, we present a literature review on mapping global OSA from satellite observations. The models (parameterizations, the coupled ocean-atmosphere radiative transfer (COART) model, and the three-component ocean water albedo (TCOWA) model), algorithms (the estimation method based on reanalysis data, and the direct-estimation algorithm), and datasets (the cloud, albedo and radiation (CLARA) surface albedo product, the dataset derived by the TCOWA model, and the global land surface satellite (GLASS) phase-2 surface broadband albedo product) of OSA are discussed separately.
Observations of the directional distribution of the wind energy input function over swell waves
NASA Astrophysics Data System (ADS)
Shabani, Behnam; Babanin, Alex V.; Baldock, Tom E.
2016-02-01
Field measurements of wind stress over shallow water swell traveling in different directions relative to the wind are presented. The directional distribution of the measured stresses is used to confirm the previously proposed but unverified directional distribution of the wind energy input function. The observed wind energy input function is found to follow a much narrower distribution (β ∝ cos^3.6 θ) than the Plant (1982) cosine distribution. The observation of negative stress angles at large wind-wave angles, however, indicates that the onset of negative wind shearing occurs at about θ ≈ 50°, and supports the use of the Snyder et al. (1981) directional distribution. Taking into account the reverse momentum transfer from swell to the wind, Snyder's proposed parameterization is found to perform exceptionally well in explaining the observed narrow directional distribution of the wind energy input function and in predicting the wind drag coefficients. The empirical coefficient (ɛ) in Snyder's parameterization is hypothesised to be a function of the wave shape parameter, with the ɛ value increasing as the wave shape changes between sinusoidal, sawtooth, and sharp-crested shoaling waves.
NASA Astrophysics Data System (ADS)
Leckler, F.; Hanafin, J. A.; Ardhuin, F.; Filipot, J.; Anguelova, M. D.; Moat, B. I.; Yelland, M.; Prytherch, J.
2012-12-01
Whitecaps are the main sink of wave energy. Although the exact processes are still unknown, it is clear that they play a significant role in momentum exchange between atmosphere and ocean, and also influence gas and aerosol exchange. Recently, modeling of whitecap properties was implemented in the spectral wave model WAVEWATCH III®. This modeling takes place in the context of the Oceanflux Greenhouse Gas project, to provide a climatology of breaking waves for gas transfer studies. We present here a validation study for two different wave breaking parameterizations implemented in the spectral wave model WAVEWATCH III®. The model parameterizations use different approaches related to the steepness of the carrying waves to estimate breaking wave probabilities. That of Ardhuin et al. (2010) is based on the hypothesis that breaking probabilities become significant when the saturation spectrum exceeds a threshold, and includes a modification to allow for greater breaking in the mean wave direction, to agree with observations. It also includes suppression of shorter waves by longer breaking waves. In the second (Filipot and Ardhuin, 2012), breaking probabilities are defined at different scales using wave steepness, then the breaking wave height distribution is integrated over all scales. We also propose an adaptation of the latter to make it self-consistent. The breaking probabilities parameterized by Filipot and Ardhuin (2012) are much larger for dominant waves than those from the other parameterization, and show better agreement with modeled statistics of breaking crest lengths measured during the FAIRS experiment. This stronger breaking also has an impact on the shorter waves due to the parameterization of short wave damping associated with large breakers, and results in a different distribution of the breaking crest lengths.
Converted to whitecap coverage using Reul and Chapron (2003), both parameterizations agree reasonably well with commonly-used empirical fits of whitecap coverage against wind speed (Monahan and Woolf, 1989) and with the global whitecap coverage of Anguelova and Webster (2006), derived from space-borne radiometry. This is mainly due to the fact that the breaking of larger waves in the parametrization by Filipot and Ardhuin (2012) is compensated for by the intense breaking of smaller waves in that of Ardhuin et al. (2010). Comparison with in situ data collected during research ship cruises in the North and South Atlantic (SEASAW, DOGEE and WAGES), and the Norwegian Sea (HiWASE) between 2006 and 2011 also shows good agreement. However, as large scale breakers produce a thicker foam layer, modeled mean foam thickness clearly depends on the scale of the breakers. Foam thickness is thus a more interesting parameter for calibrating and validating breaking wave parameterizations, as the differences in scale can be determined. With this in mind, we present the initial results of validation using an estimation of mean foam thickness using multiple radiometric bands from satellites SMOS and AMSR-E.
Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.
2017-12-01
The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show 1) estimated transition probabilities agree with simulated values and 2) using the SMM with estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
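The correlated random walk underlying an SMM can be sketched in a few lines. This is a generic two-class illustration, not the parameterization estimated in the study: the transition matrix, class velocities, and step length below are all made-up values.

```python
import numpy as np

# Minimal spatial Markov model step: each particle carries a velocity
# class, and after every fixed-length jump the next class is drawn from
# the row of the transition probability matrix for its current class.

rng = np.random.default_rng(0)

T = np.array([[0.8, 0.2],    # slow -> slow, slow -> fast
              [0.3, 0.7]])   # fast -> slow, fast -> fast
velocities = np.array([0.1, 1.0])   # representative class velocities
dx = 1.0                            # fixed spatial step of the walk

def smm_arrival_time(n_steps, state=0):
    """Travel time over n_steps correlated fixed-length jumps."""
    t = 0.0
    for _ in range(n_steps):
        t += dx / velocities[state]
        state = rng.choice(2, p=T[state])
    return t

times = [smm_arrival_time(50) for _ in range(200)]
print("mean arrival time:", round(float(np.mean(times)), 1))
```

Strong diagonal entries in T (persistence) stretch the arrival-time distribution into the heavy tails characteristic of anomalous transport; the estimation method described above works backwards from two measured BTCs to recover T.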
Building integral projection models: a user's guide.
Rees, Mark; Childs, Dylan Z; Ellner, Stephen P
2014-05-01
In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses. © 2014 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
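The kernel construction step described above is typically done by midpoint-rule discretization. The sketch below shows that numerical recipe with hypothetical placeholder vital-rate functions; it is not the Soay sheep model or any fitted IPM from the paper.

```python
import numpy as np

# Midpoint-rule discretization of an IPM kernel K(z', z) = s(z) g(z', z) + f(z', z),
# where s is survival, g the size-transition density, and f fecundity.
# All three functions below are illustrative placeholders.

def survival(z):                    # survival probability vs. size z
    return 1.0 / (1.0 + np.exp(-(z - 2.0)))

def growth(z1, z):                  # Gaussian size-transition density z -> z1
    mu = 0.5 + 0.9 * z
    return np.exp(-0.5 * ((z1 - mu) / 0.3) ** 2) / (0.3 * np.sqrt(2 * np.pi))

def fecundity(z1, z):               # offspring-size density times fertility
    return 0.2 * np.exp(-0.5 * ((z1 - 1.0) / 0.4) ** 2) / (0.4 * np.sqrt(2 * np.pi))

L, U, m = 0.0, 5.0, 100             # size range and number of mesh points
h = (U - L) / m
z = L + (np.arange(m) + 0.5) * h    # midpoints of the mesh cells
Z1, Z = np.meshgrid(z, z, indexing="ij")   # rows: destination size z'

K = h * (survival(Z) * growth(Z1, Z) + fecundity(Z1, Z))   # discretized kernel
lam = np.max(np.abs(np.linalg.eigvals(K)))                 # asymptotic growth rate
print("lambda =", round(float(lam), 3))
```

The dominant eigenvalue of the discretized kernel plays the same role as in a matrix projection model; diagnosing "eviction" (probability mass escaping the [L, U] size range) is one of the checks the guide emphasizes.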
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen
2011-08-16
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
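The spherical (Cholesky-based) parameterization of Pinheiro and Bates (1996) can be sketched generically. Angles parameterize the rows of a unit-row-norm lower-triangular factor, so the resulting matrix is guaranteed to be a valid correlation matrix, which is the "consistency by construction" property cited above. The specific cosine row-wise formula of the paper is not reproduced here; the angle values are arbitrary illustrations.

```python
import numpy as np

def correlation_from_angles(angles):
    """Build an n x n correlation matrix from n*(n-1)/2 angles in (0, pi).

    Each row of the lower-triangular factor L is a point on the unit
    sphere (products of sines and cosines), so C = L @ L.T has unit
    diagonal and is positive semi-definite by construction.
    """
    n = int((1 + np.sqrt(1 + 8 * len(angles))) / 2)
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    k = 0
    for i in range(1, n):
        partial = 1.0
        for j in range(i):
            L[i, j] = partial * np.cos(angles[k])
            partial *= np.sin(angles[k])
            k += 1
        L[i, i] = partial          # remainder keeps the row unit-norm
    return L @ L.T

# Three hypothetical hydrometeor species -> three angles:
C = correlation_from_angles(np.array([1.0, 0.7, 1.3]))
print(np.round(C, 3))
```

Any choice of angles yields a mutually consistent correlation matrix, which is exactly what direct element-wise prediction of correlations cannot guarantee.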
Methods for Improving Fine-Scale Applications of the WRF-CMAQ Modeling System
Presentation on the work in AMAD to improve fine-scale (e.g. 4km and 1km) WRF-CMAQ simulations. Includes iterative analysis, updated sea surface temperature and snow cover fields, and inclusion of impervious surface information (urban parameterization).
Measurement and partitioning of evapotranspiration for application to vadose zone studies
USDA-ARS?s Scientific Manuscript database
Partitioning evapotranspiration (ET) into its constituent components, soil evaporation (E) and plant transpiration (T), is important for vadose zone studies because E and T are often parameterized separately. However, partitioning ET is challenging, and many longstanding approaches have significant ...
Multi-sensor Improved Sea-Surface Temperature (MISST) for IOOS - Navy Component
2013-09-30
application and data fusion techniques. 2. Parameterization of IR and MW retrieval differences, with consideration of diurnal warming and cool-skin effects...associated retrieval confidence, standard deviation (STD), and diurnal warming estimates to the application user community in the new GDS 2.0 GHRSST...including coral reefs, ocean modeling in the Gulf of Mexico, improved lake temperatures, numerical data assimilation by ocean models, numerical
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
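The modeling setup can be illustrated with a minimal sketch: a linear space-for-time model fit to synthetic stand-ins for the SNOTEL predictors, evaluated with a non-random split that forces extrapolation in the inputs. All data and coefficients here are made up for illustration; the study's actual models and SNOTEL records differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the predictors: mean winter temperature (degC)
# and cumulative winter precipitation (mm), with a made-up "true" SWE relation.
n = 300
temp = rng.uniform(-10, 4, n)
precip = rng.uniform(100, 1500, n)
swe = np.maximum(0.0, 0.6 * precip / (1 + np.exp(temp)) + rng.normal(0, 30, n))

# Non-random split: hold out the warmest sites, so the model must
# extrapolate in its input variables (the hardest transfer case).
order = np.argsort(temp)
train, test = order[: int(0.8 * n)], order[int(0.8 * n):]

# Low-complexity linear model with one interaction term
X = np.column_stack([np.ones(n), temp, precip, temp * precip])
beta, *_ = np.linalg.lstsq(X[train], swe[train], rcond=None)
pred = X[test] @ beta
rmse = np.sqrt(np.mean((pred - swe[test]) ** 2))
```

Comparing this held-out RMSE across models of increasing polynomial order is one simple way to expose the complexity-transferability tradeoff the abstract describes.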
NASA Technical Reports Server (NTRS)
Kahle, A. B.; Alley, R. E.; Schieldge, J. P.
1984-01-01
The sensitivity of thermal inertia (TI) calculations to errors in the measurement or parameterization of a number of environmental factors is considered here. The factors include effects of radiative transfer in the atmosphere, surface albedo and emissivity, variations in surface turbulent heat flux density, cloud cover, vegetative cover, and topography. The error analysis is based upon data from the Heat Capacity Mapping Mission (HCMM) satellite for July 1978 at three separate test sites in the deserts of the western United States. Results show that typical errors in atmospheric radiative transfer, cloud cover, and vegetative cover can individually cause root-mean-square (RMS) errors of about 10 percent (with atmospheric effects sometimes as large as 30-40 percent) in HCMM-derived thermal inertia images of 20,000-200,000 pixels.
NASA Technical Reports Server (NTRS)
Molthan, A. L.; Haynes, J. A.; Case, J. L.; Jedlovec, G. L.; Lapenta, W. M.
2008-01-01
As computational power increases, operational forecast models are performing simulations with higher spatial resolution, allowing for the transition from sub-grid scale cloud parameterizations to an explicit forecast of cloud characteristics and precipitation through the use of single- or multi-moment bulk water microphysics schemes. Investments in space-borne and terrestrial remote sensing have produced the NASA CloudSat Cloud Profiling Radar and the NOAA National Weather Service NEXRAD system, each providing observations related to the bulk properties of clouds and precipitation through measurements of reflectivity. CloudSat and NEXRAD system radars observed light to moderate snowfall in association with a cold-season, midlatitude cyclone traversing the Central United States in February 2007. Such systems are responsible for widespread cloud cover and various types of precipitation, are of economic consequence, and pose a challenge to operational forecasters. This event is simulated with the Weather Research and Forecasting (WRF) model, utilizing the NASA Goddard Cumulus Ensemble microphysics scheme. Comparisons are made between WRF-simulated and observed reflectivity available from the CloudSat and NEXRAD systems. The application of CloudSat reflectivity is made possible through the QuickBeam radiative transfer model, applied with caution in light of single-scattering characteristics and spherical-target assumptions. Significant differences are noted between modeled and observed cloud profiles, based upon simulated reflectivity, and modifications to the single-moment scheme are tested through a supplemental WRF forecast that incorporates a temperature-dependent snow crystal size distribution.
A Solar Radiation Parameterization for Atmospheric Studies. Volume 15
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J. (Editor)
1999-01-01
The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes absorption by water vapor, O3, O2, CO2, clouds, and aerosols and scattering by clouds, aerosols, and gases. Depending upon the nature of the absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.
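The band-model idea (one absorption coefficient per UV/visible band, with band transmittance following Beer's law) can be sketched compactly. The band fluxes, coefficients, and ozone path below are illustrative placeholders, not CLIRAD-SW's actual values:

```python
import numpy as np

# Illustrative 8-band UV/visible calculation in the spirit of CLIRAD-SW:
# each band carries a single O3 absorption coefficient, and the direct-beam
# transmittance is exp(-k * u). All numbers are made-up placeholders.
band_flux = np.array([50., 40., 60., 120., 180., 200., 220., 150.])  # TOA flux per band, W m-2
k_o3 = np.array([8.0, 5.0, 2.0, 0.5, 0.05, 0.01, 0.02, 0.001])       # per (cm-atm)
u_o3 = 0.3  # ozone path (cm-atm), roughly a midlatitude column

transmittance = np.exp(-k_o3 * u_o3)         # Beer's law per band
surface_flux = np.sum(band_flux * transmittance)  # band-summed flux, W m-2
```

Scattering by clouds and aerosols (handled in CLIRAD-SW by the Delta-Eddington approximation) is omitted here; the sketch shows only the band-summed gaseous absorption.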
NASA Astrophysics Data System (ADS)
Freitas, S.; Grell, G. A.; Molod, A.
2017-12-01
We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale- and aerosol-awareness functionalities. Scale dependence for deep convection is implemented either through the method described by Arakawa et al. (2011) or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition among shallow, congestus, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over land. Also, a beta-pdf is now employed to represent the normalized mass flux profile. This opens up an additional avenue to apply stochasticity in the scheme.
Liu, Ping; Li, Guodong; Liu, Xinggao
2015-09-01
Control vector parameterization (CVP) is an important approach to engineering optimization of industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the differential equations embedded in the generated nonlinear programming (NLP) problem, limits its wide application. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve optimization efficiency for industrial dynamic processes; it employs costate gradient formulae together with a fast approximate scheme for solving the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The results show that the proposed approach saves at least 90% of the computation time in contrast to the traditional CVP method, revealing the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
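The basic CVP transcription (before any fast-CVP acceleration) can be sketched as follows: the control u(t) is discretized into piecewise-constant segments, turning the optimal control problem into a finite-dimensional NLP. The plant, cost functional, and segment count below are hypothetical, and a derivative-free solver stands in for the gradient-based machinery the paper's method uses:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# CVP in miniature: discretize u(t) on [0, T] into n_seg constant segments,
# then optimize the segment values. Each objective evaluation integrates
# the state ODE plus a running-cost quadrature, exactly the repeated
# ODE-solving that makes plain CVP expensive.
T, n_seg = 2.0, 8

def simulate(u_segments):
    def rhs(t, y):
        u = u_segments[min(int(t / T * n_seg), n_seg - 1)]
        x, _ = y
        return [-x + u, x ** 2 + 0.1 * u ** 2]  # state ODE and running cost
    sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0], max_step=T / 100)
    return sol.y[1, -1]  # accumulated cost at final time

baseline = simulate(np.zeros(n_seg))  # uncontrolled trajectory
res = minimize(simulate, np.zeros(n_seg), method="Nelder-Mead",
               options={"maxiter": 300})
```

Each NLP iteration re-integrates the ODE, which is the bottleneck the fast-CVP approach targets with its approximate integration scheme.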
Morozov, Andrew; Petrovskii, Sergei
2013-01-01
Understanding of complex trophic interactions in ecosystems requires correct descriptions of the rate at which predators consume a variety of different prey species. Field and laboratory data on multispecies communities are rarely sufficient and usually cannot provide an unambiguous test for the theory. As a result, the conventional way of constructing a multi-prey functional response is speculative, and often based on assumptions that are difficult to verify. Predator responses allowing for prey selectivity and active switching are thought to be more biologically relevant compared to the standard proportion-based consumption. However, here we argue that the functional responses with switching may not be applicable to communities with a broad spectrum of resource types. We formulate a set of general rules that a biologically sound parameterization of a predator functional response should satisfy, and show that all existing formulations for the multispecies response with prey selectivity and switching fail to do so. Finally, we propose a universal framework for parameterization of a multi-prey functional response by combining patterns of food selectivity and proportion-based feeding. PMID:24086356
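For concreteness, the standard proportion-based multispecies functional response (the Holling type II form against which selectivity and switching variants are usually compared) can be written down directly; the parameter values here are hypothetical:

```python
import numpy as np

def multiprey_holling2(N, attack, handling):
    """Proportion-based multispecies Holling type II response:
    per-predator consumption rate of each prey species, given prey
    densities N, attack rates, and handling times (illustrative values)."""
    N, attack, handling = map(np.asarray, (N, attack, handling))
    denom = 1.0 + np.sum(attack * handling * N)  # shared saturation term
    return attack * N / denom

# Two hypothetical prey species
rates = multiprey_holling2(N=[10.0, 5.0], attack=[0.2, 0.4], handling=[0.5, 0.3])
```

Consumption of each prey type here scales with its proportional availability; responses with active switching modify the numerator so that relatively abundant prey are over-consumed, which is the behavior the paper argues can fail in communities with many resource types.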
NASA Astrophysics Data System (ADS)
Gutin, Gregory; Kim, Eun Jung; Soleimanfallah, Arezou; Szeider, Stefan; Yeo, Anders
The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U ⊎ V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint. We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V|, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.
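To make the problem statement concrete, here is a brute-force checker on a toy bipartite instance. It is exponential in the number of edges and purely illustrative of the decision problem; the paper's fixed-parameter algorithm instead reduces to acyclic instances solved by dynamic programming. The instance itself is made up:

```python
from itertools import combinations

def has_general_factor(n, edges, lists):
    """Brute-force general factor check on a small graph: is there a
    spanning subgraph in which every vertex's degree lies in its list?
    Exponential in |edges| -- for illustration only."""
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            deg = [0] * n
            for u, v in subset:
                deg[u] += 1
                deg[v] += 1
            if all(deg[i] in lists[i] for i in range(n)):
                return True
    return False

# Toy bipartite instance: U = {0, 1} with the list {1} (the consistency
# subproblem from the abstract), V = {2, 3} with arbitrary lists.
edges = [(0, 2), (0, 3), (1, 2), (1, 3)]
lists = [{1}, {1}, {0, 1}, {1, 2}]
feasible = has_general_factor(4, edges, lists)
```

Choosing the edges (0,3) and (1,3) gives degrees (1, 1, 0, 2), which satisfies every list, so this instance is feasible.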
NASA Astrophysics Data System (ADS)
Marion, Giles M.; Farren, Ronald E.
1999-05-01
The Spencer-Møller-Weare (SMW) (1990) model is parameterized for the Na-K-Mg-Ca-Cl-SO4-H2O system over the temperature range from -60° to 25°C. This model is one of the few complex chemical equilibrium models for aqueous solutions parameterized for subzero temperatures. The primary focus of the SMW model parameterization and validation deals with chloride systems. There are problems with the sulfate parameterization of the SMW model, most notably with sodium sulfate and magnesium sulfate. The primary objective of this article is to re-estimate the Pitzer-equation parameters governing interactions among sodium, potassium, magnesium, and calcium with sulfate in the SMW model. A mathematical algorithm is developed to estimate 22 temperature-dependent Pitzer-equation parameters. The sodium sulfate reparameterization reduces the overall standard error (SE) from 0.393 with the SMW Pitzer-equation parameters to 0.155. Similarly, the magnesium sulfate reparameterization reduces the SE from 0.335 to 0.124. In addition to the sulfate reparameterization, five additional sulfate minerals are included in the model, which allows a more complete treatment of sulfate chemistry in the Na-K-Mg-Ca-Cl-SO4-H2O system. Application of the model to seawater evaporation predicts gypsum precipitation at a seawater concentration factor (SCF) of 3.37 and halite precipitation at an SCF of 10.56, which are in good agreement with previous experimental and theoretical estimates. Application of the model to seawater freezing helps explain the two pathways for seawater freezing. Along the thermodynamically stable "Gitterman pathway," calcium precipitates as gypsum and the seawater eutectic is -36.2°C. Along the metastable "Ringer-Nelson-Thompson pathway," calcium precipitates as antarcticite and the seawater eutectic is -53.8°C.
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data were stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time approximately 3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency.
With this strategy, the computation time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and from 0.69 to 1.23 times for photon-only transport.
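The core navigation primitive for quadric-bounded regions is the ray-to-surface distance. A minimal CPU version (our own sketch, not the paper's GPU kernel) reduces to solving a quadratic in the ray parameter:

```python
import numpy as np

def distance_to_quadric(p, d, Q):
    """Distance along the ray p + t*d (t > 0) to the quadric surface
    x^T A x + b.x + c = 0, with Q = (A, b, c); returns inf on a miss.
    Illustrative navigation kernel for parameterized-geometry transport."""
    A, b, c = Q
    a2 = d @ A @ d                       # quadratic coefficient in t
    a1 = 2.0 * (p @ A @ d) + b @ d       # linear coefficient
    a0 = p @ A @ p + b @ p + c           # constant term
    if abs(a2) < 1e-14:                  # ray parallel to the quadratic part
        t = -a0 / a1 if a1 != 0.0 else -1.0
        return t if t > 0.0 else np.inf
    disc = a1 ** 2 - 4.0 * a2 * a0
    if disc < 0.0:
        return np.inf
    sq = np.sqrt(disc)
    for t in sorted([(-a1 - sq) / (2 * a2), (-a1 + sq) / (2 * a2)]):
        if t > 1e-12:                    # nearest crossing strictly ahead
            return t
    return np.inf

# Unit sphere x^2 + y^2 + z^2 - 1 = 0; a ray from the origin along +x hits at t = 1
sphere = (np.eye(3), np.zeros(3), -1.0)
t_hit = distance_to_quadric(np.zeros(3), np.array([1.0, 0.0, 0.0]), sphere)
```

In an MC transport loop, this distance is compared against the sampled interaction distance to decide whether the particle crosses into the neighboring region; the paper's auxiliary index array reduces how often such quadric evaluations are needed.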
Towards a simple representation of chalk hydrology in land surface modelling
NASA Astrophysics Data System (ADS)
Rahman, Mostaquimur; Rosolem, Rafael
2017-01-01
Modelling and monitoring of hydrological processes in the unsaturated zone of chalk, a porous medium with fractures, is important to optimize water resource assessment and management practices in the United Kingdom (UK). However, incorporating the processes governing water movement through a chalk unsaturated zone in a numerical model is complicated mainly due to the fractured nature of chalk that creates high-velocity preferential flow paths in the subsurface. In general, flow through a chalk unsaturated zone is simulated using the dual-porosity concept, which often involves calibration of a relatively large number of model parameters, potentially undermining applications to large regions. In this study, a simplified parameterization, namely the Bulk Conductivity (BC) model, is proposed for simulating hydrology in a chalk unsaturated zone. This new parameterization introduces only two additional parameters (namely the macroporosity factor and the soil wetness threshold parameter for fracture flow activation) and uses the saturated hydraulic conductivity from the chalk matrix. The BC model is implemented in the Joint UK Land Environment Simulator (JULES) and applied to a study area encompassing the Kennet catchment in the southern UK. This parameterization is further calibrated at the point scale using soil moisture profile observations. The performance of the calibrated BC model in JULES is assessed and compared against the performance of both the default JULES parameterization and the uncalibrated version of the BC model implemented in JULES. Finally, the model performance at the catchment scale is evaluated against independent data sets (e.g. runoff and latent heat flux). The results demonstrate that the inclusion of the BC model in JULES improves simulated land surface mass and energy fluxes over the chalk-dominated Kennet catchment. 
Therefore, the simple approach described in this study may be used to incorporate the flow processes through a chalk unsaturated zone in large-scale land surface modelling applications.
NASA Astrophysics Data System (ADS)
Zhong, Efang; Li, Qian; Sun, Shufen; Chen, Wen; Chen, Shangfeng; Nath, Debashis
2017-11-01
The presence of light-absorbing aerosols (LAA) in snow profoundly influences the surface energy balance and water budget. However, most snow-process schemes in land-surface and climate models currently do not take this into consideration. To better represent the snow process and to evaluate the impacts of LAA on snow, this study presents an improved snow albedo parameterization in the Snow-Atmosphere-Soil Transfer (SAST) model, which includes the impacts of LAA on snow. Specifically, the Snow, Ice and Aerosol Radiation (SNICAR) model is incorporated into the SAST model with an LAA mass stratigraphy scheme. The new coupled model is validated against in-situ measurements at the Swamp Angel Study Plot (SASP), Colorado, USA. Results show that the snow albedo and snow depth are better reproduced than those in the original SAST, particularly during the period of snow ablation. Furthermore, the impacts of LAA on snow are estimated in the coupled model through case comparisons of the snowpack, with or without LAA. The LAA particles directly absorb extra solar radiation, which accelerates the growth rate of the snow grain size. Meanwhile, these larger snow particles favor more radiative absorption. The average total radiative forcing of the LAA at the SASP is 47.5 W m-2. This extra radiative absorption enhances the snowmelt rate. As a result, the peak runoff time and "snow all gone" day have shifted 18 and 19.5 days earlier, respectively, which could further impose substantial impacts on the hydrologic cycle and atmospheric processes.
A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model
USDA-ARS's Scientific Manuscript database
Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...
Transport of Space Environment Electrons: A Simplified Rapid-Analysis Computational Procedure
NASA Technical Reports Server (NTRS)
Nealy, John E.; Anderson, Brooke M.; Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Chang, C. K.
2002-01-01
A computational procedure for describing transport of electrons in condensed media has been formulated for application to effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The procedure is based on earlier parameterizations established from numerous electron beam experiments. New parameterizations have been derived that logically extend the domain of application to low molecular weight (high hydrogen content) materials and higher energies (approximately 50 MeV). The production and transport of high energy photons (bremsstrahlung) generated in the electron transport processes have also been modeled using tabulated values of photon production cross sections. A primary purpose for developing the procedure has been to provide a means for rapidly performing numerous repetitive calculations essential for electron radiation exposure assessments for complex space structures. Several favorable comparisons have been made with previous calculations for typical space environment spectra, which have indicated that accuracy has not been substantially compromised at the expense of computational speed.
Loupa, G; Rapsomanikis, S; Trepekli, A; Kourtidis, K
2016-01-15
Energy flux parameterization was performed for the city of Athens, Greece, using two approaches: the Local-Scale Urban Meteorological Parameterization Scheme (LUMPS) and the Bulk Approach (BA). In situ data are used to validate the algorithms of these schemes and to derive coefficients applicable to the study area. Model results from these corrected algorithms are compared with literature results for coefficients applicable to other cities and their varying construction materials. Asphalt and concrete surfaces, canyons, and anthropogenic heat releases were found to be the key characteristics of the city center that sustain elevated surface and air temperatures under hot, sunny, and dry weather during the Mediterranean summer. A relationship between storage heat flux plus anthropogenic energy flux and temperatures (surface and lower atmosphere) is presented, which clarifies the interplay between temperatures, anthropogenic energy releases, and the city characteristics under Urban Heat Island conditions.
NASA Astrophysics Data System (ADS)
Moradi, A.; Smits, K. M.
2014-12-01
A promising energy storage option to compensate for daily and seasonal energy offsets is to inject and store heat generated from renewable energy sources (e.g. solar energy) in the ground, often referred to as soil borehole thermal energy storage (SBTES). In SBTES modeling efforts, it is widely recognized that the movement of water vapor is closely coupled to thermal processes; however, their mutual interactions are rarely considered in most soil water modeling efforts or in practical applications. The validation of numerical models that are designed to capture these processes is difficult due to the scarcity of experimental data, limiting the testing and refinement of heat and water transfer theories. A common assumption in most SBTES modeling approaches is to treat the soil as a purely conductive medium with constant hydraulic and thermal properties. However, this simplified approach can be improved upon by better understanding the coupled processes at play. Consequently, developing new modeling techniques, along with suitable experimental tools to capture these coupled processes, is critical to the efficient design and implementation of SBTES systems. The goal of this work is to better understand heat and mass transfer processes for SBTES. In this study, we implemented a fully coupled numerical model that solves for heat, liquid water, and water vapor flux and allows for non-equilibrium liquid/gas phase change. This model was then used to investigate the influence of different hydraulic and thermal parameterizations on SBTES system efficiency. A two-dimensional tank apparatus was used with a series of soil moisture, temperature, and soil thermal property sensors. Four experiments were performed with different test soils. Experimental results provide evidence of thermally induced moisture flow, which was also confirmed by numerical results.
Numerical results showed that, for the test conditions applied here, moisture flow is more strongly influenced by thermal gradients than by hydraulic gradients. The results also demonstrate that convective fluxes exceed conductive fluxes, indicating that moisture flow contributes more to the overall heat flux than conduction does.
NASA Astrophysics Data System (ADS)
Zhao, D.
2012-12-01
The exchange of carbon dioxide across the air-sea interface is an important component of the atmospheric CO2 budget. Understanding how future changes in climate will affect oceanic uptake and release of CO2 requires accurate estimation of the air-sea CO2 flux. This flux is typically expressed as the product of the gas transfer velocity, the CO2 partial pressure difference between seawater and air, and the CO2 solubility. As the key parameter, the gas transfer velocity has long been known to be controlled by near-surface turbulence in the water, which is affected by many factors, such as wind forcing, ocean waves, water-side convection, and rainfall. Although wind forcing is believed to be the major factor dominating near-surface turbulence, many studies have shown that wind waves and their breaking greatly enhance turbulence compared with classical solid-wall theory. Gas transfer velocity has been parameterized in terms of wind speed, turbulent kinetic energy dissipation rate, and wave parameters on the basis of observational data or theoretical analysis. However, great discrepancies, as large as an order of magnitude, exist among these formulas. In this study, we systematically analyze the differences among the gas transfer velocity formulas proposed so far and try to identify the sources of their uncertainties. Finally, a new formula for gas transfer velocity is given in terms of wind speed and a wind wave parameter.
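The spread among published wind-speed parameterizations is easy to reproduce. The two examples below use coefficient values commonly quoted for Wanninkhof (1992) and Nightingale et al. (2000); they should be checked against the original papers before any quantitative use:

```python
import numpy as np

def k_wanninkhof92(u10):
    """Quadratic wind-speed parameterization (after Wanninkhof 1992,
    short-term winds); transfer velocity in cm/h at Sc = 660."""
    return 0.31 * u10 ** 2

def k_nightingale00(u10):
    """Hybrid linear + quadratic fit (after Nightingale et al. 2000)."""
    return 0.333 * u10 + 0.222 * u10 ** 2

u10 = np.linspace(0.0, 15.0, 31)  # 10-m wind speed, m/s
spread = np.abs(k_wanninkhof92(u10) - k_nightingale00(u10))
```

The absolute divergence between the two formulas grows with wind speed, which is one concrete illustration of the formula-to-formula discrepancies the study sets out to analyze.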
Physics-based distributed snow models in the operational arena: Current and future challenges
NASA Astrophysics Data System (ADS)
Winstral, A. H.; Jonas, T.; Schirmer, M.; Helbig, N.
2017-12-01
The demand for modeling tools robust to climate change and weather extremes, along with coincident increases in computational capabilities, has led to an increase in the use of physics-based snow models in operational applications. Current operational applications include the WSL-SLF's across Switzerland, ASO's in California, and the USDA-ARS's in Idaho. While physics-based approaches offer many advantages, limitations and modeling challenges remain. The most evident limitation remains computation times, which often limit forecasters to a single, deterministic model run. Other limitations, however, remain less conspicuous amidst the assumption that these models require little to no calibration because of their foundation on physical principles. Yet all energy balance snow models contain parameterizations or simplifications of processes where validation data are scarce or present understanding is limited. At the research-basin scale where many of these models were developed, these modeling elements may prove adequate. However, when applied over large areas, spatially invariant parameterizations of snow albedo, roughness lengths, and atmospheric exchange coefficients, all vital to determining the snow-cover energy balance, become problematic. Moreover, as we apply models over larger grid cells, the representation of sub-grid variability, such as the snow-covered fraction, adds to the challenges. Here, we will demonstrate some of the major sensitivities of distributed energy balance snow models to particular model constructs, show the need for advanced and spatially flexible methods and parameterizations, and prompt the community toward open dialogue and future collaborations to further modeling capabilities.
Pedotransfer Functions in Earth System Science: Challenges and Perspectives
NASA Astrophysics Data System (ADS)
Van Looy, Kris; Bouma, Johan; Herbst, Michael; Koestel, John; Minasny, Budiman; Mishra, Umakant; Montzka, Carsten; Nemes, Attila; Pachepsky, Yakov A.; Padarian, José; Schaap, Marcel G.; Tóth, Brigitta; Verhoef, Anne; Vanderborght, Jan; van der Ploeg, Martine J.; Weihermüller, Lutz; Zacharias, Steffen; Zhang, Yonggen; Vereecken, Harry
2017-12-01
Soil, through its various functions, plays a vital role in the Earth's ecosystems and provides multiple ecosystem services to humanity. Pedotransfer functions (PTFs) are simple to complex knowledge rules that relate available soil information to soil properties and variables that are needed to parameterize soil processes. In this paper, we review the existing PTFs and document the new generation of PTFs developed in the different disciplines of Earth system science. To meet the methodological challenges for a successful application in Earth system modeling, we emphasize that PTF development has to go hand in hand with suitable extrapolation and upscaling techniques such that the PTFs correctly represent the spatial heterogeneity of soils. PTFs should encompass the variability of the estimated soil property or process in such a way that the estimation of parameters allows for validation and can confidently support extrapolation and upscaling, capturing the spatial variation in soils. The most actively pursued recent developments are related to parameterizations of solute transport, heat exchange, soil respiration, organic carbon content, root density, and vegetation water uptake. Further challenges are to be addressed in the parameterization of soil erosivity and land use change impacts at multiple scales. We argue that a comprehensive set of PTFs can be applied throughout a wide range of disciplines of Earth system science, with emphasis on land surface models. Novel sensing techniques provide a true breakthrough for this, yet further improvements are necessary for methods to deal with uncertainty and to validate applications at the global scale.
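In code, a PTF is simply a fitted mapping from basic soil data to a model parameter. The sketch below uses a hypothetical linear form with made-up coefficients for saturated water content; real PTFs are regressions or machine-learning models trained on large soil databases, and none of the numbers here come from a published PTF:

```python
# Minimal pedotransfer-function sketch: map basic soil data (sand, clay,
# organic-carbon mass fractions) to saturated water content theta_s.
# Coefficients are hypothetical, chosen only to illustrate the idea.
def ptf_theta_s(sand, clay, oc):
    """Estimate saturated water content (m3/m3) from soil composition."""
    return 0.35 - 0.10 * sand + 0.12 * clay + 0.8 * oc

# Hypothetical loam: 40% sand, 25% clay, 2% organic carbon
theta_s = ptf_theta_s(sand=0.40, clay=0.25, oc=0.02)
```

In a land surface model, such a function would be evaluated over gridded soil maps to supply the hydraulic parameters discussed in the abstract, which is where the upscaling and heterogeneity concerns arise.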
Impacts of Light Use Efficiency and fPAR Parameterization on Gross Primary Production Modeling
NASA Technical Reports Server (NTRS)
Cheng, Yen-Ben; Zhang, Qingyuan; Lyapustin, Alexei I.; Wang, Yujie; Middleton, Elizabeth M.
2014-01-01
This study examines the impact of parameterization of two variables, light use efficiency (LUE) and the fraction of absorbed photosynthetically active radiation (fPAR or fAPAR), on gross primary production (GPP) modeling. Carbon sequestration by terrestrial plants is a key factor in a comprehensive understanding of the carbon budget at the global scale. In this context, accurate measurements and estimates of GPP will allow us to achieve improved carbon monitoring and to quantitatively assess impacts from climate change and human activities. Spaceborne remote sensing observations can provide a variety of land surface parameterizations for modeling photosynthetic activities at various spatial and temporal scales. This study utilizes a simple GPP model based on the LUE concept and different land surface parameterizations to evaluate the model and monitor GPP. Two maize-soybean rotation fields in Nebraska, USA and the Bartlett Experimental Forest in New Hampshire, USA were selected for study. Tower-based eddy-covariance carbon exchange and PAR measurements were collected from the FLUXNET Synthesis Dataset. For the model parameterization, we utilized different values of LUE and the fPAR derived from various algorithms. We adapted the approach and parameters from the MODIS MOD17 Biome Properties Look-Up Table (BPLUT) to derive LUE. We also used a site-specific analytic approach with tower-based Net Ecosystem Exchange (NEE) and PAR to estimate maximum potential LUE (LUEmax) to derive LUE. For the fPAR parameter, the MODIS MOD15A2 fPAR product was used. We also utilized fAPARchl, a parameter accounting for the fAPAR linked to the chlorophyll-containing canopy fraction. fAPARchl was obtained by inversion of a radiative transfer model, which used the MODIS-based reflectances in bands 1-7 produced by the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm.
fAPARchl exhibited seasonal dynamics more similar to the flux tower based GPP than MOD15A2 fPAR, especially in the spring and fall at the agricultural sites. When using the MODIS MOD17-based parameters to estimate LUE, fAPARchl generated better agreement with GPP (r² = 0.79-0.91) than MOD15A2 fPAR (r² = 0.57-0.84). However, underestimations of GPP were also observed, especially for the crop fields. When applying the site-specific LUEmax value to estimate in situ LUE, the magnitude of estimated GPP was closer to in situ GPP; this method produced a slight overestimation for the MOD15A2 fPAR at the Bartlett forest. This study highlights the importance of accurate land surface parameterizations for achieving reliable carbon monitoring capabilities from remote sensing information.
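The core of the model evaluated above is the LUE relation GPP = LUE × fPAR × PAR, with LUE down-scaled from a biome maximum by environmental stress scalars in the MOD17 approach. A minimal sketch of that structure follows; the ramp limits and the LUEmax value are illustrative placeholders, not the MOD17 BPLUT entries.

```python
# Sketch of a light-use-efficiency GPP model: GPP = LUE * fPAR * PAR,
# where LUE = LUEmax * f(Tmin) * f(VPD), following the general MOD17
# structure. All threshold values below are illustrative, not BPLUT values.

def linear_ramp(x, x0, x1):
    """Scalar in [0, 1], ramping linearly from 0 at x0 to 1 at x1."""
    if x1 == x0:
        return 1.0
    return min(1.0, max(0.0, (x - x0) / (x1 - x0)))

def gpp_lue(par, fpar, lue_max, tmin, vpd,
            tmin_min=-8.0, tmin_max=8.0, vpd_min=650.0, vpd_max=3100.0):
    """GPP (g C m-2 d-1) from PAR (MJ m-2 d-1) and fPAR (unitless)."""
    f_t = linear_ramp(tmin, tmin_min, tmin_max)       # cold-temperature scalar
    f_vpd = 1.0 - linear_ramp(vpd, vpd_min, vpd_max)  # high VPD cuts efficiency
    lue = lue_max * f_t * f_vpd                       # realized light-use efficiency
    return lue * fpar * par                           # absorbed PAR times efficiency

# A mild day with no stress: moderate PAR, half the canopy absorbing.
print(gpp_lue(par=10.0, fpar=0.5, lue_max=1.0, tmin=8.0, vpd=650.0))  # 5.0
```

Swapping fPAR sources (MOD15A2 fPAR versus fAPARchl) in such a model changes only the `fpar` input, which is what makes the comparison in the abstract possible.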
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of the large-scale forcing data, which points to deficiencies of the physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
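The variational constraint behind this kind of objective analysis can be sketched as a minimum-adjustment problem: nudge the background fields as little as possible (weighted by their error covariance) so that linear budget constraints hold exactly. This toy example uses invented numbers, not the ARM SGP analysis.

```python
# Minimal sketch of a variationally constrained analysis: find the smallest
# B-weighted adjustment of a background x_b satisfying A x = c, via
# x = x_b + B A^T (A B A^T)^-1 (c - A x_b).
import numpy as np

def constrained_analysis(x_b, B, A, c):
    """Smallest B-weighted adjustment of x_b satisfying A x = c."""
    K = B @ A.T @ np.linalg.inv(A @ B @ A.T)   # gain matrix for the constraints
    return x_b + K @ (c - A @ x_b)

# Toy example: two flux estimates must sum to an observed budget of 10.
x_b = np.array([4.0, 5.0])                     # background (sums to 9)
B = np.diag([1.0, 4.0])                        # second field is less certain
A = np.array([[1.0, 1.0]])                     # constraint: x1 + x2
c = np.array([10.0])

x = constrained_analysis(x_b, B, A, c)
print(x)  # the missing 1.0 is split 1:4 by error variance -> [4.2, 5.8]
```

Perturbing `x_b` or `B` within their uncertainty ranges and re-solving is, in spirit, how an ensemble of constrained forcing data sets is generated.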
Pion, Kaon, Proton and Antiproton Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.
2008-01-01
Inclusive pion, kaon, proton, and antiproton production from proton-proton collisions is studied at a variety of proton energies. Various available parameterizations of Lorentz-invariant differential cross sections as a function of transverse momentum and rapidity are compared with experimental data. The Badhwar and Alper parameterizations are moderately satisfactory for charged pion production. The Badhwar parameterization provides the best fit for charged kaon production. For proton production, the Alper parameterization is best, and for antiproton production the Carey parameterization works best. However, no parameterization is able to fully account for all the data.
A parameterization method and application in breast tomosynthesis dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xinhua; Zhang, Da; Liu, Bob
2013-09-15
Purpose: To present a parameterization method based on singular value decomposition (SVD), and to provide analytical parameterization of the mean glandular dose (MGD) conversion factors from eight references for evaluating breast tomosynthesis dose in the Mammography Quality Standards Act (MQSA) protocol and in the UK, European, and IAEA dosimetry protocols. Methods: The MGD conversion factor is usually listed in lookup tables as a function of factors such as beam quality, breast thickness, breast glandularity, and projection angle. The authors analyzed multiple sets of MGD conversion factors from the Hologic Selenia Dimensions quality control manual and seven previous papers. Each data set was parameterized using a one- to three-dimensional polynomial function of 2–16 terms. Variable substitution was used to improve accuracy. A least-squares fit was conducted using the SVD. Results: The differences between the originally tabulated MGD conversion factors and the results computed using the parameterization algorithms were (a) 0.08%–0.18% on average and 1.31% maximum for the Selenia Dimensions quality control manual, (b) 0.09%–0.66% on average and 2.97% maximum for the published data by Dance et al. [Phys. Med. Biol. 35, 1211–1219 (1990); ibid. 45, 3225–3240 (2000); ibid. 54, 4361–4372 (2009); ibid. 56, 453–471 (2011)], (c) 0.74%–0.99% on average and 3.94% maximum for the published data by Sechopoulos et al. [Med. Phys. 34, 221–232 (2007); J. Appl. Clin. Med. Phys. 9, 161–171 (2008)], and (d) 0.66%–1.33% on average and 2.72% maximum for the published data by Feng and Sechopoulos [Radiology 263, 35–42 (2012)], excluding one sample in (d) that does not follow the trends in the published data table. Conclusions: A flexible parameterization method is presented in this paper, and was applied to breast tomosynthesis dosimetry.
The resultant data offer easy and accurate computation of MGD conversion factors for evaluating mean glandular breast dose in the MQSA protocol and in the UK, European, and IAEA dosimetry protocols. Microsoft Excel™ spreadsheets are provided for the convenience of readers.
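The general approach described, representing a tabulated conversion factor as a low-order polynomial in the table variables and fitting the coefficients by SVD-based least squares, can be sketched as follows. The polynomial terms and the synthetic "table" are illustrative stand-ins, not the paper's fitted functions or the published MGD tables.

```python
# Sketch of SVD-based least-squares parameterization of a lookup table:
# build a design matrix of polynomial terms and solve for coefficients via
# the SVD pseudo-inverse. The synthetic table below is invented.
import numpy as np

def fit_svd(design, values, rcond=1e-10):
    """Least-squares coefficients via SVD (pseudo-inverse)."""
    u, s, vt = np.linalg.svd(design, full_matrices=False)
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)  # damp tiny singular values
    return vt.T @ (s_inv * (u.T @ values))

# Synthetic lookup table: a conversion factor sampled on a grid of
# thickness t (cm) and glandularity g (fraction).
t, g = np.meshgrid(np.linspace(2, 8, 7), np.linspace(0.1, 0.9, 5))
t, g = t.ravel(), g.ravel()
truth = 0.3 - 0.02 * t + 0.01 * t * g        # stands in for tabulated values

# Design matrix: a 4-term polynomial 1, t, g, t*g.
A = np.column_stack([np.ones_like(t), t, g, t * g])
coef = fit_svd(A, truth)

resid = A @ coef - truth
print(np.abs(resid).max() < 1e-10)  # True: the exact model is recovered
```

Variable substitution (e.g. fitting against 1/t instead of t) slots into the same framework by changing the columns of the design matrix.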
NASA Astrophysics Data System (ADS)
Neggers, Roel
2016-04-01
Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at the "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is enhanced model transparency, which can aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model intercomparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes manage to capture first-order aspects of boundary-layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary-layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models.
This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.
Sensitivity of CEAP cropland simulations to the parameterization of the APEX model
USDA-ARS?s Scientific Manuscript database
For large scale applications like the U.S. National Scale Conservation Effects Assessment Project (CEAP), soil hydraulic characteristics data are not readily available and therefore need to be estimated. Field soil water properties are commonly approximated using laboratory soil water retention meas...
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness of the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with this non-uniqueness, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), in which the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
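The BMA combination described above has a simple numeric core: the averaged estimate is a posterior-weighted mean over parameterization methods, and the total variance splits into within-parameterization and between-parameterization terms. The weights and moments below are illustrative, not values from the Alamitos Barrier study.

```python
# Minimal sketch of Bayesian Model Averaging over parameterization methods:
# mean = sum_i w_i * mu_i, and the law of total variance gives
# total = within (sum_i w_i * var_i) + between (sum_i w_i * (mu_i - mean)^2).
# All numbers are invented for illustration.

def bma(weights, means, variances):
    """Return (mean, total variance, within, between) for one estimate."""
    assert abs(sum(weights) - 1.0) < 1e-12
    mean = sum(w * m for w, m in zip(weights, means))
    within = sum(w * v for w, v in zip(weights, variances))
    between = sum(w * (m - mean) ** 2 for w, m in zip(weights, means))
    return mean, within + between, within, between

# Three candidate parameterizations of log-conductivity at one location.
w = [0.5, 0.3, 0.2]         # posterior model probabilities (e.g. via NLSE)
mu = [-4.0, -3.5, -5.0]     # conditional means from each parameterization
var = [0.20, 0.30, 0.25]    # conditional variances (within-method)

mean, total, within, between = bma(w, mu, var)
print(mean, total)
```

The between-parameterization term is exactly the part of the uncertainty that a single-parameterization inversion would miss, which is the over-confidence the abstract warns against.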
Ferentinos, Konstantinos P
2005-09-01
Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by a method based on the evolutionary process of genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for the encoding of NN topologies and training parameters into genes of the genetic algorithm (GA). Some a priori knowledge of the demands in network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to some reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA also encoded the type of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm used by the backpropagation algorithm for the training of the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach that is usually used for these tasks.
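The "weak specification" encoding idea can be sketched as a fixed-length gene carrying topology, activation, and trainer choices for a GA to search. The field names and value ranges below are invented for illustration and are not the paper's encoding.

```python
# Toy sketch of encoding NN topology and training choices into a GA gene:
# hidden-layer sizes (0 -> second layer absent, i.e. a one-hidden-layer
# net), activation types, and the training algorithm. All names and ranges
# here are hypothetical.
import random

ACTIVATIONS = ["logistic", "tanh"]
TRAINERS = ["gradient-descent", "levenberg-marquardt", "bfgs"]

def random_gene(rng, max_nodes=12):
    return {
        "hidden1": rng.randint(1, max_nodes),   # first hidden layer size
        "hidden2": rng.randint(0, max_nodes),   # 0 -> one-hidden-layer net
        "act_hidden": rng.choice(ACTIVATIONS),  # hidden-node activation
        "act_output": rng.choice(ACTIVATIONS),  # output-node activation
        "trainer": rng.choice(TRAINERS),        # minimization algorithm
    }

def mutate(gene, rng, max_nodes=12):
    """Point mutation: resample one randomly chosen field."""
    g = dict(gene)
    key = rng.choice(list(g))
    if key.startswith("hidden"):
        g[key] = rng.randint(0 if key == "hidden2" else 1, max_nodes)
    elif key.startswith("act"):
        g[key] = rng.choice(ACTIVATIONS)
    else:
        g[key] = rng.choice(TRAINERS)
    return g

rng = random.Random(0)
gene = random_gene(rng)
print(sorted(gene))
```

A full GA would evaluate each gene by training the decoded network and using its validation error as fitness; the encoding above is only the representation layer.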
Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo
2017-12-01
The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
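The SMM's distinguishing feature, spatially fixed steps whose velocities are correlated through a transition matrix, can be illustrated with a two-class toy model. The transition matrix and velocities below are invented; the paper's contribution is precisely estimating such a matrix from two measured breakthrough curves rather than from particle trajectories.

```python
# Illustrative sketch of the spatial Markov model: a particle takes fixed
# spatial steps, and its velocity class for the next step is drawn from a
# transition matrix conditioned on the current class, so successive steps
# are correlated. Two classes and invented numbers for illustration.
import random

V = [1.0, 0.2]                      # velocities of "fast" and "slow" classes
P = [[0.8, 0.2],                    # P[i][j]: prob of class j given class i
     [0.3, 0.7]]
DX = 1.0                            # spatial step length

def travel_time(n_steps, rng):
    """Arrival time of one particle after n_steps correlated steps."""
    state = rng.choice([0, 1])      # start from an arbitrary class
    t = 0.0
    for _ in range(n_steps):
        t += DX / V[state]          # time to cross one step at this velocity
        state = 0 if rng.random() < P[state][0] else 1
    return t

rng = random.Random(42)
times = sorted(travel_time(20, rng) for _ in range(2000))
# The sorted arrival times are an empirical breakthrough curve; the long
# tail from persistent slow steps is the "anomalous transport" signature.
print(times[1000] > 20.0)  # True: the median arrives after the fastest possible time
```

Setting both rows of `P` equal removes the correlation and recovers an uncorrelated random walk, which is the comparison that isolates the effect the SMM captures.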
Modeling human target acquisition in ground-to-air weapon systems
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Mohr, R. L.; Vikmanis, M.; Wei, K. C.
1982-01-01
The problems associated with formulating and validating mathematical models for describing and predicting human target acquisition response are considered. In particular, the extension of the human observer model to include the acquisition phase as well as the tracking segment is presented. Relationship of the Observer model structure to the more complex Standard Optimal Control model formulation and to the simpler Transfer Function/Noise representation is discussed. Problems pertinent to structural identifiability and the form of the parameterization are elucidated. A systematic approach toward the identification of the observer acquisition model parameters from ensemble tracking error data is presented.
Performance analysis of a laser propelled interorbital transfer vehicle
NASA Technical Reports Server (NTRS)
Minovitch, M. A.
1976-01-01
Performance capabilities of a laser-propelled interorbital transfer vehicle receiving propulsive power from one ground-based transmitter were investigated. The laser transmits propulsive energy to the vehicle during successive station fly-overs. By applying a series of these propulsive maneuvers, large payloads can be economically transferred between low earth orbits and synchronous orbits. Operations involving the injection of large payloads onto escape trajectories are also studied. The duration of each successive engine burn must be carefully timed so that the vehicle reappears over the laser station to receive additional propulsive power within the shortest possible time. The analytical solution for determining these time intervals is presented, as is a solution to the problem of determining maximum injection payloads. Parametric computer analysis based on these optimization studies is presented. The results show that relatively low beam powers, on the order of 50 MW to 60 MW, produce significant performance capabilities.
NASA Astrophysics Data System (ADS)
Kalitsov, Alan; Okatov, Sergey; Zarzhitsky, Pavel; Chshiev, Mairbek; Velev, Julian; Butler, William; Mryasov, Oleg
2014-03-01
The manipulation of domain walls (DWs) in thin ferromagnetic layers by current and spin-orbit coupling (SOC) has attracted significant interest. We report two-band model calculations of the spin torque (ST) and the spin current (SC) at 5d/3d interfaces with head-to-head, Bloch and Néel DWs. These calculations are based on the non-equilibrium Green function formalism and a tight-binding Hamiltonian including the s-d exchange interactions and the Rashba SOC, parameterized on the basis of ab initio calculations for Fe/W, FeCo/Ta and Co/Pt interfaces. We find that SOC significantly modifies the ST and violates the relations between the spin transfer torque and the divergence of the spin current. This work was supported in part by a Semiconductor Research Corporation program, sponsored by MARCO and DARPA.
Photographic Image Restoration
NASA Technical Reports Server (NTRS)
Hite, Gerald E.
1991-01-01
Deblurring capabilities would significantly improve the Flight Science Support Office's ability to monitor the effects of lift-off on the shuttle and landing on the orbiter. A deblurring program was written and implemented to extract information from blurred images containing a straight line or edge and to use that information to deblur the image. The program was successfully applied to an image blurred by improper focusing and to two others blurred by differing amounts of blur. In all cases, the reconstructed modulation transfer function not only had the same zero contours as the Fourier transform of the blurred image, but the associated point spread function also had structure not easily described by simple parameterizations. The difficulties posed by the presence of noise in the blurred image necessitated special consideration. An amplitude modification technique was developed for the zero contours of the modulation transfer function at low to moderate frequencies, and a smooth filter was used to suppress high-frequency noise.
In most ecosystems, atmospheric deposition is the primary input of mercury. The total wet deposition of mercury in atmospheric chemistry models is sensitive to parameterization of the aqueous-phase reduction of divalent oxidized mercury (Hg2+). However, most atmospheric chemistry...
The aquatic ecosystem simulation model AQUATOX was parameterized and applied to Contentnea Creek in the coastal plain of North Carolina to determine the response of fish to moderate levels of physical and chemical habitat alterations. Biomass of four fish groups was most sensiti...
A protocol for parameterization and calibration of RZWQM2 in field research
USDA-ARS?s Scientific Manuscript database
Use of agricultural system models in field research requires a full understanding of both the model and the system it simulates. Since the 1960s, agricultural system models have increased tremendously in their complexity due to greater understanding of the processes simulated, their application to r...
Improving the accuracy and capability of transport and dispersion models in urban areas is essential for current and future urban applications. These models must reflect more realistically the presence and details of urban canopy features. Such features markedly influence the flo...
USDA-ARS?s Scientific Manuscript database
Structure functions are used to study the dissipation and inertial range scales of turbulent energy, to parameterize remote turbulence measurements, and to characterize ramp features in the turbulent field. The ramp features are associated with turbulent coherent structures, which dominate energy a...
The U.S. Environmental Protection Agency (U.S. EPA) is extending its Models-3/Community Multiscale Air Quality (CMAQ) Modeling System to provide detailed gridded air quality concentration fields and sub-grid variability characterization at neighborhood scales and in urban areas...
Air-water gas exchange and CO2 flux in a mangrove-dominated estuary
Ho, David T.; Ferrón, Sara; Engel, Victor C.; Larsen, Laurel G.; Barr, Jordan G.
2014-01-01
Mangrove forests are highly productive ecosystems, but the fate of mangrove-derived carbon remains uncertain. Part of that uncertainty stems from the fact that gas transfer velocities in mangrove-surrounded waters are not well determined, leading to uncertainty in air-water CO2 fluxes. Two SF6 tracer release experiments were conducted to determine gas transfer velocities (k(600) = 8.3 ± 0.4 and 8.1 ± 0.6 cm h−1), along with simultaneous measurements of pCO2 to determine the air-water CO2 fluxes from Shark River, Florida (232.11 ± 23.69 and 171.13 ± 20.28 mmol C m−2 d−1), an estuary within the largest contiguous mangrove forest in North America. The gas transfer velocity results are consistent with turbulent kinetic energy dissipation measurements, indicating a higher rate of turbulence and gas exchange than predicted by commonly used wind speed/gas exchange parameterizations. The results have important implications for carbon fluxes in mangrove ecosystems.
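A measured gas transfer velocity like the k(600) values above converts to an air-water CO2 flux through F = k · K0 · ΔpCO2, with k scaled from k(600) by the Schmidt number. The sketch below applies that standard bulk formula; the Schmidt number and solubility values are rough illustrative inputs, not the study's measurements.

```python
# Back-of-envelope sketch of converting a gas transfer velocity to an
# air-water CO2 flux: F = k * K0 * (pCO2_water - pCO2_air), with
# k = k600 * (Sc / 600)**-0.5 (unbroken-surface Schmidt-number scaling).
# Sc and K0 below are illustrative, not measured values.

def co2_flux(k600_cm_hr, sc, k0_mol_L_atm, dpco2_uatm):
    """Flux in mmol C m-2 d-1 (positive = outgassing to the atmosphere)."""
    k_cm_hr = k600_cm_hr * (sc / 600.0) ** -0.5     # Schmidt-number scaling
    k_m_d = k_cm_hr * 24.0 / 100.0                  # cm/h -> m/d
    k0 = k0_mol_L_atm * 1000.0                      # mol/(L atm) -> mol/(m3 atm)
    return k_m_d * k0 * dpco2_uatm * 1e-6 * 1000.0  # mol -> mmol, uatm -> atm

# k600 ~ 8.3 cm/h as measured in Shark River; strongly supersaturated water.
flux = co2_flux(k600_cm_hr=8.3, sc=600.0, k0_mol_L_atm=0.035, dpco2_uatm=3000.0)
print(flux)
```

With these illustrative inputs the flux lands near 200 mmol C m⁻² d⁻¹, the same order as the study's reported values, which shows how sensitive the flux estimate is to the choice of k parameterization.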
NASA Astrophysics Data System (ADS)
Lin, Shangfei; Sheng, Jinyu
2017-12-01
Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are their representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations perform reasonably well in shallow waters, but each has its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) has a drawback of underpredicting the SWHs in locally generated wave conditions and overpredicting them in remotely generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 had relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization whose breaker index depends on the normalized water depth in deep waters, similar to SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2%, compared with the three best performing existing parameterizations, whose average scatter indices lie between 9.2% and 13.6%.
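The BJ78 ingredient shared by the parameterizations compared above is the implicit relation for the fraction of breaking waves, (1 − Qb)/ln(Qb) = −(Hrms/Hmax)², with Hmax = γh set by the breaker index γ. A minimal numerical sketch, solving that relation by bisection with the classic constant γ = 0.73, follows; depth- and slope-dependent breaker indices (as in SA15 or the new parameterization) would replace that constant.

```python
# Numerical sketch of the Battjes-Janssen (1978) breaking fraction:
# solve (1 - Q_b)/ln(Q_b) = -(H_rms/H_max)^2 with H_max = gamma * depth.
# gamma = 0.73 is the classic constant breaker index.
import math

def breaking_fraction(h_rms, depth, gamma=0.73):
    """Fraction of breaking waves Q_b from the BJ78 implicit relation."""
    h_max = gamma * depth                      # depth-limited maximum height
    b2 = (h_rms / h_max) ** 2
    if b2 >= 1.0:
        return 1.0                             # saturated surf zone: all breaking
    # Root of f(q) = (1 - q) + b2*ln(q): f -> -inf as q -> 0+, and f(1) = 0
    # is the trivial solution; bisection finds the physical crossing below it.
    f = lambda q: (1.0 - q) + b2 * math.log(q)
    lo, hi = 1e-300, 1.0 - 1e-15
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(breaking_fraction(1.0, 2.0) < breaking_fraction(1.2, 2.0))  # True: larger waves break more
```

Qb then enters the dissipation term of the wave model; the inter-parameterization spread the study quantifies comes largely from how γ (and hence Hmax) is specified.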
PDSparc: A Drop-In Replacement for LEON3 Written Using Synopsys Processor Designer
2015-09-24
Kate Thurmer, MIT Lincoln Laboratory, Lexington, MA, USA. Distribution A: Public Release. ABSTRACT: Microprocessors are the ... enabled appliances has opened a significant new niche: the Application Specific Standard Product (ASSP) microprocessor. These processors usually start ... out as soft-cores that are parameterized at design time to realize exclusively the specific needs of the application. The microprocessor is a small
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Katsaros, Kristina B.
1994-01-01
Based on a geometric optics model and the assumption of an isotropic Gaussian surface slope distribution, the component of ocean surface microwave emissivity variation due to large-scale surface roughness is parameterized for the frequencies and approximate viewing angle of the Special Sensor Microwave/Imager. Independent geophysical variables in the parameterization are the effective (microwave frequency dependent) slope variance and the sea surface temperature. Using the same physical model, the change in the effective zenith angle of reflected sky radiation arising from large-scale roughness is also parameterized. Independent geophysical variables in this parameterization are the effective slope variance and the atmospheric optical depth at the frequency in question. Both of the above model-based parameterizations are intended for use in conjunction with empirical parameterizations relating effective slope variance and foam coverage to near-surface wind speed. These empirical parameterizations are the subject of a separate paper.
Parameterization of daily solar global ultraviolet irradiation.
Feister, U; Jäkel, E; Gericke, K
2002-09-01
Daily values of solar global ultraviolet UVB and UVA irradiation, as well as erythemal irradiation, have been parameterized so that they can be estimated from pyranometer measurements of daily global and diffuse irradiation and from atmospheric column ozone. Data recorded at the Meteorological Observatory Potsdam (52 degrees N, 107 m asl) in Germany over the time period 1997-2000 have been used to derive sets of regression coefficients. The validation of the method against independent data sets of measured UV irradiation shows that the parameterization provides a gain of information for UVB, UVA and erythemal irradiation relative to their averages. A comparison between parameterized daily UV irradiation and independent values of UV irradiation measured at a mountain station in southern Germany (Meteorological Observatory Hohenpeissenberg at 48 degrees N, 977 m asl) indicates that the parameterization holds even under completely different climatic conditions. On a long-term average (1953-2000), parameterized annual UV irradiation values are 15% and 21% higher for UVA and UVB, respectively, at Hohenpeissenberg than at Potsdam. Daily global and diffuse irradiation measured at 28 weather stations of the Deutscher Wetterdienst radiation network and grid values of column ozone from the EP-TOMS satellite experiment served as inputs to calculate estimates of the spatial distribution of daily and annual values of UV irradiation across Germany. Using daily values of global and diffuse irradiation recorded at Potsdam since 1937, as well as atmospheric column ozone measured at the same site since 1964, estimates of daily and annual UV irradiation have been derived for this site over the period from 1937 through 2000; these include the effects of changes in cloudiness, in aerosols and, at least for the period of ozone measurements from 1964 to 2000, in atmospheric ozone. It is shown that the extremely low ozone values observed mainly after the eruption of Mt.
Pinatubo in 1991 have substantially enhanced UVB irradiation in the first half of the 1990s. According to the measurements and calculations, the nonlinear long-term changes observed between 1968 and 2000 amount to +4%, ..., +5% for annual global irradiation and UVA irradiation, mainly because of changing cloudiness, and +14%, ..., +15% for UVB and erythemal irradiation, because of both changing cloudiness and decreasing column ozone. At the mountain site, Hohenpeissenberg, measured global irradiation and parameterized UVA irradiation decreased during the same time period by -3%, ..., -4%, probably because of the enhanced occurrence and increasing optical thickness of clouds, whereas UVB and erythemal irradiation derived by the parameterization have increased by +3%, ..., +4% because of the combined effect of clouds and decreasing ozone. The parameterizations described here should be applicable to other regions with similar atmospheric and geographic conditions, whereas for regions with significantly different climatic conditions, such as high mountainous areas and arctic or tropical regions, the representativeness of the regression coefficients would have to be verified. It is emphasized here that parameterizations such as the one described in this article cannot replace measurements of solar UV radiation, but they can use existing measurements of solar global and diffuse radiation, as well as data on atmospheric ozone, to provide estimates of UV irradiation in regions and over time periods for which UV measurements are not available.
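A regression of the kind described, estimating daily UV irradiation from global irradiation G, diffuse irradiation D (as a cloud/aerosol proxy), and column ozone, can be sketched as below. The power-law form and all coefficients are illustrative stand-ins, not the Potsdam regression coefficients.

```python
# Generic sketch of a UV parameterization: daily UVB estimated from daily
# global irradiation G, diffuse irradiation D, and column ozone. The form
# and the coefficients a, b, c, e are hypothetical placeholders for fitted
# regression coefficients.

def uvb_estimate(g, d, ozone_du, a=0.35, b=1.1, c=0.15, e=-1.2):
    """Daily UVB (arbitrary units) from G, D (MJ m-2) and ozone (DU)."""
    diffuse_frac = d / g                 # proxy for cloud/aerosol conditions
    # Negative ozone exponent: less ozone -> more UVB at the surface.
    return a * g ** b * diffuse_frac ** c * (ozone_du / 300.0) ** e

# Lower column ozone raises the estimated UVB, all else being equal:
clear_sky = uvb_estimate(20.0, 6.0, 300.0)
low_ozone = uvb_estimate(20.0, 6.0, 270.0)
print(low_ozone > clear_sky)  # True: the ozone exponent is negative
```

Fitting `a, b, c, e` by least squares against co-located spectroradiometer data, separately for UVB, UVA, and erythemal irradiation, is the step the abstract performs with the Potsdam record.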
Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model
NASA Astrophysics Data System (ADS)
O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.
2015-12-01
Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
Standardizing Navigation Data: A Status Update
NASA Technical Reports Server (NTRS)
VanEepoel, John M.; Berry, David S.; Pallaschke, Siegmar; Foliard, Jacques; Kiehling, Reinhard; Ogawa, Mina; Showell, Avanaugh; Fertig, Juergen; Castronuovo, Marco
2007-01-01
This paper presents the work of the Navigation Working Group of the Consultative Committee for Space Data Systems (CCSDS) on development of standards addressing the transfer of orbit, attitude and tracking data for space objects. Much progress has been made since the initial presentation of the standards in 2004, including the progression of the orbit data standard to an accepted standard, and the near completion of the attitude and tracking data standards. The orbit, attitude and tracking standards attempt to address the predominant parameterizations for their respective data, and create a message format that enables communication of the data across space agencies and other entities. The messages detailed in each standard are built upon a keyword = value paradigm: a fixed list of keywords is provided in each standard with which users specify information about their data and encapsulate the data itself. The paper presents a primer on the CCSDS standardization process to put the state of the message standards in context, describes the parameterizations supported in each standard, and then shows examples of these standards for orbit, attitude and tracking data. Finalization of the standards is expected by the end of calendar year 2007.
The effect of rain characteristics on scavenging rate of tritium-oxide from the atmosphere
NASA Astrophysics Data System (ADS)
Piskunov, V. N.; Golubev, A. V.; Balashov, Yu. S.; Mavrin, S. V.; Golubeva, V. N.; Aleinikov, A. Yu.; Kovalenko, V. P.; Solomatin, I. I.
2012-12-01
The results of field experiments, involving HTO scavenging from the atmosphere by precipitation in the vicinity of HT and HTO emission sources, are presented. The experiments were aimed at obtaining direct experimental data on atmospheric HTO scavenging for a variety of rain characteristics (rain intensity and drop spectra). The most reliable calculations of the precipitation wash-out rate are those that integrate the exchange constant over the drop spectrum. The results of such calculations are in good agreement with the experimental data and can serve as a basis for generalized parameterization dependences. It is shown that the exact calculation can be replaced by a simpler formula using the mean-value theorem. For the known approximations of rain drop spectra, formulas were obtained that parameterize the dependence of the wash-out rate Λ on the precipitation intensity p. This approach can be used for rapid assessment, as well as to determine gas wash-out parameters for the numerical codes used to calculate the transport and removal of impurities from the atmosphere.
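Parameterized dependences of the wash-out rate on rain intensity, of the kind derived above, are commonly cast as a power law. A minimal sketch follows; the coefficients `a` and `b` are hypothetical placeholders, not the values fitted in the study.

```python
def washout_rate(p_mm_per_hr, a=1.0e-4, b=0.7):
    """Scavenging (wash-out) coefficient Lambda [1/s] as a power law of
    rain intensity p [mm/h]: Lambda = a * p**b.

    a and b are illustrative placeholders; in practice they are fitted
    from drop-spectrum integrations or field data."""
    return a * p_mm_per_hr ** b

# Lambda grows sublinearly with rain intensity when b < 1
rates = [washout_rate(p) for p in (1.0, 5.0, 20.0)]
```

A sublinear exponent (b < 1) reflects that heavier rain has larger drops, which are less efficient scavengers per unit of rainfall.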
NASA Astrophysics Data System (ADS)
Wang, D.; Shprits, Y.; Spasojevic, M.; Zhu, H.; Aseev, N.; Drozdov, A.; Kellerman, A. C.
2017-12-01
In situ satellite observations, theoretical studies and model simulations have suggested that chorus waves play a significant role in the dynamic evolution of relativistic electrons in the Earth's radiation belts. In this study, we developed new wave frequency and amplitude models for upper-band and lower-band chorus waves that depend on Magnetic Local Time (MLT), L-shell, latitude and geomagnetic conditions (indexed by Kp), using measurements from the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrument onboard the Van Allen Probes. Utilizing the quasi-linear full diffusion code, we calculated the corresponding diffusion coefficients in each MLT sector (1 hour resolution) for upper-band and lower-band chorus waves according to the newly developed wave models. Compared with former parameterizations of chorus waves, the new parameterizations result in differences in diffusion coefficients that depend on energy and pitch angle. Utilizing the obtained diffusion coefficients, the lifetime of energetic electrons is parameterized accordingly. In addition, to investigate the effects of the obtained diffusion coefficients in different MLT sectors and under different geomagnetic conditions, we performed simulations using four-dimensional Versatile Electron Radiation Belt simulations and validated the results against observations.
On Making a Distinguished Vertex Minimum Degree by Vertex Deletion
NASA Astrophysics Data System (ADS)
Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf; Uhlmann, Johannes
For directed and undirected graphs, we study the problem of making a distinguished vertex the unique minimum-(in)degree vertex through deletion of a minimum number of vertices. The corresponding NP-hard optimization problems are motivated by applications concerning control in elections and social network analysis. Continuing previous work for the directed case, we show that the problem is W[2]-hard when parameterized by the graph's feedback arc set number, whereas it becomes fixed-parameter tractable when combining the parameters "feedback vertex set number" and "number of vertices to delete". For the so far unstudied undirected case, we show that the problem is NP-hard and W[1]-hard when parameterized by the "number of vertices to delete". On the positive side, we show fixed-parameter tractability for several parameterizations measuring tree-likeness, including a vertex-linear problem kernel with respect to the parameter "feedback edge set number". In contrast, we show that no polynomial-size problem kernel exists for the combined parameter "vertex cover number and number of vertices to delete", implying corresponding non-existence results when vertex cover number is replaced by treewidth or feedback vertex set number.
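The undirected problem studied above can be made concrete with a brute-force sketch: try ever-larger deletion sets until the distinguished vertex is the unique minimum-degree vertex. This is exponential time and purely illustrative (the abstract shows the problem is NP-hard, so no efficient exact algorithm is expected in general); it is not the paper's fixed-parameter algorithm.

```python
from itertools import combinations

def min_deletions_to_unique_min_degree(adj, target):
    """Smallest number of vertex deletions (never deleting `target`) that
    make `target` the unique minimum-degree vertex of an undirected graph
    given as {vertex: set_of_neighbors}. Brute force over all deletion
    sets, so exponential time; for illustration only."""
    others = [v for v in adj if v != target]
    for k in range(len(others) + 1):
        for dels in combinations(others, k):
            removed = set(dels)
            kept = [v for v in adj if v not in removed]
            # degrees in the graph induced on the kept vertices
            deg = {v: len(adj[v] - removed) for v in kept}
            if all(deg[target] < deg[v] for v in kept if v != target):
                return k
    return None

# path a-b-c-d: endpoints a and d tie for minimum degree;
# deleting the middle vertex b isolates a, making it the unique minimum
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
k = min_deletions_to_unique_min_degree(path, "a")
```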
Intercomparison of land-surface parameterizations launched
NASA Astrophysics Data System (ADS)
Henderson-Sellers, A.; Dickinson, R. E.
One of the crucial tasks for climatic and hydrological scientists over the next several years will be validating the land surface process parameterizations used in climate models. There is not necessarily a unique set of parameters to be used: different scientists will want to attempt to capture processes through various methods [for example, Avissar and Verstraete, 1990]. Validation of some aspects of the performance of the available (and proposed) schemes is clearly required. It would also be valuable to compare the behavior of the existing schemes [for example, Dickinson et al., 1991; Henderson-Sellers, 1992a]. The WMO-CAS Working Group on Numerical Experimentation (WGNE) and the Science Panel of the GEWEX Continental-Scale International Project (GCIP) [for example, Chahine, 1992] have agreed to launch the joint WGNE/GCIP Project for Intercomparison of Land-Surface Parameterization Schemes (PILPS). The principal goal of this project is to achieve greater understanding of the capabilities and potential applications of existing and new land-surface schemes in atmospheric models. It is not anticipated that a single "best" scheme will emerge. Rather, the aim is to explore alternative models in ways compatible with their authors' or exploiters' goals and to increase understanding of the characteristics of these models in the scientific community.
A linear geospatial streamflow modeling system for data sparse environments
Asante, Kwabena O.; Arlan, Guleid A.; Pervez, Md Shahriar; Rowland, James
2008-01-01
In many river basins around the world, inaccessibility of flow data is a major obstacle to water resource studies and operational monitoring. This paper describes a geospatial streamflow modeling system which is parameterized with global terrain, soils and land cover data and run operationally with satellite‐derived precipitation and evapotranspiration datasets. Simple linear methods transfer water through the subsurface, overland and river flow phases, and the resulting flows are expressed in terms of standard deviations from mean annual flow. In sample applications, the modeling system was used to simulate flow variations in the Congo, Niger, Nile, Zambezi, Orange and Lake Chad basins between 1998 and 2005, and the resulting flows were compared with mean monthly values from the open‐access Global River Discharge Database. While the uncalibrated model cannot predict the absolute magnitude of flow, it can quantify flow anomalies in terms of relative departures from mean flow. Most of the severe flood events identified in the flow anomalies were independently verified by the Dartmouth Flood Observatory (DFO) and the Emergency Disaster Database (EM‐DAT). Despite its limitations, the modeling system is valuable for rapid characterization of the relative magnitude of flood hazards and seasonal flow changes in data sparse settings.
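Expressing flows as standard deviations from the mean annual flow, as described above, is a simple standardization (z-score). A sketch with synthetic monthly flows, not actual basin data:

```python
def flow_anomaly(flows):
    """Standardized flow anomalies: departures from the mean flow in
    units of standard deviation (z-scores). Useful when an uncalibrated
    model can rank relative departures but not absolute magnitudes."""
    n = len(flows)
    mean = sum(flows) / n
    sd = (sum((q - mean) ** 2 for q in flows) / n) ** 0.5
    return [(q - mean) / sd for q in flows]

# a flood month stands out as a large positive anomaly
z = flow_anomaly([100.0, 120.0, 95.0, 300.0, 110.0])
```

Because the standardization removes the absolute scale, two basins with very different discharge volumes can be compared on the same anomaly axis.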
Prediction of convective activity using a system of parasitic-nested numerical models
NASA Technical Reports Server (NTRS)
Perkey, D. J.
1976-01-01
A limited area, three dimensional, moist, primitive equation (PE) model is developed to test the sensitivity of quantitative precipitation forecasts to the initial relative humidity distribution. Special emphasis is placed on the squall-line region. To accomplish the desired goal, time dependent lateral boundaries and a general convective parameterization scheme suitable for mid-latitude systems were developed. The sequential plume convective parameterization scheme presented is designed to have the versatility necessary in mid-latitudes and to be applicable for short-range forecasts. The results indicate that the scheme is able to function in the frontally forced squallline region, in the gently rising altostratus region ahead of the approaching low center, and in the over-riding region ahead of the warm front. Three experiments are discussed.
NASA Astrophysics Data System (ADS)
Kitanidis, P. K.
2017-08-01
Dispersion in porous media is the combined effect of variability in fluid velocity and concentration at scales smaller than those resolved, and it contributes to spreading and mixing. It is usually introduced in textbooks and taught in classes through the Fick-Scheidegger parameterization, which is presented as a scientific law of universal validity. This parameterization is based on observations in bench-scale laboratory experiments using homogeneous media. Fickian means that the dispersive flux is proportional to the gradient of the resolved concentration, while the Scheidegger parameterization is a particular way to compute the dispersion coefficients. The unresolved scales are thus associated with the pore-grain geometry that is ignored when the composite pore-grain medium is replaced by a homogeneous continuum. However, the challenge faced in practice is how to account for dispersion in numerical models that discretize the domain into blocks, often cubic meters in size, that contain multiple geologic facies. Although the Fick-Scheidegger parameterization is by far the most commonly used, its validity has been questioned. This work presents a method of teaching dispersion that emphasizes the physical basis of dispersion and highlights the conditions under which a Fickian dispersion model is justified. In particular, we show that Fickian dispersion has a solid physical basis provided that an equilibrium condition is met. The issue of the Scheidegger parameterization is more complex, but it is shown that the approximation that the dispersion coefficients scale linearly with the mean velocity is often reasonable, at least as a practical approximation, though not necessarily always appropriate. In hydrogeology, the Scheidegger feature of constant dispersivity is generally regarded as a physical law inseparable from the Fickian model, but both perceptions are wrong.
We also explain why Fickian dispersion fails under certain conditions, such as dispersion inside and directly upstream of a contaminant source. Other issues discussed are the relevance of column tests and confusion regarding the meaning of the terms "dispersion" and "Fickian".
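The Scheidegger parameterization discussed above takes the longitudinal dispersion coefficient to scale linearly with the mean velocity, with a constant dispersivity. A minimal sketch (the dispersivity value is illustrative, not a recommended field value):

```python
def dispersion_coefficient(velocity, dispersivity=0.1, d_molecular=1e-9):
    """Scheidegger-type longitudinal dispersion coefficient [m^2/s]:

        D_L = alpha_L * |v| + D_m

    where alpha_L is the (assumed constant) dispersivity [m], v the mean
    pore velocity [m/s], and D_m molecular diffusion [m^2/s]. The Fickian
    dispersive flux is then J = -D_L * dc/dx."""
    return dispersivity * abs(velocity) + d_molecular
```

The linear-in-velocity scaling is exactly the feature the abstract notes is often a reasonable practical approximation but not a universal physical law.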
Automation of a Linear Accelerator Dosimetric Quality Assurance Program
NASA Astrophysics Data System (ADS)
Lebron Gonzalez, Sharon H.
According to the American Society of Radiation Oncology, two-thirds of all cancer patients will receive radiation therapy during their illness, with the majority of the treatments being delivered by a linear accelerator (linac). Therefore, quality assurance (QA) procedures must be enforced in order to deliver treatments with a machine in proper working order. The overall goal of this project is to automate the linac's dosimetric QA procedures by analyzing and accomplishing various tasks. First, the photon beam dosimetry (i.e. total scatter correction factor, infinite percentage depth dose (PDD) and profiles) was parameterized. Parameterization consists of defining the parameters necessary for the specification of a dosimetric quantity model, creating a data set that is portable and easy to implement for different applications, including: beam modeling data input into a treatment planning system (TPS), comparison of measured and TPS-modelled data, QA of a linac's beam characteristics, and the establishment of a standard data set for comparison with other data. Second, this parameterization model was used to develop a universal method to determine the radiation field size of flattened (FF), flattening-filter-free (FFF) and wedge beams, which we termed the parameterized gradient method (PGM). Third, the parameterized model was also used to develop a profile-based method for assessing the beam quality of photon FF and FFF beams using an ionization chamber array. The PDD and PDD change were also predicted from the measured profile. Lastly, methods were created to automate the multileaf collimator (MLC) calibration and QA procedures, as well as the acquisition of the parameters included in monthly and annual photon dosimetric QA. A two-field technique was used for the calculation of the MLC leaf relative offsets using an electronic portal imaging device (EPID).
A step-and-shoot technique was used to accurately acquire the radiation field size, flatness, symmetry, output and beam quality specifiers in a single delivery to an ionization chamber array for FF and FFF beams.
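A gradient-based field-size determination of the kind described above can be sketched as follows: the field edges are taken where the magnitude of the profile gradient peaks (the penumbra inflection points). This is a simplified stand-in operating on raw samples of a synthetic profile; the actual PGM works on a parameterized analytic profile model.

```python
import math

def field_size_from_profile(positions, doses):
    """Estimate radiation field size as the distance between the two
    points of steepest dose gradient (penumbra inflections) in a
    crossline/inline profile. Simplified illustration, not the PGM."""
    grads = [(doses[i + 1] - doses[i - 1]) / (positions[i + 1] - positions[i - 1])
             for i in range(1, len(doses) - 1)]
    left = max(range(len(grads)), key=lambda i: grads[i])    # steepest rise
    right = min(range(len(grads)), key=lambda i: grads[i])   # steepest fall
    return positions[right + 1] - positions[left + 1]

# synthetic flattened profile with penumbra edges near +/-10 (arbitrary units)
xs = [float(x) for x in range(-20, 21)]
ds = [1.0 / (1.0 + math.exp(-(x + 10.0))) / (1.0 + math.exp(x - 10.0)) for x in xs]
size = field_size_from_profile(xs, ds)
```

Working from a fitted parameterized profile rather than noisy samples, as the PGM does, makes the gradient extrema far more robust in practice.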
NASA Astrophysics Data System (ADS)
Chen, Y. H.; Kuo, C. P.; Huang, X.; Yang, P.
2017-12-01
Clouds play an important role in the Earth's radiation budget, and thus realistic and comprehensive treatments of cloud optical properties and cloudy-sky radiative transfer are crucial for simulating weather and climate. However, most GCMs neglect LW scattering effects by clouds and tend to use inconsistent cloud SW and LW optical parameterizations. Recently, co-authors of this study developed a new LW optical properties parameterization for ice clouds, based on ice cloud particle statistics from MODIS measurements and state-of-the-art scattering calculations. A two-stream multiple-scattering scheme has also been implemented into RRTMG_LW, a longwave radiation scheme widely used by climate modeling centers. This study integrates both the new LW cloud-radiation scheme for ice clouds and the modified RRTMG_LW with scattering capability into the NCAR CESM to improve the treatment of cloud longwave radiation. A number of single column model (SCM) simulations using observations from the ARM SGP site on July 18 to August 4, 1995 are carried out to assess the impact of the new LW cloud optical properties and the scattering-enabled radiation scheme on the simulated radiation budget and cloud radiative effect (CRE). The SCM simulation allows the cloud and radiation schemes to interact with the other parameterizations, but the large-scale forcing is prescribed or nudged. Compared with the results from the SCM of the standard CESM, the new ice cloud optical properties alone lead to an increase of LW CRE by 26.85 W m-2 on average, as well as an increase of the downward LW flux at the surface by 6.48 W m-2. Enabling LW cloud scattering further increases the LW CRE by another 3.57 W m-2 and the downward LW flux at the surface by 0.2 W m-2. The change of LW CRE is mainly due to an increase of cloud top height, which enhances the LW CRE. A long-term simulation of CESM will be carried out to further understand the impact of such changes on simulated climates.
Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs.
NASA Astrophysics Data System (ADS)
Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Cheng, A.
2017-12-01
A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecasting System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity, and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation, and cloudiness. Unlike other similar methods, comparatively few new prognostic variables need to be introduced, making the technique computationally efficient. In the base version of SHOC the only new prognostic variable is SGS turbulent kinetic energy (TKE); in the developmental version it is SGS TKE plus the variances of total water and moist static energy (MSE). SHOC is now incorporated into a version of GFS that will become part of the NOAA Next Generation Global Prediction System, built around NOAA GFDL's FV3 dynamical core, the NOAA Environmental Modeling System (NEMS) coupled modeling infrastructure software, and a set of novel physical parameterizations. Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or large-scale condensation/deposition; instead, SHOC provides these quantities. The radiative transfer parameterization uses the cloudiness computed by SHOC. An outstanding problem with the implementation of SHOC in the NCEP global models is excessively large high-level tropical cloudiness. Comparison of the moments of the SGS PDF diagnosed by SHOC to the moments calculated in a GigaLES simulation of a tropical deep convection case (GATE) shows that SHOC diagnoses too-narrow PDF distributions of total cloud water and MSE in areas of deep convective detrainment.
A subsequent sensitivity study of SHOC's diagnosed cloud fraction (CF) to higher order input moments of the SGS PDF demonstrated that CF is improved if SHOC is provided with correct variances of total water and MSE. Consequently, SHOC was modified to include two new prognostic equations for variances of total water and MSE, and coupled with the Chikira-Sugiyama parameterization of deep convection to include effects of detrainment on the prognostic variances.
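Diagnosing cloud fraction from an assumed SGS PDF, as discussed above, can be sketched for the single-Gaussian case: cloud fraction is the probability that subgrid total water exceeds saturation. The actual SHOC PDF is a more flexible joint form; this is an illustrative simplification.

```python
import math

def cloud_fraction_gaussian(qt_mean, qt_sd, q_sat):
    """Cloud fraction from an assumed Gaussian subgrid PDF of total
    water qt: the fraction of the grid box where qt exceeds saturation,
    CF = P(qt > q_sat). Units cancel as long as all three arguments
    share them (e.g. g/kg)."""
    z = (q_sat - qt_mean) / (qt_sd * math.sqrt(2.0))
    return 0.5 * math.erfc(z)
```

This makes the abstract's sensitivity result intuitive: with the grid mean below saturation, CF is controlled by the PDF width, so a too-narrow diagnosed variance directly biases the diagnosed cloud fraction.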
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Looy, Kris; Bouma, Johan; Herbst, Michael
Soil, through its various functions, plays a vital role in the Earth's ecosystems and provides multiple ecosystem services to humanity. Pedotransfer functions (PTFs) are simple to complex knowledge rules that relate available soil information to soil properties and variables that are needed to parameterize soil processes. In this article, we review the existing PTFs and document the new generation of PTFs developed in the different disciplines of Earth system science. To meet the methodological challenges for a successful application in Earth system modeling, we emphasize that PTF development has to go hand in hand with suitable extrapolation and upscaling techniques such that the PTFs correctly represent the spatial heterogeneity of soils. PTFs should encompass the variability of the estimated soil property or process, in such a way that the estimation of parameters allows for validation and can also confidently provide for extrapolation and upscaling purposes, capturing the spatial variation in soils. The most actively pursued recent developments are related to parameterizations of solute transport, heat exchange, soil respiration, organic carbon content, root density, and vegetation water uptake. Further challenges are to be addressed in the parameterization of soil erosivity and land use change impacts at multiple scales. We argue that a comprehensive set of PTFs can be applied throughout a wide range of disciplines of Earth system science, with emphasis on land surface models. Novel sensing techniques provide a true breakthrough for this, yet further improvements are necessary for methods to deal with uncertainty and to validate applications at global scale.
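A pedotransfer function in its simplest form is a regression from readily available soil data to a needed model parameter. The sketch below uses entirely hypothetical coefficients purely to show the shape of such a knowledge rule; real PTFs are fitted to soil databases and come with uncertainty estimates.

```python
def ptf_field_capacity(clay_frac, sand_frac, org_carbon_frac):
    """Hypothetical linear pedotransfer function: estimate volumetric
    water content at field capacity [m3/m3] from clay fraction, sand
    fraction, and organic carbon fraction. All coefficients are
    illustrative placeholders, not fitted values."""
    return 0.30 + 0.25 * clay_frac - 0.15 * sand_frac + 0.50 * org_carbon_frac
```

The signs encode the usual qualitative expectations (more clay and organic carbon retain more water; more sand retains less), which is the kind of relation a fitted PTF quantifies.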
Van Looy, Kris; Bouma, Johan; Herbst, Michael; ...
2017-12-28
NASA Astrophysics Data System (ADS)
Felfelani, F.; Pokhrel, Y. N.
2017-12-01
In this study, we use in-situ observations and satellite data of soil moisture and groundwater to improve the irrigation and groundwater parameterizations in version 4.5 of the Community Land Model (CLM). The irrigation application trigger, which is based on a soil moisture deficit mechanism, is enhanced by integrating soil moisture observations and data from the Soil Moisture Active Passive (SMAP) mission, which has been available since 2015. Further, we incorporate different irrigation application mechanisms based on schemes used in various other land surface models (LSMs) and carry out a sensitivity analysis using point simulations at two different irrigated sites in Mead, Nebraska, where data from the AmeriFlux observational network are available. We then conduct regional simulations over the entire High Plains region and evaluate model results with the available irrigation water use data at the county scale. Finally, we present results of groundwater simulations by implementing a simple pumping scheme based on our previous studies. Results from the implementation of the irrigation parameterizations used in various LSMs show relatively large differences in the vertical soil moisture profile (e.g., 0.2 mm3/mm3) at the point scale, which mostly diminish when averaged over relatively large regions (e.g., 0.04 mm3/mm3 in the High Plains region). It is found that the original irrigation module in CLM 4.5 tends to overestimate soil moisture content compared to both point observations and SMAP, and that the results from the improved scheme linked with the groundwater pumping scheme show better agreement with the observations.
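A soil-moisture-deficit irrigation trigger of the kind described above can be sketched in a few lines. The trigger fraction and target values are illustrative, not the CLM defaults.

```python
def irrigation_demand(theta, theta_target, trigger_frac=0.5):
    """Deficit-based irrigation trigger: when volumetric soil moisture
    theta falls below trigger_frac * theta_target, apply enough water to
    restore theta_target; otherwise apply nothing. Simplified scheme,
    not the exact CLM 4.5 formulation."""
    if theta < trigger_frac * theta_target:
        return theta_target - theta
    return 0.0
```

Assimilating observed or SMAP soil moisture into `theta`, rather than using the model's own state, is one way a trigger like this can be constrained by data.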
Multiple Scattering Principal Component-based Radiative Transfer Model (PCRTM) from Far IR to UV-Vis
NASA Astrophysics Data System (ADS)
Liu, X.; Wu, W.; Yang, Q.
2017-12-01
Modern hyperspectral satellite remote sensors such as AIRS, CrIS, IASI, and CLARREO all require accurate and fast radiative transfer models that can deal with multiple scattering by clouds and aerosols in order to explore their information content. However, performing full radiative transfer calculations using multiple-stream methods such as discrete ordinate (DISORT), adding-doubling (AD), or successive order of scattering (SOS) is very time consuming. We have developed a principal component-based radiative transfer model (PCRTM) that reduces the computational burden by orders of magnitude while maintaining high accuracy. By exploiting spectral correlations, PCRTM reduces the number of radiative transfer calculations in the frequency domain. It further uses a hybrid stream method to decrease the number of calls to the computationally expensive multiple scattering calculations with high stream numbers. Other fast parameterizations have been used in the infrared spectral region to reduce the computational time to milliseconds for an AIRS forward simulation (2378 spectral channels). PCRTM has been developed to cover the spectral range from far IR to UV-Vis. The model has been used for satellite data inversions, proxy data generation, inter-satellite calibrations, spectral fingerprinting, and climate OSSEs. We will show examples of applying PCRTM to single field-of-view cloudy retrievals of atmospheric temperature, moisture, trace gases, clouds, and surface parameters. We will also show how PCRTM is used for the NASA CLARREO project.
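The core PCRTM idea of exploiting spectral correlation can be illustrated with a toy principal-component compression: because neighboring channels are highly correlated, a spectrum of hundreds of channels can be represented by a handful of PC scores and reconstructed from them. The synthetic spectra below stand in for a real training set; this is not the operational PCRTM.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "training" spectra: 500 channels driven by only 3 underlying modes
modes = rng.normal(size=(3, 500))
spectra = rng.normal(size=(200, 3)) @ modes

mean = spectra.mean(axis=0)
# principal components of the centered training spectra via SVD
_, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
pcs = vt[:3]                               # leading 3 PCs span the data

# a new spectrum is summarized by 3 scores instead of 500 channel values
new_spectrum = rng.normal(size=(1, 3)) @ modes
scores = (new_spectrum - mean) @ pcs.T
reconstructed = mean + scores @ pcs
err = np.abs(reconstructed - new_spectrum).max()
```

In PCRTM the expensive monochromatic radiative transfer is done only at a reduced set of frequencies sufficient to predict the PC scores, which is where the orders-of-magnitude speedup comes from.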
NASA Astrophysics Data System (ADS)
Zhang, Y.; Sartelet, K.; Wu, S.-Y.; Seigneur, C.
2013-02-01
Comprehensive model evaluation and comparison of two 3-D air quality modeling systems (the Weather Research and Forecast model (WRF)/Polyphemus, and WRF with chemistry and the Model of Aerosol Dynamics, Reaction, Ionization, and Dissolution (WRF/Chem-MADRID)) are conducted over western Europe. Part 1 describes the background information for the model comparison and simulation design, as well as the application of WRF for January and July 2001 over triple-nested domains in western Europe at three horizontal grid resolutions: 0.5°, 0.125°, and 0.025°. Six simulated meteorological variables (temperature at 2 m (T2), specific humidity at 2 m (Q2), relative humidity at 2 m (RH2), wind speed at 10 m (WS10), wind direction at 10 m (WD10), and precipitation (Precip)) are evaluated using available observations in terms of spatial distribution, domainwide daily and site-specific hourly variations, and domainwide performance statistics. WRF demonstrates its capability in capturing diurnal/seasonal variations and spatial gradients of major meteorological variables. While the domainwide performance of T2, Q2, RH2, and WD10 at all three grid resolutions is satisfactory overall, large positive or negative biases occur in WS10 and Precip even at 0.025°. In addition, discrepancies between simulations and observations exist in T2, Q2, WS10, and Precip at mountain/high-altitude sites and large urban center sites in both months, in particular during snow events or thunderstorms. These results indicate the model's difficulty in capturing meteorological variables in complex terrain and subgrid-scale meteorological phenomena, due to inaccuracies in model initialization (e.g. lack of soil temperature and moisture nudging), limitations in the physical parameterizations (e.g. planetary boundary layer, cloud microphysics, cumulus, and ice nucleation treatments), and limitations in surface heat and moisture budget parameterizations (e.g. snow-related processes, subgrid-scale surface roughness elements, and urban canopy/heat island and CO2 dome treatments). While the use of finer grid resolutions of 0.125° and 0.025° shows some improvement for WS10, Precip, and some mesoscale events (e.g. strong forced convection and heavy precipitation), it does not significantly improve the overall statistical performance for meteorological variables other than Precip. These results indicate a need to further improve the model representations of the above parameterizations at all scales.
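Domainwide performance statistics such as the mean bias and root-mean-square error used above for T2, Q2, WS10 and Precip can be sketched as:

```python
def bias_and_rmse(simulated, observed):
    """Mean bias (simulated minus observed) and root-mean-square error
    for paired model/observation samples of equal length."""
    n = len(simulated)
    diffs = [s - o for s, o in zip(simulated, observed)]
    bias = sum(diffs) / n
    rmse = (sum(d * d for d in diffs) / n) ** 0.5
    return bias, rmse
```

Bias exposes systematic over- or under-prediction, while RMSE also penalizes scatter; a variable can have near-zero bias and still a large RMSE, which is why both are reported.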
NASA Astrophysics Data System (ADS)
Baek, Sunghye
2017-07-01
For more efficient and accurate computation of radiative flux, improvements have been achieved in two aspects: integration of the radiative transfer equation over space and over angle. First, the treatment of the Monte Carlo independent column approximation (McICA) is modified with a focus on efficiency, using a reduced number of random samples ("G-packed") within a reconstructed and unified radiation package. The original McICA takes 20% of the CPU time of radiation in the Global/Regional Integrated Model system (GRIMs). The CPU time consumed by McICA is reduced by 70% without compromising accuracy. Second, the parameterizations of the shortwave two-stream approximations are revised to reduce errors with respect to the 16-stream discrete ordinate method. The delta-scaled two-stream approximation (TSA) is almost unanimously used in general circulation models (GCMs) but contains systematic errors that overestimate forward-peak scattering as solar elevation decreases. These errors are alleviated by adjusting the parameterizations for each scattering element (aerosol, liquid, ice and snow cloud particles). Parameterizations are determined with 20,129 atmospheric columns of GRIMs data and tested with 13,422 independent data columns. The result shows that the root-mean-square error (RMSE) over all atmospheric layers is decreased by 39% on average without a significant increase in computational time. The revised TSA, developed and validated with a separate one-dimensional model, is mounted on GRIMs for mid-term numerical weather forecasting. Monthly averaged global forecast skill scores are unchanged with the revised TSA, but the temperature at lower levels of the atmosphere (pressure ≥ 700 hPa) is slightly increased (< 0.5 K) with the corrected atmospheric absorption.
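The delta scaling underlying the two-stream revisions above has a standard closed form. A sketch of the common delta-Eddington transformation of optical depth, single-scattering albedo, and asymmetry factor, with the forward-peak fraction taken as f = g² (the usual choice; the paper's revised element-by-element adjustments are not reproduced here):

```python
def delta_scale(tau, omega, g):
    """Delta-Eddington scaling: fold a forward-scattering peak carrying a
    fraction f = g**2 of the phase function into the direct beam.
    Returns the scaled optical depth, single-scattering albedo, and
    asymmetry factor (tau', omega', g')."""
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s
```

For isotropic scattering (g = 0) the transformation is the identity; for strongly forward-peaked cloud particles it substantially reduces the effective optical depth, which is exactly where the low-sun biases the abstract describes arise.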
NASA Astrophysics Data System (ADS)
Miller, S. D.; Freitas, H.; Read, E.; Goulden, M. L.; Rocha, H.
2007-12-01
Gas evasion from Amazonian rivers and lakes to the atmosphere has been estimated to play an important role in the regional budget of carbon dioxide (Richey et al., 2002) and the global budget of methane (Melack et al., 2004). These flux estimates were calculated by combining remote sensing estimates of inundation area with water-side concentration gradients and gas transfer rates (piston velocities) estimated primarily from floating chamber measurements (footprint ~1 m2). The uncertainty in these fluxes was large, attributed primarily to uncertainty in the gas exchange parameterization. Direct measurements of the gas exchange coefficient are needed to improve the parameterizations in these environments and therefore reduce the uncertainty in fluxes. The micrometeorological technique of eddy covariance is attractive since it is a direct measurement of gas exchange that samples over a much larger area than floating chambers, and is amenable to use from a moving platform. We present eddy covariance carbon dioxide exchange measurements made using a small riverboat on rivers and lakes in the central Amazon near Santarem, Para, Brazil. Water-side carbon dioxide concentration was measured in situ, and the gas exchange coefficient was calculated. We found the piston velocity at a site on the Amazon River to be similar to existing ocean-based parameterizations, whereas the piston velocity at a site on the Tapajos River was roughly a factor of 5 higher. We hypothesize that the enhanced gas exchange at the Tapajos site was due to a shallow upwind fetch. Our results demonstrate the feasibility of boat-based eddy covariance on these rivers, and also the utility of a mobile platform for investigating spatial variability of gas exchange.
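Retrieving the piston velocity from the measured flux, as done above, follows from the bulk air-water exchange relation. A minimal sketch (values illustrative; units must simply be kept consistent):

```python
def piston_velocity(flux, c_water, c_air_equilibrium):
    """Gas transfer (piston) velocity k from the bulk relation
    F = k * (Cw - Ceq), where F is the measured air-water flux, Cw the
    dissolved gas concentration, and Ceq the concentration in
    equilibrium with the atmosphere. With F in mol m-2 s-1 and
    concentrations in mol m-3, k comes out in m/s."""
    return flux / (c_water - c_air_equilibrium)
```

Measuring F directly by eddy covariance and Cw in situ, as in the study, leaves k as the only unknown in the bulk relation.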
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
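The Monte Carlo coupling of an assumed subgrid PDF to microphysics can be illustrated with a toy example: draw subcolumn samples of total water from an assumed Gaussian PDF and evaluate a nonlinear condensation function on each sample, rather than on the grid mean. The distribution, threshold, and values are all illustrative, not those of the paper's scheme.

```python
import random

def mc_condensate(qt_mean, qt_sd, q_sat, n_samples=10000, seed=1):
    """Grid-mean condensate from Monte Carlo samples of an assumed
    Gaussian subgrid PDF of total water qt: condensate = max(qt - qs, 0)
    per sample, averaged over samples. Captures partial cloudiness that
    evaluating max(qt_mean - q_sat, 0) on the mean alone would miss."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        qt = rng.gauss(qt_mean, qt_sd)
        total += max(qt - q_sat, 0.0)
    return total / n_samples

# with the grid mean below saturation, the mean-only estimate is zero,
# but subgrid variability still produces nonzero condensate
mc = mc_condensate(qt_mean=8.0, qt_sd=1.0, q_sat=9.0)
```

This is why sampling the PDF matters: microphysical process rates are nonlinear, so feeding the microphysics the grid mean systematically misstates them.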
NASA Astrophysics Data System (ADS)
Ford, William I.; Fox, James F.; Pollock, Erik
2017-08-01
The fate of bioavailable nitrogen species transported through agricultural landscapes remains highly uncertain given the complexities of measuring fluxes impacting the fluvial N cycle. We present and test a new numerical model named Technology for Removable Annual Nitrogen in Streams For Ecosystem Restoration (TRANSFER), which aims to reduce model uncertainty due to erroneous parameterization, i.e., equifinality, in stream nitrogen cycle assessment and to quantify the significance of transient and permanent removal pathways. TRANSFER couples nitrogen elemental and stable isotope mass-balance equations with existing hydrologic, hydraulic, sediment transport, algal biomass, and sediment organic matter mass-balance subroutines and a robust GLUE-like uncertainty analysis. We test the model in an agriculturally impacted, third-order stream reach located in the Bluegrass Region of Central Kentucky. Results of the multiobjective model evaluation highlight the ability of sediment nitrogen fingerprints, including elemental concentrations and stable N isotope signatures, to reduce equifinality of the stream N model. Advancements in the numerical simulations allow, for the first time, the significance of algal sloughing fluxes to be weighed against denitrification. Broadly, model estimates suggest that denitrification is only slightly greater than algal N sloughing (10.7% and 6.3% of the dissolved N load on average), highlighting the potential for denitrification to be overestimated by 37%. We highlight the significance of the transient N pool given the potential for this N store to be regenerated to the water column in downstream reaches, leading to harmful and nuisance algal bloom development.
The creation of Physiologically Based Pharmacokinetic (PBPK) models for a new chemical requires the selection of an appropriate model structure and the collection of a large amount of data for parameterization. Commonly, a large proportion of the needed information is collected ...
USDA-ARS?s Scientific Manuscript database
DayCent (Daily Century) is a biogeochemical model of intermediate complexity used to simulate flows of carbon and nutrients for crop, grassland, forest, and savanna ecosystems. Required model inputs are: soil texture, current and historical land use, vegetation cover, and daily maximum/minimum tempe...
USDA-ARS?s Scientific Manuscript database
The parameters used for passive soil moisture retrieval algorithms reported in the literature encompass a wide range, leading to a large uncertainty in the applicability of those values. This paper presents an evaluation of the proposed parameterizations of the tau-omega model from 1) SMAP ATBD for ...
USDA-ARS?s Scientific Manuscript database
Application of the Two-Source Energy Balance (TSEB) Model using land surface temperature (LST) requires aerodynamic resistance parameterizations for the flux exchange above the canopy layer, within the canopy air space and at the soil/substrate surface. There are a number of aerodynamic resistance f...
Spectral bidirectional reflectance of Antarctic snow: Measurements and parameterization
NASA Astrophysics Data System (ADS)
Hudson, Stephen R.; Warren, Stephen G.; Brandt, Richard E.; Grenfell, Thomas C.; Six, Delphine
2006-09-01
The bidirectional reflectance distribution function (BRDF) of snow was measured from a 32-m tower at Dome C, at latitude 75°S on the East Antarctic Plateau. These measurements were made at 96 solar zenith angles between 51° and 87° and cover wavelengths 350-2400 nm, with 3- to 30-nm resolution, over the full range of viewing geometry. The BRDF at 900 nm had previously been measured at the South Pole; the Dome C measurement at that wavelength is similar. At both locations the natural roughness of the snow surface causes the anisotropy of the BRDF to be less than that of flat snow. The inherent BRDF of the snow is nearly constant in the high-albedo part of the spectrum (350-900 nm), but the angular distribution of reflected radiance becomes more isotropic at the shorter wavelengths because of atmospheric Rayleigh scattering. Parameterizations were developed for the anisotropic reflectance factor using a small number of empirical orthogonal functions. Because the reflectance is more anisotropic at wavelengths at which ice is more absorptive, albedo rather than wavelength is used as a predictor in the near infrared. The parameterizations cover nearly all viewing angles and are applicable to the high parts of the Antarctic Plateau that have small surface roughness and, at viewing zenith angles less than 55°, elsewhere on the plateau, where larger surface roughness affects the BRDF at larger viewing angles. The root-mean-squared error of the parameterized reflectances is between 2% and 4% at wavelengths less than 1400 nm and between 5% and 8% at longer wavelengths.
NASA Astrophysics Data System (ADS)
Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.
2017-12-01
Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales. Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs generally lack the representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers). The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models. However, such parameterizations are often computationally demanding, limiting their application in large- or global-scale studies. Here, we take a different approach to developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a robust, established three-dimensional hydrological model to develop a simpler parameterization that represents aquifer to land surface interactions. The main goal of our parameterization is to simultaneously maximize the computational gain (i.e., "efficiency") while minimizing simulation errors relative to the full 3D model (i.e., "robustness"), allowing easy implementation in ESMs globally. Our study focuses primarily on the dynamics of groundwater recharge and discharge. Preliminary results show that our proposed approach significantly reduces the computational demand while keeping model deviations from the full 3D model small for these processes.
An evaluation of gas transfer velocity parameterizations during natural convection using DNS
NASA Astrophysics Data System (ADS)
Fredriksson, Sam T.; Arneborg, Lars; Nilsson, Håkan; Zhang, Qi; Handler, Robert A.
2016-02-01
Direct numerical simulations (DNS) of free surface flows driven by natural convection are used to evaluate different methods of estimating air-water gas exchange at no-wind conditions. These methods estimate the transfer velocity as a function of either the horizontal flow divergence at the surface, the turbulent kinetic energy dissipation beneath the surface, the heat flux through the surface, or the wind speed above the surface. The gas transfer is modeled via a passive scalar. The Schmidt number dependence is studied for Schmidt numbers of 7, 150, and 600. The methods using divergence, dissipation, and heat flux estimate the transfer velocity well for a range of surface heat flux values and domain depths. The two evaluated empirical methods using wind (in the limit of no wind) give reasonable estimates of the transfer velocity, depending however on the surface heat flux and surfactant saturation. The transfer velocity is shown to be well represented by the expression k_s = A |Bν|^(1/4) Sc^(-n), where A is a constant, B is the buoyancy flux, ν is the kinematic viscosity, Sc is the Schmidt number, and the exponent n depends on the water surface characteristics. The results suggest that A = 0.39, with n ≈ 1/2 for slip and n ≈ 2/3 for no-slip boundary conditions at the surface. It is further shown that slip and no-slip boundary conditions predict the heat transfer velocity corresponding to the limits of clean and highly surfactant-contaminated surfaces, respectively. This article was corrected on 22 MAR 2016. See the end of the full text for details.
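The quoted expression k_s = A |Bν|^(1/4) Sc^(-n) is easy to sketch numerically. The buoyancy flux and viscosity values below are illustrative placeholders, not values taken from the simulations:

```python
# Hedged sketch of the transfer-velocity expression quoted in the abstract:
#   k_s = A * |B * nu|**(1/4) * Sc**(-n)
# with A = 0.39, and n = 1/2 (slip / clean surface) or n = 2/3
# (no-slip / surfactant-contaminated surface). B and nu are illustrative.

def transfer_velocity(buoyancy_flux, nu, schmidt, n, A=0.39):
    """Gas transfer velocity (m/s) for convectively driven exchange."""
    return A * abs(buoyancy_flux * nu) ** 0.25 * schmidt ** (-n)

nu = 1.0e-6   # kinematic viscosity of water, m^2/s
B = 1.0e-7    # buoyancy flux, m^2/s^3 (illustrative surface-cooling case)

k_clean = transfer_velocity(B, nu, schmidt=600, n=0.5)      # slip, clean surface
k_dirty = transfer_velocity(B, nu, schmidt=600, n=2.0 / 3)  # no-slip, contaminated
# The contaminated surface gives the smaller transfer velocity, consistent
# with surfactant damping of near-surface turbulence.
```

At Sc = 600 (roughly CO2 in water at 20 °C) the clean-surface estimate is about three times the contaminated-surface estimate, illustrating how strongly the exponent n controls the result.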
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu
2016-05-28
This work investigates the promise of a "bottom-up" extended ensemble framework for developing coarse-grained (CG) models that provide predictive accuracy and transferability for describing both structural and thermodynamic properties. We employ a force-matching variational principle to determine system-independent, i.e., transferable, interaction potentials that optimally model the interactions in five distinct heptane-toluene mixtures. Similarly, we employ a self-consistent pressure-matching approach to determine a system-specific pressure correction for each mixture. The resulting CG potentials accurately reproduce the site-site rdfs, the volume fluctuations, and the pressure equations of state that are determined by all-atom (AA) models for the five mixtures. Furthermore, we demonstrate that these CG potentials provide similar accuracy for additional heptane-toluene mixtures that were not included in their parameterization. Surprisingly, the extended ensemble approach improves not only the transferability but also the accuracy of the calculated potentials. Additionally, we observe that the required pressure corrections strongly correlate with the intermolecular cohesion of the system-specific CG potentials. Moreover, this cohesion correlates with the relative "structure" within the corresponding mapped AA ensemble. Finally, the appendix demonstrates that the self-consistent pressure-matching approach corresponds to minimizing an appropriate relative entropy.
NASA Astrophysics Data System (ADS)
Mitchell, D. L.
2006-12-01
Sometimes deep physical insights can be gained through the comparison of two theories of light scattering. Comparing van de Hulst's anomalous diffraction approximation (ADA) with Mie theory yielded insights on the behavior of the photon tunneling process that resulted in the modified anomalous diffraction approximation (MADA). (Tunneling is the process by which radiation just beyond a particle's physical cross-section may undergo large-angle diffraction or absorption, contributing up to 40% of the absorption when wavelength and particle size are comparable.) Although this provided a means of parameterizing the tunneling process in terms of the real index of refraction and size parameter, it did not predict the efficiency of the tunneling process, for which an efficiency of 100% is predicted for spheres by Mie theory. This tunneling efficiency, Tf, depends on particle shape and ranges from 0 to 1.0, with 1.0 corresponding to spheres. Similarly, by comparing absorption efficiencies predicted by the Finite Difference Time Domain Method (FDTD) with efficiencies predicted by MADA, Tf was determined for nine different ice particle shapes, including aggregates. This comparison confirmed that Tf is a strong function of ice crystal shape, including the aspect ratio when applicable. Tf was lowest (< 0.36) for aggregates and plates, and largest (> 0.9) for quasi-spherical shapes. A parameterization of Tf was developed in terms of (1) ice particle shape and (2) mean particle size of the large mode (D > 70 μm) of the ice particle size distribution. For the small mode, Tf is only a function of ice particle shape. When this Tf parameterization is used in MADA, absorption and extinction efficiency differences between MADA and FDTD are within 14% over the terrestrial wavelength range 3-100 μm for all size distributions and most crystal shapes likely to be found in cirrus clouds. Using hyperspectral radiances, it is demonstrated that Tf can be retrieved from ice clouds.
Since Tf is a function of ice particle shape, this may provide a means of retrieving qualitative information on ice particle shape.
Mihailovic, Dragutin T; Alapaty, Kiran; Podrascanin, Zorica
2009-03-01
This study improves the parameterization of processes in the atmospheric boundary layer (ABL) and surface layer in air-quality and chemical-transport models. To do so, an asymmetrical, convective, non-local scheme with varying upward mixing rates is combined with a non-local turbulent kinetic energy scheme for vertical diffusion (COM). For this design, an empirically derived function of the dimensionless height in the ABL raised to the fourth power is suggested. We also suggest a new method for calculating the in-canopy resistance for dry deposition over a vegetated surface. The upward mixing rate forming the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in the layer, while the downward mixing rates are derived from mass conservation. The vertical eddy diffusivity is parameterized using the mean turbulent velocity scale obtained by vertical integration within the ABL. In-canopy resistance is calculated by integrating the inverse turbulent transfer coefficient inside the canopy, from the effective ground roughness length to the canopy source height and, further, from there to the canopy height. This combination of schemes provides a less rapid mass transport out of the surface layer into other layers, during both convective and non-convective periods, than other local and non-local schemes parameterizing mixing processes in the ABL. The suggested method for calculating the in-canopy resistance for dry deposition over a vegetated surface differs markedly from the commonly used one, particularly over forest vegetation.
In this paper, we studied the performance of a non-local turbulent kinetic energy scheme for vertical diffusion combined with a non-local convective mixing scheme with varying upward mixing in the atmospheric boundary layer (COM), and its impact on pollutant concentrations calculated with chemical and air-quality models. This scheme was also compared with a commonly used local eddy-diffusivity scheme. To examine the performance of the scheme, simulated and measured concentrations of a pollutant (NO2) were compared for the years 1999 and 2002; the comparison was made over the entire simulation domain of the chemical European Monitoring and Evaluation Program Unified model (version UNI-ACID, rv2.0), into which both schemes were incorporated. Concentrations of NO2 calculated with the COM scheme and the new parameterization of in-canopy resistance are in general higher and closer to the observations than those obtained with the local eddy-diffusivity scheme (on the order of 15-22%).
Air-ice-snow interaction in the Northern Hemisphere under different stability conditions
NASA Astrophysics Data System (ADS)
Repina, Irina; Chechin, Dmitry; Artamonov, Arseny
2013-04-01
Traditional parameterizations of the atmospheric boundary layer are based on similarity theory and on coefficients of turbulent transfer that describe the atmosphere-surface interaction and the diffusion of impurities in operational models of air pollution, weather forecasting, and climate change. A major drawback of these parameterizations is that they are not applicable to extreme stratification conditions or to flows over complex surfaces (such as sea ice, the marginal ice zone, or a stormy sea). These problems cannot be overcome within the framework of classical theory, i.e., by rectifying similarity functions or by introducing amendments to the traditional turbulent closure schemes. Lack of knowledge of the structure of the surface air layer and of the exchange of momentum, heat, and moisture between the rippling water surface and the atmosphere under different atmospheric stratifications is at present the major obstacle impeding proper functioning of operational global and regional weather prediction models and expert models of climate and climate change. This is especially important for the polar regions, where in wintertime a strongly stable boundary layer usually develops in the presence of polynyas and leads. Experimental studies of atmosphere-ice-snow interaction under different stability conditions are presented. Strongly stable and unstable conditions are discussed. Parameterizations of turbulent heat and gas exchange at the atmosphere-ocean interface are developed. The dependence of the exchange coefficients and aerodynamic roughness on atmospheric stratification over snow and ice surfaces is experimentally confirmed. The drag coefficient decreases with increasing stability, and the roughness parameter shows simple behavior. This result was obtained in the Arctic from measurements over a hummocked surface.
The value of the roughness in the Arctic is much less than that observed over snow at middle and even high latitudes of the Northern Hemisphere, because stable conditions dominate above the Arctic ice field. Under such conditions the air flow over an uneven surface behaves the way it does over an even one: depressions between ridges are filled with heavier air up to the height of the irregularities, so the air moves at the level of the ridges without entering the depressions. Increased heat and mass transfer over polynyas and leads, arising through self-organization of turbulent convection, is also found. The work was sponsored by RFBR grants and funded by Government of the Russian Federation grants.
A moist aquaplanet variant of the Held–Suarez test for atmospheric model dynamical cores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thatcher, Diana R.; Jablonowski, Christiane
2016-04-04
A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held–Suarez (HS) test that was developed for dry simulations on “a flat Earth” and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics–physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics–dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach, and suggestions are made for further application areas. Furthermore, the new moist variant of the HS test can be considered a test case of intermediate complexity.
Oelze, Michael L; Mamou, Jonathan
2016-02-01
Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation, and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years, QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient (BSC), estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter (ESD) and the effective acoustic concentration (EAC) of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. 
Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and preclinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy.
Fienen, Michael N.; Doherty, John E.; Hunt, Randall J.; Reeves, Howard W.
2010-01-01
The importance of monitoring networks for resource-management decisions is becoming more recognized, in both theory and application. Quantitative computer models provide a science-based framework to evaluate the efficacy and efficiency of existing and possible future monitoring networks. In the study described herein, two suites of tools were used to evaluate the worth of new data for specific predictions, which in turn can support efficient use of resources needed to construct a monitoring network. The approach evaluates the uncertainty of a model prediction and, by using linear propagation of uncertainty, estimates how much uncertainty could be reduced if the model were calibrated with additional information (increased a priori knowledge of parameter values or new observations). The theoretical underpinnings of the two suites of tools addressing this technique are compared, and their application to a hypothetical model based on a local model inset into the Great Lakes Water Availability Pilot model is described. Results show that meaningful guidance for monitoring network design can be obtained by using the methods explored. The validity of this guidance depends substantially on the parameterization as well; hence, parameterization must be considered not only when designing the parameter-estimation paradigm but also, importantly, when designing the prediction-uncertainty paradigm.
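The data-worth idea the abstract describes (propagate parameter uncertainty linearly into a prediction, then ask how much a candidate observation would shrink it) can be sketched for a single parameter. This is an illustrative scalar version under Gaussian/linear assumptions, not the actual tools the study compared:

```python
# Hedged sketch: first-order data-worth analysis for one parameter and one
# candidate observation. Posterior variance follows from linear (Bayesian /
# generalized least squares) updating; all values are illustrative.

def posterior_variance(prior_var, sensitivity, obs_noise_var):
    """Parameter variance after assimilating one linear observation."""
    return 1.0 / (1.0 / prior_var + sensitivity**2 / obs_noise_var)

def prediction_uncertainty(pred_sensitivity, param_var):
    """Linear propagation of parameter variance into a model prediction."""
    return pred_sensitivity**2 * param_var

prior_var = 1.0   # prior parameter variance (illustrative units)
y = 2.0           # sensitivity of the prediction to the parameter

before = prediction_uncertainty(y, prior_var)
after = prediction_uncertainty(
    y, posterior_variance(prior_var, sensitivity=0.5, obs_noise_var=0.1))
worth = before - after  # uncertainty reduction offered by the new observation
```

Ranking many candidate observations by `worth` is the scalar analogue of the monitoring-network evaluation the abstract describes; the multi-parameter case replaces the scalars with Jacobians and covariance matrices.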
Analytic expressions for the black-sky and white-sky albedos of the cosine lobe model.
Goodin, Christopher
2013-05-01
The cosine lobe model is a bidirectional reflectance distribution function (BRDF) that is commonly used in computer graphics to model specular reflections. The model is both simple and physically plausible, but physical quantities such as albedo have not been related to the parameterization of the model. In this paper, analytic expressions for calculating the black-sky and white-sky albedos from the cosine lobe BRDF model with integer exponents will be derived, to the author's knowledge for the first time. These expressions for albedo can be used to place constraints on physics-based simulations of radiative transfer such as high-fidelity ray-tracing simulations.
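The paper's analytic expressions are not reproduced here; instead, the sketch below evaluates the black-sky (directional-hemispherical) and white-sky (bi-hemispherical) albedos of a cosine-lobe BRDF by quadrature, which closed-form results could be checked against. The lobe form f_r = k cos^n(alpha) about the mirror-specular direction is one common convention for this model; the constant k is left unnormalized.

```python
import numpy as np

def black_sky_albedo(n, theta_i, k=1.0, m=256):
    """Directional-hemispherical ("black-sky") albedo of the cosine-lobe
    BRDF f_r = k * cos^n(alpha), alpha being the angle between the outgoing
    direction and the mirror-specular direction, by midpoint quadrature
    over the outgoing hemisphere."""
    # mirror direction of the incident ray about the surface normal (z-axis)
    spec = np.array([np.sin(theta_i), 0.0, np.cos(theta_i)])
    th = (np.arange(m) + 0.5) * (np.pi / 2) / m          # polar angles
    ph = (np.arange(2 * m) + 0.5) * (2 * np.pi) / (2 * m)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    wo = np.stack([np.sin(TH) * np.cos(PH),
                   np.sin(TH) * np.sin(PH),
                   np.cos(TH)], axis=-1)
    cos_a = np.clip(wo @ spec, 0.0, None)                # clamp lobe to >= 0
    dA = np.sin(TH) * (np.pi / 2 / m) * (2 * np.pi / (2 * m))
    return float(np.sum(k * cos_a**n * np.cos(TH) * dA))

def white_sky_albedo(n, k=1.0, m=16):
    """Bi-hemispherical ("white-sky") albedo: cosine-weighted average of
    the black-sky albedo over incidence angles."""
    th = (np.arange(m) + 0.5) * (np.pi / 2) / m
    w = 2.0 * np.cos(th) * np.sin(th) * (np.pi / 2 / m)  # weights sum to ~1
    return float(sum(wi * black_sky_albedo(n, ti, k) for wi, ti in zip(w, th)))

bsa = black_sky_albedo(1, 0.0)   # n=1, normal incidence: analytic 2*pi/3
wsa = white_sky_albedo(1)
```

At normal incidence and integer n the black-sky integral reduces to 2*pi*k/(n+2), a convenient spot-check for any derived closed form.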
NASA Astrophysics Data System (ADS)
Torres, Olivier; Braconnot, Pascale; Marti, Olivier; Gential, Luc
2018-05-01
The turbulent fluxes across the ocean/atmosphere interface represent one of the principal driving forces of the global atmospheric and oceanic circulation. Despite decades of effort and improvement, representing these fluxes remains a challenge because the turbulent processes act on scales much smaller than those resolved by the models. Beyond this subgrid parameterization issue, a comprehensive understanding of the impact of air-sea interactions on the climate system is still lacking. In this paper we investigate the large-scale impacts of the transfer coefficient used to compute turbulent heat fluxes with the IPSL-CM4 climate model, in which the surface bulk formula is modified. Analyzing both atmosphere-only and coupled ocean-atmosphere general circulation model (AGCM, OAGCM) simulations allows us to study the direct effect of, and the mechanisms of adjustment to, this modification. We focus on the representation of latent heat flux in the tropics. We show that the heat transfer coefficients are highly similar for a given parameterization between AGCM and OAGCM simulations. Although the same areas are affected in both kinds of simulations, the differences in surface heat fluxes are substantial. A regional modification of the heat transfer coefficient has more impact than a uniform modification in AGCM simulations, while the opposite is observed in OAGCM simulations. By studying the global energetics and the atmospheric circulation response to the modification, we highlight the role of the ocean in damping a large part of the disturbance. Modifying the heat exchange coefficient changes the way the coupled system works, owing to the link between atmospheric circulation and SST and to the different feedbacks between ocean and atmosphere. The adjustment that takes place implies a balance of net incoming solar radiation that is the same in all simulations.
As there is no change in model physics other than drag coefficient, we obtain similar latent heat flux between coupled simulations with different atmospheric circulations. Finally, we analyze the impact of model tuning and show that it can offset part of the feedbacks.
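The experiments above perturb the exchange coefficient inside the surface bulk formula. A minimal sketch of the standard bulk aerodynamic formula for latent heat flux follows; this is the textbook form, not the IPSL-CM4 code, and the constants and input values are illustrative.

```python
# Bulk aerodynamic formula: LH = rho_a * L_v * C_E * U10 * (q_s - q_a).
# C_E is the (dimensionless) transfer coefficient perturbed in the study.
RHO_A = 1.2     # air density [kg m^-3]
L_V = 2.5e6     # latent heat of vaporization [J kg^-1]

def latent_heat_flux(c_e, u10, q_s, q_a):
    """Latent heat flux [W m^-2] for exchange coefficient c_e,
    10-m wind speed u10 [m s^-1], and sea-surface / near-surface-air
    specific humidities q_s, q_a [kg kg^-1]."""
    return RHO_A * L_V * c_e * u10 * (q_s - q_a)

# Illustrative tropical-ocean values (not taken from the paper)
lh = latent_heat_flux(c_e=1.2e-3, u10=7.0, q_s=0.020, q_a=0.015)
```

With these inputs the flux is about 126 W m^-2, a typical tropical magnitude; scaling C_E directly scales the flux, which is why the coupled system must adjust SST and circulation to restore the energy balance described above.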
NASA Astrophysics Data System (ADS)
Druzhinin, O.; Troitskaya, Yu; Zilitinkevich, S.
2018-01-01
The detailed knowledge of turbulent exchange processes occurring in the marine atmospheric boundary layer is of primary importance for their correct parameterization in large-scale prognostic models. These processes are complicated, especially under sufficiently strong wind forcing, by the presence of sea-spray drops, which are torn off the crests of sufficiently steep surface waves by wind gusts. Natural observations indicate that the mass fraction of sea-spray drops increases with wind speed, and their impact on the dynamics of the air near the sea surface can become quite significant. Field experiments, however, are limited by insufficient accuracy of the acquired data and are in general costly and difficult. Laboratory modeling offers another route to investigate spray-mediated exchange processes in much more detail than natural experiments. However, laboratory measurements, both contact and Particle Image Velocimetry (PIV) methods, also suffer from an inability to resolve the dynamics of the near-surface air flow, especially in the surface wave troughs. In this report, we present a first attempt to use Direct Numerical Simulation (DNS) as a tool for investigating drop-mediated momentum, heat, and moisture transfer in a turbulent, droplet-laden air flow over a wavy water surface. DNS is capable of resolving the details of the transfer processes and does not involve the closure assumptions typical of Large-Eddy and Reynolds-Averaged Navier-Stokes (LES and RANS) simulations. Thus DNS provides a basis for improving parameterizations in LES and RANS closure models and for further development of large-scale prognostic models. In particular, we discuss numerical results showing the details of the modification of the air-flow velocity, temperature, and relative humidity fields by multidisperse, evaporating drops.
We use an Eulerian-Lagrangian approach in which the equations for the air-flow fields are solved in an Eulerian frame, whereas the drop dynamics equations are solved in a Lagrangian frame. The effects of the air flow and drops on the surface water wave are neglected. A point-force approximation is employed to model the feedback contributions of the drops to the air momentum, heat, and moisture transfer.
Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds
NASA Astrophysics Data System (ADS)
Yun, Yuxing; Penner, Joyce E.
2012-04-01
A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process, resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.
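For context, the aerosol-independent formulations that aerosol-dependent schemes are typically compared against include the widely quoted Meyers et al. (1992) fits. The sketch below uses the coefficients as commonly cited in the literature; it is illustrative and not necessarily the exact "original" formulation used in this study.

```python
import math

def meyers_deposition_in(si_percent):
    """Meyers et al. (1992) deposition/condensation-freezing ice nuclei
    concentration [per liter of air] as a function of ice supersaturation
    Si [%]:  N = exp(-0.639 + 0.1296 * Si)."""
    return math.exp(-0.639 + 0.1296 * si_percent)

def meyers_contact_in(t_k):
    """Meyers et al. (1992) contact-freezing ice nuclei [per liter]:
    N = exp(-2.80 + 0.262 * (273.15 - T)), T in kelvin."""
    return math.exp(-2.80 + 0.262 * (273.15 - t_k))

n_dep = meyers_deposition_in(5.0)     # at 5% ice supersaturation
n_con = meyers_contact_in(263.15)     # at -10 degC
```

Both fits depend only on the thermodynamic state, which is exactly the limitation the aerosol-dependent parameterizations address: the same supersaturation or temperature yields the same nuclei count regardless of the ambient aerosol.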
Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions
NASA Astrophysics Data System (ADS)
Nelson, K.; Mechem, D. B.
2014-12-01
Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes that include stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement this parameterization in a regional forecast model (NRL COAMPS) and to test it. Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus both on the relative performance of the three parameterizations and on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
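The KK2000 scheme referenced above is built from power-law fits to drop-resolving simulations of stratocumulus; its autoconversion (cloud water to rain) term, as commonly quoted, can be sketched as follows. Coefficients are from the literature; this is not the COAMPS implementation.

```python
def kk2000_autoconversion(qc, nc):
    """Khairoutdinov & Kogan (2000) autoconversion rate, as commonly quoted:
        dq_r/dt = 1350 * qc**2.47 * nc**(-1.79)   [kg kg^-1 s^-1]
    qc : cloud water mixing ratio [kg kg^-1]
    nc : cloud droplet number concentration [cm^-3]
    """
    return 1350.0 * qc**2.47 * nc**(-1.79)

# Drizzling-stratocumulus-like values (illustrative)
rate = kk2000_autoconversion(qc=5e-4, nc=100.0)
```

The strong negative exponent on droplet number is what ties drizzle production to aerosol, and the steep dependence on cloud water is one reason a fit tuned for stratocumulus need not carry over to cumulus regimes, motivating the "unified" K2013 formulation.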
Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...
2015-06-30
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.
Using Ground Measurements to Examine the Surface Layer Parameterization Scheme in NCEP GFS
NASA Astrophysics Data System (ADS)
Zheng, W.; Ek, M. B.; Mitchell, K.
2017-12-01
Understanding the behavior and the limitations of the surface layer parameterization scheme is important for parameterizing surface-atmosphere exchange processes in atmospheric models, accurately predicting near-surface temperature, and identifying the role of different physical processes in contributing to errors. In this study, we examine the surface layer parameterization scheme in the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) using ground flux measurements, including FLUXNET data. The model-simulated surface fluxes, surface temperature, and vertical profiles of temperature and wind speed are compared against the observations. The limits of applicability of Monin-Obukhov similarity theory (MOST), which describes the vertical behavior of nondimensionalized mean flow and turbulence properties within the surface layer, are quantified for daytime and nighttime using the data. Results from unstable and stable regimes are discussed.
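The MOST profiles being tested above have the familiar log-linear form. A sketch using the Businger-Dyer stability corrections follows; these are a common textbook choice and not necessarily the exact stability functions used in the GFS scheme.

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def psi_m(zeta):
    """Businger-Dyer integrated stability correction for momentum.
    zeta = z / L with Obukhov length L; zeta < 0 unstable, > 0 stable."""
    if zeta >= 0.0:                     # stable: log-linear correction
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25     # unstable
    return (2.0 * np.log((1.0 + x) / 2.0) + np.log((1.0 + x * x) / 2.0)
            - 2.0 * np.arctan(x) + np.pi / 2.0)

def wind_speed(z, ustar, z0, L):
    """MOST mean wind speed at height z [m] given friction velocity
    ustar [m s^-1], roughness length z0 [m], and Obukhov length L [m]
    (np.inf for neutral stratification)."""
    zeta = 0.0 if np.isinf(L) else z / L
    zeta0 = 0.0 if np.isinf(L) else z0 / L
    return (ustar / KAPPA) * (np.log(z / z0) - psi_m(zeta) + psi_m(zeta0))

# Neutral case reduces to the log law: u = (ustar/kappa) * ln(z/z0)
u_neutral = wind_speed(10.0, 0.3, 0.01, np.inf)
```

Comparing such profiles against tower and FLUXNET observations, regime by regime, is precisely how the limits of MOST applicability mentioned in the abstract are quantified.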
2015-06-13
The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor
Christopher Celio, David Patterson, and Krste Asanović
University of California, Berkeley, California 94720
BOOM is a synthesizable, parameterized, superscalar out-of-order RISC-V core designed to serve as the prototypical baseline processor
A Parameterized Pattern-Error Objective for Large-Scale Phase-Only Array Pattern Design
2016-03-21
...fixed uniform amplitude illumination, phase-only optimization can also find application to arrays with fixed but nonuniform tapers. Such fixed tapers...arbitrary element locations nonuniform FFT algorithms exist [43-45] that have the same asymptotic complexity as the conventional FFT, although the
Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.
Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian
2017-01-01
Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.
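A minimal example of the bounded-search-tree technique surveyed above, applied to Vertex Cover. This is the standard textbook illustration of fixed-parameter tractability, not one of the paper's biological case studies: the running time is exponential only in the parameter k, not in the graph size.

```python
def vertex_cover(edges, k):
    """Decide whether the (simple) graph given by its edge list has a
    vertex cover of size <= k, by bounded branching: O(2^k * |E|) time."""
    if not edges:
        return True            # every edge is covered
    if k == 0:
        return False           # uncovered edges remain, no budget left
    u, v = edges[0]
    # One endpoint of the first uncovered edge must be in the cover,
    # so branch on both choices with budget k - 1.
    return (vertex_cover([e for e in edges if u not in e], k - 1) or
            vertex_cover([e for e in edges if v not in e], k - 1))

# A triangle needs 2 cover vertices; a star K_{1,4} needs only its center.
tri = [(0, 1), (1, 2), (0, 2)]
star = [(0, 1), (0, 2), (0, 3), (0, 4)]
```

The same branching idea, combined with kernelization, underlies many of the practical FPT algorithms for biological problems (e.g., cluster editing) that the survey discusses.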
USDA-ARS?s Scientific Manuscript database
Application of the Two-Source Energy Balance (TSEB) Model using land surface temperature (LST) requires aerodynamic resistance parameterizations for the flux exchange above the canopy layer, within the canopy air space and at the soil/substrate surface. There are a number of aerodynamic resistance f...
Path-space variational inference for non-equilibrium coarse-grained systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics; Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr
In this paper we discuss information-theoretic tools for obtaining optimized coarse-grained molecular models for both equilibrium and non-equilibrium molecular simulations. The latter are ubiquitous in physicochemical and biological applications, where they are typically associated with coupling mechanisms, multi-physics and/or boundary conditions. In general the non-equilibrium steady states are not known explicitly as they do not necessarily have a Gibbs structure. The presented approach can compare microscopic behavior of molecular systems to parametric and non-parametric coarse-grained models using the relative entropy between distributions on the path space and setting up a corresponding path-space variational inference problem. The methods can become entirely data-driven when the microscopic dynamics are replaced with corresponding correlated data in the form of time series. Furthermore, we present connections and generalizations of force matching methods in coarse-graining with path-space information methods. We demonstrate the enhanced transferability of information-based parameterizations to different observables, at a specific thermodynamic point, due to information inequalities. We discuss methodological connections between information-based coarse-graining of molecular systems and variational inference methods primarily developed in the machine learning community. However, we note that the work presented here addresses variational inference for correlated time series due to the focus on dynamics. The applicability of the proposed methods is demonstrated on high-dimensional stochastic processes given by overdamped and driven Langevin dynamics of interacting particles.
NASA Astrophysics Data System (ADS)
Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur
2015-03-01
Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies specifically aimed at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments, hereafter called elements) and a distributed (1 × 1 km2 grid) setup. We evaluated representations of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement, and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods of up to 0.84 and 0.86, respectively, and similarly up to 0.85 and 0.90 for the log-transformed streamflow. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or subgrid heterogeneity provided simulation performance equivalent to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from a denser network of precipitation stations than is required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in identifying parameterizations based only on calibration to catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data.
In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.
NASA Astrophysics Data System (ADS)
Guo, Yamin; Cheng, Jie; Liang, Shunlin
2018-02-01
Surface downward longwave radiation (SDLR) is a key variable for calculating the earth's surface radiation budget. In this study, we evaluated seven widely used clear-sky parameterization methods using ground measurements collected from 71 globally distributed fluxnet sites. The Bayesian model averaging (BMA) method was also introduced to obtain a multi-model ensemble estimate. As a whole, the parameterization method of Carmona et al. (2014) performs the best, with an average BIAS, RMSE, and R2 of -0.11 W/m2, 20.35 W/m2, and 0.92, respectively, followed by the parameterization methods of Idso (1981), Prata (Q J R Meteorol Soc 122:1127-1151, 1996), Brunt (Q J R Meteorol Soc 58:389-420, 1932), and Brutsaert (Water Resour Res 11:742-744, 1975). The accuracy of the BMA is close to that of the parameterization method of Carmona et al. (2014) and comparable to that of the parameterization method of Idso (1981). The advantage of the BMA is that it achieves balanced results compared to the integrated single parameterization methods. To fully assess the performance of the parameterization methods, the effects of climate type, land cover, and surface elevation were also investigated. The five parameterization methods and the BMA all failed over land with the tropical climate type, with high water vapor, and had poor results over forest, wetland, and ice. These methods achieved better results over desert, bare land, cropland, and grass, and had acceptable accuracies for sites at different elevations, except for the parameterization method of Carmona et al. (2014) over high-elevation sites. Thus, a method that can be successfully applied everywhere does not exist.
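The parameterization methods compared above estimate a clear-sky effective atmospheric emissivity from screen-level vapor pressure e (hPa) and air temperature T (K), then compute SDLR = eps * sigma * T^4. The sketch below uses commonly quoted coefficient values for three of the schemes and an equal-weight ensemble in the spirit of BMA; the trained BMA weights from the study are not reproduced here.

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def emissivity_brunt(e_hpa):
    """Brunt (1932) form eps = a + b*sqrt(e), coefficients as commonly quoted."""
    return 0.52 + 0.065 * math.sqrt(e_hpa)

def emissivity_idso(e_hpa, t_k):
    """Idso (1981): eps = 0.70 + 5.95e-5 * e * exp(1500 / T)."""
    return 0.70 + 5.95e-5 * e_hpa * math.exp(1500.0 / t_k)

def emissivity_brutsaert(e_hpa, t_k):
    """Brutsaert (1975): eps = 1.24 * (e / T)**(1/7)."""
    return 1.24 * (e_hpa / t_k) ** (1.0 / 7.0)

def sdlr(eps, t_k):
    """Clear-sky surface downward longwave radiation [W m^-2]."""
    return eps * SIGMA * t_k**4

# Illustrative mid-latitude conditions and an equal-weight ensemble
t, e = 288.0, 12.0
members = [emissivity_brunt(e), emissivity_idso(e, t), emissivity_brutsaert(e, t)]
weights = [1.0 / 3.0] * 3
sdlr_bma = sdlr(sum(w * m for w, m in zip(weights, members)), t)
```

A BMA ensemble replaces the equal weights with posterior model probabilities learned from the flux-site observations, which is how it achieves the "balanced" behavior noted in the abstract.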
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
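The parameterized runup model referred to above is commonly written in the form of Stockdon et al. (2006): a setup term plus incident and infragravity swash combined into a 2% exceedance runup. A sketch follows (the separate dissipative-limit branch is omitted; input values are illustrative).

```python
import math

def stockdon_runup(h0, t0, beta):
    """2% exceedance wave runup [m], Stockdon et al. (2006) form:
        R2 = 1.1 * (setup + swash / 2)
        setup = 0.35 * beta * sqrt(H0 * L0)
        swash = sqrt(H0 * L0 * (0.563 * beta**2 + 0.004))
    h0   : deep-water significant wave height [m]
    t0   : peak wave period [s]
    beta : foreshore beach slope [-]
    """
    g = 9.81
    l0 = g * t0**2 / (2.0 * math.pi)   # deep-water wavelength
    setup = 0.35 * beta * math.sqrt(h0 * l0)
    swash = math.sqrt(h0 * l0 * (0.563 * beta**2 + 0.004))
    return 1.1 * (setup + 0.5 * swash)

# Moderate storm: 3 m, 12 s waves on a 0.05 foreshore slope
r2 = stockdon_runup(3.0, 12.0, 0.05)
```

The slope-dependent term (0.563 * beta**2) carries the incident-band swash and the constant 0.004 the infragravity contribution, which is why the abstract can discuss the two swash bands separately when comparing against the numerical simulations.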
NASA Technical Reports Server (NTRS)
Carchedi, C. H.; Gough, T. L.; Huston, H. A.
1983-01-01
The results of a variety of tests designed to demonstrate and evaluate the performance of several commercially available data base management system (DBMS) products compatible with the Digital Equipment Corporation VAX 11/780 computer system are summarized. The tests were performed on the INGRES, ORACLE, and SEED DBMS products employing applications that were similar to scientific applications under development by NASA. The objectives of this testing included determining the strengths and weaknesses of the candidate systems, the performance trade-offs of various design alternatives, and the impact of some installation and environmental (computer-related) influences.
Juan-Senabre, Xavier J; Porras, Ignacio; Lallena, Antonio M
2013-06-01
A variation of the TG-43 protocol for seeds with cylindrical symmetry, aiming at a better description of the radial and anisotropy functions, is proposed. The TG-43 two-dimensional formalism is modified by introducing a new anisotropy function. New fitting functions that permit a more robust description of the radial and anisotropy functions than the usual polynomials are also studied. The relationship between the new anisotropy function and the anisotropy factor included in the one-dimensional TG-43 formalism is analyzed. The new formalism is tested for the (125)I Nucletron selectSeed brachytherapy source, using Monte Carlo simulations performed with PENELOPE. The goodness of the new parameterizations is discussed. The results obtained indicate that precise fits can be achieved, with a better description than that provided by previous parameterizations. Special care has been taken in describing and fitting the anisotropy factor near the source. The modified formalism shows advantages over the usual one in describing the anisotropy functions. The new parameterizations can be easily implemented in clinical planning calculation systems, provided that the ratio between geometry factors is also modified according to the new dose-rate expression. Copyright © 2012 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Analytical probabilistic proton dose calculation and range uncertainties
NASA Astrophysics Data System (ADS)
Bangert, M.; Hennig, P.; Oelfke, U.
2014-03-01
We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two-dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
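The analytic tractability claimed above rests on the Gaussian convolution identity ∫ N(z; z0, σ) N(z; μk, δk) dz = N(z0; μk, sqrt(σ² + δk²)). The sketch below applies it to a hypothetical (unfitted) three-component depth-dose parameterization and checks the closed form against quadrature; it is an illustration of the mechanism, not the paper's ten-Gaussian pencil beam model.

```python
import numpy as np

def gauss(z, mu, sigma):
    """Normal density N(z; mu, sigma)."""
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def expected_dose(z0, sigma_r, mu, delta, omega):
    """Closed-form E[d] = sum_k omega_k * N(z0; mu_k, sqrt(sigma_r^2 + delta_k^2)):
    convolving a Gaussian range error with each Gaussian depth-dose
    component yields another Gaussian."""
    return float(np.sum(omega * gauss(z0, mu, np.sqrt(sigma_r**2 + delta**2))))

# Hypothetical 3-component parameterization (illustrative, not fitted data)
mu = np.array([8.0, 9.0, 10.0])      # component means [cm]
delta = np.array([0.8, 0.5, 0.3])    # component widths [cm]
omega = np.array([0.2, 0.3, 0.5])    # component weights

analytic = expected_dose(10.0, 0.3, mu, delta, omega)

# Brute-force check of the integral  E[d] = \int p(z) d(z) dz
z = np.linspace(0.0, 20.0, 20001)
f = gauss(z, 10.0, 0.3) * sum(w * gauss(z, m, s)
                              for w, m, s in zip(omega, mu, delta))
numeric = float(np.sum(f[:-1] + f[1:]) * 0.5 * (z[1] - z[0]))
```

The second moment ∫ p(z) d(z)² dz needed for the standard deviation expands into pairwise Gaussian products and is handled by the same identity, which is why both moments stay in closed form.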
Rapid parameterization of small molecules using the Force Field Toolkit.
Mayne, Christopher G; Saam, Jan; Schulten, Klaus; Tajkhorshid, Emad; Gumbart, James C
2013-12-15
The inability to rapidly generate accurate and robust parameters for novel chemical matter continues to severely limit the application of molecular dynamics simulations to many biological systems of interest, especially in fields such as drug discovery. Although the release of generalized versions of common classical force fields, for example, General Amber Force Field and CHARMM General Force Field, have posited guidelines for parameterization of small molecules, many technical challenges remain that have hampered their wide-scale extension. The Force Field Toolkit (ffTK), described herein, minimizes common barriers to ligand parameterization through algorithm and method development, automation of tedious and error-prone tasks, and graphical user interface design. Distributed as a VMD plugin, ffTK facilitates the traversal of a clear and organized workflow resulting in a complete set of CHARMM-compatible parameters. A variety of tools are provided to generate quantum mechanical target data, setup multidimensional optimization routines, and analyze parameter performance. Parameters developed for a small test set of molecules using ffTK were comparable to existing CGenFF parameters in their ability to reproduce experimentally measured values for pure-solvent properties (<15% error from experiment) and free energy of solvation (±0.5 kcal/mol from experiment). Copyright © 2013 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cegla, H. M.; Shelyag, S.; Watson, C. A.
2013-02-15
We outline our techniques to characterize photospheric granulation as an astrophysical noise source. A four-component parameterization of granulation is developed that can be used to reconstruct stellar line asymmetries and radial velocity shifts due to photospheric convective motions. The four components are made up of absorption line profiles calculated for granules, magnetic intergranular lanes, non-magnetic intergranular lanes, and magnetic bright points at disk center. These components are constructed by averaging Fe I 6302 Å magnetically sensitive absorption line profiles output from detailed radiative transport calculations of the solar photosphere. Each of the four categories adopted is based on magnetic field and continuum intensity limits determined from examining three-dimensional magnetohydrodynamic simulations with an average magnetic flux of 200 G. Using these four-component line profiles we accurately reconstruct granulation profiles, produced from modeling 12 × 12 Mm² areas on the solar surface, to within approximately ±20 cm s⁻¹ on a ~100 m s⁻¹ granulation signal. We have also successfully reconstructed granulation profiles from a 50 G simulation using the parameterized line profiles from the 200 G average magnetic field simulation. This test demonstrates applicability of the characterization to a range of magnetic stellar activity levels.
NASA Technical Reports Server (NTRS)
Chao, Winston C.
2015-01-01
The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.
An Overview of Numerical Weather Prediction on Various Scales
NASA Astrophysics Data System (ADS)
Bao, J.-W.
2009-04-01
The increasing public need for detailed weather forecasts, along with the advances in computer technology, has motivated many research institutes and national weather forecasting centers to develop and run global as well as regional numerical weather prediction (NWP) models at high resolutions (i.e., with horizontal resolutions of ~10 km or higher for global models and 1 km or higher for regional models, and with ~60 vertical levels or more). The need for running NWP models at high horizontal and vertical resolutions requires the implementation of a non-hydrostatic dynamic core with a choice of horizontal grid configurations and vertical coordinates appropriate for high resolutions. Development of advanced numerics will also be needed for high-resolution global and regional models, in particular when the models are applied to transport problems and air quality applications. In addition to the challenges in numerics, the NWP community is also facing the challenge of developing physics parameterizations that are well suited for high-resolution NWP models. For example, when NWP models are run at resolutions of ~5 km or higher, the use of much more detailed microphysics parameterizations than those currently used in NWP models will become important. Another example is that regional NWP models at ~1 km or higher only partially resolve the convective energy-containing eddies in the lower troposphere; parameterizations to account for the subgrid diffusion associated with unresolved turbulence still need to be developed. Further, physically sound parameterizations for air-sea interaction will be a critical component for tropical NWP models, particularly for hurricane prediction models. In this review presentation, the above issues will be elaborated on and approaches to address them will be discussed.
NASA Astrophysics Data System (ADS)
Chubarova, Nataly; Zhdanova, Yekaterina; Nezval, Yelena
2016-09-01
A new method for calculating the altitude dependence of UV radiation is proposed for different types of biologically active UV radiation (erythemally weighted, vitamin-D-weighted and cataract-weighted types). We show that for the specified groups of parameters the altitude UV amplification (AUV) can be presented as a composite of independent contributions of UV amplification from different factors within a wide range of their changes, with a mean uncertainty of 1 % and a standard deviation of 3 % compared with exact model simulations using the same input parameters. The parameterization takes into account the altitude dependence of molecular number density, ozone content, aerosol and spatial surface albedo. We also provide generalized altitude dependencies of the parameters for evaluating the AUV. A comparison of the altitude UV effects obtained with the proposed method shows good agreement with accurate 8-stream DISORT model simulations, with a correlation coefficient r > 0.996. Satisfactory agreement was also obtained with experimental UV data in mountain regions. Using this parameterization we analyzed the role of different geophysical parameters in UV variations with altitude. The decrease in molecular number density, especially at high altitudes, and the increase in surface albedo play the most significant roles in the UV growth. Typical aerosol and ozone altitude UV effects do not exceed 10-20 %. Using the proposed parameterization implemented in the on-line UV tool (http://momsu.ru/uv/) for Northern Eurasia over the PEEX domain, we analyzed the altitude UV increase and its possible effects on human health, considering different skin types and various open body fractions for January and April conditions in the Alpine region.
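The composite treatment of AUV described above, in which independent factor contributions combine multiplicatively, can be sketched in a few lines. The per-factor amplification values below are hypothetical placeholders, not the paper's fitted altitude dependencies.

```python
# Hedged sketch: the altitude UV amplification (AUV) treated as a product of
# independent factor contributions (molecular density, ozone, aerosol, albedo).
# The individual amplification values are HYPOTHETICAL illustrations.

def total_auv(factor_amplifications):
    """Combine per-factor UV amplifications multiplicatively."""
    total = 1.0
    for a in factor_amplifications.values():
        total *= a
    return total

factors = {              # hypothetical per-factor amplifications at some altitude
    "molecular": 1.12,   # reduced Rayleigh scattering
    "ozone": 1.03,       # reduced ozone column
    "aerosol": 1.05,     # reduced aerosol load
    "albedo": 1.20,      # snow-covered surface
}
print(round(total_auv(factors), 3))  # 1.454
```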
Aerosol hygroscopic growth parameterization based on a solute specific coefficient
NASA Astrophysics Data System (ADS)
Metzger, S.; Steil, B.; Xu, L.; Penner, J. E.; Lelieveld, J.
2011-09-01
Water is a main component of atmospheric aerosols and its amount depends on the particle chemical composition. We introduce a new parameterization for the aerosol hygroscopic growth factor (HGF), based on an empirical relation between water activity (aw) and solute molality (μs) through a single solute-specific coefficient νi. Its three main advantages are: (1) wide applicability, (2) simplicity and (3) analytical nature. (1) Our approach considers the Kelvin effect and covers ideal solutions at large relative humidity (RH), including CCN activation, as well as concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). (2) A single νi coefficient suffices to parameterize the HGF for a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. (3) In contrast to previous methods, our analytical aw parameterization depends not only on a linear correction factor for the solute molality; νi also appears in the exponent, in the form x · a^x. According to our findings, νi can be assumed constant over the entire aw range (0-1). Thus, the νi-based method is computationally efficient. In this work we focus on single-solute solutions, where νi is pre-determined with the bisection method from our analytical equations using RHD measurements and the saturation molality μs,sat. The computed aerosol HGF and supersaturation (Köhler theory) compare well with the results of the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4, relevant for CCN modeling and calibration studies. The equations introduced here provide the basis of our revised gas-liquid-solid partitioning model, i.e. version 4 of the EQuilibrium Simplified Aerosol Model (EQSAM4), described in a companion paper.
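As a point of reference for the Kelvin effect mentioned above, a minimal Köhler-type calculation expresses the equilibrium saturation ratio over a solution droplet as the water activity times a curvature term. This sketch is a generic textbook form, not the νi parameterization of the paper; the property values are standard constants for water near room temperature.

```python
import math

# Hedged sketch of a Koehler-type calculation: saturation ratio over a droplet
# as water activity times the Kelvin (curvature) factor. Illustrative only;
# this is NOT the EQSAM4 nu_i parameterization itself.

R = 8.314       # J mol-1 K-1, gas constant
Mw = 18.015e-3  # kg mol-1, molar mass of water
rho_w = 997.0   # kg m-3, density of water
sigma = 0.072   # N m-1, surface tension of water

def kelvin_factor(D, T=298.15):
    """exp(4 sigma Mw / (R T rho_w D)) for droplet diameter D (m)."""
    return math.exp(4.0 * sigma * Mw / (R * T * rho_w * D))

def saturation_ratio(aw, D, T=298.15):
    """Equilibrium saturation ratio: water activity times the Kelvin factor."""
    return aw * kelvin_factor(D, T)

# The Kelvin term matters for small droplets and vanishes for large ones:
print(kelvin_factor(50e-9) > kelvin_factor(1e-6) > 1.0)  # True
```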
Evaluation of scale-aware subgrid mesoscale eddy models in a global eddy-rich model
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank
2017-07-01
Two parameterizations for horizontal mixing of momentum and tracers by subgrid mesoscale eddies are implemented in a high-resolution global ocean model. These parameterizations follow the techniques of large eddy simulation (LES). The theory underlying one parameterization (2D Leith, due to Leith, 1996) is that of enstrophy cascades in two-dimensional turbulence, while the other (QG Leith) is designed for potential enstrophy cascades in quasi-geostrophic turbulence. Simulations using each of these parameterizations are compared with a control simulation using standard biharmonic horizontal mixing. Simulations using the 2D Leith and QG Leith parameterizations are more realistic than those using biharmonic mixing. In particular, the 2D Leith and QG Leith simulations have more energy in resolved mesoscale eddies, have a spectral slope more consistent with turbulence theory (an inertial enstrophy or potential enstrophy cascade), have bottom drag and vertical viscosity as the primary sinks of energy instead of lateral friction, and have isoneutral parameterized mesoscale tracer transport. The parameterization choice also affects mass transports, but the impact varies regionally in magnitude and sign.
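For reference, the 2D Leith closure named above sets the eddy viscosity from the local vorticity-gradient magnitude, ν_e = (c Δ)³ |∇ω|. The sketch below is a minimal grid implementation under assumed values for the coefficient and grid spacing; production ocean models add limiters, and the QG Leith variant uses potential-vorticity gradients instead.

```python
import numpy as np

# Hedged sketch of the 2D Leith eddy viscosity: nu_e = (c * dx)^3 * |grad(omega)|,
# with omega the relative vorticity. Grid, coefficient, and boundary handling
# are illustrative assumptions, not the configuration used in the study.

def leith_viscosity(omega, dx, c=1.0):
    """Pointwise 2D Leith viscosity from a vorticity field (s^-1) on spacing dx (m)."""
    domega_dy, domega_dx = np.gradient(omega, dx)  # centred differences inside
    grad_mag = np.hypot(domega_dx, domega_dy)
    return (c * dx) ** 3 * grad_mag

# Uniform vorticity has zero gradient, hence zero parameterized viscosity:
omega = np.full((8, 8), 1e-5)
print(leith_viscosity(omega, dx=1e4).max())  # 0.0
```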
NASA Astrophysics Data System (ADS)
Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Richter, Caleb; Cha, Kenny
2018-02-01
Deep-learning models are highly parameterized, causing difficulty in inference and transfer learning. We propose a layered pathway evolution method to compress a deep convolutional neural network (DCNN) for classification of masses in digital breast tomosynthesis (DBT) while maintaining the classification accuracy. Two-stage transfer learning was used to adapt the ImageNet-trained DCNN to mammography and then to DBT. In the first stage, transfer learning from the ImageNet-trained DCNN was performed using mammography data. In the second stage, the mammography-trained DCNN was trained on the DBT data using feature extraction from the fully connected layer, recursive feature elimination, and random forest classification. The layered pathway evolution encapsulates the stages from feature extraction to classification to compress the DCNN. A genetic algorithm was used in an iterative approach, with tournament selection driven by count-preserving crossover and mutation, to identify the necessary nodes in each convolution layer while eliminating the redundant ones. The DCNN was reduced by 99% in the number of parameters and 95% in mathematical operations in the convolutional layers. The lesion-based area under the receiver operating characteristic curve on an independent DBT test set was 0.88 ± 0.05 for the original network and 0.90 ± 0.04 for the compressed network; the difference did not reach statistical significance. We demonstrated a DCNN compression approach without additional fine-tuning or loss of performance for classification of masses in DBT. The approach can be extended to other DCNNs and transfer learning tasks. An ensemble of these smaller and focused DCNNs has the potential to be used in multi-target transfer learning.
Uncertainty Assessment of Space-Borne Passive Soil Moisture Retrievals
NASA Technical Reports Server (NTRS)
Quets, Jan; De Lannoy, Gabrielle; Reichle, Rolf; Cosh, Michael; van der Schalie, Robin; Wigneron, Jean-Pierre
2017-01-01
The uncertainty associated with passive soil moisture retrieval is hard to quantify and is known to arise from various, diverse, and complex causes. Factors affecting space-borne soil moisture estimation include: (i) the optimization or inversion method applied to the radiative transfer model (RTM), such as the Single Channel Algorithm (SCA) or the Land Parameter Retrieval Model (LPRM); (ii) the selection of the observed brightness temperatures (Tbs), e.g. polarization and incidence angle; (iii) the definition of the cost function and the impact of prior information in it; and (iv) the RTM parameterization (e.g. the parameterizations officially used by the SMOS L2 and SMAP L2 retrieval products, the ECMWF-based SMOS assimilation product, and the SMAP L4 assimilation product, and perturbations from those configurations). This study aims at disentangling the relative importance of the above-mentioned sources of uncertainty by carrying out soil moisture retrieval experiments using SMOS Tb observations in different settings, some of which are mentioned above. The ensemble uncertainties are evaluated at 11 reference CalVal sites over a time period of more than 5 years. The experimental retrievals were inter-compared and further confronted with in situ soil moisture measurements and operational SMOS L2 retrievals, using commonly used skill metrics to quantify the temporal uncertainty in the retrievals.
NASA Technical Reports Server (NTRS)
Ferrare, R. A.; Whiteman, D. N.; Melfi, S. H.; Evans, K. D.; Holben, B. N.
1995-01-01
The first Atmospheric Radiation Measurement (ARM) Remote Cloud Study (RCS) Intensive Operations Period (IOP) was held during April 1994 at the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site near Lamont, Oklahoma. This experiment was conducted to evaluate and calibrate state-of-the-art, ground based remote sensing instruments and to use the data acquired by these instruments to validate retrieval algorithms developed under the ARM program. These activities are part of an overall plan to assess general circulation model (GCM) parameterization research. Since radiation processes are one of the key areas included in this parameterization research, measurements of water vapor and aerosols are required because of the important roles these atmospheric constituents play in radiative transfer. Two instruments were deployed during this IOP to measure water vapor and aerosols and study their relationship. The NASA/Goddard Space Flight Center (GSFC) Scanning Raman Lidar (SRL) acquired water vapor and aerosol profile data during 15 nights of operations. The lidar acquired vertical profiles as well as nearly horizontal profiles directed near an instrumented 60 meter tower. Aerosol optical thickness, phase function, size distribution, and integrated water vapor were derived from measurements with a multiband automatic sun and sky scanning radiometer deployed at this site.
Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations
Liu, Gang; Liu, Yangang; Endo, Satoshi
2013-02-01
Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
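The surface fluxes being evaluated are typically computed with bulk aerodynamic formulas. A minimal sketch for the sensible heat flux is shown below; the fixed exchange coefficient is an illustrative assumption, whereas the schemes compared in the study make it depend on stability (e.g. via Monin-Obukhov similarity).

```python
# Hedged sketch of a bulk aerodynamic surface-flux parameterization:
# H = rho * cp * C_H * U * (T_s - T_a). The constant exchange coefficient
# C_H below is an illustrative assumption, not any of the six tested schemes.

RHO = 1.2    # kg m-3, air density
CP = 1004.0  # J kg-1 K-1, specific heat of air at constant pressure

def sensible_heat_flux(U, T_surf, T_air, C_H=1.2e-3):
    """Upward sensible heat flux (W m-2) from wind speed U (m/s) and temperatures (K)."""
    return RHO * CP * C_H * U * (T_surf - T_air)

# Warm surface under a 5 m/s wind gives an upward (positive) flux:
print(round(sensible_heat_flux(5.0, 300.0, 295.0), 1))  # 36.1
```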
A general science-based framework for dynamical spatio-temporal models
Wikle, C.K.; Hooten, M.B.
2010-01-01
Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal with this issue to some extent by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been on the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case.
We then develop a general nonlinear spatio-temporal framework that we call general quadratic nonlinearity and demonstrate that it accommodates many different classes of science-based parameterizations as special cases. The model is presented in a hierarchical Bayesian framework and is illustrated with examples from ecology and oceanography. © 2010 Sociedad de Estadística e Investigación Operativa.
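A minimal numerical sketch of the general quadratic nonlinearity (GQN) evolution form may help fix ideas: each state component evolves through a linear term plus quadratic interactions among components. The dimension and coefficient values below are illustrative, not taken from the chapter.

```python
import numpy as np

# Hedged sketch of the GQN evolution form:
#   u_{t+1,i} = sum_j A[i,j] u_{t,j} + sum_{j,k} B[i,j,k] u_{t,j} u_{t,k} + noise.
# Dimensions and coefficients are illustrative assumptions.

rng = np.random.default_rng(0)
n = 3
A = 0.5 * np.eye(n)                        # linear propagator (contractive here)
B = 0.01 * rng.standard_normal((n, n, n))  # small quadratic interaction tensor

def gqn_step(u, noise_sd=0.0):
    """One GQN time step: linear term + quadratic interactions + optional noise."""
    quad = np.einsum("ijk,j,k->i", B, u, u)
    return A @ u + quad + noise_sd * rng.standard_normal(n)

u = np.ones(n)
for _ in range(20):  # deterministic trajectory contracts toward a fixed point
    u = gqn_step(u)
print(np.abs(u).max() < 1.0)  # True
```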
Obtaining sub-daily new snow density from automated measurements in high mountain regions
NASA Astrophysics Data System (ADS)
Helfricht, Kay; Hartl, Lea; Koch, Roland; Marty, Christoph; Olefs, Marc
2018-05-01
The density of new snow is operationally monitored by meteorological or hydrological services at daily time intervals, or occasionally measured in local field studies. However, meteorological conditions, and thus settling of the freshly deposited snow, rapidly alter the new snow density until measurement. Physically based snow models and nowcasting applications make use of hourly weather data to determine the water equivalent of the snowfall and the snow depth. In previous studies, a number of empirical parameterizations were developed to approximate the new snow density from meteorological parameters. These parameterizations are largely based on new snow measurements derived from local in situ measurements. In this study a data set of automated snow measurements at four stations located in the European Alps is analysed for several winter seasons. Hourly new snow densities are calculated from the height of new snow and the water equivalent of snowfall. Accounting for the settling of the new snow and the old snowpack, the average hourly new snow density is 68 kg m-3, with a standard deviation of 9 kg m-3. Seven existing parameterizations for estimating new snow densities were tested against these data, and most overestimate the hourly automated measurements. Two of the tested parameterizations were capable of simulating the low new snow densities observed at sheltered inner-alpine stations. The observed variability in new snow density from the automated measurements could not be described with satisfactory statistical significance by any of the investigated parameterizations. Applying simple linear regressions between new snow density and wet-bulb temperature to the measurement data resulted in significant relationships (r^2 > 0.5 and p ≤ 0.05) only for single periods at individual stations. Higher new snow densities were calculated for the highest-elevated and most wind-exposed station location.
Whereas snow measurements using ultrasonic devices and snow pillows are appropriate for calculating station-mean new snow densities, we recommend instruments with higher accuracy, e.g. optical devices, for more reliable investigations of the variability of new snow densities at sub-daily intervals.
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...
2017-09-14
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW.
Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin relative to the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in Alaskan sub-arctic watersheds.
A simple hyperbolic model for communication in parallel processing environments
NASA Technical Reports Server (NTRS)
Stoica, Ion; Sultan, Florin; Keyes, David
1994-01-01
We introduce a model for communication costs in parallel processing environments, called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Many existing communication models assume dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of information and internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both the large- and small-message limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
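The two-parameter characterization above can be illustrated with a curve that has the stated limiting behaviour: a fixed overhead for very small messages and an asymptotic transfer rate for very large ones. The hyperbola below is an illustrative stand-in chosen for those asymptotics, not necessarily the paper's exact functional form, and the parameter values are hypothetical.

```python
import math

# Hedged sketch: a two-parameter service-time curve with the limits the abstract
# describes -- overhead t0 for tiny messages, asymptotic rate b for huge ones.
# The exact functional form used by the paper may differ.

def service_time(m, t0, b):
    """Service time (s) for an m-byte message; t0 in s, b in bytes/s."""
    return math.hypot(t0, m / b)  # hyperbola with T(0) = t0, slope -> 1/b

t0, b = 1e-3, 1e7  # hypothetical: 1 ms overhead, 10 MB/s asymptotic rate
print(service_time(0, t0, b))                # small-message limit -> 0.001
print(service_time(1e9, t0, b) / (1e9 / b))  # large-message slope ratio -> ~1.0
```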
A frequentist approach to computer model calibration
Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.
2016-05-05
The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. The practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
NASA Astrophysics Data System (ADS)
Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.
2012-02-01
Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
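The curvature computation at the heart of the contour decomposition can be sketched with finite differences in place of B-spline derivatives; the B-spline fit in the paper serves to smooth noisy silhouettes before this step. The circle check below is a standard sanity test, not data from the system.

```python
import numpy as np

# Hedged sketch: signed curvature of a parameterized contour (x(t), y(t)),
#   kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2),
# with derivatives taken by finite differences on uniformly sampled points.
# The paper differentiates a fitted B-spline instead, which is smoother.

def contour_curvature(x, y):
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check: a circle of radius R has constant curvature 1/R.
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
R = 2.0
kappa = contour_curvature(R * np.cos(t), R * np.sin(t))
print(np.allclose(kappa[5:-5], 1.0 / R, rtol=1e-2))  # True away from endpoints
```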
Measurements of pore-scale flow through apertures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chojnicki, Kirsten
Pore-scale aperture effects on flow in pore networks were studied in the laboratory to provide a parameterization for use in transport models. Four cases were considered: regular and irregular pillar/pore alignment, each with and without an aperture. The velocity field of each case was measured and simulated, providing quantitatively comparable results. Two aperture effect parameterizations were considered: permeability and transmission. Permeability values varied by an order of magnitude between the cases with and without apertures. However, transmission did not correlate with permeability. Despite having much greater permeability, the regular aperture case permitted less transmission than the regular case. Moreover, both irregular cases had greater transmission than the regular cases, a difference not supported by the permeabilities. Overall, these findings suggest that pore-scale aperture effects on flow through a pore network may not be adequately captured by properties such as permeability for applications that are interested in determining particle transport volume and timing.
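As context for the permeability parameterization, a permeability value for a case like these can be backed out of a measured flow with Darcy's law; the numbers below are illustrative, not the laboratory measurements from the study.

```python
# Hedged sketch: permeability from Darcy's law, k = q * mu * L / dP.
# Input values are illustrative assumptions.

def darcy_permeability(q, mu, L, dP):
    """Permeability (m^2) from Darcy flux q (m/s), dynamic viscosity mu (Pa s),
    sample length L (m), and pressure drop dP (Pa)."""
    return q * mu * L / dP

k = darcy_permeability(q=1e-4, mu=1e-3, L=0.05, dP=500.0)
print(k)  # ~1e-11 m^2
```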
The True- and Eccentric-Anomaly Parameterizations of the Perturbed Kepler Motion
NASA Astrophysics Data System (ADS)
Gergely, László Á.; Perjés, Zoltán I.; Vasúth, Mátyás
2000-01-01
The true- and eccentric-anomaly parameterizations of the Kepler motion are generalized to quasi-periodic orbits by considering perturbations of the radial part of the kinetic energy in the form of a series of negative powers of the orbital radius. A toolbox of methods for averaging observables as functions of the energy E and angular momentum L is developed. A broad range of systems governed by the generic Brumberg force, as well as recent applications in the theory of gravitational radiation, involve integrals of these functions over a period of motion. These integrals are evaluated by using the residue theorem. In the course of this work, two important questions emerge: (1) When do the true- and eccentric-anomaly parameters exist? (2) Under what circumstances, and why, are the poles at the origin? The purpose of this paper is to answer these questions.
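For concreteness, the two classical anomalies being generalized are related through Kepler's equation and a half-angle formula. The sketch below solves M = E - e sin E by Newton iteration and converts E to the true anomaly; it covers only the unperturbed bound (e < 1) case, not the paper's perturbed orbits.

```python
import math

# Classical (unperturbed) Kepler relations: eccentric anomaly E from the mean
# anomaly M via Newton iteration on M = E - e*sin(E), then the true anomaly v
# via tan(v/2) = sqrt((1+e)/(1-e)) * tan(E/2). Bound orbits (e < 1) only.

def eccentric_anomaly(M, e, tol=1e-12):
    E = M if e < 0.8 else math.pi  # standard starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def true_anomaly(E, e):
    return 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                            math.sqrt(1 - e) * math.cos(E / 2))

E = eccentric_anomaly(1.0, 0.3)
print(abs(E - 0.3 * math.sin(E) - 1.0) < 1e-10)   # True: Kepler's equation holds
print(abs(true_anomaly(0.5, 0.0) - 0.5) < 1e-12)  # True: for e = 0, v = E
```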
Covariance Function for Nearshore Wave Assimilation Systems
2018-01-30
…the covariance can be modeled by a parameterized Gaussian function; for nearshore wave assimilation applications, the covariance function depends primarily on… In the case of missing values in the compiled time series, the gaps were filled by weighted interpolation; the weights depend on the number of the… averaging, in order to create the continuous time series, filters out the dependency on the instantaneous meteorological and oceanographic conditions.
Linkage of MIKE SHE to Wetland-DNDC for carbon budgeting and anaerobic biogeochemistry simulation
Jianbo Cui; Changsheng Li; Ge Sun; Carl Trettin
2005-01-01
This study reports the linkage between MIKE SHE and Wetland-DNDC for simulating carbon dynamics and greenhouse gas (GHG) emissions in forested wetlands. Wetland-DNDC was modified by parameterizing management measures and refining anaerobic biogeochemical processes, and was linked to the hydrological model MIKE SHE. As a preliminary application, we simulated the effect...
Li, Xianfeng; Murthy, N. Sanjeeva; Becker, Matthew L.; Latour, Robert A.
2016-01-01
A multiscale modeling approach is presented for the efficient construction of an equilibrated all-atom model of a cross-linked poly(ethylene glycol) (PEG)-based hydrogel using the all-atom polymer consistent force field (PCFF). The final equilibrated all-atom model was built with a systematic simulation toolset consisting of three consecutive parts: (1) building a global cross-linked PEG-chain network at experimentally determined cross-link density using an on-lattice Monte Carlo method based on the bond fluctuation model, (2) recovering the local molecular structure of the network by transitioning from the lattice model to an off-lattice coarse-grained (CG) model parameterized from PCFF, followed by equilibration using high performance molecular dynamics methods, and (3) recovering the atomistic structure of the network by reverse mapping from the equilibrated CG structure, hydrating the structure with explicitly represented water, followed by final equilibration using PCFF parameterization. The developed three-stage modeling approach has application to a wide range of other complex macromolecular hydrogel systems, including the integration of peptide, protein, and/or drug molecules as side-chains within the hydrogel network for the incorporation of bioactivity for tissue engineering, regenerative medicine, and drug delivery applications. PMID:27013229
ExaSAT: An exascale co-design tool for performance modeling
Unat, Didem; Chan, Cy; Zhang, Weiqun; ...
2015-02-09
One of the emerging challenges to designing HPC systems is understanding and projecting the requirements of exascale applications. In order to determine the performance consequences of different hardware designs, analytic models are essential because they can provide fast feedback to the co-design centers and chip designers without costly simulations. However, current attempts to analytically model program performance typically rely on the user manually specifying a performance model. Here we introduce the ExaSAT framework that automates the extraction of parameterized performance models directly from source code using compiler analysis. The parameterized analytic model enables quantitative evaluation of a broad range of hardware design trade-offs and software optimizations on a variety of different performance metrics, with a primary focus on data movement as a metric. Finally, we demonstrate the ExaSAT framework's ability to perform deep code analysis of a proxy application from the Department of Energy Combustion Co-design Center to illustrate its value to the exascale co-design process. ExaSAT analysis provides insights into the hardware and software trade-offs and lays the groundwork for exploring a more targeted set of design points using cycle-accurate architectural simulators.
A Simple Parameterization of 3 x 3 Magic Squares
ERIC Educational Resources Information Center
Trenkler, Gotz; Schmidt, Karsten; Trenkler, Dietrich
2012-01-01
In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.
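The idea of parameterizing 3 × 3 magic squares can be made concrete. A standard three-parameter form (variable names are ours, not necessarily the article's) writes every such square in terms of its center value c and two offsets a and b, so that every row, column, and diagonal sums to 3c; the choice (c, a, b) = (5, -1, -3) recovers the Luoshu.

```python
def magic3(c, a, b):
    """Build a 3x3 magic square from center value c and offsets a, b.

    Every row, column, and diagonal sums to the magic constant 3c.
    """
    return [
        [c + a,     c - a - b, c + b],
        [c - a + b, c,         c + a - b],
        [c - b,     c + a + b, c - a],
    ]

def is_magic(m):
    """Check that all rows, columns, and both diagonals share one sum."""
    s = sum(m[0])
    rows = all(sum(r) == s for r in m)
    cols = all(sum(m[i][j] for i in range(3)) == s for j in range(3))
    diags = (m[0][0] + m[1][1] + m[2][2] == s and
             m[0][2] + m[1][1] + m[2][0] == s)
    return rows and cols and diags

# (5, -1, -3) yields the classical Luoshu square:
# 4 9 2
# 3 5 7
# 8 1 6
```

Because the square is linear in (c, a, b), inverses, eigenvalues, and adjoints become tractable symbolic computations, which is the convenience the article exploits.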
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward-removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes) and, separately, their hydrologic indices/attributes (external hydrologic factors), using a principal component analysis (PCA) and expectation-maximization (EM) based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters and on parameterization and inverse model design for CLM, but the methodology is applicable to other models.
Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
NASA Astrophysics Data System (ADS)
Berloff, P. S.
2016-12-01
This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with explicitly resolved vigorous eddy field and in the non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces eastward jet extension of the western boundary currents and its adjacent recirculation zones. The parameterization locally approximates transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced eddy forcing exerted on the large-scale flow. We find that spatial pattern and amplitude of each footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. Thus, the assumed ensemble of plunger solutions can be viewed as a simple model for the cumulative effect of the stochastic eddy forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.
A method to analyze molecular tagging velocimetry data using the Hough transform.
Sanchez-Gonzalez, R; McManamen, B; Bowersox, R D W; North, S W
2015-10-01
The development of a method to analyze molecular tagging velocimetry data based on the Hough transform is presented. This method, based on line fitting, parameterizes the grid lines "written" into a flowfield. Initial proof-of-principle illustration of this method was performed to obtain two-component velocity measurements in the wake of a cylinder in a Mach 4.6 flow, using a data set derived from computational fluid dynamics simulations. The Hough transform is attractive for molecular tagging velocimetry applications since it is capable of discriminating spurious features that can have a biasing effect in the fitting process. Assessment of the precision and accuracy of the method was also performed to show the dependence on analysis window size and signal-to-noise levels. The accuracy of this Hough transform-based method to quantify intersection displacements was determined to be comparable to cross-correlation methods. The employed line parameterization avoids the assumption of linearity in the vicinity of each intersection, which is important in the limit of drastic grid deformations resulting from large velocity gradients common in high-speed flow applications. This Hough transform method has the potential to enable the direct and spatially accurate measurement of local vorticity, which is important in applications involving turbulent flowfields. Finally, two-component velocity determinations using the Hough transform from experimentally obtained images are presented, demonstrating the feasibility of the proposed analysis method.
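As a rough illustration of the underlying idea (not the authors' implementation), a Hough transform for line detection votes each point (x, y) into an accumulator over the line parameters (θ, ρ), where ρ = x cos θ + y sin θ. Points on a common line reinforce one accumulator cell, while spurious points fail to reinforce any single cell, which is the noise-rejection property the abstract highlights.

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, rho_max=100.0):
    """Vote each (x, y) point into a (theta, rho) accumulator.

    A line satisfies rho = x*cos(theta) + y*sin(theta); the accumulator
    cell with the most votes identifies the dominant line's parameters.
    Returns (theta, rho) of the peak cell.
    """
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + rho_max) / rho_res))
            if 0 <= r < n_rho:
                acc[t][r] += 1
    t_best, r_best = max(
        ((t, r) for t in range(n_theta) for r in range(n_rho)),
        key=lambda tr: acc[tr[0]][tr[1]])
    return math.pi * t_best / n_theta, r_best * rho_res - rho_max
```

For a horizontal line of points at y = 20, all votes coincide at θ = π/2, ρ = 20, so the peak recovers the line even if a few outlier points were mixed in.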
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Yao, Mao-Sung
1990-01-01
A number of perpetual January simulations are carried out with a two-dimensional zonally averaged model employing various parameterizations of the eddy fluxes of heat (potential temperature) and moisture. The parameterizations are evaluated by comparing these results with the eddy fluxes calculated in a parallel simulation using a three-dimensional general circulation model with zonally symmetric forcing. The three-dimensional model's performance in turn is evaluated by comparing its results using realistic (nonsymmetric) boundary conditions with observations. Branscome's parameterization of the meridional eddy flux of heat and Leovy's parameterization of the meridional eddy flux of moisture simulate the seasonal and latitudinal variations of these fluxes reasonably well, while somewhat underestimating their magnitudes. New parameterizations of the vertical eddy fluxes are developed that take into account the enhancement of the eddy mixing slope in a growing baroclinic wave due to condensation, and also the effect of eddy fluctuations in relative humidity. The new parameterizations, when tested in the two-dimensional model, simulate the seasonal, latitudinal, and vertical variations of the vertical eddy fluxes quite well, when compared with the three-dimensional model, and only underestimate the magnitude of the fluxes by 10 to 20 percent.
MODTRAN4 radiative transfer modeling for atmospheric correction
NASA Astrophysics Data System (ADS)
Berk, Alexander; Anderson, Gail P.; Bernstein, Lawrence S.; Acharya, Prabhat K.; Dothe, H.; Matthew, Michael W.; Adler-Golden, Steven M.; Chetwynd, James H.; Richtsmeier, Steven C.; Pukall, Brian; Allred, Clark L.; Jeong, Laila S.; Hoke, Michael L.
1999-10-01
MODTRAN4, the latest publicly released version of MODTRAN, provides many new and important options for modeling atmospheric radiation transport. A correlated-k algorithm improves multiple scattering, eliminates Curtis-Godson averaging, and introduces Beer's Law dependencies into the band model. An optimized 15 cm⁻¹ band model provides more than a 10-fold increase in speed over the standard MODTRAN 1 cm⁻¹ band model with comparable accuracy when higher spectral resolution results are unnecessary. The MODTRAN ground surface has been upgraded to include the effects of Bidirectional Reflectance Distribution Functions (BRDFs) and adjacency. The BRDFs are entered using standard parameterizations and are coupled into line-of-sight surface radiance calculations.
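The "Beer's Law dependencies" of a correlated-k band model can be illustrated in miniature: band-averaged transmittance is a quadrature-weighted sum of monochromatic Beer's-law terms over the sorted absorption-coefficient distribution. This is a generic sketch of the technique, not MODTRAN's actual implementation; the coefficient values are arbitrary.

```python
import math

def band_transmittance(k_vals, weights, absorber_amount):
    """Correlated-k band transmittance.

    Each k-bin obeys Beer's law, exp(-k*u), where u is the absorber
    amount along the path; the band average is the quadrature-weighted
    sum over bins (weights should sum to 1).
    """
    return sum(w * math.exp(-k * absorber_amount)
               for k, w in zip(k_vals, weights))
```

The appeal of the method is visible here: a single sum of exponentials replaces a line-by-line integral over thousands of spectral points, while each term retains an exact Beer's-law dependence on absorber amount.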
Tropical Cloud Properties and Radiative Heating Profiles
Mather, James
2008-01-15
We have generated a suite of products that includes merged soundings, cloud microphysics, and radiative fluxes and heating profiles. The cloud microphysics is based largely on the ARM Microbase value-added product (Miller et al., 2003). We have made a few changes to the Microbase parameterizations to address issues we observed in our initial analysis of the tropical data. The merged sounding product is not directly related to the product developed by ARM but is similar in that it uses the microwave radiometer to scale the radiosonde column water vapor. The radiative fluxes also differ from the ARM BBHRP (Broadband Heating Rate Profile) product in terms of the radiative transfer model and the sampling interval.
A High Resolution Study of Black Sea Circulation and Hypothetical Oil Spills
NASA Astrophysics Data System (ADS)
Dietrich, D. E.; Bowman, M. J.; Korotenko, K. A.
2008-12-01
A 1/24 deg resolution adaptation of the DieCAST ocean model simulates a realistically intense Rim Current and ubiquitous mesoscale coastal anticyclonic eddies that result from anticyclonic vorticity generation by laterally differential bottom drag forces that are amplified near Black Sea coastal headlands. Climatological and synoptic surface forcings are compared. The effects of vertical momentum transfer by large-amplitude internal waves (known to fishermen in the Synop region, as reported in Ballard's National Geographic article) are parameterized by a large vertical viscosity. Sensitivity to vertical viscosity is shown. Results of simulated hypothetical oil spills are shown. A simple method to nowcast/forecast the Black Sea currents is described and early results are shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Heng; Gustafson, Jr., William I.; Hagos, Samson M.
2015-04-18
To better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution dependence of subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km².
NASA Technical Reports Server (NTRS)
Kulawik, Susan S.; Worden, John; Eldering, Annmarie; Bowman, Kevin; Gunson, Michael; Osterman, Gregory B.; Zhang, Lin; Clough, Shepard A.; Shephard, Mark W.; Beer, Reinhard
2006-01-01
We develop an approach to estimate and characterize trace gas retrievals in the presence of clouds in high-spectral-resolution measurements of upwelling radiance in the infrared spectral region (650-2260 cm⁻¹). The radiance contribution of clouds is parameterized in terms of a set of frequency-dependent nonscattering optical depths and a cloud height. These cloud parameters are retrieved jointly with surface temperature, emissivity, atmospheric temperature, and trace gases such as ozone from spectral data. We demonstrate the application of this approach using data from the Tropospheric Emission Spectrometer (TES) and test data simulated with a scattering radiative transfer model. We show the value of this approach in that it results in accurate estimates of errors for trace gas retrievals, and the retrieved values improve over the initial guess for a wide range of cloud conditions. Comparisons are made between TES retrievals of ozone, temperature, and water to model fields from the Global Modeling and Assimilation Office (GMAO), temperature retrievals from the Atmospheric Infrared Sounder (AIRS), tropospheric ozone columns from the Goddard Earth Observing System (GEOS) GEOS-Chem, and ozone retrievals from the Total Ozone Mapping Spectrometer (TOMS). In each of these cases, this cloud retrieval approach does not introduce observable biases into TES retrievals.
Parameterizing by the Number of Numbers
NASA Astrophysics Data System (ADS)
Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.
The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
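To make the "number of numbers" view concrete, a multiset input can be stored compactly as value-multiplicity pairs. The sketch below solves Subset Sum in that representation by brute-force enumeration over how many copies of each distinct value are taken; note that the paper's actual fixed-parameter tractability result instead goes through Integer Linear Programming Feasibility, which is what makes the running time depend favorably on the number of distinct integers.

```python
from itertools import product

def subset_sum_distinct(multiset_counts, target):
    """Subset Sum over a multiset given as {value: multiplicity}.

    Enumerates, for each distinct value, how many copies to include,
    so the search space is the product of (multiplicity + 1) choices
    rather than 2^(total number of elements).
    """
    values = list(multiset_counts)
    ranges = [range(multiset_counts[v] + 1) for v in values]
    return any(sum(c * v for c, v in zip(counts, values)) == target
               for counts in product(*ranges))
```

For example, the multiset {3, 3, 3, 3, 5, 5} is encoded as {3: 4, 5: 2}: two distinct integers, regardless of how many copies appear.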
Wolinski, Christophe Czeslaw [Los Alamos, NM; Gokhale, Maya B [Los Alamos, NM; McCabe, Kevin Peter [Los Alamos, NM
2011-01-18
Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent; Gettelman, Andrew; Morrison, Hugh
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.
Ice-nucleating particle emissions from photochemically aged diesel and biodiesel exhaust
NASA Astrophysics Data System (ADS)
Schill, G. P.; Jathar, S. H.; Kodros, J. K.; Levin, E. J. T.; Galang, A. M.; Friedman, B.; Link, M. F.; Farmer, D. K.; Pierce, J. R.; Kreidenweis, S. M.; DeMott, P. J.
2016-05-01
Immersion-mode ice-nucleating particle (INP) concentrations from an off-road diesel engine were measured using a continuous-flow diffusion chamber at -30°C. Both petrodiesel and biodiesel were utilized, and the exhaust was aged up to 1.5 photochemically equivalent days using an oxidative flow reactor. We found that aged and unaged diesel exhaust of both fuels is not likely to contribute to atmospheric INP concentrations at mixed-phase cloud conditions. To explore this further, a new limit-of-detection parameterization for ice nucleation on diesel exhaust was developed. Using a global-chemical transport model, potential black carbon INP (INPBC) concentrations were determined using a current literature INPBC parameterization and the limit-of-detection parameterization. Model outputs indicate that the current literature parameterization likely overemphasizes INPBC concentrations, especially in the Northern Hemisphere. These results highlight the need to integrate new INPBC parameterizations into global climate models as generalized INPBC parameterizations are not valid for diesel exhaust.
Radiative flux and forcing parameterization error in aerosol-free clear skies
Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...
2015-07-03
This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m², while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
Parameterization Interactions in Global Aquaplanet Simulations
NASA Astrophysics Data System (ADS)
Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João.
2018-02-01
Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.
1993-10-01
One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection in an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes in reproducing the growth and life cycle of a convective system can then be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations which have the model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).
Brain Surface Conformal Parameterization Using Riemann Surface Structure
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung
2011-01-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
Impact of Apex Model parameterization strategy on estimated benefit of conservation practices
USDA-ARS?s Scientific Manuscript database
Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on claypan soils were developed with two objectives: (1) evaluate the model performance of three parameterization strategies on a validation watershed; and (2) compare predictions of water quality benefi...
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA), and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.
NASA Astrophysics Data System (ADS)
Wong, J.; Barth, M. C.; Noone, D. C.
2012-12-01
Lightning-generated nitrogen oxides (LNOx) are an important precursor to tropospheric ozone production. With meteorological variability on a time scale similar to the ozone chemical lifetime, LNOx can nonlinearly perturb tropospheric ozone concentrations. Coupled with upper-air circulation patterns, LNOx can accumulate in significant amounts in the upper troposphere along with other precursors, thus enhancing ozone production (see attached figure). While LNOx emission has been included and tuned extensively in global climate models, its inclusion in regional chemistry models is seldom tested. Here we present a study that evaluates the frequently used Price and Rind parameterization, based on cloud-top height, at resolutions that partially resolve deep convection, using the Weather Research and Forecasting model with Chemistry (WRF-Chem) over the contiguous United States. With minor modifications, the parameterization is shown to generate integrated flash counts close to those observed. However, the modeled frequency distribution of cloud-to-ground flashes does not represent storms with high flash rates well, bringing into question the applicability of the intra-cloud to cloud-to-ground partitioning (IC:CG) formulation of Price and Rind in some studies. Resolution dependence also requires attention when sub-grid cloud tops are used instead of the originally intended grid-averaged cloud top. (Figure caption: LNOx passive tracers gathered by the monsoonal upper-tropospheric anticyclone.)
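For reference, the Price and Rind cloud-top-height scheme discussed above is commonly quoted as a power law in cloud-top height H (in km), yielding a flash rate in flashes per minute. The sketch below uses the continental and marine coefficients as widely cited in the literature; it is an illustration of the scaling only, not the WRF-Chem implementation, which the abstract notes was modified.

```python
def pr92_flash_rate(cloud_top_km, continental=True):
    """Price & Rind (1992)-style flash rate (flashes/min) from
    cloud-top height in km.

    Coefficients are the values commonly quoted in the literature;
    implementations in chemistry models typically add calibration
    and resolution adjustments on top of this power law.
    """
    if continental:
        return 3.44e-5 * cloud_top_km ** 4.9
    return 6.4e-4 * cloud_top_km ** 1.73
```

The steep continental exponent is what makes the scheme sensitive to whether a grid-averaged or sub-grid cloud top is supplied, which is exactly the resolution dependence the study examines.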
Solar and chemical reaction-induced heating in the terrestrial mesosphere and lower thermosphere
NASA Technical Reports Server (NTRS)
Mlynczak, Martin G.
1992-01-01
Airglow and chemical processes in the terrestrial mesosphere and lower thermosphere are reviewed, and initial parameterizations of the processes applicable to multidimensional models are presented. The basic processes by which absorbed solar energy participates in middle atmosphere energetics for absorption events in which photolysis occurs are illustrated. An approach that permits the heating processes to be incorporated in numerical models is presented.
Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil
USDA-ARS?s Scientific Manuscript database
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
Climate and the equilibrium state of land surface hydrology parameterizations
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Eagleson, Peter S.
1991-01-01
For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation under such climatic conditions serves as a measure of the land surface parameterization state for a given forcing. The equilibrium value of this variable for alternate parameterizations of land surface hydrology is determined as a function of climate, and the sensitivity of the surface to shifts and changes in climatic forcing is estimated.
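The equilibrium idea can be illustrated with a toy bucket model (entirely hypothetical, not one of the parameterizations compared in the paper): let evaporation scale linearly with relative saturation and runoff with a power of saturation; the equilibrium saturation s* is then the root of the water balance P = E(s) + R(s):

```python
def equilibrium_saturation(P: float, Ep: float, b: float = 2.0, tol: float = 1e-10) -> float:
    """Relative saturation s* in [0, 1] at which a toy bucket model balances
    moisture input and output: P = Ep*s + P*s**b.

    P  : precipitation rate, Ep : potential evaporation rate (same units).
    Illustrative only; real GCM land-surface schemes are far more elaborate.
    """
    # f is monotone increasing with f(0) = -P < 0 and f(1) = Ep > 0,
    # so bisection always converges to the unique root.
    f = lambda s: Ep * s + P * s ** b - P
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As expected, a wetter climate (larger P relative to Ep) equilibrates at a higher saturation, which is the kind of climate-dependence the study maps out across parameterizations.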
Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.
2002-01-01
Ranft has provided parameterizations of Lorentz invariant differential cross sections for pion and nucleon production in pion-proton collisions, which are compared to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formulas for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed have charged pions in the initial state and both charged and neutral pions in the final state.
Anisotropic Shear Dispersion Parameterization for Mesoscale Eddy Transport
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Fox-Kemper, B.
2016-02-01
The effects of mesoscale eddies are universally treated isotropically in general circulation models. However, the processes that the parameterization approximates, such as shear dispersion, typically have strongly anisotropic characteristics. The Gent-McWilliams/Redi mesoscale eddy parameterization is extended for anisotropy and tested using 1-degree Community Earth System Model (CESM) simulations. The sensitivity of the model to anisotropy includes a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. The parameterization is further extended to include the effects of unresolved shear dispersion, which sets the strength and direction of the anisotropy. The shear dispersion parameterization agrees with drifter observations in the spatial distribution of diffusivity and with high-resolution model diagnoses in the distribution of eddy flux orientation.
Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav
2007-01-01
The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometallic molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that had not yet been parameterized, specifically Br, I, Fe and Zn. Finally, we performed cross-validation of all obtained parameters, using all training sets that included the relevant elements, and confirmed that the calculated parameters provide accurate charges.
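Once parameterized, the EEM itself reduces to a small linear system. A minimal sketch, assuming the standard equalization equations (per-atom electronegativity A and hardness B, a distance-scaled Coulomb coupling, and a total-charge constraint); the kappa value and any inputs are placeholders, not the fitted parameters of this work:

```python
import numpy as np

def eem_charges(A, B, R, Q=0.0, kappa=0.529):
    """Solve the EEM linear system for atomic charges.

    A, B  : per-atom electronegativity and hardness parameters (assumed values)
    R     : interatomic distance matrix (diagonal ignored)
    Q     : total molecular charge
    kappa : distance-scaling constant (the value here is an assumption)

    Equalization: A_i + B_i*q_i + kappa * sum_{j != i} q_j / R_ij = chi_mol
    for every atom i, plus the constraint sum_i q_i = Q -- an (n+1)x(n+1)
    linear system in the n charges and the molecular electronegativity.
    """
    n = len(A)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if j != i:
                M[i, j] = kappa / R[i][j]
        M[i, n] = -1.0          # unknown molecular electronegativity chi_mol
        rhs[i] = -A[i]
    M[n, :n] = 1.0              # total-charge constraint row
    rhs[n] = Q
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]      # charges, chi_mol
```

For a neutral diatomic with equal hardness, the atom with the larger A (more electronegative) ends up with the negative partial charge, as it should.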
Spectral cumulus parameterization based on cloud-resolving model
NASA Astrophysics Data System (ADS)
Baba, Yuya
2018-02-01
We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, derived from analysis of cloud properties obtained from the cloud-resolving model simulation and valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive precipitation bias in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements derive from the modified entrainment-rate parameterization, which suppresses an excessive increase of entrainment and thus an excessive increase of low-level clouds.
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS, following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make to, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over the oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal, and semidiurnal model biases in the GFS to reduce both systematic and random errors. As short-term error growth is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction of model bias in the 6-hr forecast.
This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
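The core estimate above, the time-mean analysis increment divided by the 6-hr assimilation window, is a one-liner. A sketch under the stated linear-error-growth assumption (the numbers in the usage line are made up):

```python
import numpy as np

def bias_tendency(increments, window_hours=6.0):
    """Estimate a model-bias forcing term from a history of analysis increments.

    increments : array-like of shape (n_cycles, ...) -- analysis minus 6-hr
                 forecast at each analysis cycle.
    Returns the time-mean increment divided by the assimilation window, i.e. a
    tendency (units per hour) that can be added as a forcing term in the model
    tendency equation -- a sketch of the Danforth-Kalnay online correction,
    assuming linear error growth over the window.
    """
    return np.mean(np.asarray(increments, dtype=float), axis=0) / window_hours

# A model that consistently needs a ~+0.6 K correction per 6-hr cycle
# implies a +0.1 K/hr online forcing term:
corr = bias_tendency([0.6, 0.5, 0.7])
```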
Search for subgrid scale parameterization by projection pursuit regression
NASA Technical Reports Server (NTRS)
Meneveau, C.; Lund, T. S.; Moin, Parviz
1992-01-01
The dependence of subgrid-scale stresses on variables of the resolved field is studied using direct numerical simulations of isotropic turbulence, homogeneous shear flow, and channel flow. The projection pursuit algorithm, a promising new regression tool for high-dimensional data, is used to systematically search through a large collection of resolved variables, such as components of the strain rate, vorticity, velocity gradients at neighboring grid points, etc. For the case of isotropic turbulence, the search algorithm recovers the linear dependence on the rate of strain (which is necessary to transfer energy to subgrid scales) but is unable to determine any other more complex relationship. For shear flows, however, new systematic relations beyond eddy viscosity are found. For the homogeneous shear flow, the results suggest that products of the mean rotation rate tensor with both the fluctuating strain rate and fluctuating rotation rate tensors are important quantities in parameterizing the subgrid-scale stresses. A model incorporating these terms is proposed. When evaluated with direct numerical simulation data, this model significantly increases the correlation between the modeled and exact stresses, as compared with the Smagorinsky model. In the case of channel flow, the stresses are found to correlate with products of the fluctuating strain and rotation rate tensors. The mean rates of rotation or strain do not appear to be important in this case, and the model determined for homogeneous shear flow does not perform well when tested with channel flow data. Many questions remain about the physical mechanisms underlying these findings, about possible Reynolds number dependence, and, given the low level of correlations, about their impact on modeling. 
Nevertheless, demonstration of the existence of causal relations between SGS stresses and large-scale characteristics of turbulent shear flows, in addition to those necessary for energy transfer, provides important insight into the relation between scales in turbulent flows.
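The a priori test statistic used throughout that study, the correlation between the exact (DNS-derived) and modeled subgrid-scale stresses, is a plain Pearson correlation over stress samples; a minimal sketch:

```python
import math

def correlation(exact, modeled):
    """Pearson correlation between 'exact' DNS-derived SGS stress samples and
    a model's predictions -- the kind of a priori statistic used to compare
    candidate closures against the Smagorinsky baseline. Generic sketch; the
    study's actual sampling and filtering procedure is not reproduced here.
    """
    n = len(exact)
    mx = sum(exact) / n
    my = sum(modeled) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(exact, modeled))
    vx = sum((x - mx) ** 2 for x in exact)
    vy = sum((y - my) ** 2 for y in modeled)
    return cov / math.sqrt(vx * vy)
```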
Sensitivity of Spacebased Microwave Radiometer Observations to Ocean Surface Evaporation
NASA Technical Reports Server (NTRS)
Liu, Timothy W.; Li, Li
2000-01-01
Ocean surface evaporation and the latent heat it carries are major components of the hydrologic and thermal forcing of the global oceans. However, there are practically no direct in situ measurements. Evaporation estimated from bulk parameterization methods depends on the quality and distribution of volunteer-ship reports, which are far less than satisfactory. The only way to monitor evaporation with sufficient temporal and spatial resolution to study global environmental change is with spaceborne sensors. In the past decade, the estimation of seasonal-to-interannual variation of ocean evaporation from spacebased measurements of wind speed, sea surface temperature (SST), and integrated water vapor, through bulk parameterization methods, was achieved with reasonable success over most of the global ocean. Because all three geophysical parameters can be retrieved from the radiance at the frequencies measured by the Scanning Multichannel Microwave Radiometer (SMMR) on Nimbus-7, the feasibility of retrieving evaporation directly from the measured radiance was suggested and demonstrated, on a monthly time scale, using coincident brightness temperatures observed by SMMR and latent heat fluxes computed from ship data. However, the operational microwave radiometer that followed SMMR, the Special Sensor Microwave/Imager (SSM/I), lacks the low-frequency channels that are sensitive to SST. These low-frequency channels are again included in the microwave imager (TMI) of the recently launched Tropical Rainfall Measuring Mission (TRMM). The radiance at the frequencies observed by both TMI and SSM/I was simulated with an atmospheric radiative transfer model using ocean surface parameters and atmospheric temperature and humidity profiles produced by the reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF). From the same ECMWF data set, coincident evaporation is computed using a surface-layer turbulent transfer model.
The sensitivity of the radiance to evaporation over various seasons and geographic locations is examined. The microwave frequencies whose radiance is significantly correlated with evaporation are identified, and the capability of estimating evaporation directly from TMI is discussed.
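The bulk parameterization referred to above can be sketched with the standard bulk-aerodynamic formula for latent heat flux. Every constant below (transfer coefficient, air density, latent heat, salinity factor, Tetens coefficients) is a typical textbook value, not one from the paper:

```python
import math

def latent_heat_flux(U, sst_c, q_air, rho=1.2, Ce=1.2e-3, Lv=2.5e6, rh_surf=0.98):
    """Bulk-aerodynamic sketch of ocean latent heat flux (W/m^2).

    U      : 10-m wind speed (m/s)
    sst_c  : sea surface temperature (deg C)
    q_air  : near-surface specific humidity (kg/kg)

    LE = rho * Lv * Ce * U * (q_s - q_air), where q_s is the saturation
    specific humidity at the SST (Tetens-type formula), reduced by ~2% for
    salinity. All constants are generic assumed values.
    """
    e_sat = 611.2 * math.exp(17.67 * sst_c / (sst_c + 243.5))   # sat. vapor pressure, Pa
    q_sat = 0.622 * e_sat / (101325.0 - 0.378 * e_sat)          # sat. specific humidity
    return rho * Lv * Ce * U * (rh_surf * q_sat - q_air)
```

The formula's dependence on exactly the three retrievable quantities (wind speed, SST, and a humidity measure) is what makes the satellite bulk-method estimates described above possible.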
NASA Astrophysics Data System (ADS)
Doering, Ryan L.
2009-01-01
Determining Herbig Ae/Be star dust parameters provides constraints for planet formation theory, and yields information about the matter around intermediate-mass stars as they approach the main sequence. In this dissertation talk, I present the results of a multiwavelength imaging and radiative transfer modeling study of Herbig Ae/Be stars, and a near-infrared instrumentation project, with the aim of parameterizing the dust in these systems. The Hubble Space Telescope was used to search for optical light scattered by dust in a sample of young stars. This survey provided the first scattered-light image of the circumstellar environment around the Herbig Ae/Be star HD 97048. Structure is observed in the dust distribution similar to that seen in other Herbig Ae/Be systems. A ground-based near-infrared imaging study of Herbig Ae/Be candidates was also carried out. Photometry was collected for spectral energy distribution construction, and binary candidates were resolved. Detailed dust modeling of HD 97048 and HD 100546 was carried out with a two-component geometry consisting of a flared disk and an extended envelope. The models achieve a reasonable global fit to the spectral energy distributions, and produce images with the desired geometry. The disk midplane densities are found to go as r^-0.5 and r^-1.8, giving disk dust masses of 3.0 x 10^-4 and 5.9 x 10^-5 Msun for HD 97048 and HD 100546, respectively. A gas-to-dust mass ratio lower limit of 3.2 was calculated for HD 97048. Furthermore, I have participated in the development of the WIYN High Resolution Infrared Camera. The instrument operates in the near-infrared (0.8-2.5 microns), includes 13 filters, and has a pixel size of 0.1 arcsec, resulting in a field of view of 3 arcmin x 3 arcmin. An angular resolution of 0.25 arcsec is anticipated. I provide an overview of the instrument and report performance results.
Samala, Ravi K; Chan, Heang-Ping; Hadjiiski, Lubomir M; Helvie, Mark A; Richter, Caleb; Cha, Kenny
2018-05-01
Deep learning models are highly parameterized, resulting in difficulty in inference and transfer learning for image recognition tasks. In this work, we propose a layered pathway evolution method to compress a deep convolutional neural network (DCNN) for classification of masses in digital breast tomosynthesis (DBT). The objective is to prune the number of tunable parameters while preserving the classification accuracy. In the first stage transfer learning, 19 632 augmented regions-of-interest (ROIs) from 2454 mass lesions on mammograms were used to train a pre-trained DCNN on ImageNet. In the second stage transfer learning, the DCNN was used as a feature extractor followed by feature selection and random forest classification. The pathway evolution was performed using genetic algorithm in an iterative approach with tournament selection driven by count-preserving crossover and mutation. The second stage was trained with 9120 DBT ROIs from 228 mass lesions using leave-one-case-out cross-validation. The DCNN was reduced by 87% in the number of neurons, 34% in the number of parameters, and 95% in the number of multiply-and-add operations required in the convolutional layers. The test AUC on 89 mass lesions from 94 independent DBT cases before and after pruning were 0.88 and 0.90, respectively, and the difference was not statistically significant (p > 0.05). The proposed DCNN compression approach can reduce the number of required operations by 95% while maintaining the classification performance. The approach can be extended to other deep neural networks and imaging tasks where transfer learning is appropriate.
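Two of the genetic-algorithm ingredients named above can be sketched generically. The paper's actual operators act on DCNN pathways, so the binary-mask version below is only an assumed simplification for illustration:

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    """Tournament selection: sample k candidates uniformly and return the
    fittest. A generic GA building block of the kind the pathway-evolution
    pruning uses; the paper's exact operators are not reproduced here."""
    contestants = rng.sample(range(len(population)), k)
    best = max(contestants, key=lambda i: fitness[i])
    return population[best]

def count_preserving_mutation(mask, rng=random):
    """Swap one kept (1) and one pruned (0) position so the number of active
    units stays fixed -- an assumed reading of 'count-preserving' mutation."""
    ones = [i for i, b in enumerate(mask) if b == 1]
    zeros = [i for i, b in enumerate(mask) if b == 0]
    if not ones or not zeros:
        return list(mask)
    i, j = rng.choice(ones), rng.choice(zeros)
    out = list(mask)
    out[i], out[j] = 0, 1
    return out
```

Keeping the count fixed lets the search explore which units to keep without drifting away from a target compression ratio.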
TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...
2015-04-16
Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior of the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
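The essence of an application simulator, replacing time-intensive compute stages by the passage of simulated time, fits in a few lines of discrete-event code. The stage names and durations below are invented, not taken from TADSim:

```python
import heapq

def simulate(stages, n_events):
    """Minimal discrete-event sketch of an 'application simulator': each
    compute stage is abstracted to a fixed duration, and 'running' the
    application means advancing a clock through scheduled events instead
    of doing the real work. Stage names and durations are made up."""
    durations = dict(stages)
    clock = 0.0
    counts = {name: 0 for name in durations}
    queue = [(dt, name) for name, dt in stages]   # (next event time, stage)
    heapq.heapify(queue)
    for _ in range(n_events):
        t, name = heapq.heappop(queue)            # next stage to complete
        clock = t
        counts[name] += 1
        heapq.heappush(queue, (t + durations[name], name))  # stage recurs
    return clock, counts

# Ten events of a 1.0-unit MD step interleaved with a 4.0-unit bookkeeping stage:
elapsed, counts = simulate([("md_step", 1.0), ("nudge", 4.0)], 10)
```

Because only durations and event ordering matter, parameter scans over many scenarios cost almost nothing compared to running the real application, which is the point the abstract makes.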
NASA Astrophysics Data System (ADS)
White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John
2017-08-01
Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to the Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that most influence the simulated outcomes of brush management. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; the total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
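A GLUE-style behavioral screen of the kind described can be sketched directly from two of the three named metrics (Nash-Sutcliffe efficiency and percent bias); the thresholds below are common rules of thumb, not those used in the study:

```python
def nse(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; values below 0 are worse than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def pbias(obs, sim):
    """Percent bias: 100 * (sum(obs) - sum(sim)) / sum(obs)."""
    return 100.0 * (sum(obs) - sum(sim)) / sum(obs)

def is_behavioral(obs, sim, nse_min=0.5, pbias_max=25.0):
    """GLUE-style behavioral screen on daily mean streamflow.
    The thresholds are generic rules of thumb, not the study's criteria."""
    return nse(obs, sim) >= nse_min and abs(pbias(obs, sim)) <= pbias_max
```

The study's finding is precisely that passing such a streamflow screen does not constrain the ET-difference prediction, so behavioral realizations can still disagree widely on the quantity of interest.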
FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard C. J. Somerville
2009-02-27
Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization; a fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE), with the objective function tailored to the various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities, as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study in which 751 days of semiarid atmospheric forcing were applied to unvegetated, uniform, 1 m, freely draining columns of four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied between them by up to an order of magnitude. This highlights the need for a careful selection of the soil hydraulic parameterization, ideally relying not only on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
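A sketch of the van Genuchten retention curve paired with the Mualem conductivity model, which we assume to be the "most widely used parameterization" and the "popular conductivity curve parameterization" the abstract refers to (an assumption on our part; parameter values below are generic):

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content theta(h) for the van Genuchten retention curve.

    h       : suction head (> 0)
    theta_r : residual water content,  theta_s : saturated water content
    alpha, n: shape parameters, with m = 1 - 1/n
    """
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * h) ** n) ** (-m)   # effective saturation in (0, 1]
    return theta_r + (theta_s - theta_r) * Se

def mualem_k(Se, Ks, n, L=0.5):
    """Mualem relative conductivity: K = Ks * Se^L * (1 - (1 - Se^(1/m))^m)^2.

    Near Se = 1 the slope dK/dSe can become unbounded for small n -- the kind
    of physically implausible near-saturation behavior the paper's criterion
    is designed to screen out.
    """
    m = 1.0 - 1.0 / n
    return Ks * Se ** L * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2
```

The residual-water-content issue the abstract mentions shows up here directly: theta(h) flattens at theta_r in the dry range, whereas a logarithmic dry branch keeps drying toward zero water content.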
How certain are the process parameterizations in our models?
NASA Astrophysics Data System (ADS)
Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard
2016-04-01
Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), process parameterizations, and parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise the parameter values are the only elements of a model that can vary, while the remaining modeling elements are often fixed a priori and therefore not subject to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process; the only flexibility comes from the changing parameter values, which enable these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is the extent to which the process parameterization and system architecture can support each other. In other words: does a specific form of process parameterization emerge for a specific model, given its system architecture and data, when little or no assumption is made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms than would have been decided otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to the extent to which the data can support it.
Sensitivity of Pacific Cold Tongue and Double-ITCZ Bias to Convective Parameterization
NASA Astrophysics Data System (ADS)
Woelfle, M.; Bretherton, C. S.; Pritchard, M. S.; Yu, S.
2016-12-01
Many global climate models struggle to accurately simulate annual mean precipitation and sea surface temperature (SST) fields in the tropical Pacific basin. Precipitation biases are dominated by the double intertropical convergence zone (ITCZ) bias, in which models exhibit precipitation maxima straddling the equator while only a single Northern Hemisphere maximum exists in observations. The major SST bias is the enhancement of the equatorial cold tongue. A series of coupled model simulations is used to investigate the sensitivity of bias development to convective parameterization. Model components are initialized independently prior to coupling to allow analysis of the transient response of the system directly following coupling. These experiments show precipitation and SST patterns to be highly sensitive to convective parameterization. Simulations in which the deep convective parameterization is disabled, forcing all convection to be resolved by the shallow convection parameterization, showed a degradation of both the cold tongue and double-ITCZ biases as precipitation becomes focused into off-equatorial regions of local SST maxima. Simulations using superparameterization in place of traditional cloud parameterizations showed a reduced cold tongue bias at the expense of additional precipitation biases. The equatorial SST responses to changes in convective parameterization are driven by changes in near-equatorial zonal wind stress. The sensitivity of convection to SST is important in determining the precipitation and wind stress fields; however, differences in convective momentum transport also play a role. While no significant improvement in the double ITCZ is seen in these simulations, the system's sensitivity to these changes reaffirms that improved convective parameterizations may provide an avenue for improving simulations of tropical Pacific precipitation and SST.
NASA Astrophysics Data System (ADS)
Sommer, Philipp S.; Kaplan, Jed O.
2017-10-01
While a wide range of Earth system processes occur at daily and even subdaily timescales, many global vegetation and other terrestrial dynamics models historically used monthly meteorological forcing both to reduce computational demand and because global datasets were lacking. Recently, dynamic land surface modeling has moved towards resolving daily and subdaily processes, and global datasets containing daily and subdaily meteorology have become available. These meteorological datasets, however, cover only the instrumental era of the last approximately 120 years at best, are subject to considerable uncertainty, and represent extremely large data files with associated computational costs of data input/output and file transfer. For periods before the recent past or in the future, global meteorological forcing can be provided by climate model output, but the quality of these data at high temporal resolution is low, particularly for daily precipitation frequency and amount. Here, we present GWGEN, a globally applicable statistical weather generator for the temporal downscaling of monthly climatology to daily meteorology. Our weather generator is parameterized using a global meteorological database and simulates daily values of five common variables: minimum and maximum temperature, precipitation, cloud cover, and wind speed. GWGEN is lightweight, modular, and requires a minimal set of monthly mean variables as input. The weather generator may be used in a range of applications, for example, in global vegetation, crop, soil erosion, or hydrological models. While GWGEN does not currently perform spatially autocorrelated multi-point downscaling of daily weather, this additional functionality could be implemented in future versions.
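The core idea of such a weather generator, conditioning stochastic daily values on a prescribed monthly mean, can be sketched as follows. This is a minimal illustration only: the AR(1) persistence and variance parameters are hypothetical placeholders, not GWGEN's fitted values, and GWGEN itself handles five coupled variables.

```python
import random

def daily_temps(monthly_mean, n_days=30, sigma=3.0, rho=0.7, seed=0):
    """Sketch of an AR(1) daily temperature generator conditioned on a
    monthly mean (sigma, rho are hypothetical, not GWGEN's parameters)."""
    rng = random.Random(seed)
    temps, anomaly = [], 0.0
    for _ in range(n_days):
        # AR(1) anomaly: persistence rho, innovation scaled so the
        # stationary standard deviation of the anomaly is sigma
        anomaly = rho * anomaly + sigma * (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)
        temps.append(monthly_mean + anomaly)
    # re-center so the simulated days reproduce the prescribed monthly mean
    shift = monthly_mean - sum(temps) / n_days
    return [t + shift for t in temps]

days = daily_temps(15.0)
print(round(sum(days) / len(days), 6))  # monthly mean is preserved
```

The re-centering step is the simplest way to make the downscaled daily series consistent with the monthly input; more sophisticated generators condition the stochastic process itself.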
Template-based procedures for neural network interpretation.
Alexander, J A.; Mozer, M C.
1999-04-01
Although neural networks often achieve impressive learning and generalization performance, their internal workings are typically all but impossible to decipher. This characteristic of the networks, their opacity, is one of the disadvantages of connectionism compared to more traditional, rule-oriented approaches to artificial intelligence. Without a thorough understanding of the network behavior, confidence in a system's results is lowered, and the transfer of learned knowledge to other processing systems - including humans - is precluded. Methods that address the opacity problem by casting network weights in symbolic terms are commonly referred to as rule extraction techniques. This work describes a principled approach to symbolic rule extraction from standard multilayer feedforward networks based on the notion of weight templates, parameterized regions of weight space corresponding to specific symbolic expressions. With an appropriate choice of representation, we show how template parameters may be efficiently identified and instantiated to yield the optimal match to the actual weights of a unit. Depending on the requirements of the application domain, the approach can accommodate n-ary disjunctions and conjunctions with O(k) complexity, simple n-of-m expressions with O(k^2) complexity, or more general classes of recursive n-of-m expressions with O(k^(L+2)) complexity, where k is the number of inputs to a unit and L is the recursion level of the expression class. Compared to other approaches in the literature, our method of rule extraction offers benefits in simplicity, computational performance, and overall flexibility. Simulation results on a variety of problems demonstrate the application of our procedures as well as the strengths and weaknesses of our general approach.
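The basic template-matching step described above, finding the symbolic weight that best fits a unit's actual weights, can be sketched as follows. The templates and trained weights here are hypothetical, and the brute-force search over candidates stands in for the paper's efficient identification procedure.

```python
def best_template(weights, templates):
    """Minimal template-matching sketch: each template is a sign vector
    s in {-1, 0, +1}^k; the symbolic weight w minimizing
    ||weights - w*s||^2 is w = (s . weights) / (s . s)."""
    best = None
    for s in templates:
        ss = sum(si * si for si in s)
        if ss == 0:
            continue
        w = sum(si * wi for si, wi in zip(s, weights)) / ss
        resid = sum((wi - w * si) ** 2 for si, wi in zip(s, weights))
        if best is None or resid < best[0]:
            best = (resid, s, w)
    return best  # (residual, template, fitted symbolic weight)

unit = [2.1, -1.9, 0.05, 2.0]            # hypothetical trained weights
cands = [(1, -1, 0, 1), (1, 1, 1, 1)]    # hypothetical candidate templates
resid, s, w = best_template(unit, cands)
print(s, round(w, 2))  # → (1, -1, 0, 1) 2.0
```

The closed-form fit for w is what makes per-template matching cheap; the complexity results quoted in the abstract concern how many templates must be considered for each expression class.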
The EarthCARE Simulator (Invited)
NASA Astrophysics Data System (ADS)
Donovan, D. P.; van Zadelhoff, G.; Lajas, D.; Eisinger, M.; Franco, R.
2009-12-01
In recent years, the value of multisensor remote sensing techniques applied to cloud, aerosol, radiation and precipitation studies has become clear. For example, combinations of instruments including lidars and/or radars have proved very useful for profile retrievals of cloud macrophysical and microphysical properties. This is amply illustrated by various results from the ARM (and similar) sites as well as by results derived using the CloudSat/CALIPSO/A-Train combination of instruments. The Earth Clouds Aerosol and Radiation Explorer (EarthCARE) mission is a combined ESA/JAXA mission scheduled for launch in 2013 and has been designed with sensor synergy playing a driving role in its scientific applications. The EarthCARE mission consists of a cloud profiling Doppler radar, a high-spectral-resolution lidar, a cloud/aerosol imager and a three-view broadband radiometer. As part of the mission development process, a detailed end-to-end multisensor simulation system has been developed. The EarthCARE Simulator (ECSIM) consists of a modular general framework populated by various models. The models within ECSIM are grouped according to the following scheme: 1) scene creation models (3D atmospheric scene definition); 2) orbit models (orbit and orientation of the platform as it overflies the scene); 3) forward models (calculate the signal impinging on the telescope/antenna of the instrument(s) in question); 4) instrument models (calculate the instrument response to the signals calculated by the forward models); 5) retrieval models (invert the instrument signals to recover relevant geophysical information). Within the default ECSIM models, crude instrument-specific parameterizations (e.g., empirically based Z vs. IWC relationships) are avoided. Instead, the radiative transfer forward models are kept as separate as possible from the instrument models. To accomplish this, the atmospheric scenes are specified in high detail (i.e., bin-resolved cloud size distributions are stored) and the relevant wavelength-dependent optical properties are stored in a separate database. This helps ensure that all the instruments involved in the simulation are treated in a consistent fashion and that the physical relationships between the various measurements are realistically captured (something that instrument-specific parameterizations cannot guarantee). As a consequence of ECSIM's modular structure, it is straightforward to add new instruments (thus expanding ECSIM beyond the EarthCARE instrument suite), and ECSIM is well suited for physically based retrieval algorithm development. In this talk, we will introduce ECSIM and emphasize the philosophy behind its design. We will also give a brief overview of the various default models. Finally, we will present several examples of how ECSIM can be and is being used for purposes ranging from general radiative transfer calculations to instrument performance estimation and synergistic algorithm development and characterization.
USDA-ARS?s Scientific Manuscript database
Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...
Midgley, S M
2004-01-21
A novel parameterization of x-ray interaction cross-sections is developed and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy-dependent coefficients describe the Z-direction curvature of the cross-sections. The composition-dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV) with an accuracy of better than 2% using a parameterization containing not more than five coefficients. For the biologically important elements 1 ≤ Z ≤ 20 and the energy range 30-150 keV, the parameterization utilizes four coefficients. At higher energies, the parameterization uses fewer coefficients, with only two coefficients needed at megavoltage energies.
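A generic illustration of such a scheme, fitting a low-order polynomial in Z to the logarithm of a per-electron cross-section, can be sketched as follows. The data here are synthetic and the plain polynomial form is a stand-in for the paper's actual parameterization, which is not reproduced in the abstract.

```python
def polyfit_normal(xs, ys, deg):
    """Least-squares polynomial fit via normal equations, solved with
    Gaussian elimination (pure Python; fine for low-order fits)."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # forward elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):              # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef  # coef[i] multiplies x**i

# Hypothetical example: ln(cross-section per electron) vs. atomic number Z
Z = [1, 6, 8, 13, 20]
lnsig = [0.10, 1.95, 2.60, 3.90, 5.15]       # synthetic values, not measured data
c0, c1, c2 = polyfit_normal(Z, lnsig, 2)     # three of the scheme's few coefficients
```

The point of the abstract's result is that a handful of such coefficients, together with composition moments, suffices across the periodic table; the fit machinery itself is standard.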
Pisano, Roberto; Fissore, Davide; Barresi, Antonello A; Brayard, Philippe; Chouvenc, Pierre; Woinet, Bertrand
2013-02-01
This paper shows how to optimize the primary drying phase, for both product quality and drying time, of a parenteral formulation via design space. A non-steady-state model, parameterized with experimentally determined heat and mass transfer coefficients, is used to define the design space when the heat transfer coefficient varies with the position of the vial in the array. The calculations recognize both equipment and product constraints and also take into account model parameter uncertainty. Examples are given of cycles designed for the same formulation but with varying freezing conditions and freeze-dryer scale; these are then compared in terms of drying time. Furthermore, the impact of inter-vial variability on the design space, and therefore on the optimized cycle, is addressed. In this regard, a simplified method is presented for cycle design, which reduces the experimental effort required for system qualification. The use of mathematical modeling is demonstrated to be very effective not only for cycle development but also for solving problems of process transfer. This study showed that inter-vial variability remains significant when vials are loaded on plastic trays, and how this variability can be taken into account during process design.
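The kind of vial heat/mass balance underlying such design-space calculations can be sketched with a steady-state simplification: the shelf supplies heat at rate Kv·(T_shelf − Tp), and sublimation consumes it at rate ΔHs·(p_ice(Tp) − Pc)/Rp. All parameter values below are illustrative, not the paper's fitted coefficients, and the ice vapor-pressure correlation is a common textbook form.

```python
import math

def p_ice(T):
    """Vapor pressure of ice in Pa (common correlation, T in K)."""
    return math.exp(-6139.9 / T + 28.8986)

def product_temp(T_shelf, P_c, Kv, Rp, dHs=2.84e6):
    """Solve the steady-state balance Kv*(T_shelf - Tp) =
    dHs * (p_ice(Tp) - P_c) / Rp for the product temperature Tp by
    bisection. Kv [W m-2 K-1], Rp [Pa m2 s kg-1], dHs [J kg-1]."""
    lo, hi = 200.0, T_shelf
    for _ in range(60):
        Tp = 0.5 * (lo + hi)
        resid = Kv * (T_shelf - Tp) - dHs * (p_ice(Tp) - P_c) / Rp
        if resid > 0:
            lo = Tp   # heat supply exceeds sublimation demand -> Tp rises
        else:
            hi = Tp
    return Tp

# Hypothetical operating point: shelf at -10 C, chamber at 10 Pa
Tp = product_temp(T_shelf=263.0, P_c=10.0, Kv=20.0, Rp=1e5)
print(round(Tp - 273.15, 1), "degC product temperature")
```

A design-space calculation repeats this kind of balance over ranges of shelf temperature and chamber pressure, keeping Tp below the formulation's collapse temperature while accounting for the vial-to-vial spread in Kv.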
Simultaneous Semi-Distributed Model Calibration Guided by ...
Modelling approaches that transfer hydrologically relevant information from locations with streamflow measurements to locations without such measurements continue to be an active field of research for hydrologists. The Pacific Northwest Hydrologic Landscapes (PNW HL) provide a solid conceptual classification framework based on our understanding of dominant processes. A Hydrologic Landscape code (a 5-letter descriptor based on physical and climatic properties) describes each assessment unit area, and these units average 60 km² in area. The core function of these HL codes is to relate and transfer hydrologically meaningful information between watersheds without the need for streamflow time series. We present a novel approach based on the HL framework to answer the question "How can we calibrate models across separate watersheds simultaneously, guided by our understanding of dominant processes?". We should be able to apply the same parameterizations to assessment units of common HL codes if 1) the Hydrologic Landscapes contain hydrologic information transferable between watersheds at a sub-watershed scale and 2) we use a conceptual hydrologic model and parameters that reflect the hydrologic behavior of a watershed. This work specifically tests the ability or inability to use HL codes to inform and share model parameters across watersheds in the Pacific Northwest. EPA's Western Ecology Division has published and is refining a framework for defining la
Parameterization of a mesoscopic model for the self-assembly of linear sodium alkyl sulfates
NASA Astrophysics Data System (ADS)
Mai, Zhaohuan; Couallier, Estelle; Rakib, Mohammed; Rousseau, Bernard
2014-05-01
A systematic approach to develop mesoscopic models for a series of linear anionic surfactants (CH3(CH2)n-1OSO3Na, n = 6, 9, 12, 15) by dissipative particle dynamics (DPD) simulations is presented in this work. The four surfactants are represented by coarse-grained models composed of the same head group and different numbers of identical tail beads. The transferability of the DPD model over different surfactant systems is carefully checked by adjusting the repulsive interaction parameters and the rigidity of surfactant molecules, in order to reproduce key equilibrium properties of the aqueous micellar solutions observed experimentally, including critical micelle concentration (CMC) and average micelle aggregation number (Nag). We find that the chain length is a good index to optimize the parameters and evaluate the transferability of the DPD model. Our models qualitatively reproduce the essential properties of these surfactant analogues with a set of best-fit parameters. It is observed that the logarithm of the CMC value decreases linearly with the surfactant chain length, in agreement with Klevens' rule. With the best-fit and transferable set of parameters, we have been able to calculate the free energy contribution to micelle formation per methylene unit of -1.7 kJ/mol, very close to the experimentally reported value.
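The Klevens'-rule relationship mentioned above, and the free-energy contribution per methylene unit implied by its slope, can be sketched as follows. The constants A and B are hypothetical fit values chosen only to illustrate the arithmetic; note that a slope of about 0.30 per CH2 reproduces the −1.7 kJ/mol figure quoted in the abstract.

```python
import math

R, T = 8.314, 298.15  # gas constant J/(mol K), temperature K

def klevens_cmc(n, A=1.6, B=0.30):
    """Klevens' rule: log10(CMC) = A - B*n for alkyl chain length n
    (A, B are hypothetical fit constants; CMC in mol/L)."""
    return 10 ** (A - B * n)

# Free-energy contribution of micellization per CH2 unit implied by slope B:
# dG_CH2 = -ln(10) * R * T * B
dG_CH2 = -math.log(10) * R * T * 0.30
print(round(dG_CH2 / 1000, 2), "kJ/mol")  # → -1.71 kJ/mol
```

The linear decrease of log CMC with chain length is exactly what the DPD models are reported to reproduce; the slope is what ties the simulated CMCs to a per-methylene free energy.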
A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone
NASA Astrophysics Data System (ADS)
Filipot, J.
2010-12-01
A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. The transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. The parameterization is further tested in the WAVEWATCH III code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep-water or shallow-water parameterizations.
Direct push driven in situ color logging tool (CLT): technique, analysis routines, and application
NASA Astrophysics Data System (ADS)
Werban, U.; Hausmann, J.; Dietrich, P.; Vienken, T.
2014-12-01
Direct push technologies have recently seen broad development, providing several tools for in situ parameterization of unconsolidated sediments. One of these techniques is the measurement of soil colors - proxy information that relates to soil/sediment properties. We introduce the direct push driven color logging tool (CLT) for real-time and depth-resolved investigation of soil colors within the visible spectrum. Until now, no routines have existed for handling highly resolved (mm-scale) soil color data. To develop such a routine, we transform raw data (CIEXYZ) into soil color surrogates of selected color spaces (CIExyY, CIEL*a*b*, CIEL*c*h*, sRGB) and denoise small-scale natural variability by Haar and Daubechies-4 wavelet transformation, gathering interpretable color logs over depth. However, interpreting color log data on its own remains challenging. Additional information, such as site-specific knowledge of the geological setting, is required to correlate soil color data with specific layer properties. Hence, we provide exemplary results from a joint interpretation of in situ-obtained soil color data and state-of-the-art direct push profiling tool data and discuss the benefit of the additional data. The developed routine is capable of transferring the colorimetric data into interpretable color surrogates. Soil color data proved to correlate with small-scale lithological/chemical changes (e.g., grain size, oxidative and reductive conditions), especially when combined with additional direct push vertical high-resolution data (e.g., cone penetration testing and soil sampling). The technique thus enables enhanced profiling by providing another reproducible high-resolution parameter for analyzing subsurface conditions, opening potential new areas of application and new outputs for such data in site investigation. It is our intention to improve color measurements in terms of method of application and data interpretation, to better characterize vadose zone/soil/sediment properties.
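The wavelet-denoising step described above can be illustrated with a minimal one-level Haar shrinkage on a synthetic depth log. The threshold and data are hypothetical; the published routine uses multi-level Haar and Daubechies-4 transforms.

```python
def haar_denoise(signal, threshold):
    """One-level Haar wavelet shrinkage sketch for a depth-resolved log
    (signal length must be even; threshold is a hypothetical tuning value)."""
    s2 = 2 ** 0.5
    approx = [(a + b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [d if abs(d) > threshold else 0.0 for d in detail]  # hard threshold
    out = []
    for a, d in zip(approx, detail):        # inverse Haar transform
        out += [(a + d) / s2, (a - d) / s2]
    return out

# Small-scale variability is suppressed while the layer step survives
log = [5.0, 5.2, 4.9, 5.1, 9.0, 9.1, 8.9, 9.2]  # synthetic color value vs. depth
print([round(v, 2) for v in haar_denoise(log, threshold=0.5)])
```

Thresholding the detail coefficients removes mm-scale noise but leaves the coarse approximation, and hence the lithological boundary between the two levels, intact.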
NASA Astrophysics Data System (ADS)
Hsu, Juno; Prather, Michael J.; Cameron-Smith, Philip; Veidenbaum, Alex; Nicolau, Alex
2017-07-01
Solar-J is a comprehensive radiative transfer model for the solar spectrum that addresses the needs of both solar heating and photochemistry in Earth system models. Solar-J is a spectral extension of Cloud-J, a standard in many chemical models that calculates photolysis rates in the 0.18-0.8 µm region. The Cloud-J core consists of an eight-stream scattering, plane-parallel radiative transfer solver with corrections for sphericity. Cloud-J uses cloud quadrature to accurately average over correlated cloud layers. It uses the scattering phase function of aerosols and clouds expanded to eighth order and thus avoids isotropic-equivalent approximations prevalent in most solar heating codes. The spectral extension from 0.8 to 12 µm enables calculation of both scattered and absorbed sunlight and thus aerosol direct radiative effects and heating rates throughout the Earth's atmosphere. The Solar-J extension adopts the correlated-k gas absorption bins, primarily water vapor, from the shortwave Rapid Radiative Transfer Model for general circulation model (GCM) applications (RRTMG-SW). Solar-J successfully matches RRTMG-SW's tropospheric heating profile in a clear-sky, aerosol-free, tropical atmosphere. We compare both codes in cloudy atmospheres with a liquid-water stratus cloud and an ice-crystal cirrus cloud. For the stratus cloud, both models use the same physical properties, and we find a systematic low bias of about 3 % in planetary albedo across all solar zenith angles caused by RRTMG-SW's two-stream scattering. 
Discrepancies with the cirrus cloud using any of RRTMG-SW's three different parameterizations are as large as about 20-40 % depending on the solar zenith angles and occur throughout the atmosphere.Effectively, Solar-J has combined the best components of RRTMG-SW and Cloud-J to build a high-fidelity module for the scattering and absorption of sunlight in the Earth's atmosphere, for which the three major components - wavelength integration, scattering, and averaging over cloud fields - all have comparably small errors. More accurate solutions with Solar-J come with increased computational costs, about 5 times that of RRTMG-SW for a single atmosphere. There are options for reduced costs or computational acceleration that would bring costs down while maintaining improved fidelity and balanced errors.
NASA Technical Reports Server (NTRS)
Choi, Hyun-Joo; Chun, Hye-Yeong; Gong, Jie; Wu, Dong L.
2012-01-01
The realism of ray-based spectral parameterization of convective gravity wave drag, which considers the updated moving speed of the convective source and multiple wave propagation directions, is tested against the Atmospheric Infrared Sounder (AIRS) onboard the Aqua satellite. Offline parameterization calculations are performed using the global reanalysis data for January and July 2005, and gravity wave temperature variances (GWTVs) are calculated at z = 2.5 hPa (unfiltered GWTV). AIRS-filtered GWTV, which is directly compared with AIRS, is calculated by applying the AIRS visibility function to the unfiltered GWTV. A comparison between the parameterization calculations and AIRS observations shows that the spatial distribution of the AIRS-filtered GWTV agrees well with that of the AIRS GWTV. However, the magnitude of the AIRS-filtered GWTV is smaller than that of the AIRS GWTV. When an additional cloud top gravity wave momentum flux spectrum with longer horizontal wavelength components that were obtained from the mesoscale simulations is included in the parameterization, both the magnitude and spatial distribution of the AIRS-filtered GWTVs from the parameterization are in good agreement with those of the AIRS GWTVs. The AIRS GWTV can be reproduced reasonably well by the parameterization not only with multiple wave propagation directions but also with two wave propagation directions of 45 degrees (northeast-southwest) and 135 degrees (northwest-southeast), which are optimally chosen for computational efficiency.
NASA Astrophysics Data System (ADS)
Fröhlich, K.; Schmidt, T.; Ern, M.; Preusse, P.; de La Torre, A.; Wickert, J.; Jacobi, Ch.
2007-12-01
Five years of global temperatures retrieved from radio occultations measured by Champ (Challenging Minisatellite Payload) and SAC-C (Satelite de Aplicaciones Cientificas-C) are analyzed for gravity waves (GWs). In order to separate GWs from other atmospheric variations, a high-pass filter was applied on the vertical profile. Resulting temperature fluctuations correspond to vertical wavelengths between 400 m (instrumental resolution) and 10 km (limit of the high-pass filter). The temperature fluctuations can be converted into GW potential energy, but for comparison with parameterization schemes GW momentum flux is required. We therefore used representative values for the vertical and horizontal wavelength to infer GW momentum flux from the GPS measurements. The vertical wavelength value is determined by high-pass filtering, the horizontal wavelength is adopted from a latitude-dependent climatology. The obtained momentum flux distributions agree well, both in global distribution and in absolute values, with simulations using the Warner and McIntyre parameterization (WM) scheme. However, discrepancies are found in the annual cycle. Online simulations, implementing the WM scheme in the mechanistic COMMA-LIM (Cologne Model of the Middle Atmosphere—Leipzig Institute for Meteorology) general circulation model (GCM), do not converge, demonstrating that a good representation of GWs in a GCM requires both a realistic launch distribution and an adequate representation of GW breaking and momentum transfer.
NASA Astrophysics Data System (ADS)
Asher, E.; Emmons, L. K.; Kinnison, D. E.; Tilmes, S.; Hills, A. J.; Hornbrook, R. S.; Stephens, B. B.; Apel, E. C.
2017-12-01
Surface albedo and precipitation over the Southern Ocean are sensitive to parameterizations of aerosol formation and cloud dynamics in global climate models. Observations of precursor gases for natural aerosols can help constrain the uncertainty in these parameterizations, if used in conjunction with an appropriately simplified chemical mechanism. We implement current oceanic "bottom-up" emission climatologies of dimethyl sulfide (DMS) and isoprene in CESM2.0 (Lana et al. 2016; Archer et al. 2009) and compare modeled constituents from two separate chemical mechanisms with data obtained from the Trace Organic Gas Analyzer (TOGA) on the O2/N2 Ratios and CO2 Airborne Study in the Southern Ocean (ORCAS) and the Atmospheric Tomography Mission 2 (ATom-2). We use ORCAS measurements of DMS, isoprene, methyl vinyl ketone (MVK) and methacrolein (MACR) from over 10 flights in Jan. - Feb. 2016 as a training dataset to improve "bottom-up" emissions. Thereafter, we evaluate the scaled "top-down" emissions in CESM with TOGA data obtained from the Atmospheric Tomography Mission (ATom-2) in Feb. 2017. Recent laboratory studies at NCAR confirm that TOGA surpasses proton transfer reaction mass spectrometry (PTR-MS) and commercial gas chromatography (GC) instruments with respect to accurate measurements of oxygenated VOCs in low nitrogen oxide (NO) environments, such as MVK and MACR.
NASA Technical Reports Server (NTRS)
Curry, Judith; Khvorostyanov, V. I.
2005-01-01
This project used a hierarchy of cloud resolving models to address the following science issues of relevance to CRYSTAL-FACE: What ice crystal nucleation mechanisms are active in the different types of cirrus clouds in the Florida area, and how do these different nucleation processes influence the evolution of the cloud system and the upper tropospheric humidity? How does the feedback between supersaturation and nucleation impact the evolution of the cloud? What is the relative importance of large-scale vertical motion and turbulent motions in the evolution of the crystal size spectra? How do the size spectra impact the life cycle of the cloud, stratospheric dehydration, and cloud radiative forcing? What is the nature of the turbulence and waves in the upper troposphere generated by precipitating deep convective cloud systems? How do cirrus microphysical and optical properties vary with the small-scale dynamics? How do turbulence and waves in the upper troposphere influence cross-tropopause mixing and stratospheric and upper tropospheric humidity? The models used in this study were: a 2-D hydrostatic model with explicit microphysics that can account for 30 size bins for both the droplet and crystal size spectra (notably, a new ice crystal nucleation scheme was incorporated into this model); a parcel model with explicit microphysics, for developing and evaluating microphysical parameterizations; and a single-column model for testing bulk microphysics parameterizations.
47 CFR 101.55 - Considerations involving transfer or assignment applications.
Code of Federal Regulations, 2011 CFR
2011-10-01
.... (d) If a proposed transfer of radio facilities is incidental to a sale or other facilities or merger... SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Applications and Licenses License Transfers, Modifications, Conditions and Forfeitures § 101.55 Considerations involving transfer or assignment applications. (a) Except...
Ideas for the rapid development of the structural models in mechanical engineering
NASA Astrophysics Data System (ADS)
Oanta, E.; Raicu, A.; Panait, C.
2017-08-01
Conceiving computer-based instruments has been a long-standing concern of the authors. Some of their original solutions are: optimal processing of large matrices, interfaces between programming languages, approximation theory using spline functions, and increased numerical accuracy based on extended arbitrary-precision libraries. For the rapid development of models we identified the following directions: atomization, ‘librarization’, parameterization, automatization and integration. Each of these directions has particular aspects depending on whether we approach mechanical design problems or software development. Atomization means a thorough top-down decomposition analysis, which offers insight into the basic features of the phenomenon. Creating libraries of reusable mechanical parts and libraries of programs (data types, functions) saves time, cost and effort when a new model must be conceived. Parameterization leads to flexible definition of the mechanical parts, the values of the parameters being changed either by a dimensioning program or in accordance with other parts belonging to the same assembly. The resulting templates may also be included in libraries. Original software applications are useful for generating the model’s input data, for entering the data into commercial CAD/FEA applications, and for integrating the data of the various types of studies included in the same project.
Efficient use of mobile devices for quantification of pressure injury images.
Garcia-Zapirain, Begonya; Sierra-Sosa, Daniel; Ortiz, David; Isaza-Monsalve, Mariano; Elmaghraby, Adel
2018-01-01
Pressure injuries are chronic wounds formed by the constriction of soft tissues against bone prominences. To assess these injuries, medical personnel carry out evaluation and diagnosis using visual methods and manual measurements, which can be inaccurate and may cause patients discomfort. By using segmentation techniques, pressure injuries can be extracted from an image and accurately parameterized, leading to a correct diagnosis. In general, these techniques are based on the solution of differential equations, and the numerical methods involved are demanding in terms of computational resources. In previous work, we proposed a technique based on toroidal parametric equations for image decomposition and segmentation without solving differential equations. In this paper, we present the development of a mobile application for the non-contact assessment of pressure injuries based on the toroidal decomposition of images. This technique allows us to achieve accurate segmentation almost 8 times faster than the Active Contours without Edges (ACWE) and Dynamic Contours methods. We describe the techniques and the implementation for Android devices using Python and Kivy. The application allows for the segmentation and parameterization of injuries, obtaining relevant information for diagnosis and tracking the evolution of patients' injuries.
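The paper's toroidal decomposition is not reproduced here; the sketch below only illustrates, under assumed conventions, how torus-style parametric equations can generate nested sampling rings around a candidate injury center. The function name, center, radii, and sampling counts are all hypothetical.

```python
import numpy as np

def toroidal_rings(center, R, r, n_u=64, n_v=8):
    """Sample 2-D points on nested rings defined by torus-style
    parametric equations projected onto the image plane.
    center: (cx, cy) in pixels; R: mean ring radius; r: radial half-width."""
    cx, cy = center
    u = np.linspace(0.0, 2.0 * np.pi, n_u, endpoint=False)   # angle around each ring
    v = np.linspace(0.0, 2.0 * np.pi, n_v, endpoint=False)   # radial modulation parameter
    uu, vv = np.meshgrid(u, v)
    rho = R + r * np.cos(vv)            # ring radius varies between R - r and R + r
    xs = cx + rho * np.cos(uu)
    ys = cy + rho * np.sin(uu)
    return np.stack([xs, ys], axis=-1)  # shape (n_v, n_u, 2)

# Sampling image intensities at these coordinates (e.g., via interpolation)
# would decompose the region into rings without solving any PDE.
pts = toroidal_rings((100.0, 100.0), R=30.0, r=10.0)
```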
NASA Astrophysics Data System (ADS)
Guendehou, G. H. S.; Liski, J.; Tuomi, M.; Moudachirou, M.; Sinsin, B.; Mäkipää, R.
2013-05-01
We evaluated the applicability of the dynamic soil carbon model Yasso07 in tropical conditions in West Africa by simulating the litter decomposition process using, as the model's required inputs, litter mass, litter quality, temperature and precipitation data collected during a litterbag experiment. The experiment was conducted over a six-month period on leaf litter of five dominant tree species, namely Afzelia africana, Anogeissus leiocarpa, Ceiba pentandra, Dialium guineense and Diospyros mespiliformis, in a semi-deciduous vertisol forest in Southern Benin. Since the predictions of Yasso07 were not consistent with the observations on mass loss and chemical composition of litter, Yasso07 was fitted to a dataset composed of global data and the new experimental data from Benin. The re-parameterized versions of Yasso07 had good predictive ability and refined the applicability of the model in Benin for estimating soil carbon stocks, their changes, and CO2 emissions from heterotrophic respiration as the main outputs of the model. The findings of this research support the hypothesis that the high variation of litter quality observed in the tropics is a major driver of decomposition and needs to be accounted for in the model parameterization.
García-Betances, Rebeca I.; Cabrera-Umpiérrez, María Fernanda; Ottaviano, Manuel; Pastorino, Matteo; Arredondo, María T.
2016-01-01
Despite the rapid evolution of Information and Computer Technology (ICT), and the growing recognition of the importance of the concept of universal design in all domains of daily living, mainstream ICT-based product designers and developers still work without any truly structured tools, guidance or support to effectively adapt their products and services to users' real needs. This paper presents the approach used to define and evaluate parametric cognitive models that describe the interaction with and usage of ICT by people with aging- and disability-derived functional impairments. A multisensorial training platform was used to train the parameterized virtual user models, based on real user measurements under real conditions; these models act as test-bed subjects during all stages of the design of ICT-based products accommodating simulated disabilities. An analytical study was carried out to identify the relevant cognitive functions involved, together with their corresponding parameters as related to aging- and disability-derived functional impairments. Evaluation of the final cognitive virtual user models in a real application confirmed that the use of these models produces concrete, valuable benefits for the design and testing process of accessible ICT-based applications and services. Parameterization of cognitive virtual user models allows cognitive and perceptual aspects to be incorporated during the design process. PMID:26907296
Toward seamless hydrologic predictions across spatial scales
NASA Astrophysics Data System (ADS)
Samaniego, Luis; Kumar, Rohini; Thober, Stephan; Rakovec, Oldrich; Zink, Matthias; Wanders, Niko; Eisner, Stephanie; Müller Schmied, Hannes; Sutanudjaja, Edwin H.; Warrach-Sagi, Kirsten; Attinger, Sabine
2017-09-01
Land surface and hydrologic models (LSMs/HMs) are used at diverse spatial resolutions ranging from catchment-scale (1-10 km) to global-scale (over 50 km) applications. Applying the same model structure at different spatial scales requires that the model estimates similar fluxes independent of the chosen resolution, i.e., fulfills a flux-matching condition across scales. An analysis of state-of-the-art LSMs and HMs reveals that most do not have consistent hydrologic parameter fields. Multiple experiments with the mHM, Noah-MP, PCR-GLOBWB, and WaterGAP models demonstrate the pitfalls of deficient parameterization practices currently used in most operational models, which are insufficient to satisfy the flux-matching condition. These examples demonstrate that J. Dooge's 1982 statement on the unsolved problem of parameterization in these models remains true. Based on a review of existing parameter regionalization techniques, we postulate that the multiscale parameter regionalization (MPR) technique offers a practical and robust method that provides consistent (seamless) parameter and flux fields across scales. Herein, we develop a general model protocol to describe how MPR can be applied to a particular model and present an example application using the PCR-GLOBWB model. Finally, we discuss potential advantages and limitations of MPR in obtaining the seamless prediction of hydrological fluxes and states across spatial scales.
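A minimal sketch of the MPR idea described above, assuming a hypothetical linear transfer function and simple block averaging as the upscaling operator (the actual MPR technique uses calibrated, possibly nonlinear transfer functions and a choice of upscaling operators). The point it illustrates is flux/parameter consistency: because the same transfer function is applied at the finest resolution and only then upscaled, the domain mean is preserved across model resolutions.

```python
import numpy as np

def regionalize(predictor_hr, a, b):
    # Transfer function: maps a high-resolution predictor field
    # (e.g., sand fraction) to a parameter field via global coefficients a, b.
    return a + b * predictor_hr

def upscale(field_hr, factor):
    # Block-average a high-resolution field to a coarser model resolution.
    n = field_hr.shape[0] // factor
    m = field_hr.shape[1] // factor
    return field_hr[:n * factor, :m * factor].reshape(n, factor, m, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
sand = rng.uniform(0.0, 1.0, size=(64, 64))   # hypothetical fine-scale predictor
param_hr = regionalize(sand, a=0.1, b=0.5)    # parameter at the finest grid
param_coarse = upscale(param_hr, 4)           # same coefficients, 4x coarser grid
param_coarser = upscale(param_hr, 16)         # 16x coarser grid
```

Block averaging over the full domain leaves the mean parameter unchanged at every resolution, which is the "seamless" property the protocol targets.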
Structure and Dynamics of the Quasi-Biennial Oscillation in MERRA-2.
Coy, Lawrence; Wargan, Krzysztof; Molod, Andrea M; McCarty, William R; Pawson, Steven
2016-07-01
The structure, dynamics, and ozone signal of the Quasi-Biennial Oscillation produced by the 35-year NASA MERRA-2 (Modern-Era Retrospective Analysis for Research and Applications) reanalysis are examined based on monthly mean output. Along with the analysis of the QBO in assimilation winds and ozone, the QBO forcings created by assimilated observations, dynamics, parameterized gravity wave drag, and ozone chemistry parameterization are examined and compared with the original MERRA system. Results show that the MERRA-2 reanalysis produces a realistic QBO in the zonal winds, mean meridional circulation, and ozone over the 1980-2015 time period. In particular, the MERRA-2 zonal winds show improved representation of the QBO 50 hPa westerly phase amplitude at Singapore when compared to MERRA. The use of limb ozone observations creates improved vertical structure and realistic downward propagation of the ozone QBO signal during times when the MLS ozone limb observations are available (October 2004 to present). The increased equatorial GWD in MERRA-2 has reduced the zonal wind data analysis contribution compared to MERRA so that the QBO mean meridional circulation can be expected to be more physically forced and therefore more physically consistent. This can be important for applications in which MERRA-2 winds are used to drive transport experiments.
Vapor-liquid phase equilibria of water modelled by a Kim-Gordon potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maerzke, K A; McGrath, M J; Kuo, I W
2009-03-16
Gibbs ensemble Monte Carlo simulations were carried out to investigate the properties of a frozen-electron-density (or Kim-Gordon, KG) model of water along the vapor-liquid coexistence curve. Because of its theoretical basis, such a KG model provides for seamless coupling to Kohn-Sham density functional theory for use in mixed quantum mechanics/molecular mechanics (QM/MM) implementations. The Gibbs ensemble simulations indicate rather limited transferability of such a simple KG model to other state points. Specifically, a KG model that was parameterized by Barker and Sprik to the properties of liquid water at 300 K yields saturated vapor pressures and a critical temperature that are significantly under- and over-estimated, respectively.
Radiation Losses Due to Tapering of a Double-Core Optical Waveguide
NASA Technical Reports Server (NTRS)
Lyons, Donald R.; Khet, Myat; Pencil, Eric (Technical Monitor)
2001-01-01
The theoretical model we designed parameterizes the power losses as a function of the profile shape for a tapered, single-mode, optical dielectric coupler. The focus of this project is to produce a working model that determines the power losses experienced by the fibers when light crosses a taper region. This phenomenon can be examined using coupled mode theory. The optical directional coupler consists of a parallel, dual-channel waveguide with minimal spacing between the channels to permit energy exchange. Thus, power transfer is essentially a function of the taper profile. To find the fields in the fibers, the approach used was that of solving the Helmholtz equation in cylindrical coordinates, involving Bessel and modified Bessel functions depending on the location.
Constraints on interacting dark energy models from Planck 2015 and redshift-space distortion data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, André A.; Abdalla, E.; Xu, Xiao-Dong
2017-01-01
We investigate phenomenological interactions between dark matter and dark energy and constrain these models by employing the most recent cosmological data, including the cosmic microwave background radiation anisotropies from Planck 2015, Type Ia supernovae, baryon acoustic oscillations, the Hubble constant and redshift-space distortions. We find that an interaction in the dark sector parameterized as an energy transfer from dark matter to dark energy is strongly suppressed by the whole updated cosmological dataset. On the other hand, an interaction between the dark sectors with energy flowing from dark energy to dark matter proves to be in better agreement with the available cosmological observations. This coupling between dark sectors is needed to alleviate the coincidence problem.
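For readers unfamiliar with this class of models, one common phenomenological form of the dark-sector coupling (the precise couplings tested in the paper may differ) modifies the background continuity equations as:

```latex
\dot\rho_c + 3H\rho_c = Q, \qquad
\dot\rho_d + 3H(1+w)\rho_d = -Q, \qquad
Q = 3H\left(\xi_1\rho_c + \xi_2\rho_d\right),
```

where \rho_c and \rho_d are the dark matter and dark energy densities, H is the Hubble rate, w the dark energy equation of state, and \xi_1, \xi_2 dimensionless coupling constants; Q > 0 corresponds to energy flowing from dark energy to dark matter, the direction the data favor.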
On factors influencing air-water gas exchange in emergent wetlands
Ho, David T.; Engel, Victor C.; Ferron, Sara; Hickman, Benjamin; Choi, Jay; Harvey, Judson W.
2018-01-01
Knowledge of gas exchange in wetlands is important in order to determine fluxes of climatically and biogeochemically important trace gases and to conduct mass balances for metabolism studies. Very few studies have been conducted to quantify gas transfer velocities in wetlands, and many wind speed/gas exchange parameterizations used in oceanographic or limnological settings are inappropriate under conditions found in wetlands. Here six measurements of gas transfer velocities are made with SF6 tracer release experiments in three different years in the Everglades, a subtropical peatland with surface water flowing through emergent vegetation. The experiments were conducted under different flow conditions and with different amounts of emergent vegetation to determine the influence of wind, rain, water flow, waterside thermal convection, and vegetation on air-water gas exchange in wetlands. Measured gas transfer velocities under the different conditions ranged from 1.1 cm h−1 during baseline conditions to 3.2 cm h−1 when rain and water flow rates were high. Commonly used wind speed/gas exchange relationships would overestimate the gas transfer velocity by a factor of 1.2 to 6.8. Gas exchange due to thermal convection was relatively constant and accounted for 14 to 51% of the total measured gas exchange. Differences in rain and water flow among the different years were responsible for the variability in gas exchange, with flow accounting for 37 to 77% of the gas exchange, and rain responsible for up to 40%.
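For context on the overestimation factor reported above, the following hedged sketch applies a classic open-water wind-speed parameterization to wetland-like conditions. The quadratic form and coefficient follow the widely used Wanninkhof (1992) relationship; the 3 m/s wind and the Schmidt number of 600 (freshwater CO2 at 20 °C) are illustrative assumptions, not values from this study.

```python
def k_wanninkhof92(u10, sc=600.0):
    """Quadratic wind-speed parameterization of gas transfer velocity
    (Wanninkhof, 1992): k = 0.31 * u10^2 * (Sc/660)^-0.5, in cm/h,
    with u10 the 10 m wind speed in m/s and Sc the Schmidt number."""
    return 0.31 * u10 ** 2 * (sc / 660.0) ** -0.5

k_measured = 1.1                     # cm/h, wetland baseline reported in the abstract
k_open_water = k_wanninkhof92(3.0)   # open-water estimate at a modest 3 m/s wind
overestimate = k_open_water / k_measured
```

Even at this modest wind speed, the open-water relationship exceeds the measured wetland baseline by a factor consistent with the 1.2 to 6.8 range in the abstract, since wind coupling to the water surface is damped by emergent vegetation.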
Balaskó, M; Korösi, F; Szalay, Zs
2004-10-01
Neutron radiography (NR) and X-ray radiography (XR) were applied semi-simultaneously to an Al casting. The experiments were performed at the 10 MW VVR-SM research reactor in Budapest (Hungary). The aim was to reveal, identify and parameterize the hidden defects in the Al casting. The joint application of NR and XR revealed hidden defects located in the casting. Image analysis of the NR and XR images unveiled a cone-like geometry of the defects. Spectral density analysis of the images showed a distinctly different character for the hidden-defect region of the Al casting in comparison with that of the defect-free one.
Modeling Cloud Phase Fraction Based on In-situ Observations in Stratiform Clouds
NASA Astrophysics Data System (ADS)
Boudala, F. S.; Isaac, G. A.
2005-12-01
Mixed-phase clouds influence weather and climate in several ways. Because they exhibit very different optical properties than ice-only or liquid-only clouds, they play an important role in the Earth's radiation balance by modifying the optical properties of clouds. Precipitation development is also enhanced under mixed-phase conditions, and these clouds may contain large supercooled drops that freeze quickly on contact with aircraft surfaces, posing a hazard to aviation. The coexistence of ice- and liquid-phase cloud in the same environment is thermodynamically unstable and is thus expected to disappear quickly. However, several observations show that mixed-phase clouds are relatively stable in the natural environment and last for several hours. Although some efforts have been made in the past to study the microphysical properties of mixed-phase clouds, there are still a number of uncertainties in modeling these clouds, particularly in large-scale numerical models. In most models, very simple temperature-dependent parameterizations of cloud phase fraction are used to estimate the fraction of ice or liquid phase in a given mixed-phase cloud. In this talk, two different parameterizations of ice fraction based on in-situ aircraft measurements of cloud microphysical properties collected in extratropical stratiform clouds during several field programs will be presented. One of the parameterizations has been tested using a single prognostic equation developed by Tremblay et al. (1996) for application in the Canadian regional weather prediction model.
The addition of small ice particles significantly increased the vapor deposition rate when the atmosphere was assumed to be water saturated, and thus enhanced the glaciation of the simulated mixed-phase cloud via the Bergeron-Findeisen process without significantly affecting other cloud microphysical processes such as riming and particle sedimentation rates. After the water vapor pressure in the mixed-phase cloud was modified based on the Lord et al. (1984) scheme, by weighting the saturation water vapor pressure with the ice fraction, it was possible to simulate a more stable mixed-phase cloud. It was also noted that the ice particle concentration (L > 100 μm) in mixed-phase cloud is lower on average by a factor of 3, and as a result the parameterization should be corrected for this effect. After accounting for it, the parameterized ice fraction agreed well with the observed mean ice fraction.
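As a point of reference for the "very simple temperature-dependent parameterizations" the talk contrasts against, here is a minimal sketch of a generic diagnostic ice-fraction ramp. The 0 °C and -40 °C bounds and the linear form are a common baseline assumption, not the fitted parameterizations presented in the talk.

```python
def ice_fraction(t_c, t_all_ice=-40.0, t_all_liquid=0.0):
    """Generic diagnostic temperature-dependent ice fraction:
    0 at/above 0 C (all liquid), 1 at/below -40 C (all ice,
    roughly the homogeneous freezing threshold), linear in between."""
    if t_c >= t_all_liquid:
        return 0.0
    if t_c <= t_all_ice:
        return 1.0
    return (t_all_liquid - t_c) / (t_all_liquid - t_all_ice)
```

Observation-based fits replace this single ramp with functions of temperature (and sometimes cloud type or particle size cutoff), which is precisely where the in-situ datasets above add value.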
Effects of Planetary Boundary Layer Parameterizations on CWRF Regional Climate Simulation
NASA Astrophysics Data System (ADS)
Liu, S.; Liang, X.
2011-12-01
Planetary Boundary Layer (PBL) parameterizations incorporated in CWRF (the Climate extension of the Weather Research and Forecasting model) are first evaluated by comparing simulated PBL heights with observations. Among the 10 evaluated PBL schemes, 2 (CAM, UW) are new in CWRF while the other 8 are original WRF schemes. MYJ, QNSE and UW determine the PBL heights from turbulent kinetic energy (TKE) profiles, while others (YSU, ACM, GFS, CAM, TEMF) derive them from bulk Richardson criteria. All TKE-based schemes (MYJ, MYNN, QNSE, UW, BouLac) substantially underestimate convective or residual PBL heights from noon toward evening, while others (ACM, CAM, YSU) capture the observed diurnal cycle well, except for the GFS with its systematic overestimation. These differences among the schemes are representative over most areas of the simulation domain, suggesting systematic behaviors of the parameterizations. Lower PBL heights simulated by the QNSE and MYJ are consistent with their smaller Bowen ratios and heavier rainfall, while higher PBL tops from the GFS correspond to warmer surface temperatures. Effects of the PBL parameterizations on the CWRF regional climate simulation are then compared. The QNSE PBL scheme yields systematically heavier rainfall almost everywhere and throughout the year; this is identified with a much greater surface Bowen ratio (smaller sensible versus larger latent heating) and wetter soil moisture than the other PBL schemes. Its predecessor, the MYJ scheme, shares the same deficiency to a lesser degree. For temperature, the performance of the QNSE and MYJ schemes remains poor, with substantially larger rms errors in all seasons. The GFS PBL scheme also produces large warm biases. Pronounced sensitivities to the PBL schemes are also found in winter and spring over most areas except the southern U.S.
(Southeast, Gulf States, NAM); excluding the outliers (QNSE, MYJ, GFS) that cause extreme biases of -6 to +3°C, the differences among the schemes are still visible (±2°C), with the CAM generally more realistic. The QNSE, MYJ, GFS and BouLac PBL parameterizations are identified as obvious outliers in overall performance in representing precipitation, surface air temperature or PBL height variations. Their poor performance may result from deficiencies in physical formulations, dependences on applicable scales, or problematic numerical implementations, requiring detailed future investigation to isolate the actual causes.
NASA Astrophysics Data System (ADS)
Riddick, S. N.; Ward, D. S.; Hess, P.; Mahowald, N.; Massad, R. S.; Holland, E. A.
2015-09-01
Nitrogen applied to the land surface for agricultural purposes represents a significant source of reactive nitrogen (Nr) that can be emitted as a gaseous Nr species, be denitrified to atmospheric nitrogen (N2), run off during rain events, or form plant-usable nitrogen in the soil. To investigate the magnitude, temporal variability and spatial heterogeneity of nitrogen pathways on a global scale from sources of animal manure and synthetic fertilizer, we developed a mechanistic parameterization of these pathways within a global terrestrial model. The parameterization uses a climate-dependent approach whereby the relationships between meteorological variables and biogeochemical processes are used to calculate the volatilization of ammonia (NH3), nitrification, and run-off of Nr following manure or fertilizer application. For the year 2000, we estimate global NH3 emission and Nr dissolved during rain events from manure at 21 and 11 Tg N yr-1, respectively; for synthetic fertilizer we estimate the NH3 emission and Nr run-off during rain events at 12 and 5 Tg N yr-1, respectively. The parameterization was implemented in the Community Land Model from 1850 to 2000 in a transient simulation, which predicted that, even though the absolute values of all nitrogen pathways are increasing with increased manure and synthetic fertilizer application, the partitioning of nitrogen to NH3 emissions from manure is increasing on a percentage basis, from 14 % of nitrogen applied (3 Tg NH3 yr-1) in 1850 to 18 % of nitrogen applied in 2000 (22 Tg NH3 yr-1). While the model confirms earlier estimates of nitrogen fluxes made in a range of studies, its key purpose is to provide a theoretical framework that can be employed within a biogeochemical model, can explicitly respond to climate, and can evolve and improve with further observation.
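The reported percentages and fluxes can be cross-checked with simple arithmetic: inverting the NH3 fraction gives the total manure nitrogen application the simulation implies. The helper below is purely illustrative and uses only numbers stated in the abstract.

```python
def implied_application(nh3_emission_tg, nh3_fraction):
    # Invert the reported "NH3 emitted as a fraction of N applied"
    # to recover the implied total manure N application (Tg N/yr).
    return nh3_emission_tg / nh3_fraction

app_1850 = implied_application(3.0, 0.14)    # ~21 Tg N/yr applied in 1850
app_2000 = implied_application(22.0, 0.18)   # ~122 Tg N/yr applied in 2000
growth = app_2000 / app_1850                 # roughly a sixfold increase in application
```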
NASA Technical Reports Server (NTRS)
Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Pollock, Craig J.
2015-01-01
The most common instrument for low energy plasmas consists of a top-hat electrostatic analyzer geometry coupled with a microchannel-plate (MCP)-based detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Furthermore, due to finite resources, for large sensor suites such as the Fast Plasma Investigation (FPI) on NASA's Magnetospheric Multiscale (MMS) mission, calibration data are increasingly sparse. Measurements must be interpolated and extrapolated to understand instrument behavior for untestable operating modes and yet sensor inter-calibration is critical to mission success. To characterize instruments from a minimal set of parameters we have developed the first comprehensive mathematical description of both sensor electrostatic optics and particle detection systems. We include effects of MCP efficiency, gain, scattering, capacitive crosstalk, and charge cloud spreading at the detector output. Our parameterization enables the interpolation and extrapolation of instrument response to all relevant particle energies, detector high voltage settings, and polar angles from a small set of calibration data. We apply this model to the 32 sensor heads in the Dual Electron Sensor (DES) and 32 sensor heads in the Dual Ion Sensor (DIS) instruments on the 4 MMS observatories and use least squares fitting of calibration data to extract all key instrument parameters. Parameters that will evolve in flight, namely MCP gain, will be determined daily through application of this model to specifically tailored in-flight calibration activities, providing a robust characterization of sensor suite performance throughout mission lifetime. 
Beyond FPI, our model provides a valuable framework for the simulation and evaluation of future detection system designs and can be used to maximize instrument understanding with minimal calibration resources.
Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT
Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster
2016-01-01
Application of model-based iterative reconstruction (MBIR) to high-resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections be reconstructed, thus precluding acceleration by restricting the reconstruction to a region of interest. To reduce the computational burden of high-resolution MBIR, we propose a multiresolution Penalized Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids, together with selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels, and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either region for downsampling factors of up to 4×. For a typical extremity CBCT volume size, this downsampling makes the reconstruction more than five times faster than a brute-force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS.
The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701
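A back-of-envelope sketch of why the union-of-grids parameterization accelerates MBIR: the number of unknowns drops roughly in proportion to the cube of the coarse-region downsampling factor. The grid sizes below are assumptions for illustration, not the paper's configuration, and the unknown-count ratio is only a rough proxy for runtime, which also depends on projector cost and iteration count.

```python
def voxel_counts(fov_vox, roi_vox, downsample):
    """Unknown counts for (a) a uniformly fine parameterization of the whole
    field of view vs (b) a union of a fine ROI grid and a coarse background
    grid downsampled by `downsample` in each of the 3 dimensions."""
    uniform = fov_vox ** 3
    fine_roi = roi_vox ** 3
    coarse_background = (fov_vox ** 3 - roi_vox ** 3) // downsample ** 3
    return uniform, fine_roi + coarse_background

uniform, multires = voxel_counts(fov_vox=512, roi_vox=256, downsample=4)
reduction = uniform / multires   # >5x fewer unknowns at 4x downsampling
```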
NASA Astrophysics Data System (ADS)
Davidson, Eric A.; Verchot, Louis V.
2000-12-01
Because several soil properties and processes affect emissions of nitric oxide (NO) and nitrous oxide (N2O) from soils, it has been difficult to develop effective and robust algorithms to predict emissions of these gases in biogeochemical models. The conceptual "hole-in-the-pipe" (HIP) model has been used effectively to interpret results of numerous studies, but the ranges of climatic conditions and soil properties are often relatively narrow for each individual study. The Trace Gas Network (TRAGNET) database offers a unique opportunity to test the validity of one manifestation of the HIP model across a broad range of sites, including temperate and tropical climates, grasslands and forests, and native vegetation and agricultural crops. The logarithm of the sum of NO + N2O emissions was positively and significantly correlated with the logarithm of the sum of extractable soil NH4+ + NO3-. The logarithm of the ratio of NO:N2O emissions was negatively and significantly correlated with water-filled pore space (WFPS). These analyses confirm the applicability of the HIP model concept, that indices of soil N availability correlate with the sum of NO+N2O emissions, while soil water content is a strong and robust controller of the ratio of NO:N2O emissions. However, these parameterizations have only broad-brush accuracy because of unaccounted variation among studies in the soil depths where gas production occurs, where soil N and water are measured, and other factors. Although accurate predictions at individual sites may still require site-specific parameterization of these empirical functions, the parameterizations presented here, particularly the one for WFPS, may be appropriate for global biogeochemical modeling. Moreover, this integration of data sets demonstrates the broad ranging applicability of the HIP conceptual approach for understanding soil emissions of NO and N2O.
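The first HIP-model regression described above (log total NO + N2O flux against log available N) amounts to a linear fit in log-log space. The sketch below uses hypothetical site values chosen only to illustrate the fitting procedure; the units, magnitudes, and fitted coefficients are not from the TRAGNET analysis.

```python
import numpy as np

# Hypothetical site data: available N drives the total NO + N2O flux,
# per the hole-in-the-pipe (HIP) model.
n_avail = np.array([5.0, 12.0, 30.0, 80.0])   # extractable NH4+ + NO3- (illustrative units)
flux = np.array([0.4, 1.1, 2.5, 7.0])         # NO + N2O emission (illustrative units)

# Fit log10(flux) = slope * log10(N) + intercept.
slope, intercept = np.polyfit(np.log10(n_avail), np.log10(flux), 1)
# A positive slope reproduces the abstract's finding: more available N,
# higher total emissions. The companion regression (not shown) would fit
# log10(NO:N2O) against water-filled pore space with a negative slope.
```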
NASA Technical Reports Server (NTRS)
Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean
1990-01-01
A Charney-Branscome based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.
Predictive Compensator Optimization for Head Tracking Lag in Virtual Environments
NASA Technical Reports Server (NTRS)
Adelstein, Barnard D.; Jung, Jae Y.; Ellis, Stephen R.
2001-01-01
We examined the perceptual impact of plant noise parameterization for Kalman Filter predictive compensation of time delays intrinsic to head tracked virtual environments (VEs). Subjects were tested in their ability to discriminate between the VE system's minimum latency and conditions in which artificially added latency was then predictively compensated back to the system minimum. Two head tracking predictors were parameterized off-line according to cost functions that minimized prediction errors in (1) rotation, and (2) rotation projected into translational displacement with emphasis on higher frequency human operator noise. These predictors were compared with a parameterization obtained from the VE literature for cost function (1). Results from 12 subjects showed that both parameterization type and amount of compensated latency affected discrimination. Analysis of the head motion used in the parameterizations and the subsequent discriminability results suggest that higher frequency predictor artifacts are contributory cues for discriminating the presence of predictive compensation.
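A minimal sketch of the class of predictor described above: a 1-D constant-velocity Kalman filter whose process ("plant") noise level q is the tuned parameter, extrapolating the head angle one latency interval ahead. The q and r values are illustrative, not the study's cost-function optima:

```python
import numpy as np

def cv_kalman_predict(z, dt, latency, q, r):
    """1-D constant-velocity Kalman filter predicting the measured angle
    `latency` seconds ahead. q is the plant-noise level whose parameterization
    the study examines; values here are illustrative, not fitted."""
    F = np.array([[1.0, dt], [0.0, 1.0]])              # state transition
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                # process noise
    H = np.array([[1.0, 0.0]])                         # measure angle only
    x = np.array([z[0], 0.0])                          # [angle, angular rate]
    P = np.eye(2)
    preds = []
    for zi in z:
        x = F @ x                                      # time update
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                            # measurement update
        K = (P @ H.T) / S
        x = x + K.flatten() * (zi - H @ x)
        P = (np.eye(2) - K @ H) @ P
        preds.append(x[0] + latency * x[1])            # extrapolate ahead
    return np.array(preds)
```

Larger q makes the predictor track high-frequency operator motion (and noise) more aggressively, which is the trade-off the discrimination experiment probes.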
A bio-optical model for integration into ecosystem models for the Ligurian Sea
NASA Astrophysics Data System (ADS)
Bengil, Fethi; McKee, David; Beşiktepe, Sükrü T.; Sanjuan Calzado, Violeta; Trees, Charles
2016-12-01
A bio-optical model has been developed for the Ligurian Sea which encompasses both deep, oceanic Case 1 waters and shallow, coastal Case 2 waters. The model builds on earlier Case 1 models for the region and uses field data collected on the BP09 research cruise to establish new relationships for non-biogenic particles and CDOM. The bio-optical model reproduces in situ IOPs accurately and is used to parameterize radiative transfer simulations which demonstrate its utility for modeling underwater light levels and above surface remote sensing reflectance. Prediction of euphotic depth is found to be accurate to within ∼3.2 m (RMSE). Previously published light field models work well for deep oceanic parts of the Ligurian Sea that fit the Case 1 classification. However, they are found to significantly over-estimate euphotic depth in optically complex coastal waters where the influence of non-biogenic materials is strongest. For these coastal waters, the combination of the bio-optical model proposed here and full radiative transfer simulations provides significantly more accurate predictions of euphotic depth.
Precision analysis of the photomultiplier response to ultra low signals
NASA Astrophysics Data System (ADS)
Degtiarenko, Pavel
2017-11-01
A new computational model for the description of the photon detector response functions measured in conditions of low light is presented, together with examples of the observed photomultiplier signal amplitude distributions, successfully described using the parameterized model equation. In extension to the previously known approximations, the new model describes the underlying discrete statistical behavior of the photoelectron cascade multiplication processes in photon detectors with complex non-uniform gain structure of the first dynode. Important features of the model include the ability to represent the true single-photoelectron spectra from different photomultipliers with a variety of parameterized shapes, reflecting the variability in the design and in the individual parameters of the detectors. The new software tool is available for evaluation of the detectors' performance, response, and efficiency parameters in various applications, including ultra-low-background experiments such as searches for Dark Matter and rare decays, underground neutrino studies, optimization of Cherenkov light detector operations, detector selection procedures, and experiment simulations.
Cosmological applications of Padé approximant
NASA Astrophysics Data System (ADS)
Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan
2014-01-01
As is well known, in mathematics, many functions can be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than a truncated Taylor series, and it may still work where the Taylor series does not converge. In the present work, we apply the Padé approximant to two problems. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In both applications, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
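As a generic illustration of the central claim (not the cosmological computation itself), the sketch below builds a [3/2] Padé approximant of exp(x) from its Taylor coefficients using SciPy and compares it with the truncated Taylor series of the same order:

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about x = 0: 1, 1, 1/2!, 1/3!, ...
coeffs = [1.0 / math.factorial(k) for k in range(6)]

# [3/2] Padé approximant: numerator degree 3, denominator degree 2
p, q = pade(coeffs, 2)

def pade_exp(x):
    return p(x) / q(x)

def taylor_exp(x):
    # Truncated Taylor series built from the same six coefficients
    return sum(c * x**k for k, c in enumerate(coeffs))
```

Both approximants use exactly the same six Taylor coefficients, so any accuracy gain comes purely from the rational form, which is the point the abstract makes.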
Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel
NASA Astrophysics Data System (ADS)
Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads
2015-03-01
Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem. This is because velocity fields must be evaluated at non-integer locations during integration. The regularity in the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application on the Alzheimer's Disease Neuroimaging Initiative showed that Wendland SVF based measures separate (Alzheimer's disease vs. normal controls) better than both B-spline SVFs (p<0.05 in amygdala) and B-spline freeform deformation (p<0.05 in amygdala and cortical gray matter).
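For reference, the compactly supported Wendland C2 kernel in three dimensions has the closed form φ(r) = (1 − r)⁴(4r + 1) for r < 1 and zero otherwise; a minimal sketch (scaling of the support radius omitted):

```python
import numpy as np

def wendland_c2(r):
    """Compactly supported Wendland C2 kernel (3-D):
    phi(r) = (1 - r)^4 (4 r + 1) for r < 1, zero otherwise.
    Positive definite, so it induces a proper reproducing-kernel norm."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0,
                    (1.0 - np.clip(r, 0.0, 1.0))**4 * (4.0 * r + 1.0),
                    0.0)
```

Compact support keeps evaluation as cheap as a B-spline stencil, while positive definiteness supplies the compatible norm that linear and cubic splines lack.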
Exploiting Bounded Signal Flow for Graph Orientation Based on Cause-Effect Pairs
NASA Astrophysics Data System (ADS)
Dorn, Britta; Hüffner, Falk; Krüger, Dominikus; Niedermeier, Rolf; Uhlmann, Johannes
We consider the following problem: Given an undirected network and a set of sender-receiver pairs, direct all edges such that the maximum number of "signal flows" defined by the pairs can be routed respecting edge directions. This problem has applications in communication networks and in understanding protein interaction based cell regulation mechanisms. Since this problem is NP-hard, research so far concentrated on polynomial-time approximation algorithms and tractable special cases. We take the viewpoint of parameterized algorithmics and examine several parameters related to the maximum signal flow over vertices or edges. We provide several fixed-parameter tractability results, and in one case a sharp complexity dichotomy between a linear-time solvable case and a slightly more general NP-hard case. We examine the value of these parameters for several real-world network instances. For many relevant cases, the NP-hard problem can be solved to optimality. In this way, parameterized analysis yields both deeper insight into the computational complexity and practical solving strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chojnicki, Kirsten; Cooper, Marcia A.; Guo, Shuyue
Pore-scale aperture effects on flow in pore networks were studied in the laboratory to provide a parameterization for use in transport models. Four cases were considered: regular and irregular pillar/pore alignment with and without an aperture. The velocity field of each case was measured and simulated, providing quantitatively comparable results. Two aperture effect parameterizations were considered: permeability and transmission. Permeability values varied by an order of magnitude between the cases with and without apertures. However, transmission did not correlate with permeability. Despite having much greater permeability, the regular aperture case permitted less transmission than the regular case. Moreover, both irregular cases had greater transmission than the regular cases, a difference not supported by the permeabilities. Overall, these findings suggest that pore-scale aperture effects on flow through a pore network may not be adequately captured by properties such as permeability for applications concerned with determining particle transport volume and timing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, David L.; Olson, Jerry G.; Hannay, Cécile
An error in the energy formulation in the Community Atmosphere Model (CAM) is identified and corrected. Ten year AMIP simulations are compared using the correct and incorrect energy formulations. Statistics of selected primary variables all indicate physically insignificant differences between the simulations, comparable to differences with simulations initialized with rounding sized perturbations. The two simulations are so similar mainly because of an inconsistency in the application of the incorrect energy formulation in the original CAM. CAM used the erroneous energy form to determine the states passed between the parameterizations, but used a form related to the correct formulation for the state passed from the parameterizations to the dynamical core. If the incorrect form is also used to determine the state passed to the dynamical core the simulations are significantly different. In addition, CAM uses the incorrect form for the global energy fixer, but that seems to be less important. The difference of the magnitude of the fixers using the correct and incorrect energy definitions is very small.
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
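The continuous piecewise-linear idea can be sketched as follows; the knot locations and voltages below are hypothetical illustrations, not the optimally placed knots identified in the paper:

```python
import numpy as np

# Hypothetical open-circuit-potential curve sampled at a few knots
# (locations and voltages are illustrative, not fitted values).
soc_knots = np.array([0.0, 0.1, 0.3, 0.6, 0.9, 1.0])   # state of charge [-]
ocv_knots = np.array([3.0, 3.45, 3.6, 3.75, 4.0, 4.2])  # potential [V]

def ocv(soc):
    """Continuous piecewise-linear OCV(SOC): each segment between knots is a
    straight line, replacing the full nonlinear electrode potential curve."""
    return np.interp(soc, soc_knots, ocv_knots)
```

Replacing the nonlinear OCV function with table lookups of this form is what yields the CPU run-time savings reported for the electrode-average model.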
Energy considerations in the Community Atmosphere Model (CAM)
Williamson, David L.; Olson, Jerry G.; Hannay, Cécile; ...
2015-06-30
An error in the energy formulation in the Community Atmosphere Model (CAM) is identified and corrected. Ten year AMIP simulations are compared using the correct and incorrect energy formulations. Statistics of selected primary variables all indicate physically insignificant differences between the simulations, comparable to differences with simulations initialized with rounding sized perturbations. The two simulations are so similar mainly because of an inconsistency in the application of the incorrect energy formulation in the original CAM. CAM used the erroneous energy form to determine the states passed between the parameterizations, but used a form related to the correct formulation for the state passed from the parameterizations to the dynamical core. If the incorrect form is also used to determine the state passed to the dynamical core the simulations are significantly different. In addition, CAM uses the incorrect form for the global energy fixer, but that seems to be less important. The difference of the magnitude of the fixers using the correct and incorrect energy definitions is very small.
A coarse grain model for protein-surface interactions
NASA Astrophysics Data System (ADS)
Wei, Shuai; Knotts, Thomas A.
2013-09-01
The interaction of proteins with surfaces is important in numerous applications in many fields—such as biotechnology, proteomics, sensors, and medicine—but fundamental understanding of how protein stability and structure are affected by surfaces remains incomplete. Over the last several years, molecular simulation using coarse grain models has yielded significant insights, but the formalisms used to represent the surface interactions have been rudimentary. We present a new model for protein surface interactions that incorporates the chemical specificity of both the surface and the residues comprising the protein in the context of a one-bead-per-residue, coarse grain approach that maintains computational efficiency. The model is parameterized against experimental adsorption energies for multiple model peptides on different types of surfaces. The validity of the model is established by its ability to quantitatively and qualitatively predict the free energy of adsorption and structural changes for multiple biologically-relevant proteins on different surfaces. The validation, done with proteins not used in parameterization, shows that the model produces remarkable agreement between simulation and experiment.
Cross section parameterizations for cosmic ray nuclei. 1: Single nucleon removal
NASA Technical Reports Server (NTRS)
Norbury, John W.; Townsend, Lawrence W.
1992-01-01
Parameterizations of single nucleon removal from electromagnetic and strong interactions of cosmic rays with nuclei are presented. These parameterizations are based upon the most accurate theoretical calculations available to date. They should be very suitable for use in cosmic ray propagation through interstellar space, the Earth's atmosphere, lunar samples, meteorites, spacecraft walls and lunar and martian habitats.
Improvement of the GEOS-5 AGCM upon Updating the Air-Sea Roughness Parameterization
NASA Technical Reports Server (NTRS)
Garfinkel, C. I.; Molod, A.; Oman, L. D.; Song, I.-S.
2011-01-01
The impact of an air-sea roughness parameterization over the ocean that more closely matches recent observations of air-sea exchange is examined in the NASA Goddard Earth Observing System, version 5 (GEOS-5) atmospheric general circulation model. Surface wind biases in the GEOS-5 AGCM are decreased by up to 1.2m/s. The new parameterization also has implications aloft as improvements extend into the stratosphere. Many other GCMs (both for operational weather forecasting and climate) use a similar class of parameterization for their air-sea roughness scheme. We therefore expect that results from GEOS-5 are relevant to other models as well.
Observational and Modeling Studies of Clouds and the Hydrological Cycle
NASA Technical Reports Server (NTRS)
Somerville, Richard C. J.
1997-01-01
Our approach involved validating parameterizations directly against measurements from field programs, and using this validation to tune existing parameterizations and to guide the development of new ones. We have used a single-column model (SCM) to make the link between observations and parameterizations of clouds, including explicit cloud microphysics (e.g., prognostic cloud liquid water used to determine cloud radiative properties). Surface and satellite radiation measurements were used to provide an initial evaluation of the performance of the different parameterizations. The results of this evaluation were then used to develop improved cloud and cloud-radiation schemes, which were tested in GCM experiments.
NASA Astrophysics Data System (ADS)
Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia
2018-06-01
Unresolved small-scale orographic (SSO) drags are parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drags are represented by adding a sink term in the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as the feedbacks to the momentum tendencies on the first model level in planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias has been much alleviated by adopting the SSOP scheme, in addition to reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimation over the complex regions in the southwest of Guangdong.
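The momentum sink term can be sketched as below, assuming a drag coefficient and a terrain-height scaling that are illustrative choices rather than the GRAPES TMM formulation:

```python
import numpy as np

def sso_drag_tendency(u, v, h_max, dx, c_d=1e-4):
    """Sketch of a small-scale-orography momentum sink of the kind added to
    the lowest-level momentum equations: a drag proportional to wind speed
    and to the unresolved maximum terrain height within the grid box.
    c_d and the h_max/dx scaling are illustrative assumptions."""
    speed = np.hypot(u, v)
    coeff = c_d * (h_max / dx) * speed     # stronger drag over rougher boxes
    return -coeff * u, -coeff * v          # (du/dt, dv/dt) tendencies
```

Feeding these tendencies back through the PBL scheme, as the abstract describes, is what damps the low-level wind bias over complex terrain.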
NASA Technical Reports Server (NTRS)
Plumb, R. A.
1985-01-01
Two dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three dimensional general circulation models, while avoiding some of the gross simplifications of one dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of these GCM-derived coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.
NASA Astrophysics Data System (ADS)
Schwartz, M. Christian
2017-08-01
This paper addresses two straightforward questions. First, how similar are the statistics of cirrus particle size distribution (PSD) datasets collected using the Two-Dimensional Stereo (2D-S) probe to cirrus PSD datasets collected using older Particle Measuring Systems (PMS) 2-D Cloud (2DC) and 2-D Precipitation (2DP) probes? Second, how similar are the datasets when shatter-correcting post-processing is applied to the 2DC datasets? To answer these questions, a database of measured and parameterized cirrus PSDs - constructed from measurements taken during the Small Particles in Cirrus (SPARTICUS); Mid-latitude Airborne Cirrus Properties Experiment (MACPEX); and Tropical Composition, Cloud, and Climate Coupling (TC4) flight campaigns - is used. Bulk cloud quantities are computed from the 2D-S database in three ways: first, directly from the 2D-S data; second, by applying the 2D-S data to ice PSD parameterizations developed using sets of cirrus measurements collected using the older PMS probes; and third, by applying the 2D-S data to a similar parameterization developed using the 2D-S data themselves. This is done so that measurements of the same cloud volumes by parameterized versions of the 2DC and 2D-S can be compared with one another. It is thereby seen - given the same cloud field and given the same assumptions concerning ice crystal cross-sectional area, density, and radar cross section - that the parameterized 2D-S and the parameterized 2DC predict similar distributions of inferred shortwave extinction coefficient, ice water content, and 94 GHz radar reflectivity. However, the parameterization of the 2DC based on uncorrected data predicts a statistically significantly higher number of total ice crystals and a larger ratio of small ice crystals to large ice crystals than does the parameterized 2D-S. 
The 2DC parameterization based on shatter-corrected data also predicts statistically different numbers of ice crystals than does the parameterized 2D-S, but the comparison between the two is nevertheless more favorable. It is concluded that the older datasets continue to be useful for scientific purposes, with certain caveats, and that continuing field investigations of cirrus with more modern probes is desirable.
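As an illustration of how bulk cloud quantities are computed from a parameterized PSD, the sketch below integrates moments of a gamma distribution N(D) = n0 D^μ exp(−λD); the mass-dimension coefficients are illustrative assumptions, not the campaign fits:

```python
import numpy as np

def bulk_from_psd(n0, mu, lam, a_mass=0.0257, b_mass=2.0):
    """Compute bulk quantities from a gamma PSD N(D) = n0 D^mu exp(-lam D)
    by numerical moment integration over maximum dimension D [m]. The
    mass-dimension law m(D) = a D^b is an illustrative choice."""
    d = np.linspace(1e-6, 5e-3, 20000)
    dx = d[1] - d[0]
    nd = n0 * d**mu * np.exp(-lam * d)            # PSD [m^-4]
    total_number = np.sum(nd) * dx                # 0th moment [m^-3]
    iwc = np.sum(a_mass * d**b_mass * nd) * dx    # mass-weighted moment [kg m^-3]
    return total_number, iwc
```

Extinction and radar reflectivity follow the same pattern with area- and backscatter-weighted moments, which is why differing small-particle counts between the probes need not change those bulk quantities much.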
NASA Astrophysics Data System (ADS)
Khan, Tanvir R.; Perlinger, Judith A.
2017-10-01
Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. 
Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.
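The normalized mean bias factor used for the accuracy ranking has a standard symmetric definition (commonly attributed to Yu et al., 2006); a minimal sketch:

```python
import numpy as np

def nmbf(modeled, observed):
    """Normalized mean bias factor: a symmetric bias metric. NMBF = +1 means
    overestimation by a factor of 2; NMBF = -1 means underestimation by a
    factor of 2; 0 means no mean bias."""
    m, o = np.mean(modeled), np.mean(observed)
    if m >= o:
        return m / o - 1.0
    return 1.0 - o / m
```

Unlike a simple mean bias, this factor treats over- and underestimation symmetrically, which makes it suitable for ranking parameterizations whose errors fall on either side of the observations.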
Bagstad, Kenneth J.; Semmens, Darius; Winthrop, Rob; Jaworksi, Delilah; Larson, Joel
2012-01-01
This report details the findings of the Bureau of Land Management–U.S. Geological Survey Ecosystem Services Valuation Pilot Study. This project evaluated alternative methods and tools that quantify and value ecosystem services, and it assessed the tools’ readiness for use in the Bureau of Land Management decisionmaking process. We tested these tools on the San Pedro River watershed in northern Sonora, Mexico, and southeast Arizona. The study area includes the San Pedro Riparian National Conservation Area (managed by the Bureau of Land Management), which has been a focal point for conservation activities and scientific research in recent decades. We applied past site-specific primary valuation studies, value transfer, the Wildlife Habitat Benefits Estimation Toolkit, and the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) and Artificial Intelligence for Ecosystem Services (ARIES) models to value locally important ecosystem services for the San Pedro River watershed—water, carbon, biodiversity, and cultural values. We tested these approaches on a series of scenarios to evaluate ecosystem service changes and the ability of the tools to accommodate scenarios. A suite of additional tools were either at too early a stage of development to run, were proprietary, or were place-specific tools inappropriate for application to the San Pedro River watershed. We described the strengths and weaknesses of these additional ecosystem service tools against a series of evaluative criteria related to their usefulness for Bureau of Land Management decisionmaking. Using these tools, we quantified gains or losses of ecosystem services under three categories of scenarios: urban growth, mesquite management, and water augmentation. These results quantify tradeoffs and could be useful for decisionmaking within Bureau of Land Management district or field offices. 
Results are accompanied by a relatively high level of uncertainty associated with model outputs, valuation methods, and discount rates applied. Further guidance on representing uncertainty and applying uncertain results in decisionmaking would benefit both tool developers and those offices in using ecosystem services to compare management tradeoffs. Decisionmakers and Bureau of Land Management managers at the State-, district-, and field-office level would also benefit from continuing model improvements, training, and guidance on tool use that can be provided by the U.S. Geological Survey, the Bureau of Land Management, and the Department of the Interior. Tradeoffs were identified in the level of effort needed to parameterize and run tools and the amount and quality of information they provide to the decision process. We found the Wildlife Habitat Benefits Estimation Toolkit, Ecosystem Services Review, and United Nations Environment Programme–World Conservation Monitoring Centre Ecosystem Services Toolkit to be immediately feasible for application by the Bureau of Land Management, given proper guidance on their use. It is also feasible for the Bureau of Land Management to use the InVEST model, but in early 2012 the process of parameterizing the model required resources and expertise that are unlikely to be available in most Bureau of Land Management district or field offices. Application of past primary valuation is feasible, but developing new primary-valuation studies is too time consuming for regular application. Value transfer approaches (aside from the Wildlife Habitat Benefits Estimation Toolkit) are best applied carefully on the basis of guidelines described in this report, to reduce transfer error. 
The ARIES model can provide useful information in regions modeled in the past (Arizona, California, Colorado, and Washington), but it lacks some features that will improve its usability, such as a generalized model that could be applied anywhere in the United States. Eleven other tools described in this report could become useful as the tools more fully develop, in high-profile cases for which additional resources are available for tool application or in case-study regions where place-specific models have already been developed. To improve the value of these tools in decisionmaking, we suggest scientific needs that agencies such as U.S. Geological Survey can help meet—for instance, development and support of data archives. Such archives could greatly reduce resource needs and improve the reliability and consistency of results. Given the rapid state of evolution in the field, periodic follow-up studies on ecosystem services tools would help to ensure that the Bureau of Land Management and other public land management agencies are kept up to date on new tools and features that bring ecosystem services closer to readiness for use in regular decisionmaking.
Shortwave radiation parameterization scheme for subgrid topography
NASA Astrophysics Data System (ADS)
Helbig, N.; Löwe, H.
2012-02-01
Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered. The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.
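A sketch of the per-flux budget such a scheme parameterizes: direct beam reduced by subgrid shading, diffuse sky radiation reduced by the sky view factor, plus a single isotropic terrain-reflection bounce from the non-sky fraction. The functional forms are illustrative, not the fitted parameterization:

```python
def subgrid_shortwave(s_direct, s_diffuse, f_sky, shaded_fraction, albedo):
    """Illustrative domain-averaged shortwave budget over subgrid terrain:
    - direct beam scaled by the unshaded fraction,
    - diffuse sky radiation scaled by the sky view factor f_sky,
    - one isotropic bounce off surrounding terrain (the (1 - f_sky) part).
    All inputs in W m^-2 except the dimensionless factors."""
    direct = s_direct * (1.0 - shaded_fraction)
    diffuse = s_diffuse * f_sky
    global_flat = s_direct + s_diffuse
    reflected = albedo * global_flat * (1.0 - f_sky)   # single terrain bounce
    return direct + diffuse + reflected
```

With f_sky = 1 and no shading this reduces to the flat-terrain balance, which is the limit a subgrid parameterization must recover on smooth topography.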
Analysis of sensitivity to different parameterization schemes for a subtropical cyclone
NASA Astrophysics Data System (ADS)
Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.
2018-05-01
A sensitivity analysis of the WRF model to diverse physical parameterization schemes is carried out over the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone remains a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed with the WRF model to analyze the sensitivity of the STC's development and intensification to the various parameterization schemes, and the combination of schemes that best simulated this type of phenomenon was thereby determined. In particular, the combinations that included the Tiedtke cumulus scheme had the most positive effect on model results. Moreover, concerning STC track validation, optimal results were attained once the STC was fully formed and all convective processes had stabilized. Furthermore, to identify the parameterization schemes that optimally categorize STC structure, a verification using Cyclone Phase Space was performed; the combinations including the Tiedtke cumulus scheme were again the best at categorizing the cyclone's subtropical structure. For intensity validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic versus a probabilistic approach to simulating intense convective phenomena were evaluated.
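Track validation of the kind described above typically reduces to ranking scheme combinations by the distance between simulated and observed cyclone centers. The sketch below shows one common form of that comparison, mean great-circle track error; the combination labels, track data, and function names are hypothetical, and the paper's actual verification metrics may differ.

```python
import itertools
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * R * math.asin(math.sqrt(a))

def mean_track_error(sim_track, best_track):
    """Mean distance (km) between simulated and observed cyclone centers,
    matched time by time; tracks are lists of (lat, lon) pairs."""
    return sum(haversine_km(*s, *o) for s, o in zip(sim_track, best_track)) / len(sim_track)

def rank_combinations(tracks_by_combo, best_track):
    """Rank parameterization combinations by ascending mean track error."""
    errors = {c: mean_track_error(t, best_track) for c, t in tracks_by_combo.items()}
    return sorted(errors.items(), key=lambda kv: kv[1])
```

A full sensitivity experiment would populate `tracks_by_combo` with one simulated track per cumulus/microphysics/boundary-layer scheme combination, e.g. generated with `itertools.product` over the scheme lists.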
NASA Astrophysics Data System (ADS)
Paukert, M.; Hoose, C.; Simmel, M.
2017-03-01
In model studies of aerosol-dependent immersion freezing in clouds, a common assumption is that each ice nucleating aerosol particle corresponds to exactly one cloud droplet. In contrast, the immersion freezing of larger drops—"rain"—is usually represented by a liquid volume-dependent approach, making the parameterizations of rain freezing independent of specific aerosol types and concentrations. This may lead to inconsistencies when aerosol effects on clouds and precipitation are to be investigated, since raindrops consist of the cloud droplets—and corresponding aerosol particles—that have been involved in drop-drop collisions. Here we introduce an extension to a two-moment microphysical scheme in order to account explicitly for particle accumulation in raindrops by tracking the rates of self-collection, autoconversion, and accretion. This provides a direct link between ice nuclei and the primary formation of large precipitating ice particles. A new parameterization scheme of drop freezing is presented to consider multiple ice nuclei within one drop and effective drop cooling rates. In our test cases of deep convective clouds, we find that at the altitudes most relevant for immersion freezing, the majority of potential ice nuclei have been converted from cloud droplets into raindrops. Compared to the standard treatment of freezing in our model, the less efficient mineral dust-based freezing results in higher rainwater contents in the convective core, affecting both rain and hail precipitation. The aerosol-dependent treatment of rain freezing can reverse the signs of simulated precipitation sensitivities to ice nuclei perturbations.
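The bookkeeping idea behind tracking particle accumulation in raindrops can be sketched very simply: when droplets are moved from the cloud to the rain category, the immersed ice nuclei they carry must move with them in proportion. The code below is a minimal one-step sketch under that assumption, not the scheme introduced in the paper; all names and rate inputs are hypothetical, and the actual scheme also tracks self-collection, which changes droplet number within the cloud category.

```python
def advance_immersed_in(n_c, n_in_cloud, n_in_rain, dNc_auto, dNc_accr, dt):
    """One-step bookkeeping transferring immersed ice nuclei (IN) from the
    cloud-droplet to the rain category, in proportion to the number of
    droplets removed by autoconversion and accretion.

    n_c         cloud droplet number concentration (m-3)
    n_in_cloud  IN number immersed in cloud droplets (m-3)
    n_in_rain   IN number immersed in raindrops (m-3)
    dNc_auto    droplet number sink rate from autoconversion (m-3 s-1)
    dNc_accr    droplet number sink rate from accretion (m-3 s-1)
    dt          time step (s)
    """
    if n_c <= 0.0:
        return n_in_cloud, n_in_rain
    # droplets converted to rain this step, capped at what is available
    dN = min(n_c, (dNc_auto + dNc_accr) * dt)
    # mean IN load per droplet times the number of droplets transferred
    dIN = n_in_cloud * dN / n_c
    return n_in_cloud - dIN, n_in_rain + dIN
```

The transfer conserves total IN number by construction, which is the property that lets the scheme link ice nuclei directly to raindrop freezing later on.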
NASA Astrophysics Data System (ADS)
Niu, Hailin; Zhang, Xiaotong; Liu, Qiang; Feng, Youbin; Li, Xiuhong; Zhang, Jialin; Cai, Erli
2015-12-01
The ocean surface albedo (OSA) is a key factor in estimating ocean net surface shortwave radiation (ONSSR). Several OSA schemes have been proposed, but there is no consensus on which scheme is best for estimating ONSSR. On the basis of currently existing OSA parameterizations, including those of Briegleb et al. (B), Taylor et al. (T), Hansen et al. (H), Jin et al. (J), Preisendorfer and Mobley (PM86), and Feng (F), this study examines how the choice of OSA scheme affects ONSSR estimation under actual downward shortwave radiation (DSR) conditions. We then discuss the necessity and applicability of integrating the more complicated OSA schemes into climate models. It is concluded that the solar zenith angle (SZA) and the wind speed are the two most significant factors affecting broadband OSA; consequently, the different OSA parameterizations diverge most strongly in regions of high latitudes and strong winds. The choice of OSA scheme can lead to differences in ONSSR on the order of 20 W m-2. Taylor's scheme gives the best estimate, followed closely by Feng's. However, the accuracy of the estimated instantaneous OSA changes with local time: Jin's scheme generally performs best at noon and in the afternoon, while PM86 is the best in the morning, indicating that the more complicated OSA schemes reflect the temporal variation of OSA better than the simple ones.
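The role of the SZA dependence is easy to see in the simplest class of schemes compared above. The sketch below uses the widely cited Taylor et al. broadband fit, which depends only on the cosine of the solar zenith angle (it ignores wind speed, one reason the schemes diverge in strong-wind regions); the function names and the illustrative net-flux helper are our own, and readers should check the original papers for the exact coefficients used in each scheme.

```python
def taylor_ocean_albedo(cos_sza):
    """Broadband ocean surface albedo as a function of cos(SZA),
    following the Taylor et al. fit: alpha = 0.037 / (1.1 mu^1.4 + 0.15).
    Wind-speed independent by construction."""
    return 0.037 / (1.1 * cos_sza ** 1.4 + 0.15)

def net_surface_shortwave(dsr, cos_sza):
    """Ocean net surface shortwave radiation (W m-2) from downward
    shortwave radiation, using the albedo scheme above."""
    return dsr * (1.0 - taylor_ocean_albedo(cos_sza))
```

With the sun overhead the fit gives an albedo near 0.03, rising steeply toward low sun elevations, which is why scheme differences matter most at high latitudes and in the morning and evening.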