Sample records for implicit model parameterized

  1. Normalized Implicit Radial Models for Scattered Point Cloud Data without Normal Vectors

    DTIC Science & Technology

    2009-03-23

    points by shrinking a discrete membrane, Computer Graphics Forum, Vol. 24-4, 2005, pp. 791-808. [8] Floater, M. S., Reimers, M.: Meshless...Parameterization and Surface Reconstruction, Computer Aided Geometric Design 18, 2001, pp. 77-92. [9] Floater, M. S.: Parameterization of Triangulations and...Unorganized Points, In: Tutorials on Multiresolution in Geometric Modelling, A. Iske, E. Quak and M. S. Floater (eds.), Springer, 2002, pp. 287-316. [10

  2. Newton Algorithms for Analytic Rotation: An Implicit Function Approach

    ERIC Educational Resources Information Center

    Boik, Robert J.

    2008-01-01

    In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

  3. Cloud-radiation interactions and their parameterization in climate models

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models, held on 18-20 October 1993 in Camp Springs, Maryland, USA. It was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth's surface, and to refine the process models which are used to develop advanced cloud parameterizations.

  4. How certain are the process parameterizations in our models?

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard

    2016-04-01

    Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), process parameterizations, and parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise the parameter values are the only elements of the model that can vary, while the remaining elements are fixed a priori and therefore not subject to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process; the only flexibility comes from the changing parameter values, which enable these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is the extent to which the process parameterization and the system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a given model, given its system architecture and data, when little or no assumption is made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, possibly contradictory, parameterization forms than would have been chosen otherwise. This comparison provides, implicitly and explicitly, an assessment of how uncertain our perception of model process parameterization is relative to the extent to which the data can support it.

  5. Coarse-grained models using local-density potentials optimized with the relative entropy: Application to implicit solvation

    NASA Astrophysics Data System (ADS)

    Sanyal, Tanmoy; Shell, M. Scott

    2016-07-01

    Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
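    The local-density idea in this abstract can be sketched in a few lines: each CG site receives an extra, inherently multibody energy f(rho_i), where rho_i is a smoothed count of neighbors within a cutoff. This is only a minimal sketch; the quartic switching function and the quadratic embedding function below are illustrative assumptions, not the authors' calibrated forms.

```python
import numpy as np

def indicator(r, rc):
    """Smooth weight going from 1 at r=0 to 0 at r=rc (illustrative choice)."""
    x = np.clip(r / rc, 0.0, 1.0)
    return (1.0 - x**2) ** 2

def local_densities(positions, rc):
    """rho_i = sum over j != i of indicator(|r_ij|, rc)."""
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.linalg.norm(positions[i] - positions[j])
                rho[i] += indicator(r, rc)
    return rho

def ld_energy(positions, rc, f):
    """Multibody local-density energy sum_i f(rho_i), added on top of pair terms."""
    return np.sum(f(local_densities(positions, rc)))

# Toy usage: 4 sites on a line, quadratic embedding function f (an assumption).
pos = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [3.0, 0, 0]])
E = ld_energy(pos, rc=2.5, f=lambda rho: 0.5 * rho**2)
```

    Because f acts on a density rather than on pairs, the energy of a site depends on its whole local environment, which is what allows such terms to absorb state-dependence that pair potentials miss.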

  6. Using Laboratory Experiments to Improve Ice-Ocean Parameterizations

    NASA Astrophysics Data System (ADS)

    McConnochie, C. D.; Kerr, R. C.

    2017-12-01

    Numerical models of ice-ocean interactions are typically unable to resolve the transport of heat and salt to the ice face. Instead, models rely upon parameterizations that have not been sufficiently validated by observations. Recent laboratory experiments of ice-saltwater interactions allow us to test the standard parameterization of heat and salt transport to ice faces - the three-equation model. The three-equation model predicts that the melt rate is proportional to the fluid velocity while the experimental results typically show that the melt rate is independent of the fluid velocity. By considering an analysis of the boundary layer that forms next to a melting ice face, we suggest a resolution to this disagreement. We show that the three-equation model makes the implicit assumption that the thickness of the diffusive sublayer next to the ice is set by a shear instability. However, at low flow velocities, the sublayer is instead set by a convective instability. This distinction leads to a threshold velocity of approximately 4 cm/s at geophysically relevant conditions, above which the form of the parameterization should be valid. In contrast, at flow speeds below 4 cm/s, the three-equation model will underestimate the melt rate. By incorporating such a minimum velocity into the three-equation model, predictions made by numerical simulations could be easily improved.
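    The proposed fix, a velocity floor in the melt-rate law, can be sketched as follows. Only the ~4 cm/s threshold comes from the abstract; the transfer coefficient gamma and the simple linear form of the three-equation-style melt law are placeholder assumptions, not calibrated values.

```python
def melt_rate(u, dT, gamma=6e-4, u_min=0.04):
    """Melt rate (m/s) ~ gamma * u_eff * dT, with a minimum-velocity floor.

    u     : flow speed past the ice face (m/s)
    dT    : thermal driving, ocean temperature minus freezing point (K)
    u_min : below this speed a convective instability sets the diffusive
            sublayer, so the effective transfer velocity stops decreasing
    """
    u_eff = max(u, u_min)
    return gamma * u_eff * dT
```

    With the floor in place, melt becomes independent of flow speed below u_min (matching the laboratory observations) while recovering the standard velocity-proportional behavior above it.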

  7. Coarse-grained models using local-density potentials optimized with the relative entropy: Application to implicit solvation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Tanmoy; Shell, M. Scott, E-mail: shell@engineering.ucsb.edu

    Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.

  8. Mixing parametrizations for ocean climate modelling

    NASA Astrophysics Data System (ADS)

    Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir

    2016-04-01

    An algorithm is presented for splitting the total evolutionary equations for turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, an analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations to model decadal climate variability of the Arctic and the Atlantic with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar in its physical formulation to contemporary differential turbulence models, while its algorithm is computationally efficient. Parameterizations using the split turbulence model yield a more adequate structure of temperature and salinity at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step leads to a better representation of ocean climate than the faster parameterization based on the asymptotic behavior of the analytical solution, while computational efficiency remains almost unchanged relative to the simple PP parameterization. Using the PP parameterization in the circulation model leads to realistic simulation of density and circulation but violates T,S-relationships. This error is largely avoided with the proposed parameterizations containing the split turbulence model. The eddy-permitting circulation model is highly sensitive to the definition of mixing, which is associated with significant changes of the density field in the upper baroclinic ocean layer over the whole considered area. For instance, using the turbulence parameterization instead of the PP algorithm increases circulation velocity in the Gulf Stream and North Atlantic Current, and the subpolar cyclonic gyre in the North Atlantic and the Beaufort Gyre in the Arctic basin are reproduced more realistically. Treating the Prandtl number as a function of the Richardson number significantly improves modelling quality. The research was supported by the Russian Foundation for Basic Research (grant № 16-05-00534) and the Council on the Russian Federation President Grants (grant № MK-3241.2015.5).
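    The splitting idea in this abstract, an analytically solvable generation-dissipation stage separate from transport-diffusion, can be illustrated with a linear relaxation ODE. The actual INMOM turbulence equations are nonlinear and coupled, so the form dk/dt = P - c*k below is only a schematic stand-in.

```python
import math

def gen_diss_stage(k0, P, c, dt):
    """Analytic solution of dk/dt = P - c*k over one step dt.

    k0 : TKE at the start of the stage
    P  : production (generation) rate, assumed constant over the step
    c  : dissipation rate coefficient
    """
    k_eq = P / c  # equilibrium level where production balances dissipation
    return k_eq + (k0 - k_eq) * math.exp(-c * dt)

def gen_diss_asymptotic(k0, P, c, dt):
    """Asymptotic behavior of the analytic solution for large c*dt:
    relax straight to equilibrium (the fastest, least accurate scheme)."""
    return P / c
```

    For large c*dt the analytic solution collapses onto the asymptotic one, which is why the asymptotic scheme is cheaper but less faithful at moderate time steps.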

  9. The Hubbard Dimer: A Complete DFT Solution to a Many-Body Problem

    NASA Astrophysics Data System (ADS)

    Smith, Justin; Carrascal, Diego; Ferrer, Jaime; Burke, Kieron

    2015-03-01

    In this work we explain the relationship between density functional theory and strongly correlated models using the simplest possible example, the two-site asymmetric Hubbard model. We discuss the connection between the lattice and real space and how this is a simple model for stretched H2. We can solve this elementary example analytically, and with that we can illuminate the underlying logic and aims of DFT. While the many-body solution is analytic, the density functional is given only implicitly. We overcome this difficulty by creating a highly accurate parameterization of the exact functional. We use this parameterization to perform benchmark calculations of the correlation kinetic energy, the adiabatic connection, etc. We also test Hartree-Fock and the Bethe ansatz local density approximation, and we discuss and illustrate the derivative discontinuity in the exchange-correlation energy and the infamous gap problem in DFT. DGE-1321846, DE-FG02-08ER46496.
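    The two-site model is small enough to diagonalize directly, which makes the "analytic many-body solution" concrete. A minimal sketch in the half-filled singlet sector (hopping t, on-site repulsion U, site-potential difference dv); the matrix follows a common textbook convention, not necessarily the authors' exact notation:

```python
import numpy as np

def hubbard_dimer_ground_energy(t, U, dv):
    """Ground-state energy of the asymmetric two-site Hubbard model.

    Singlet-sector basis: |up-down, 0>, |0, up-down>, and the covalent
    singlet; the doubly occupied states couple to the covalent state with
    amplitude -sqrt(2)*t.
    """
    H = np.array([[U - dv,          0.0,             -np.sqrt(2) * t],
                  [0.0,             U + dv,          -np.sqrt(2) * t],
                  [-np.sqrt(2) * t, -np.sqrt(2) * t,  0.0]])
    return np.linalg.eigvalsh(H)[0]  # eigvalsh returns ascending eigenvalues

# Symmetric dimer check: closed form is (U - sqrt(U^2 + 16 t^2)) / 2.
E = hubbard_dimer_ground_energy(t=1.0, U=4.0, dv=0.0)
```

    At U = 0 the ground energy reduces to -2t (two electrons in the bonding orbital), a quick sanity check on the matrix conventions.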

  10. A Membrane Model from Implicit Elasticity Theory

    PubMed Central

    Freed, A. D.; Liao, J.; Einstein, D. R.

    2014-01-01

    A Fungean solid is derived for membranous materials as a body defined by isotropic response functions whose mathematical structure is that of a Hookean solid where the elastic constants are replaced by functions of state derived from an implicit, thermodynamic, internal-energy function. The theory utilizes Biot’s (1939) definitions for stress and strain that, in 1-dimension, are the stress/strain measures adopted by Fung (1967) when he postulated what is now known as Fung’s law. Our Fungean membrane model is parameterized against a biaxial data set acquired from a porcine pleural membrane subjected to three, sequential, proportional, planar extensions. These data support an isotropic/deviatoric split in the stress and strain-rate hypothesized by our theory. These data also demonstrate that the material response is highly non-linear but, otherwise, mechanically isotropic. These data are described reasonably well by our otherwise simple, four-parameter, material model. PMID:24282079

  11. On the usage of classical nucleation theory in quantification of the impact of bacterial INP on weather and climate

    NASA Astrophysics Data System (ADS)

    Sahyoun, Maher; Wex, Heike; Gosewinkel, Ulrich; Šantl-Temkiv, Tina; Nielsen, Niels W.; Finster, Kai; Sørensen, Jens H.; Stratmann, Frank; Korsholm, Ulrik S.

    2016-08-01

    Bacterial ice-nucleating particles (INP) are present in the atmosphere and are efficient in heterogeneous ice nucleation at temperatures up to -2 °C in mixed-phase clouds. However, due to their low emission rates, their climatic impact was considered insignificant in previous modeling studies. In view of uncertainties about the actual atmospheric emission rates and concentrations of bacterial INP, it is important to re-investigate the threshold fraction of cloud droplets containing bacterial INP needed for a pronounced effect on ice nucleation, using a parameterization that properly describes the ice-nucleation process by bacterial INP. We therefore compared two heterogeneous ice-nucleation rate parameterizations, denoted CH08 and HOO10 herein, both of which are based on classical nucleation theory and measurements and use similar equations with different parameters, to an empirical parameterization, denoted HAR13 herein, which implicitly considers the number of bacterial INP. All parameterizations were used to calculate the ice-nucleation probability offline. HAR13 and HOO10 were implemented and tested in a one-dimensional version of a weather-forecast model in two meteorological cases. Ice-nucleation probabilities based on HAR13 and CH08 were similar, in spite of their different derivations, and were higher than those based on HOO10. This study shows the importance of the choice of parameterization and of the input variable, the number of bacterial INP, for accurately assessing their role in meteorological and climatic processes.
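    The step from a nucleation-rate parameterization to an ice-nucleation probability can be sketched generically. The Arrhenius-type rate J = A*exp(-dG/(kB*T)) is the standard classical-nucleation-theory form; the prefactor A and barrier dG below are placeholder values, not the CH08/HOO10/HAR13 parameters.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def nucleation_rate(A, dG, T):
    """Generic CNT-style heterogeneous nucleation rate (events/s per INP)."""
    return A * math.exp(-dG / (KB * T))

def freezing_probability(J, dt):
    """Probability that a droplet containing one INP freezes within dt
    seconds, treating nucleation as a Poisson process with rate J."""
    return 1.0 - math.exp(-J * dt)
```

    Comparing parameterizations then amounts to comparing the probabilities they yield for the same droplet population and time window, which is the offline calculation the abstract describes.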

  12. Distributed parameterization of complex terrain

    NASA Astrophysics Data System (ADS)

    Band, Lawrence E.

    1991-03-01

    This paper addresses the incorporation of high-resolution topography, soils and vegetation information into the simulation of land surface processes in atmospheric circulation models (ACM). Recent work has concentrated on detailed representation of one-dimensional exchange processes, implicitly assuming surface homogeneity over the atmospheric grid cell. Two approaches that could be taken to incorporate heterogeneity are the integration of a surface model over distributed, discrete portions of the landscape, or over a distribution function of the model parameters. However, the computational burden and parameter-intensive nature of current land surface models in ACM limits the number of independent model runs and parameterizations that are feasible for operational purposes. Therefore, simplifications in the representation of the vertical exchange processes may be necessary to incorporate the effects of landscape variability and horizontal divergence of energy and water. The strategy is then to trade off the detail and rigor of point exchange calculations for the ability to repeat those calculations over extensive, complex terrain. It is clear that the parameterization process for this approach must be automated, such that large spatial databases collected from remotely sensed images, digital terrain models and digital maps can be efficiently summarized and transformed into the appropriate parameter sets. Ideally, the landscape should be partitioned into surface units that maximize between-unit variance while minimizing within-unit variance, although it is recognized that some level of surface heterogeneity will be retained at all scales. Therefore, the geographic data processing necessary to automate the distributed parameterization should be able to estimate or predict parameter distributional information within each surface unit.

  13. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
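    The structural point about error formulations can be illustrated with a toy forward simulation: a consumer's tracer value is a proportion-weighted mix of source means, and the "process" error shrinks as the consumer samples more prey items, which is exactly the quantity (consumption rate) the unified parameterization estimates. All parameter values below are invented for the demo, and the within-source-only process variance is a simplifying assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_samples(p, mu, sigma, n_prey, n_draws=10000):
    """Simulated consumer tracer values for a given diet.

    p      : diet proportions of each source (sum to 1)
    mu     : source tracer means; sigma : source tracer SDs
    n_prey : prey items averaged per consumer; process error ~ 1/n_prey
    """
    p, mu, sigma = map(np.asarray, (p, mu, sigma))
    mean = np.sum(p * mu)
    # Simplified process variance: within-source spread of a mean of n_prey
    # draws (between-source variance omitted for brevity in this sketch).
    process_var = np.sum(p * sigma**2) / n_prey
    return rng.normal(mean, np.sqrt(process_var), size=n_draws)

vals = mixture_samples(p=[0.3, 0.7], mu=[-20.0, -12.0], sigma=[1.0, 1.5], n_prey=10)
```

    Fitting the inverse problem (estimating p and n_prey from `vals`) is what MixSIR/SIAR-style models do; the scatter of the consumer values carries the information about consumption rate.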

  14. A Membrane Model from Implicit Elasticity Theory. Application to Visceral Pleura

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freed, Alan D.; Liao, Jun; Einstein, Daniel R.

    2013-11-27

    A Fungean solid is derived for membranous materials as a body defined by isotropic response functions whose mathematical structure is that of a Hookean solid where the elastic constants are replaced by functions of state derived from an implicit, thermodynamic, internal energy function. The theory utilizes Biot’s (Lond Edinb Dublin Philos Mag J Sci 27:468–489, 1939) definitions for stress and strain that, in one-dimension, are the stress/strain measures adopted by Fung (Am J Physiol 28:1532–1544, 1967) when he postulated what is now known as Fung’s law. Our Fungean membrane model is parameterized against a biaxial data set acquired from a porcine pleural membrane subjected to three, sequential, proportional, planar extensions. These data support an isotropic/deviatoric split in the stress and strain-rate hypothesized by our theory. These data also demonstrate that the material response is highly nonlinear but, otherwise, mechanically isotropic. These data are described reasonably well by our otherwise simple, four-parameter, material model.

  15. Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for disturbance distribution selection and parameterization

    NASA Astrophysics Data System (ADS)

    Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark

    2018-04-01

    Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; such models express our understanding of geometries at depth. For that reason, 3-D geological models have a wide range of practical applications including, but not restricted to, civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator parameterization) and the inherent lack of knowledge in areas where there are no observations, combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making, it is critical that they provide accurate estimates of uncertainty. This paper focuses on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models, which are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed, pertaining to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole-vector sampling is proposed as a more rigorous alternative to dip-vector sampling for planar features, and a Bayesian approach to disturbance-distribution parameterization is suggested. The influence of incorrect disturbance distributions is discussed, and propositions are made and evaluated on synthetic and realistic cases to address the cited issues. The distribution of the errors of the observed data (i.e., scedasticity) is shown to affect the quality of prior distributions for MCUE. Results demonstrate that the proposed workflows improve the reliability of uncertainty estimation and diminish the occurrence of artifacts.
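    Pole-vector sampling of the kind the paper advocates can be sketched directly: convert a plane's dip/dip-direction to its unit pole, then draw disturbed poles from a von Mises-Fisher distribution on the sphere. The conversion convention (x east, y north, z up) and the concentration value are assumptions of this sketch, and the vMF sampler uses the standard 3-D inverse-CDF construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def dip_to_pole(dip, dip_dir):
    """Unit pole (normal) vector of a plane from dip and dip direction (deg)."""
    d, a = np.radians(dip), np.radians(dip_dir)
    return np.array([np.sin(d) * np.sin(a), np.sin(d) * np.cos(a), np.cos(d)])

def sample_vmf(mu, kappa, n):
    """Draw n unit vectors from a von Mises-Fisher distribution around mu.

    Uses the 3-D inverse-CDF for the cosine of the angle to mu, then a
    uniformly random tangential direction.
    """
    u = rng.random(n)
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    v = rng.normal(size=(n, 3))          # random directions...
    v -= np.outer(v @ mu, mu)            # ...projected orthogonal to mu
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return w[:, None] * mu + np.sqrt(1.0 - w**2)[:, None] * v

pole = dip_to_pole(30.0, 90.0)
draws = sample_vmf(pole, kappa=100.0, n=500)
```

    Each disturbed pole converts back to a plausible plane orientation; running the modeling engine once per draw is the MCUE loop in miniature. Perturbing the pole keeps every draw a valid orientation, which is the advantage over perturbing dip and dip-direction independently.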

  16. Computation at a coordinate singularity

    NASA Astrophysics Data System (ADS)

    Prusa, Joseph M.

    2018-05-01

    Coordinate singularities are sometimes encountered in computational problems. An important example involves global atmospheric models used for climate and weather prediction. Classical spherical coordinates can be used to parameterize the manifold - that is, generate a grid for the computational spherical shell domain. This particular parameterization offers significant benefits such as orthogonality and exact representation of curvature and connection (Christoffel) coefficients. But it also exhibits two polar singularities, and at or near these points typical continuity/integral constraints on dependent fields and their derivatives are generally inadequate and lead to poor model performance and erroneous results. Other parameterizations have been developed that eliminate polar singularities, but problems of weaker singularities and enhanced grid noise compared to spherical coordinates (away from the poles) persist. In this study reparameterization invariance of geometric objects (scalars, vectors and the forms generated by their covariant derivatives) is utilized to generate asymptotic forms for dependent fields of interest valid in the neighborhood of a pole. The central concept is that such objects cannot be altered by the metric structure of a parameterization. The new boundary conditions enforce symmetries that are required for transformations of geometric objects. They are implemented in an implicit polar filter of a structured-grid, nonhydrostatic global atmospheric model that is simulating idealized Held-Suarez flows. A series of test simulations using different configurations of the asymptotic boundary conditions are made, along with control simulations that use the default model numerics with no absorber, at three different grid sizes. Typically the test simulations are ∼ 20% faster in wall clock time than the controls, resulting from a decrease in noise at the poles in all cases.
In the control simulations adverse numerical effects from the polar singularity are observed to increase with grid resolution. In contrast, test simulations demonstrate robust polar behavior independent of grid resolution.

  17. Uncertainty in sample estimates and the implicit loss function for soil information.

    NASA Astrophysics Data System (ADS)

    Lark, Murray

    2015-04-01

    One significant challenge in the communication of uncertain information is how to enable the sponsors of sampling exercises to make a rational choice of sample size. One way to do this is to compute the value of additional information given the loss function for errors. The loss function expresses the costs that result from decisions made using erroneous information. In certain circumstances, such as remediation of contaminated land prior to development, loss functions can be computed and used to guide rational decision making on the amount of resource to spend on sampling to collect soil information. In many circumstances the loss function cannot be obtained prior to decision making. This may be the case when multiple decisions may be based on the soil information and the costs of errors are hard to predict. The implicit loss function is proposed as a tool to aid decision making in these circumstances. Conditional on a logistical model which expresses costs of soil sampling as a function of effort, and statistical information from which the error of estimates can be modelled as a function of effort, the implicit loss function is the loss function which makes a particular decision on effort rational. In this presentation the loss function is defined and computed for a number of arbitrary decisions on sampling effort for a hypothetical soil monitoring problem. This is based on a logistical model of sampling cost parameterized from a recent geochemical survey of soil in Donegal, Ireland and on statistical parameters estimated with the aid of a process model for change in soil organic carbon. It is shown how the implicit loss function might provide a basis for reflection on a particular choice of sample size by comparing it with the values attributed to soil properties and functions. Scope for further research to develop and apply the implicit loss function to help decision making by policy makers and regulators is then discussed.
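    The implicit loss function described here can be made concrete with a small numerical sketch: given a cost model c(n) for sampling effort n and an error-variance model that decays as 1/n, find the loss weight L that makes a chosen sample size the total-cost-minimizing one. The linear cost and 1/n variance forms, and all coefficient values, are illustrative assumptions, not the Donegal survey parameters.

```python
def total_cost(n, L, c0=500.0, c1=20.0, sigma2=4.0):
    """Survey cost c0 + c1*n plus expected loss L * sigma2 / n."""
    return c0 + c1 * n + L * sigma2 / n

def implicit_loss(n_star, c1=20.0, sigma2=4.0):
    """Loss weight L that makes n_star rational: setting the derivative of
    c1*n + L*sigma2/n to zero at n = n_star gives L = c1 * n_star^2 / sigma2."""
    return c1 * n_star**2 / sigma2
```

    Reading the result the other way is the point of the method: if a sponsor settles on, say, 50 samples, `implicit_loss(50)` reveals the value they are implicitly placing on estimation error, which can then be compared with the stated value of the soil information.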

  18. Spectral wave dissipation by submerged aquatic vegetation in a back-barrier estuary

    USGS Publications Warehouse

    Nowacki, Daniel J.; Beudin, Alexis; Ganju, Neil K.

    2017-01-01

    Submerged aquatic vegetation is generally thought to attenuate waves, but this interaction remains poorly characterized in shallow-water field settings with locally generated wind waves. Better quantification of wave–vegetation interaction can provide insight to morphodynamic changes in a variety of environments and also is relevant to the planning of nature-based coastal protection measures. Toward that end, an instrumented transect was deployed across a Zostera marina (common eelgrass) meadow in Chincoteague Bay, Maryland/Virginia, U.S.A., to characterize wind-wave transformation within the vegetated region. Field observations revealed wave-height reduction, wave-period transformation, and wave-energy dissipation with distance into the meadow, and the data informed and calibrated a spectral wave model of the study area. The field observations and model results agreed well when local wind forcing and vegetation-induced drag were included in the model, either explicitly as rigid vegetation elements or implicitly as large bed-roughness values. Mean modeled parameters were similar for both the explicit and implicit approaches, but the spectral performance of the explicit approach was poor compared to the implicit approach. The explicit approach over-predicted low-frequency energy within the meadow because the vegetation scheme determines dissipation using mean wavenumber and frequency, in contrast to the bed-friction formulations, which dissipate energy in a variable fashion across frequency bands. Regardless of the vegetation scheme used, vegetation was the most important component of wave dissipation within much of the study area. These results help to quantify the influence of submerged aquatic vegetation on wave dynamics in future model parameterizations, field efforts, and coastal-protection measures.

  19. New, Improved Bulk-microphysical Schemes for Studying Precipitation Processes in WRF. Part 1; Comparisons with Other Schemes

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Shi, J.; Chen, S. S.; Lang, S.; Hong, S.-Y.; Thompson, G.; Peters-Lidard, C.; Hou, A.; Braun, S.

    2007-01-01

    Advances in computing power allow atmospheric prediction models to be run at progressively finer scales of resolution, using increasingly sophisticated physical parameterizations and numerical methods. The representation of cloud microphysical processes is a key component of these models. Over the past decade, both research and operational numerical weather prediction models have started using more complex microphysical schemes that were originally developed for high-resolution cloud-resolving models (CRMs). A recent report to the United States Weather Research Program (USWRP) Science Steering Committee specifically calls for the replacement of implicit cumulus parameterization schemes with explicit bulk schemes in numerical weather prediction (NWP) as part of a community effort to improve quantitative precipitation forecasts (QPF). An improved Goddard bulk microphysical parameterization is implemented in a state-of-the-art next-generation Weather Research and Forecasting (WRF) model. High-resolution model simulations are conducted to examine the impact of microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The 3ICE scheme with a cloud ice-snow-hail configuration led to better agreement with observations in terms of the simulated narrow convective line and rainfall intensity. This is because the 3ICE-hail scheme includes a dense precipitating ice (hail) particle with a very fast fall speed (over 10 m/s). For the Atlantic hurricane case, varying the microphysical schemes had no significant impact on the track forecast but did affect the intensity (important for air-sea interaction).

  20. Corrosion Thermodynamics of Magnesium and Alloys from First Principles as a Function of Solvation

    NASA Astrophysics Data System (ADS)

    Limmer, Krista; Williams, Kristen; Andzelm, Jan

    The thermodynamics of corrosion processes occurring on magnesium surfaces, such as hydrogen evolution and water dissociation, has been examined with density functional theory (DFT) to evaluate the effect of impurities and dilute alloying additions. The modeling of corrosion thermodynamics requires examination of species in a variety of chemical and electronic states in order to accurately represent the complex electrochemical corrosion process. In this study, DFT calculations for magnesium corrosion thermodynamics were performed with two DFT codes (VASP and DMol3), with multiple exchange-correlation functionals for chemical accuracy, as well as with various levels of implicit and explicit solvation for surfaces and solvated ions. The accuracy of the first-principles calculations has been validated against Pourbaix diagrams constructed from solid, gas and solvated charged ion calculations. For aqueous corrosion, it is shown that a well-parameterized implicit solvent is capable of accurately representing all but the first coordinating layer of explicit water for charged ions.

  1. Effects of surface wave breaking on the oceanic boundary layer

    NASA Astrophysics Data System (ADS)

    He, Hailun; Chen, Dake

    2011-04-01

    Existing laboratory studies suggest that surface wave breaking may exert a significant impact on the formation and evolution of the oceanic surface boundary layer, which plays an important role in the ocean-atmosphere coupled system. However, present climate models either neglect the effects of wave breaking or treat them implicitly through some crude parameterization. Here we use a one-dimensional ocean model (General Ocean Turbulence Model, GOTM) to investigate the effects of wave breaking on the oceanic boundary layer on diurnal to seasonal time scales. First, a set of idealized experiments is carried out to demonstrate the basic physics and the necessity of including wave breaking. Then the model is applied to simulating observations at the northern North Sea and the Ocean Weather Station Papa, which shows that properly accounting for wave breaking effects can improve model performance and help it successfully capture the observed upper ocean variability.

  2. A coarse-grained DNA model for the prediction of current signals in DNA translocation experiments

    NASA Astrophysics Data System (ADS)

    Weik, Florian; Kesselheim, Stefan; Holm, Christian

    2016-11-01

    We present an implicit solvent coarse-grained double-stranded DNA (dsDNA) model confined to an infinite cylindrical pore that reproduces the experimentally observed current modulations of a KCl solution at various concentrations. Our model extends previous coarse-grained and mean-field approaches by incorporating a position-dependent friction term on the ions, which Kesselheim et al. [Phys. Rev. Lett. 112, 018101 (2014)] identified as an essential ingredient to correctly reproduce the experimental data of Smeets et al. [Nano Lett. 6, 89 (2006)]. Our approach reduces the computational effort by orders of magnitude compared with all-atom simulations and serves as a promising starting point for modeling the entire translocation process of dsDNA. We achieve a consistent description of the system's electrokinetics by using explicitly parameterized ions, a friction term between the DNA beads and the ions, and a lattice-Boltzmann model for the solvent.

  3. Complex functionality with minimal computation: Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    NASA Astrophysics Data System (ADS)

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh

    2015-12-01

    Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.

  4. Effect of numerical dispersion as a source of structural noise in the calibration of a highly parameterized saltwater intrusion model

    USGS Publications Warehouse

    Langevin, Christian D.; Hughes, Joseph D.

    2010-01-01

    A model with a small amount of numerical dispersion was used to represent saltwater intrusion in a homogeneous aquifer for a 10-year historical calibration period with one groundwater withdrawal location followed by a 10-year prediction period with two groundwater withdrawal locations. Time-varying groundwater concentrations at arbitrary locations in this low-dispersion model were then used as observations to calibrate a model with a greater amount of numerical dispersion. The low-dispersion model was solved using a Total Variation Diminishing numerical scheme; an implicit finite difference scheme with upstream weighting was used for the calibration simulations. Calibration focused on estimating a three-dimensional hydraulic conductivity field that was parameterized using a regular grid of pilot points in each layer and a smoothness constraint. Other model parameters (dispersivity, porosity, recharge, etc.) were fixed at the known values. The discrepancy between observed and simulated concentrations (due solely to numerical dispersion) was reduced by adjusting hydraulic conductivity through the calibration process. Within the transition zone, hydraulic conductivity tended to be lower than the true value for the calibration runs tested. The calibration process introduced lower hydraulic conductivity values to compensate for numerical dispersion and improve the match between observed and simulated concentration breakthrough curves at monitoring locations. Concentrations were underpredicted at both groundwater withdrawal locations during the 10-year prediction period.
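The numerical dispersion at the center of this experiment can be sketched in a few lines: a first-order upstream-weighted (upwind) advection step smears a sharp front even when no physical dispersion is prescribed, because its truncation error acts like artificial dispersion. The grid, Courant number, and front below are made-up toy values, not the study's model setup.

```python
# Toy 1-D upwind advection of a sharp concentration front (hypothetical
# parameters, not the study's model). The scheme's truncation error acts
# like artificial dispersion, widening the transition zone.
def upwind_advect(c, courant, steps):
    """Advect c to the right with a fixed inflow boundary; courant = v*dt/dx."""
    c = list(c)
    for _ in range(steps):
        c = [c[0]] + [c[i] - courant * (c[i] - c[i - 1]) for i in range(1, len(c))]
    return c

n = 100
initial = [1.0] * 20 + [0.0] * (n - 20)    # sharp "saltwater" front at cell 20
simulated = upwind_advect(initial, courant=0.5, steps=40)  # front moves 20 cells

# cells left in a smeared transition zone (the exact solution has none)
mixed = [i for i, v in enumerate(simulated) if 0.01 < v < 0.99]
print("transition-zone width:", len(mixed), "cells")
```

A calibration that treats the low-dispersion solution as truth must then distort physical parameters (here, hydraulic conductivity) to mimic this purely numerical smearing.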

  5. A multiple hypotheses uncertainty analysis in hydrological modelling: about model structure, landscape parameterization, and numerical integration

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2016-04-01

    To date, a large number of competing computer models has been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations. First, to estimate the impact of model structural uncertainty by employing several alternative representations for each simulated process. Second, to explore the influence of landscape discretization and parameterization from multiple datasets and user decisions. Third, to employ several numerical solvers for the integration of the governing ordinary differential equations to study the effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than the more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
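The explicit-versus-implicit solver trade-off mentioned above can be illustrated on a stiff toy ODE. The linear reservoir and all parameter values below are hypothetical, chosen only to show why implicit methods tolerate large time steps that make explicit methods diverge.

```python
# Toy stiff ODE dS/dt = -k*S (hypothetical linear reservoir, not the
# study's model). Explicit Euler is cheap per step but unstable once
# dt > 2/k; implicit Euler must solve for the new state (trivial here)
# and stays stable at any step size.
import math

def explicit_euler(k, s0, dt, steps):
    s = s0
    for _ in range(steps):
        s = s + dt * (-k * s)          # uses the old state: conditionally stable
    return s

def implicit_euler(k, s0, dt, steps):
    s = s0
    for _ in range(steps):
        s = s / (1.0 + k * dt)         # solves s_new = s + dt*(-k*s_new)
    return s

k, s0, dt, steps = 50.0, 1.0, 0.1, 10  # dt = 0.1 exceeds the 2/k = 0.04 limit
exact = s0 * math.exp(-k * dt * steps)
s_exp = explicit_euler(k, s0, dt, steps)
s_imp = implicit_euler(k, s0, dt, steps)
print("explicit:", s_exp)              # diverges
print("implicit:", s_imp, "exact:", exact)
```

For non-stiff stretches of a simulation, however, the explicit update costs far less per step, which is consistent with the study's observation that simple explicit methods can perform surprisingly well.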

  6. Complex functionality with minimal computation. Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    DOE PAGES

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; ...

    2015-12-21

    Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. Lastly, these results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.

  7. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace the hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate.
We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure for the hydrological problem considered. This work was supported, in part, by the U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231.
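The implicit sampling idea with a linear map can be sketched in one dimension: locate the maximum a posteriori (MAP) point of the negative log-posterior F, propose Gaussian samples mapped through the local curvature at the MAP, and reweight them to correct for the non-Gaussian tails. The objective F, sample count, and seed below are illustrative assumptions, not the study's TOUGH2 setup.

```python
# 1-D sketch of implicit sampling with a linear map (illustrative only).
# F(x) = -log posterior; samples are drawn around the MAP point and
# importance-reweighted, concentrating work in the high-probability region.
import math, random

F   = lambda x: 0.5 * (x - 1.0) ** 2 + 0.1 * x ** 4   # assumed toy objective
dF  = lambda x: (x - 1.0) + 0.4 * x ** 3
d2F = lambda x: 1.0 + 1.2 * x ** 2

# 1) MAP point via Newton iteration on dF = 0
x_map = 0.0
for _ in range(50):
    x_map -= dF(x_map) / d2F(x_map)

# 2) linear map: x = x_map + xi / sqrt(F''(x_map)), xi ~ N(0, 1)
random.seed(0)
h = d2F(x_map)
samples, weights = [], []
for _ in range(500):
    xi = random.gauss(0.0, 1.0)
    x = x_map + xi / math.sqrt(h)
    # weight corrects the Gaussian proposal toward the true posterior
    logw = -(F(x) - F(x_map)) + 0.5 * xi * xi
    samples.append(x)
    weights.append(math.exp(logw))

wsum = sum(weights)
posterior_mean = sum(w * x for w, x in zip(weights, samples)) / wsum
print("MAP:", round(x_map, 3), "posterior mean:", round(posterior_mean, 3))
```

Because every sample requires one evaluation of F, replacing F with a cheap surrogate that is accurate only near the MAP point, as the abstract proposes, fits this scheme naturally.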

  8. Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Baysal, Oktay

    1997-01-01

    A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient (PCG) approach and an extensively validated CFD code. Then, the sensitivities computed with the present method are compared with those obtained using the finite-difference and PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems that require large numbers of grid points can be resolved with a gradient-based approach.

  9. Simultaneous optimization of biomolecular energy function on features from small molecules and macromolecules

    PubMed Central

    Park, Hahnbeom; Bradley, Philip; Greisen, Per; Liu, Yuan; Mulligan, Vikram Khipple; Kim, David E.; Baker, David; DiMaio, Frank

    2017-01-01

    Most biomolecular modeling energy functions for structure prediction, sequence design, and molecular docking have been parameterized using existing macromolecular structural data; this contrasts with molecular mechanics force fields, which are largely optimized using small-molecule data. In this study, we describe an integrated method that enables optimization of a biomolecular modeling energy function simultaneously against small-molecule thermodynamic data and high-resolution macromolecular structural data. We use this approach to develop a next-generation Rosetta energy function that utilizes a new anisotropic implicit solvation model and an improved electrostatics and Lennard-Jones model, illustrating how energy functions can be considerably improved in their ability to describe large-scale energy landscapes by incorporating both small-molecule and macromolecule data. The energy function improves performance in a wide range of protein structure prediction challenges, including monomeric structure prediction, protein-protein and protein-ligand docking, protein sequence design, and prediction of free energy changes upon mutation, while reasonably recapitulating small-molecule thermodynamic properties. PMID:27766851

  10. Critical zone evolution and the origins of organised complexity in watersheds

    NASA Astrophysics Data System (ADS)

    Harman, C.; Troch, P. A.; Pelletier, J.; Rasmussen, C.; Chorover, J.

    2012-04-01

    The capacity of the landscape to store and transmit water is the result of a historical trajectory of landscape, soil and vegetation development, much of which is driven by hydrology itself. Progress in geomorphology and pedology has produced models of surface and sub-surface evolution in soil-mantled uplands. These dissected, denuding modeled landscapes are emblematic of the kinds of dissipative self-organized flow structures whose hydrologic organization may also be understood by low-dimensional hydrologic models. They offer an exciting starting point for examining the mapping between the long-term controls on landscape evolution and the high-frequency hydrologic dynamics. Here we build on recent theoretical developments in geomorphology and pedology to try to understand how the relative rates of erosion, sediment transport and soil development in a landscape determine catchment storage capacity and the relative dominance of runoff processes, flow pathways and storage-discharge relationships. We do so by using a combination of landscape evolution models, hydrologic process models and data from a variety of sources, including the University of Arizona Critical Zone Observatory. A challenge to linking the landscape evolution and hydrologic model representations is the vast difference in the timescales implicit in the process representations. Furthermore, the vast array of processes involved makes parameterization of such models an enormous challenge. The best data-constrained geomorphic transport and soil development laws only represent hydrologic processes implicitly, through the transport and weathering rate parameters. In this work we propose to avoid this problem by identifying the relationship between the landscape and soil evolution parameters and macroscopic climate and geological controls.
These macroscopic controls (such as the aridity index) have two roles: 1) they express the water and energy constraints on the long-term evolution of the landscape system, and 2) they bound the range of plausible short-term hydroclimatic regimes that may drive a particular landscape's hydrologic dynamics. To ensure that the hydrologic dynamics implicit in the evolutionary parameters are compatible with the dynamics observed in the hydrologic modeling, a set of consistency checks based on flow process dominance are developed.

  11. Simulation of a severe convective storm using a numerical model with explicitly incorporated aerosols

    NASA Astrophysics Data System (ADS)

    Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje

    2017-09-01

    Despite the important role aerosols play in all stages of the cloud lifecycle, their representation in numerical weather prediction models is often rather crude. This paper investigates the effects that explicit versus implicit inclusion of aerosols in a microphysics parameterization scheme of the Weather Research and Forecasting (WRF) - Advanced Research WRF (WRF-ARW) model has on cloud dynamics and microphysics. The testbed selected for this study is a severe mesoscale convective system with supercells that struck the west and central parts of Serbia in the afternoon of July 21, 2014. Numerical products of two model runs, i.e. one with aerosols explicitly included (WRF-AE) and another with aerosols implicitly assumed (WRF-AI), are compared against precipitation measurements from a surface network of rain gauges, as well as against radar and satellite observations. The WRF-AE model accurately captured the transport of dust from North Africa over the Mediterranean to the Balkan region. On smaller scales, both models displaced the locations of clouds situated above west and central Serbia towards the southeast and under-predicted the maximum values of composite radar reflectivity. Similar to satellite images, WRF-AE shows the mesoscale convective system as a merged cluster of cumulonimbus clouds. Both models over-predicted the precipitation amounts; WRF-AE over-predictions are particularly pronounced in the zones of light rain, while WRF-AI gave larger outliers. Unlike WRF-AI, the WRF-AE approach enables the modelling of the time evolution and influx of aerosols into the cloud, which could be of practical importance in weather forecasting and weather modification. Several likely causes for discrepancies between models and observations are discussed and prospects for further research in this field are outlined.

  12. Quantifying Groundwater Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Poeter, E.; Foglia, L.

    2007-12-01

    Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. 
This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.
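The computational economy of the inferential route for parameter values (item (f) above) comes from sensitivities: for a model calibrated by least squares, the parameter covariance follows from sigma^2 (J^T J)^-1, where J is the Jacobian of simulated equivalents with respect to parameters. The two-parameter linear model and data below are made-up illustrations, not output of UCODE-2005 or PEST.

```python
# Hypothetical two-parameter least-squares problem: linear confidence
# intervals from sensitivities, using only the handful of evaluations
# needed to form the Jacobian (vs. hundreds for global sampling).
t_obs = [0.0, 1.0, 2.0, 3.0, 4.0]
y_obs = [1.1, 2.9, 5.2, 7.1, 8.8]      # observations of y = a + b*t plus noise
sigma2 = 0.04                           # assumed observation error variance

# Jacobian columns for the linear model: dy/da = 1, dy/db = t
n = len(t_obs)
st, stt = sum(t_obs), sum(t * t for t in t_obs)
sy, sty = sum(y_obs), sum(t * y for t, y in zip(t_obs, y_obs))

det = n * stt - st * st                 # det(J^T J)
inv = [[stt / det, -st / det], [-st / det, n / det]]
a = inv[0][0] * sy + inv[0][1] * sty    # least-squares estimates
b = inv[1][0] * sy + inv[1][1] * sty

var_a = sigma2 * inv[0][0]              # linear parameter variances
var_b = sigma2 * inv[1][1]
print(f"a = {a:.2f} +/- {2 * var_a ** 0.5:.2f}")
print(f"b = {b:.2f} +/- {2 * var_b ** 0.5:.2f}")
```

The same machinery extends to nonlinear models once J is computed by perturbation or adjoint methods, which is why the accuracy and stability of the sensitivities matter so much in this approach.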

  13. An ellipsoid-chain model for conjugated polymer solutions

    NASA Astrophysics Data System (ADS)

    Lee, Cheng K.; Hua, Chi C.; Chen, Show A.

    2012-02-01

    We propose an ellipsoid-chain model which may be routinely parameterized to capture large-scale properties of semiflexible, amphiphilic conjugated polymers in various solvent media. The model naturally utilizes the defect locations as pivotal centers connecting adjacent ellipsoids (each currently representing ten monomer units), and a variant umbrella-sampling scheme is employed to construct the potentials of mean force (PMF) for specific solvent media using atomistic dynamics data and simplex optimization. The performance of the model, in both efficacy and efficiency, is thoroughly evaluated by comparing the simulation results on long, single-chain (i.e., 300-mer) structures with those from two existing, finer-grained models for a standard conjugated polymer (i.e., poly(2-methoxy-5-(2'-ethylhexyloxy)-1,4-phenylenevinylene) or MEH-PPV) in two distinct solvents (i.e., chloroform or toluene) as well as a hybrid, binary-solvent medium (i.e., chloroform/toluene = 1:1 in number density). The coarse-grained Monte Carlo (CGMC) simulation of the ellipsoid-chain model is shown to be the most efficient—about 300 times faster than the coarse-grained molecular dynamics (CGMD) simulation of the finest CG model that employs explicit solvents—in capturing elementary single-chain structures for both single-solvent media, and is a few times faster than the coarse-grained Langevin dynamics (CGLD) simulation of another implicit-solvent polymer model with a slightly greater coarse-graining level than in the CGMD simulation. For the binary-solvent system considered, however, both of the implicit-solvent schemes (i.e., CGMC and CGLD) fail to capture the effects of conspicuous concentration fluctuations near the polymer-solvent interface, arising from a pronounced coupling between the solvent molecules and different parts of the polymer.
Essential physical implications of the success as well as the failure of the two implicit-solvent CG schemes under varying solvent conditions are elaborated. Within the ellipsoid-chain model, the impact of synthesized defects on local segmental ordering as well as bulk chain conformation is also scrutinized, and essential consequences for practical applications are discussed. In future perspectives, we remark on a strategy that takes advantage of the coordination among various CG models and simulation schemes to warrant computational efficiency and accuracy, with the anticipated capability of simulating larger-scale, many-chain aggregate systems.

  14. Scaling analysis of cloud and rain water in marine stratocumulus and implications for scale-aware microphysical parameterizations

    NASA Astrophysics Data System (ADS)

    Witte, M.; Morrison, H.; Jensen, J. B.; Bansemer, A.; Gettelman, A.

    2017-12-01

    The spatial covariance of cloud and rain water (or in simpler terms, small and large drops, respectively) is an important quantity for accurate prediction of the accretion rate in bulk microphysical parameterizations that account for subgrid variability using assumed probability density functions (pdfs). Past diagnoses of this covariance from remote sensing, in situ measurements and large eddy simulation output have implicitly assumed that the magnitude of the covariance is insensitive to grain size (i.e. horizontal resolution) and averaging length, but this is not the case because both cloud and rain water exhibit scale invariance across a wide range of scales - from tens of centimeters to tens of kilometers in the case of cloud water, a range that we will show is primarily limited by instrumentation and sampling issues. Since the individual variances systematically vary as a function of spatial scale, it should be expected that the covariance follows a similar relationship. In this study, we quantify the scaling properties of cloud and rain water content and their covariability from high frequency in situ aircraft measurements of marine stratocumulus taken over the southeastern Pacific Ocean aboard the NSF/NCAR C-130 during the VOCALS-REx field experiment of October-November 2008. First we confirm that cloud and rain water scale in distinct manners, indicating that there is a statistically and potentially physically significant difference in the spatial structure of the two fields. Next, we demonstrate that the covariance is a strong function of spatial scale, which implies important caveats regarding the ability of limited-area models with domains smaller than a few tens of kilometers across to accurately reproduce the spatial organization of precipitation. 
Finally, we present preliminary work on the development of a scale-aware parameterization of cloud-rain water subgrid covariability based on multifractal analysis, intended for application in large-scale model microphysics schemes.
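The central point, that a covariance diagnosed at one grain size cannot simply be reused at another, can be demonstrated with synthetic data. The "cloud" and "rain" proxies below are assumed white-noise fields sharing a common component, not VOCALS-REx measurements; block-averaging them shrinks the covariance roughly as 1/m with block length m.

```python
# Synthetic illustration: the covariance of two correlated fields depends
# on the grain size (averaging length) at which they are evaluated.
import random

random.seed(1)
n = 65536
qc, qr = [], []                        # "cloud" and "rain" water proxies
for _ in range(n):
    shared = random.gauss(0.0, 1.0)    # common component inducing covariance
    qc.append(shared + random.gauss(0.0, 0.5))
    qr.append(0.7 * shared + random.gauss(0.0, 0.5))

def block_average(x, m):
    return [sum(x[i:i + m]) / m for i in range(0, len(x), m)]

def covariance(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

covs = []
for m in (1, 16, 256):                 # fine to coarse grain size
    covs.append(covariance(block_average(qc, m), block_average(qr, m)))
    print(f"grain {m:4d}: covariance {covs[-1]:.4f}")
```

Real cloud and rain water fields are scale-invariant rather than white, so the exponent of the decay differs, but the qualitative conclusion is the same: an assumed-pdf scheme needs a scale-aware covariance, not a fixed number.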

  15. Charge-dependent non-bonded interaction methods for use in quantum mechanical modeling of condensed phase reactions

    NASA Astrophysics Data System (ADS)

    Kuechler, Erich R.

    Molecular modeling and computer simulation techniques can provide detailed insight into biochemical phenomena. This dissertation describes the development, implementation and parameterization of two methods for the accurate modeling of chemical reactions in aqueous environments, with a concerted scientific effort towards the inclusion of charge-dependent non-bonded non-electrostatic interactions into currently used computational frameworks. The first of these models, QXD, modifies interactions in a hybrid quantum mechanical/molecular mechanical (QM/MM) framework to overcome the current limitations of 'atom typing' QM atoms, an inaccurate and non-intuitive practice for chemically active species, as these static atom types are dictated by the local bonding and electrostatic environment of the atoms they represent, which will change over the course of the simulation. The efficacy of the QXD model is demonstrated using a specific reaction parameterization (SRP) of the Austin Model 1 (AM1) Hamiltonian by simultaneously capturing the reaction barrier for chloride ion attack on methyl chloride in solution and the solvation free energies of a series of compounds including the reagents of the reaction. The second, VRSCOSMO, is an implicit solvation model for use with the DFTB3/3OB Hamiltonian for biochemical reactions, allowing for accurate modeling of ionic compound solvation properties while overcoming the discontinuous nature of conventional PCM models along chemical reaction coordinates. The VRSCOSMO model is shown to accurately model the solvation properties of over 200 chemical compounds while also providing smooth, continuous reaction surfaces for a series of biologically motivated phosphoryl transesterification reactions.
Both of these methods incorporate charge-dependent behavior into the non-bonded interactions variationally, allowing the 'size' of atoms to change in meaningful ways with respect to changes in local charge state, so as to provide accurate, predictive and transferable models for the interactions between the quantum mechanical system and its solvated surroundings.

  16. Data Science Innovations That Streamline Development, Documentation, Reproducibility, and Dissemination of Models in Computational Thermodynamics: An Application of Image Processing Techniques for Rapid Computation, Parameterization and Modeling of Phase Diagrams

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.

    2014-12-01

    Computational thermodynamics (CT) represents a collection of numerical techniques that are used to calculate quantitative results from thermodynamic theory. In the Earth sciences, CT is most often applied to estimate the equilibrium properties of solutions, to calculate phase equilibria from models of the thermodynamic properties of materials, and to approximate irreversible reaction pathways by modeling these as a series of local equilibrium steps. The thermodynamic models that underlie CT calculations relate the energy of a phase to temperature, pressure and composition. These relationships are not intuitive and they are seldom well constrained by experimental data; often, intuition must be applied to generate a robust model that satisfies the expectations of use. As a consequence of this situation, the models and databases that support CT applications in geochemistry and petrology are tedious to maintain as new data and observations arise. What is required to make the process more streamlined and responsive is a computational framework that permits the rapid generation of observable outcomes from the underlying data/model collections, and importantly, the ability to update and re-parameterize the constitutive models through direct manipulation of those outcomes. CT procedures that take models/data to the experiential reference frame of phase equilibria involve function minimization, gradient evaluation, the calculation of implicit lines, curves and surfaces, contour extraction, and other related geometrical measures. All these procedures are the mainstay of image processing analysis. Since the commercial escalation of video game technology, open source image processing libraries have emerged (e.g., VTK) that permit real time manipulation and analysis of images. These tools find immediate application to CT calculations of phase equilibria by permitting rapid calculation and real time feedback between model outcome and the underlying model parameters.
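The geometric task described here, extracting implicit curves such as phase boundaries from a thermodynamic model, can be illustrated without any graphics library. The sketch below is a toy illustration, not MELTS or VTK code: it traces the zero set of a hypothetical Gibbs-energy difference between two phases by scanning a grid for sign changes and refining each crossing with bisection.

```python
def implicit_zero_curve(f, x_vals, y_vals):
    """Trace the implicit curve f(x, y) = 0: for each x, scan y for a sign
    change and refine the crossing by bisection. Returns (x, y) points on
    the curve -- a 1-D analogue of the contour extraction used to draw
    phase boundaries from thermodynamic models."""
    points = []
    for x in x_vals:
        for y0, y1 in zip(y_vals[:-1], y_vals[1:]):
            a, b = f(x, y0), f(x, y1)
            if a == 0.0:
                points.append((x, y0))
            elif a * b < 0.0:
                lo, hi = y0, y1
                for _ in range(60):  # bisection refinement
                    mid = 0.5 * (lo + hi)
                    if f(x, lo) * f(x, mid) <= 0.0:
                        hi = mid
                    else:
                        lo = mid
                points.append((x, 0.5 * (lo + hi)))
    return points

# Hypothetical scaled Gibbs-energy difference between two phases; the
# "phase boundary" here is the unit circle T^2 + P^2 = 1.
delta_g = lambda t, p: t * t + p * p - 1.0
x_grid = [-0.9 + 0.3 * i for i in range(7)]
y_grid = [-2.0 + 0.1 * j for j in range(41)]
curve = implicit_zero_curve(delta_g, x_grid, y_grid)
```

Real-time feedback of the kind described in the abstract amounts to re-running such an extraction whenever a model parameter changes and redrawing the resulting curve.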

  17. Differential geometry based solvation model. III. Quantum formulation

    PubMed Central

    Chen, Zhan; Wei, Guo-Wei

    2011-01-01

    Solvation is of fundamental importance to biomolecular systems. Implicit solvent models, particularly those based on the Poisson-Boltzmann equation for electrostatic analysis, are established approaches for solvation analysis. However, ad hoc solvent-solute interfaces are commonly used in the implicit solvent theory. Recently, we have introduced differential geometry based solvation models which allow the solvent-solute interface to be determined by the variation of a total free energy functional. Atomic fixed partial charges (point charges) are used in our earlier models, a choice that depends on existing molecular mechanical force field software packages for partial charge assignments. As most force field models are parameterized for a certain class of molecules or materials, the use of partial charges limits the accuracy and applicability of our earlier models. Moreover, fixed partial charges do not account for the charge rearrangement during the solvation process. The present work proposes a differential geometry based multiscale solvation model which makes use of the electron density computed directly from quantum mechanical principles. To this end, we construct a new multiscale total energy functional which consists of not only polar and nonpolar solvation contributions, but also the electronic kinetic and potential energies. By using the Euler-Lagrange variation, we derive a system of three coupled governing equations, i.e., the generalized Poisson-Boltzmann equation for the electrostatic potential, the generalized Laplace-Beltrami equation for the solvent-solute boundary, and the Kohn-Sham equations for the electronic structure. We develop an iterative procedure to solve the three coupled equations and to minimize the solvation free energy. The present multiscale model is numerically validated for its stability, consistency and accuracy, and is applied to a few sets of molecules, including a case which is difficult for existing solvation models. 
Comparison is made to many other classical and quantum models. By using experimental data, we show that the present quantum formulation of our differential geometry based multiscale solvation model improves the prediction of our earlier models, and outperforms some explicit solvation models. PMID:22112067

  18. Model weights and the foundations of multimodel inference

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2006-01-01

    Statistical thinking in wildlife biology and ecology has been profoundly influenced by the introduction of AIC (Akaike's information criterion) as a tool for model selection and as a basis for model averaging. In this paper, we advocate the Bayesian paradigm as a broader framework for multimodel inference, one in which model averaging and model selection are naturally linked, and in which the performance of AIC-based tools is naturally evaluated. Prior model weights implicitly associated with the use of AIC are seen to highly favor complex models: in some cases, all but the most highly parameterized models in the model set are virtually ignored a priori. We suggest the usefulness of the weighted BIC (Bayesian information criterion) as a computationally simple alternative to AIC, based on explicit selection of prior model probabilities rather than acceptance of default priors associated with AIC. We note, however, that both procedures are only approximate to the use of exact Bayes factors. We discuss and illustrate technical difficulties associated with Bayes factors, and suggest approaches to avoiding these difficulties in the context of model selection for a logistic regression. Our example highlights the predisposition of AIC weighting to favor complex models and suggests a need for caution in using the BIC for computing approximate posterior model weights.
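The model weights discussed above follow the standard information-criterion weighting formula (a textbook construction, not code from the paper): each model's weight is proportional to exp(-Δ/2), where Δ is that model's AIC or BIC minus the minimum over the set.

```python
import math

def information_criterion_weights(ic_values):
    """Convert information-criterion values (AIC or BIC) into normalized
    model weights: w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),
    where delta_i = IC_i - min(IC)."""
    best = min(ic_values)
    raw = [math.exp(-0.5 * (ic - best)) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical criterion values for three candidate models; a difference
# of 10 units is enough to make the third model virtually ignored.
aic = [100.0, 102.0, 110.0]
weights = information_criterion_weights(aic)
```

Whether these weights approximate posterior model probabilities depends on the implicit priors, which is precisely the issue the paper raises.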

  19. Characteristics of energy exchange between inter- and intramolecular degrees of freedom in crystalline 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) with implications for coarse-grained simulations of shock waves in polyatomic molecular crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroonblawd, Matthew P.; Sewell, Thomas D., E-mail: sewellt@missouri.edu; Maillet, Jean-Bernard, E-mail: jean-bernard.maillet@cea.fr

    2016-02-14

    In this report, we characterize the kinetics and dynamics of energy exchange between intramolecular and intermolecular degrees of freedom (DoF) in crystalline 1,3,5-triamino-2,4,6-trinitrobenzene (TATB). All-atom molecular dynamics (MD) simulations are used to obtain predictions for relaxation from certain limiting initial distributions of energy between the intra- and intermolecular DoF. The results are used to parameterize a coarse-grained Dissipative Particle Dynamics at constant Energy (DPDE) model for TATB. Each TATB molecule in the DPDE model is represented as an all-atom, rigid-molecule mesoparticle, with explicit external (molecular translational and rotational) DoF and coarse-grained implicit internal (vibrational) DoF. In addition to conserving linear and angular momentum, the DPDE equations of motion conserve the total system energy provided that particles can exchange energy between their external and internal DoF. The internal temperature of a TATB molecule is calculated using an internal equation of state, which we develop here, and the temperatures of the external and internal DoF are coupled using a fluctuation-dissipation relation. The DPDE force expression requires specification of the input parameter σ that determines the rate at which energy is exchanged between external and internal DoF. We adjusted σ based on the predictions for relaxation processes obtained from MD simulations. The parameterized DPDE model was employed in large-scale simulations of shock compression of TATB. We show that the rate of energy exchange governed by σ can significantly influence the transient behavior of the system behind the shock.

  20. On the representation of subsea aquitards in models of offshore fresh groundwater

    NASA Astrophysics Data System (ADS)

    Solórzano-Rivas, S. C.; Werner, A. D.

    2018-02-01

    Fresh groundwater is widespread globally in offshore aquifers, and is particularly dependent on the properties of offshore aquitards, which inhibit seawater-freshwater mixing thereby allowing offshore freshwater to persist. However, little is known of the salinity distribution in subsea aquitards, especially in relation to the offshore freshwater distribution. This is critical for the application of recent analytical solutions to subsea freshwater extent given requisite assumptions about aquitard salinity. In this paper, we use numerical simulation to explore the extent of offshore freshwater in simplified situations of subsea aquifers and overlying aquitards, including in relation to the upward leakage of freshwater. The results show that available analytical solutions significantly overestimate the offshore extent of upwelling freshwater due to the presumption of seawater in the aquitard, whereas the seawater wedge toe is less sensitive to the assumed aquitard salinity. We also explore the use of implicit, conductance-based representations of the aquitard (i.e., using the popular SEAWAT code), and find that SEAWAT's implicit approach (i.e., GHB package) can represent the offshore distance of upwelling freshwater using a novel parameterization strategy. The results show that an estimate of the upward freshwater flow that is required to freshen the aquitard is associated with the dimensionless Rayleigh number, whereby the critical Rayleigh number that distinguishes fresh and saline regions (based on the position of the 0.5 isochlor) within the aquitard is approximately 2.

  1. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
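The abstract does not specify the exact weights used in the assimilated prediction; a minimal sketch of the general idea, assuming independent errors and inverse-variance weighting (both assumptions, not the authors' stated method), is:

```python
def assimilate(pred_a, var_a, pred_b, var_b):
    """Combine two independent predictions by inverse-variance weighting.
    The combined error variance is never larger than either input
    variance, which is why blending two imperfect predictors can reduce
    prediction error variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    combined = w_a * pred_a + (1.0 - w_a) * pred_b
    combined_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return combined, combined_var

# Hypothetical runup estimates (m) from a parameterized model and a
# numerical simulation, with hypothetical error variances.
runup, var = assimilate(2.1, 0.16, 1.8, 0.09)
```

The more accurate predictor (smaller variance) dominates the blend, mirroring how the parameterized model would be weighted more heavily for incident-band swash.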

  2. Numerical modelling of the buoyant marine microplastics in the South-Eastern Baltic Sea

    NASA Astrophysics Data System (ADS)

    Bagaev, Andrei; Mizyuk, Artem; Chubarenko, Irina; Khatmullilna, Liliya

    2017-04-01

    Microplastics are a burning issue in marine pollution science. Their sources, pathways of propagation, and ultimate fate pose many questions to modern oceanographers, and a numerical model is an optimal tool for reconstructing microplastics pathways and fate. Within the MARBLE project (lamp.ocean.ru), a model of Lagrangian particle transport was developed. It was tested coupled with oceanographic transport fields from the operational oceanography products of the Copernicus Marine Environment Monitoring Service. Our model deals with two major types of microplastics: microfibres and buoyant spheroidal particles. We are currently working to increase the grid resolution by means of a NEMO regional configuration for the south-eastern Baltic Sea. Several expeditions were organised to three regions of the Baltic Sea (the Gotland, Bornholm, and Gdansk basins). Water samples from the surface and different water layers were collected, processed, and analysed by our team. A set of laboratory experiments was specifically designed to establish the settling velocity of particles of various shapes and densities. This analysis provided us with the understanding necessary for the model to reproduce the large-scale dynamics of microfibres. In the simulation, particles spread from the shore to the deep sea, slowly sinking to the bottom while decreasing in quantity due to conditional sedimentation. Our model is expected to map out the microplastics life cycle and to account for its distribution patterns under the impact of wind and currents. For this purpose, we have already included a parameterization of the wind drag force applied to a particle. Initial results of numerical experiments indicate the importance of a proper implicit parameterization of particle dynamics at the vertical solid boundary. Our suggested solutions to that problem will be presented at EGU-2017. 
The MARBLE project is supported by Russian Science Foundation grant #15-17-10020.
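The settling-velocity experiments above target non-spherical fibres, for which no simple closed form applies; as a hedged baseline, the classical Stokes law for a small sphere at low Reynolds number can be sketched as follows (all parameter values below are hypothetical, not MARBLE measurements):

```python
def stokes_settling_velocity(diameter_m, rho_particle,
                             rho_water=1005.0, mu=1.4e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in the Stokes
    regime: w = g * d^2 * (rho_p - rho_w) / (18 * mu).
    A negative result means the particle is buoyant and rises."""
    return g * diameter_m ** 2 * (rho_particle - rho_water) / (18.0 * mu)

# Hypothetical 100-micron dense polymer fragment in brackish Baltic water.
w = stokes_settling_velocity(100e-6, rho_particle=1380.0)
```

Laboratory-derived velocities for fibres deviate from this spherical baseline, which is exactly why such experiments are needed to parameterize the transport model.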

  3. Three-dimensional aerodynamic shape optimization of supersonic delta wings

    NASA Technical Reports Server (NTRS)

    Burgreen, Greg W.; Baysal, Oktay

    1994-01-01

    A recently developed three-dimensional aerodynamic shape optimization procedure AeSOP(sub 3D) is described. This procedure incorporates some of the most promising concepts from the area of computational aerodynamic analysis and design, specifically, discrete sensitivity analysis, a fully implicit 3D Computational Fluid Dynamics (CFD) methodology, and 3D Bezier-Bernstein surface parameterizations. The new procedure is demonstrated in the preliminary design of supersonic delta wings. Starting from a symmetric clipped delta wing geometry, a Mach 1.62 asymmetric delta wing and two Mach 1. 5 cranked delta wings were designed subject to various aerodynamic and geometric constraints.
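The Bezier-Bernstein surface parameterizations mentioned above follow the standard tensor-product construction; the sketch below shows that evaluation (the actual basis orders and control nets used in AeSOP(sub 3D) are not given in the abstract, so the patch here is a hypothetical example):

```python
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t) = C(n,i) t^i (1-t)^(n-i)."""
    return comb(n, i) * t ** i * (1.0 - t) ** (n - i)

def bezier_surface_point(control, u, v):
    """Evaluate a tensor-product Bezier surface at (u, v) in [0,1]^2.
    `control` is an (n+1) x (m+1) grid of (x, y, z) control points; the
    design variables in a shape optimization are these control points."""
    n = len(control) - 1
    m = len(control[0]) - 1
    point = [0.0, 0.0, 0.0]
    for i in range(n + 1):
        for j in range(m + 1):
            w = bernstein(n, i, u) * bernstein(m, j, v)
            for k in range(3):
                point[k] += w * control[i][j][k]
    return tuple(point)

# A flat bilinear patch: the surface interpolates its corner control points.
ctrl = [[(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
        [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]]
corner = bezier_surface_point(ctrl, 0.0, 0.0)
center = bezier_surface_point(ctrl, 0.5, 0.5)
```

Parameterizing the wing surface this way keeps the design space small and smooth, which is what makes discrete sensitivity analysis over the control points tractable.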

  4. On the role of surface friction in tropical cyclone intensification

    NASA Astrophysics Data System (ADS)

    Wang, Yuqing

    2017-04-01

    Recent studies have debated whether surface friction plays a positive or negative role in tropical cyclone intensification from the perspective of the angular momentum budget; that is, whether the frictionally induced inward angular momentum transport can overcome the loss of angular momentum to the surface caused by friction itself. Although this issue is still under debate, this study investigates another, implicit dynamical effect of friction: it modifies the radial location and strength of eyewall convection. We found that moderate surface friction is necessary for rapid intensification of tropical cyclones. This is demonstrated first by a simple coupled dynamical system, in which a multi-level boundary layer model is coupled to an overlying shallow water equation model whose mass source is parameterized by the mass flux from the boundary layer model below, and then by a full-physics model. The results show that surface friction leads to the inward penetration of inflow under the eyewall, shifts the boundary layer mass convergence slightly inside the radius of maximum wind, and enhances the upward mass flux, and thus the diabatic heating in the eyewall and the intensification rate of a TC. This intensification process is different from the direct angular momentum budget previously used to explain the role of surface friction in tropical cyclone intensification.

  5. The response of the SSM/I to the marine environment. Part 2: A parameterization of the effect of the sea surface slope distribution on emission and reflection

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.; Katsaros, Kristina B.

    1994-01-01

    Based on a geometric optics model and the assumption of an isotropic Gaussian surface slope distribution, the component of ocean surface microwave emissivity variation due to large-scale surface roughness is parameterized for the frequencies and approximate viewing angle of the Special Sensor Microwave/Imager. Independent geophysical variables in the parameterization are the effective (microwave frequency dependent) slope variance and the sea surface temperature. Using the same physical model, the change in the effective zenith angle of reflected sky radiation arising from large-scale roughness is also parameterized. Independent geophysical variables in this parameterization are the effective slope variance and the atmospheric optical depth at the frequency in question. Both of the above model-based parameterizations are intended for use in conjunction with empirical parameterizations relating effective slope variance and foam coverage to near-surface wind speed. These empirical parameterizations are the subject of a separate paper.
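A hedged illustration of the geometric-optics idea, averaging flat-surface Fresnel emissivity over a Gaussian slope distribution, follows. It simplifies to a real refractive index and a single in-plane tilt component, unlike the complex seawater permittivity and full isotropic two-dimensional slope treatment in the paper; all numeric values are illustrative assumptions.

```python
import math
import random

def fresnel_emissivity_h(theta, n):
    """Horizontal-polarization emissivity of a flat dielectric surface
    with real refractive index n at incidence angle theta: 1 - |r_h|^2."""
    cos_t = math.cos(theta)
    root = math.sqrt(n * n - math.sin(theta) ** 2)
    r_h = (cos_t - root) / (cos_t + root)
    return 1.0 - r_h * r_h

def rough_surface_emissivity(theta, n, slope_var, samples=20000, seed=1):
    """Geometric-optics estimate: average the flat-surface emissivity over
    local incidence angles perturbed by Gaussian-distributed slopes."""
    rng = random.Random(seed)
    sigma = math.sqrt(slope_var)
    total = 0.0
    for _ in range(samples):
        tilt = math.atan(rng.gauss(0.0, sigma))  # facet tilt, viewing plane
        local = min(max(theta + tilt, 0.0), math.radians(89.0))
        total += fresnel_emissivity_h(local, n)
    return total / samples

theta = math.radians(53.1)  # approximate SSM/I viewing angle
flat = fresnel_emissivity_h(theta, 2.5)
rough = rough_surface_emissivity(theta, 2.5, slope_var=0.04)
```

The roughness correction is then the difference between the rough-surface average and the flat-surface value, which the paper parameterizes as a function of effective slope variance and sea surface temperature.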

  6. Collaborative Research: Reducing tropical precipitation biases in CESM — Tests of unified parameterizations with ARM observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent; Gettelman, Andrew; Morrison, Hugh

    In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.

  7. Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations

    DOE PAGES

    Liu, Gang; Liu, Yangang; Endo, Satoshi

    2013-02-01

    Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
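For context, the bulk aerodynamic form shared by most such surface flux schemes can be sketched as follows. The exchange coefficients below are hypothetical constants chosen for illustration; operational schemes make them stability- and roughness-dependent, which is exactly what the evaluated parameterizations differ in.

```python
def bulk_surface_fluxes(wind, t_sfc, t_air, q_sfc, q_air,
                        cd=1.2e-3, ch=1.1e-3, ce=1.1e-3,
                        rho=1.2, cp=1004.0, lv=2.5e6):
    """Bulk aerodynamic surface fluxes:
      tau = rho * cd * U^2                 (momentum flux, N m^-2)
      H   = rho * cp * ch * U * (Ts - Ta)  (sensible heat,  W m^-2)
      LE  = rho * lv * ce * U * (qs - qa)  (latent heat,    W m^-2)
    cd, ch, ce are (here constant) exchange coefficients."""
    tau = rho * cd * wind ** 2
    sh = rho * cp * ch * wind * (t_sfc - t_air)
    lh = rho * lv * ce * wind * (q_sfc - q_air)
    return tau, sh, lh

# Hypothetical daytime conditions: 5 m/s wind, 3 K surface-air temperature
# difference, moist surface.
tau, sh, lh = bulk_surface_fluxes(5.0, 303.0, 300.0, 0.022, 0.015)
```

The Bowen ratio evaluated in the study is simply sh/lh from such a scheme, compared against the EBBR observations.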

  8. Development of a two-dimensional zonally averaged statistical-dynamical model. III - The parameterization of the eddy fluxes of heat and moisture

    NASA Technical Reports Server (NTRS)

    Stone, Peter H.; Yao, Mao-Sung

    1990-01-01

    A number of perpetual January simulations are carried out with a two-dimensional zonally averaged model employing various parameterizations of the eddy fluxes of heat (potential temperature) and moisture. The parameterizations are evaluated by comparing these results with the eddy fluxes calculated in a parallel simulation using a three-dimensional general circulation model with zonally symmetric forcing. The three-dimensional model's performance in turn is evaluated by comparing its results using realistic (nonsymmetric) boundary conditions with observations. Branscome's parameterization of the meridional eddy flux of heat and Leovy's parameterization of the meridional eddy flux of moisture simulate the seasonal and latitudinal variations of these fluxes reasonably well, while somewhat underestimating their magnitudes. New parameterizations of the vertical eddy fluxes are developed that take into account the enhancement of the eddy mixing slope in a growing baroclinic wave due to condensation, and also the effect of eddy fluctuations in relative humidity. The new parameterizations, when tested in the two-dimensional model, simulate the seasonal, latitudinal, and vertical variations of the vertical eddy fluxes quite well, when compared with the three-dimensional model, and only underestimate the magnitude of the fluxes by 10 to 20 percent.

  9. Haptics-based dynamic implicit solid modeling.

    PubMed

    Hua, Jing; Qin, Hong

    2004-01-01

    This paper systematically presents a novel, interactive solid modeling framework, Haptics-based Dynamic Implicit Solid Modeling, which is founded upon volumetric implicit functions and powerful physics-based modeling. In particular, we augment our modeling framework with a haptic mechanism in order to take advantage of additional realism associated with a 3D haptic interface. Our dynamic implicit solids are semi-algebraic sets of volumetric implicit functions and are governed by the principles of dynamics, hence responding to sculpting forces in a natural and predictable manner. In order to directly manipulate existing volumetric data sets as well as point clouds, we develop a hierarchical fitting algorithm to reconstruct and represent discrete data sets using our continuous implicit functions, which permit users to further design and edit those existing 3D models in real-time using a large variety of haptic and geometric toolkits, and visualize their interactive deformation at arbitrary resolution. The additional geometric and physical constraints afford more sophisticated control of the dynamic implicit solids. The versatility of our dynamic implicit modeling enables the user to easily modify both the geometry and the topology of modeled objects, while the inherent physical properties can offer an intuitive haptic interface for direct manipulation with force feedback.
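The semi-algebraic flavor of implicit solids can be illustrated with a minimal static sketch (the paper's solids add physics-based dynamics and haptic interaction on top of this; the shapes below are hypothetical examples):

```python
def sphere(cx, cy, cz, r):
    """Implicit function: negative inside, zero on the surface, positive
    outside -- the sign convention that makes set operations simple."""
    return lambda x, y, z: (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 - r ** 2

def union(f, g):
    """CSG union of two implicit solids: min of the two field values."""
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

def intersection(f, g):
    """CSG intersection: max of the two field values."""
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def inside(f, x, y, z):
    """Membership test for the semi-algebraic set {f <= 0}."""
    return f(x, y, z) <= 0.0

# Two overlapping unit spheres joined into one solid.
solid = union(sphere(0.0, 0.0, 0.0, 1.0), sphere(1.5, 0.0, 0.0, 1.0))
```

Because the solid is just a scalar field, both topology changes (splitting, merging) and resolution-independent evaluation come for free, which is what the dynamic framework exploits.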

  10. The zonally averaged transport characteristics of the atmosphere as determined by a general circulation model

    NASA Technical Reports Server (NTRS)

    Plumb, R. A.

    1985-01-01

    Two dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three dimensional general circulation models, while avoiding some of the gross simplifications of one dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of GCM-derived transport coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.
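The flux-gradient parameterization being validated reduces to a one-line closure, F = -K dC/dy. A minimal sketch with a scalar, constant K follows; the actual GCM-derived coefficients are latitude- and height-dependent tensors, so this is only the scalar essence.

```python
import numpy as np

def flux_gradient_eddy_flux(concentration, k_yy, dy):
    """Flux-gradient closure: parameterize the meridional eddy flux of a
    tracer as F = -K * dC/dy, with K an eddy diffusion coefficient, as in
    zonally averaged (2-D) transport models."""
    grad = np.gradient(concentration, dy)
    return -k_yy * grad

# Hypothetical tracer decreasing poleward (y increasing): the
# parameterized eddy flux is then down-gradient, i.e. poleward (positive).
y = np.linspace(0.0, 1.0e6, 11)   # meters
c = 1.0 - y / 2.0e6               # mixing ratio, linear in y
flux = flux_gradient_eddy_flux(c, k_yy=1.0e5, dy=y[1] - y[0])
```

Testing the closure against a GCM, as the abstract describes, amounts to comparing such parameterized fluxes with the explicitly resolved eddy fluxes for the same mean tracer field.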

  11. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    NASA Astrophysics Data System (ADS)

    White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John

    2017-08-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. 
However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
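The GLUE conditioning step described above can be sketched generically; the toy rainfall-runoff model and parameter names below are hypothetical, not from the SWAT application.

```python
import random

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance about the
    observed mean; 1 is a perfect fit, 0 matches the mean predictor."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

def glue_behavioral(obs, model, priors, n=2000, threshold=0.5, seed=0):
    """GLUE-style conditioning: draw parameter sets from uniform priors
    and keep the 'behavioral' ones whose simulation scores above a
    likelihood (here NSE) threshold."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in priors.items()}
        if nash_sutcliffe(obs, model(params)) >= threshold:
            kept.append(params)
    return kept

# Toy linear "rainfall-runoff" model with one hypothetical parameter;
# the synthetic observations correspond to coeff = 0.5.
rain = [0.0, 5.0, 10.0, 2.0, 0.0, 8.0]
obs_flow = [0.0, 2.5, 5.0, 1.0, 0.0, 4.0]
model = lambda p: [p["coeff"] * r for r in rain]
behavioral = glue_behavioral(obs_flow, model, {"coeff": (0.0, 1.0)})
```

The spread of a quantity of interest across the behavioral set is the GLUE uncertainty estimate; a parameterization too coarse to span that set will understate it, which is the paper's central point.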

  12. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    USGS Publications Warehouse

    White, Jeremy; Stengel, Victoria G.; Rendon, Samuel H.; Banta, John

    2017-01-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to Nash–Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. 
However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.

  13. Impact of Apex Model parameterization strategy on estimated benefit of conservation practices

    USDA-ARS?s Scientific Manuscript database

    Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on clay pan soils were developed with the objectives, 1. Evaluate model performance of three parameterization strategies on a validation watershed; and 2. Compare predictions of water quality benefi...

  14. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somerville, R.C.J.; Iacobellis, S.F.

    2005-03-18

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA), and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.

  15. Identifying the Minimum Model Features to Replicate Historic Morphodynamics of a Juvenile Delta

    NASA Astrophysics Data System (ADS)

    Czapiga, M. J.; Parker, G.

    2017-12-01

    We introduce a quasi-2D morphodynamic delta model that improves on past models that require many simplifying assumptions, e.g. a single channel representative of a channel network, fixed channel width, and spatially uniform deposition. Our model is useful for studying long-term progradation rates of any generic micro-tidal delta system with specification of characteristic grain size, input water and sediment discharges, and basin morphology. In particular, we relax the assumption of a single, implicit channel sweeping across the delta topset in favor of an implicit channel network. This network, coupled with recent research on channel-forming Shields number, quantitative assessments of the lateral depositional length of sand (corresponding loosely to levees) and length between bifurcations, creates a spatial web of deposition within the receiving basin. The depositional web includes spatial boundaries for areas infilling with sands carried as bed material load, as well as those filling via passive deposition of washload mud. Our main goal is to identify the minimum features necessary to accurately model the morphodynamics of channel number, width, depth, and overall delta progradation rate in a juvenile delta. We use the Wax Lake Delta in Louisiana as a test site due to its rapid growth in the last 40 years. Field data including topset/island bathymetry, channel bathymetry, topset/island width, channel width, number of channels, and radial topset length are compiled from US Army Corps of Engineers data for 1989, 1998, and 2006. Additional data are extracted from a 2015 DEM. These data are used as benchmarks for the hindcast model runs. The morphology of Wax Lake Delta is also strongly affected by a pre-delta substrate that acts as a lower "bedrock" boundary. Therefore, we also include closures for a bedrock-alluvial transition and an excess shear rate-law incision model to estimate bedrock incision.
The model's framework is generic, but the inclusion of individual sub-models, such as those mentioned above, allows us to answer basic research questions without the parameterization necessary in higher-resolution models. Thus, this type of model offers an alternative to higher-resolution models.

  16. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...

    2015-06-30

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to the number of subcolumns.
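The payoff of a Monte Carlo subcolumn interface can be illustrated with a toy example; the Gaussian subgrid distribution and the threshold "autoconversion" scheme below are illustrative assumptions, not the CAM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_subcolumns(qc_mean, qc_sd, n):
    """Draw Monte Carlo subcolumn samples of subgrid cloud-liquid variability
    (a Gaussian distribution is assumed here purely for illustration)."""
    return np.clip(rng.normal(qc_mean, qc_sd, n), 0.0, None)  # no negative water

def toy_autoconversion(qc, qc_crit=2e-4, rate=1e-3):
    """Threshold-style microphysics: rain forms only where qc exceeds qc_crit."""
    return np.where(qc > qc_crit, rate * (qc - qc_crit), 0.0)

qc = sample_subcolumns(1.5e-4, 1.0e-4, 10000)
grid_mean_rate = toy_autoconversion(qc).mean()                  # from the samples
naive_rate = float(toy_autoconversion(np.array([1.5e-4]))[0])   # from the grid mean
# Because the scheme is nonlinear, sampled subcolumns produce rain even though
# the grid-mean qc sits below the threshold; the grid-mean call produces none.
```

This nonlinearity bias is exactly what feeding sampled subgrid states, rather than grid-box means, into the microphysics scheme removes.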

  17. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; ...

    2015-12-01

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to the number of subcolumns.

  18. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
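The within- and between-parameterization variance decomposition at the heart of BMA can be sketched as follows; the exponential misfit weighting is a simplified stand-in for the paper's NLSE posterior, not its actual formula:

```python
import numpy as np

def bma_combine(means, variances, sse, prior=None):
    """Bayesian Model Averaging over candidate parameterization methods.
    `means`/`variances` are each method's estimate and its within-method
    variance; `sse` is each method's misfit. Total predictive variance is
    the within-parameterization plus between-parameterization variance."""
    means, variances, sse = map(np.asarray, (means, variances, sse))
    prior = np.ones_like(means) / means.size if prior is None else np.asarray(prior)
    w = prior * np.exp(-0.5 * (sse - sse.min()))  # smaller misfit -> larger weight
    w /= w.sum()
    mean = np.sum(w * means)
    within = np.sum(w * variances)
    between = np.sum(w * (means - mean) ** 2)
    return mean, within + between, w
```

Note that even when each method reports zero internal variance, disagreement between methods still contributes uncertainty through the between-parameterization term.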

  19. Parameterization of backbone flexibility in a coarse-grained force field for proteins (COFFDROP) derived from all-atom explicit-solvent molecular dynamics simulations of all possible two-residue peptides

    PubMed Central

    Frembgen-Kesner, Tamara; Andrews, Casey T.; Li, Shuxiang; Ngo, Nguyet Anh; Shubert, Scott A.; Jain, Aakash; Olayiwola, Oluwatoni; Weishaar, Mitch R.; Elcock, Adrian H.

    2015-01-01

    Recently, we reported the parameterization of a set of coarse-grained (CG) nonbonded potential functions, derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acid pairs, and designed for use in (implicit-solvent) Brownian dynamics (BD) simulations of proteins; this force field was named COFFDROP (COarse-grained Force Field for Dynamic Representations Of Proteins). Here, we describe the extension of COFFDROP to include bonded backbone terms derived from fitting to results of explicit-solvent MD simulations of all possible two-residue peptides containing the 20 standard amino acids, with histidine modeled in both its protonated and neutral forms. The iterative Boltzmann inversion (IBI) method was used to optimize new CG potential functions for backbone-related terms by attempting to reproduce angle, dihedral and distance probability distributions generated by the MD simulations. In a simple test of the transferability of the extended force field, the angle, dihedral and distance probability distributions obtained from BD simulations of 56 three-residue peptides were compared to results from corresponding explicit-solvent MD simulations. In a more challenging test of the COFFDROP force field, it was used to simulate eight intrinsically disordered proteins and was shown to quite accurately reproduce the experimental hydrodynamic radii (Rhydro), provided that the favorable nonbonded interactions of the force field were uniformly scaled downwards in magnitude. Overall, the results indicate that the COFFDROP force field is likely to find use in modeling the conformational behavior of intrinsically disordered proteins and multi-domain proteins connected by flexible linkers. PMID:26574429
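The iterative Boltzmann inversion step at the core of the fitting can be sketched generically; reduced units (kT = 1) and the 0.2 damping factor are assumptions for illustration:

```python
import numpy as np

kT = 1.0  # reduced units for illustration

def ibi_update(u, p_cg, p_target, alpha=0.2, eps=1e-12):
    """One iterative Boltzmann inversion step: nudge the tabulated CG
    potential u toward reproducing the target (all-atom) probability
    distribution over a bonded coordinate (angle, dihedral, or distance)."""
    return u + alpha * kT * np.log((p_cg + eps) / (p_target + eps))

# Where the CG model over-populates a bin (p_cg > p_target), the potential
# is raised there, suppressing that region on the next iteration.
```

Iterating this update until the CG-simulated distributions match the all-atom ones is what produces the optimized backbone terms described above.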

  20. Some conservation issues for the dynamical cores of NWP and climate models

    NASA Astrophysics Data System (ADS)

    Thuburn, J.

    2008-03-01

    The rationale for designing atmospheric numerical model dynamical cores with certain conservation properties is reviewed. The conceptual difficulties associated with the multiscale nature of realistic atmospheric flow, and its lack of time-reversibility, are highlighted. A distinction is made between robust invariants, which are conserved or nearly conserved in the adiabatic and frictionless limit, and non-robust invariants, which are not conserved in the limit even though they are conserved by exactly adiabatic frictionless flow. For non-robust invariants, a further distinction is made between processes that directly transfer some quantity from large to small scales, and processes involving a cascade through a continuous range of scales; such cascades may either be explicitly parameterized, or handled implicitly by the dynamical core numerics, accepting the implied non-conservation. An attempt is made to estimate the relative importance of different conservation laws. It is argued that satisfactory model performance requires spurious sources of a conservable quantity to be much smaller than any true physical sources; for several conservable quantities the magnitudes of the physical sources are estimated in order to provide benchmarks against which any spurious sources may be measured.

  1. Methods of testing parameterizations: Vertical ocean mixing

    NASA Technical Reports Server (NTRS)

    Tziperman, Eli

    1992-01-01

    The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean occurs on scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In the oceanic general circulation models typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and practical to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section.
We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.

  2. Assimilation of Leaf and Canopy Spectroscopic Data to Improve the Representation of Vegetation Dynamics in Terrestrial Ecosystem Models

    NASA Astrophysics Data System (ADS)

    Serbin, S. P.; Dietze, M.; Desai, A. R.; LeBauer, D.; Viskari, T.; Kooper, R.; McHenry, K. G.; Townsend, P. A.

    2013-12-01

    The ability to seamlessly integrate information on vegetation structure and function across a continuum of scales, from field to satellite observations, greatly enhances our ability to understand how terrestrial vegetation-atmosphere interactions change over time and in response to disturbances. In particular, terrestrial ecosystem models require detailed information on ecosystem states and canopy properties in order to properly simulate the fluxes of carbon (C), water, and energy from the land to the atmosphere, as well as to address the vulnerability of ecosystems to environmental and other perturbations. Over the last several decades the amount of available data to constrain ecological predictions has increased substantially, resulting in a progressively data-rich era for global change research. Remote sensing data, specifically optical data (leaf and canopy), offer the potential for an important and direct constraint on ecosystem model projections of C and energy fluxes. Here we highlight the utility of coupling information provided through the Ecosystem Spectral Information System (EcoSIS) with complex process models through the Predictive Ecosystem Analyzer (PEcAn; http://www.pecanproject.org/) eco-informatics framework as a means to improve the description of canopy optical properties, vegetation composition, and modeled radiation balance. We also present this as an efficient approach for understanding and correcting implicit assumptions and model structural deficiencies. We first illustrate the challenges and issues in adequately characterizing ecosystem fluxes with the Ecosystem Demography model (ED2, Medvigy et al., 2009) due to improper parameterization of leaf and canopy properties, as well as assumptions describing radiative transfer within the canopy.
ED2 is especially relevant to these efforts because it contains a sophisticated structure for scaling ecological processes across a range of spatial scales: from the tree-level (demography, physiology) to the distribution of stands across a landscape, which allows for the direct use of remotely sensed data at the appropriate spatial scale. A sensitivity analysis is employed within PEcAn to illustrate the influence of ED2 parameterizations on modeled C and energy fluxes for a northern temperate forest ecosystem as an example of the need for more detailed information on leaf and canopy optical properties. We then demonstrate a data assimilation approach to synthesize spectral data contained within EcoSIS in order to update model parameterizations across key vegetation plant functional types, as well as a means to update vegetation state information (i.e. composition, LAI) and improve the description of radiation transfer through model structural updates. A better understanding of the radiation balance of ecosystems will improve regional and global scale C and energy balance projections.

  3. Background-Error Correlation Model Based on the Implicit Solution of a Diffusion Equation

    DTIC Science & Technology

    2010-01-01

    Carrier, Matthew J.; Ngodock, Hans

    This report describes a background-error correlation model based on the implicit solution of a diffusion equation, in contrast to earlier work (2001), which sought to model error correlations based on the explicit solution of a generalized diffusion equation. The implicit solution is...
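The idea of building correlation operators from the implicit solution of a diffusion equation can be illustrated in one dimension; the grid size, diffusivity, and step count below are arbitrary choices for the sketch:

```python
import numpy as np

def implicit_diffusion_correlation(n, kappa, steps, k):
    """Apply (I - kappa * Laplacian)^-1 repeatedly to a unit impulse at index k.
    Each implicit (backward Euler) diffusion step smooths the impulse, and the
    normalized result resembles a correlation function, which is the idea
    behind diffusion-based background-error correlation models."""
    lap = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # 1-D Laplacian
    a = np.eye(n) - kappa * lap                                # implicit operator
    x = np.zeros(n)
    x[k] = 1.0
    for _ in range(steps):
        x = np.linalg.solve(a, x)  # one implicit diffusion step
    return x / x[k]                # normalize the peak to 1
```

Unlike the explicit scheme, the implicit step is unconditionally stable, so large effective correlation length scales can be reached in few solves.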

  4. Parameterization of ALMANAC crop simulation model for non-irrigated dry bean in semi-arid temperate areas in Mexico

    USDA-ARS's Scientific Manuscript database

    Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...

  5. Diabatic forcing and initialization with assimilation of cloud water and rainwater in a forecast model

    NASA Technical Reports Server (NTRS)

    Raymond, William H.; Olson, William S.; Callan, Geary

    1995-01-01

    In this study, diabatic forcing and liquid-water assimilation techniques are tested in a semi-implicit hydrostatic regional forecast model containing explicit representations of grid-scale cloud water and rainwater. Diabatic forcing, in conjunction with diabatic contributions in the initialization, is found to help the forecast retain the diabatic signal found in the liquid water or heating rate data, consequently reducing the spinup time associated with grid-scale precipitation processes. Both observational Special Sensor Microwave/Imager (SSM/I) and model-generated data are used. A physical retrieval method incorporating SSM/I radiance data is utilized to estimate the 3D distribution of precipitating storms. In the retrieval method the relationship between precipitation distributions and upwelling microwave radiances is parameterized, based upon cloud ensemble-radiative model simulations. Regression formulas relating vertically integrated liquid and ice-phase precipitation amounts to latent heating rates are also derived from the cloud ensemble simulations. Thus, retrieved SSM/I precipitation structures can be used in conjunction with the regression formulas to infer the 3D distribution of latent heating rates. These heating rates are used directly in the forecast model to help initiate Tropical Storm Emily (21 September 1987). The 14-h forecast of Emily's development yields atmospheric precipitation water contents that compare favorably with coincident SSM/I estimates.

  6. Aerothermodynamic shape optimization of hypersonic blunt bodies

    NASA Astrophysics Data System (ADS)

    Eyi, Sinan; Yumuşak, Mine

    2015-07-01

    The aim of this study is to develop a reliable and efficient design tool that can be used in hypersonic flows. The flow analysis is based on the axisymmetric Euler/Navier-Stokes and finite-rate chemical reaction equations. The equations are coupled simultaneously and solved implicitly using Newton's method. The Jacobian matrix is evaluated analytically. A gradient-based numerical optimization is used. The adjoint method is utilized for sensitivity calculations. The objective of the design is to generate a hypersonic blunt geometry that produces the minimum drag with low aerodynamic heating. Bezier curves are used for geometry parameterization. The performance of the design optimization method is demonstrated for different hypersonic flow conditions.
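Bezier geometry parameterization reduces a shape to a handful of control points that the optimizer can move; a minimal de Casteljau evaluator (the 2-D control points in the test are arbitrary examples, not a blunt-body geometry):

```python
def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using de Casteljau's
    algorithm; ctrl is a sequence of (x, y) control points. The curve
    interpolates the first and last control points and is smooth in between."""
    pts = [tuple(map(float, p)) for p in ctrl]
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive points
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]
```

In a design loop like the one described above, the gradient of the objective with respect to the control-point coordinates (here via the adjoint method) drives the shape update.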

  7. Parameterization of eddy sensible heat transports in a zonally averaged dynamic model of the atmosphere

    NASA Technical Reports Server (NTRS)

    Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean

    1990-01-01

    A Charney-Branscome based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.

  8. Evaluation of Transport in the Lower Tropical Stratosphere in a Global Chemistry and Transport Model

    NASA Technical Reports Server (NTRS)

    Douglass, Anne R.; Schoeberl, Mark R.; Rood, Richard B.; Pawson, Steven

    2002-01-01

    A general circulation model (GCM) relies on various physical parameterizations and provides a solution to the atmospheric equations of motion. A data assimilation system (DAS) combines information from observations with a GCM forecast and produces analyzed meteorological fields that represent the observed atmospheric state. An off-line chemistry and transport model (CTM) can use winds and temperatures from either a GCM or a DAS. The latter application is in common usage for interpretation of observations from various platforms under the assumption that the DAS transport represents the actual atmospheric transport. Here we compare the transport produced by a DAS with that produced by the particular GCM that is combined with observations to produce the analyzed fields. We focus on transport in the tropics and middle latitudes by comparing the age-of-air inferred from observations of SF6 and CO2 with the age-of-air calculated using GCM fields and DAS fields. We also compare observations of ozone, total reactive nitrogen, and methane with results from the two simulations. These comparisons show that DAS fields produce rapid upward tropical transport and excessive mixing between the tropics and middle latitudes. The unrealistic transport produced by the DAS fields may be due to implicit forcing that is required by the assimilation process when there is bias between the GCM forecast and the observations that are combined to produce the analyzed fields. For example, the GCM does not produce a quasi-biennial oscillation (QBO). The QBO is present in the analyzed fields because it is present in the observations, and systematic implicit forcing is required by the DAS. Any systematic bias between observations and the GCM forecast used to produce the DAS analysis is likely to corrupt the transport produced by the analyzed fields.

  9. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    DOE PAGES

    Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...

    2017-09-14

    Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW.
Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin relative to the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.

  10. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.

    Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW.
Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin relative to the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.

  11. Anisotropic Shear Dispersion Parameterization for Mesoscale Eddy Transport

    NASA Astrophysics Data System (ADS)

    Reckinger, S. J.; Fox-Kemper, B.

    2016-02-01

    The effects of mesoscale eddies are universally treated isotropically in general circulation models. However, the processes that the parameterization approximates, such as shear dispersion, typically have strongly anisotropic characteristics. The Gent-McWilliams/Redi mesoscale eddy parameterization is extended for anisotropy and tested using 1-degree Community Earth System Model (CESM) simulations. The sensitivity of the model to anisotropy includes a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. The parameterization is further extended to include the effects of unresolved shear dispersion, which sets the strength and direction of anisotropy. The shear dispersion parameterization matches drifter observations in the spatial distribution of diffusivity and high-resolution model diagnoses in the distribution of eddy flux orientation.

  12. Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations

    PubMed Central

    Nigh, Gordon

    2015-01-01

    Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and the western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criterion and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use, because the best-fitting methods have the most barriers to application in terms of data and software requirements. PMID:25853472
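The Chapman-Richards formulation named above is a standard sigmoidal growth form; a minimal sketch follows, with illustrative parameter values, not the fitted ones from this study:

```python
import math

def chapman_richards(age, a, b, c):
    """Chapman-Richards growth function: height = a * (1 - exp(-b*age))**c.
    a is the asymptotic height, b controls the growth rate, c the curve shape."""
    return a * (1.0 - math.exp(-b * age)) ** c

# Hypothetical parameters: 35 m asymptote, rate 0.04 per year, shape 1.3.
height_50 = chapman_richards(50.0, a=35.0, b=0.04, c=1.3)
```

Site index parameterizations (indicator variables, mixed effects, GADA) differ in how a, b, and c are allowed to vary between trees or sites, not in this base curve.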

  13. Spectral cumulus parameterization based on cloud-resolving model

    NASA Astrophysics Data System (ADS)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, derived from analysis of cloud properties obtained from the cloud-resolving model simulation, that is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated shallower and more diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of atmospheric circulation: it reduced the positive precipitation bias in the western Pacific region and the positive bias in outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements derive from the modified entrainment-rate parameterization, which suppresses an excessive increase in entrainment and hence an excessive increase in low-level clouds.

  14. On testing two major cumulus parameterization schemes using the CSU Regional Atmospheric Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.

    1993-10-01

    One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection over an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved in the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes to reproduce the growth and life cycle of a convective system can be evaluated. These 2-D tests form the basis for further 3-D realistic simulations with a model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).

  15. FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard C. J. Somerville

    2009-02-27

    Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.

  16. Anisotropic solvent model of the lipid bilayer. 1. Parameterization of long-range electrostatics and first solvation shell effects.

    PubMed

    Lomize, Andrei L; Pogozheva, Irina D; Mosberg, Henry I

    2011-04-25

    A new implicit solvation model was developed for calculating free energies of transfer of molecules from water to any solvent with defined bulk properties. The transfer energy was calculated as a sum of the first solvation shell energy and the long-range electrostatic contribution. The first term was proportional to solvent accessible surface area and solvation parameters (σ(i)) for different atom types. The electrostatic term was computed as a product of group dipole moments and dipolar solvation parameter (η) for neutral molecules or using a modified Born equation for ions. The regression coefficients in linear dependencies of solvation parameters σ(i) and η on dielectric constant, solvatochromic polarizability parameter π*, and hydrogen-bonding donor and acceptor capacities of solvents were optimized using 1269 experimental transfer energies from 19 organic solvents to water. The root-mean-square errors for neutral compounds and ions were 0.82 and 1.61 kcal/mol, respectively. Quantification of energy components demonstrates the dominant roles of hydrophobic effect for nonpolar atoms and of hydrogen-bonding for polar atoms. The estimated first solvation shell energy outweighs the long-range electrostatics for most compounds including ions. The simplicity and computational efficiency of the model allows its application for modeling of macromolecules in anisotropic environments, such as biological membranes.
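The two energy terms described above can be sketched numerically. The atom-type solvation parameters, radii, and the constant below are placeholders for illustration, not the fitted values from the paper:

```python
# Hypothetical atom-type solvation parameters sigma_i in kcal/(mol*A^2); the
# published values are regressions on 1269 experimental transfer energies.
SIGMA = {"C_aliphatic": 0.015, "O_polar": -0.08, "N_polar": -0.07}

def first_shell_energy(atoms):
    """First-solvation-shell term: sum over atoms of SASA * sigma(atom type).
    atoms: iterable of (atom_type, solvent_accessible_area_in_A2)."""
    return sum(area * SIGMA[atype] for atype, area in atoms)

def born_transfer_energy(q, radius, eps_solvent, eps_water=78.4):
    """Born-style estimate of the long-range electrostatic cost of moving an
    ion of charge q (in e) and Born radius (in Angstrom) from water into a
    solvent of dielectric constant eps_solvent. Result in kcal/mol."""
    k = 332.0  # Coulomb constant in kcal*A/(mol*e^2)
    return (k * q**2) / (2.0 * radius) * (1.0 / eps_solvent - 1.0 / eps_water)
```

For a low-dielectric solvent (eps_solvent < eps_water) the Born term is positive, i.e., unfavorable, consistent with membranes excluding ions.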

  17. Extensions and applications of a second-order landsurface parameterization

    NASA Technical Reports Server (NTRS)

    Andreou, S. A.; Eagleson, P. S.

    1983-01-01

    Extensions and applications of a second-order land surface parameterization proposed by Andreou and Eagleson are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested using the model. A sensitivity analysis with respect to the key soil parameters is performed. A case study is also included, comparing the model with an "exact" numerical model and with another simplified parameterization under very dry climatic conditions and for two different soil types.

  18. Development and Testing of Coupled Land-surface, PBL and Shallow/Deep Convective Parameterizations within the MM5

    NASA Technical Reports Server (NTRS)

    Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.

    2000-01-01

    The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study and the latter two have been improved significantly to extend their capabilities.

  19. Evaluation of scale-aware subgrid mesoscale eddy models in a global eddy-rich model

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank

    2017-07-01

    Two parameterizations for horizontal mixing of momentum and tracers by subgrid mesoscale eddies are implemented in a high-resolution global ocean model. These parameterizations follow on the techniques of large eddy simulation (LES). The theory underlying one parameterization (2D Leith due to Leith, 1996) is that of enstrophy cascades in two-dimensional turbulence, while the other (QG Leith) is designed for potential enstrophy cascades in quasi-geostrophic turbulence. Simulations using each of these parameterizations are compared with a control simulation using standard biharmonic horizontal mixing. Simulations using the 2D Leith and QG Leith parameterizations are more realistic than those using biharmonic mixing. In particular, the 2D Leith and QG Leith simulations have more energy in resolved mesoscale eddies, have a spectral slope more consistent with turbulence theory (an inertial enstrophy or potential enstrophy cascade), have bottom drag and vertical viscosity as the primary sinks of energy instead of lateral friction, and have isoneutral parameterized mesoscale tracer transport. The parameterization choice also affects mass transports, but the impact varies regionally in magnitude and sign.
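The 2D Leith closure mentioned above sets the eddy viscosity from the local vorticity-gradient magnitude and the grid scale; a sketch follows (the coefficient value is illustrative), with the QG variant substituting the potential-vorticity gradient for the relative-vorticity gradient:

```python
import numpy as np

def leith_viscosity_2d(vorticity, dx, lam=1.0):
    """2D Leith eddy viscosity: nu = (lam * dx)**3 * |grad(omega)|.
    vorticity: 2-D relative-vorticity array; dx: uniform grid spacing;
    lam: nondimensional Leith coefficient (illustrative default)."""
    d_dy, d_dx = np.gradient(vorticity, dx)       # finite-difference gradients
    return (lam * dx) ** 3 * np.hypot(d_dx, d_dy)  # gradient magnitude
```

Because the viscosity scales with the resolved vorticity gradient, dissipation concentrates where the enstrophy cascade is active and vanishes in quiescent regions, unlike a uniform biharmonic coefficient.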

  20. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    NASA Astrophysics Data System (ADS)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. 
Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.
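The normalized mean bias factor used as the accuracy indicator above has a symmetric form in the model and observation means; a sketch of one common definition (the paper's exact convention may differ):

```python
def nmbf(modeled, observed):
    """Normalized mean bias factor. Positive values indicate overprediction
    by a factor (1 + NMBF); negative values indicate underprediction by a
    factor (1 - NMBF). Symmetric: swapping inputs flips only the sign."""
    m = sum(modeled) / len(modeled)
    o = sum(observed) / len(observed)
    return m / o - 1.0 if m >= o else 1.0 - o / m
```

For example, a model that doubles the observed mean deposition velocity gives NMBF = 1.0, while one that halves it gives NMBF = -1.0, so over- and underprediction are penalized equally.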

  1. Dynamically Consistent Parameterization of Mesoscale Eddies

    NASA Astrophysics Data System (ADS)

    Berloff, P. S.

    2016-12-01

    This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with an explicitly resolved vigorous eddy field and in the non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces the eastward jet extension of the western boundary currents and their adjacent recirculation zones. The parameterization locally approximates transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced eddy forcing exerted on the large-scale flow. We find that the spatial pattern and amplitude of each footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. Thus, the assumed ensemble of plunger solutions can be viewed as a simple model for the cumulative effect of the stochastic eddy forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.

  2. Hydrologic Process Parameterization of Electrical Resistivity Imaging of Solute Plumes Using POD McMC

    NASA Astrophysics Data System (ADS)

    Awatey, M. T.; Irving, J.; Oware, E. K.

    2016-12-01

    Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the iterations progress, starting from the coefficients corresponding to the highest-ranked basis vectors and extending toward those of the least informative ones. We found this gradual increment in the sampling window to be more stable than resampling all the coefficients from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs, whereas the bimodal plume morphology was not theorized.
We show that uncertainty quantification using McMC can proceed in the reduced dimensionality space while accounting for the physics of the underlying process.
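The dimensionality-reduction step described above can be sketched with numpy: build a POD basis from training images by SVD, project a model into it, and perturb coefficients within a sampling window that grows from the leading modes outward. All sizes, step sizes, and the random training fields below are illustrative stand-ins for physics-based plume simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training images (TIs): smooth random 1-D fields; in the study these
# come from Monte Carlo simulations of the plume constrained to a conceptual model.
n_ti, nx = 200, 64
tis = np.cumsum(rng.standard_normal((n_ti, nx)), axis=1)

# POD basis: right singular vectors of the mean-removed training matrix.
mean = tis.mean(axis=0)
_, _, vt = np.linalg.svd(tis - mean, full_matrices=False)
n_keep = 12                # a few leading modes capture most TI variability
basis = vt[:n_keep]        # shape (n_keep, nx)

# Project a starting model into the reduced coefficient space.
coeffs = basis @ (tis[0] - mean)

def propose(coeffs, iteration, growth=5, step=0.1, rng=rng):
    """Resimulate only coefficients inside a window that grows with iteration,
    starting from the highest-ranked (most informative) basis vectors."""
    window = min(len(coeffs), 1 + iteration // growth)
    out = coeffs.copy()
    out[:window] += step * rng.standard_normal(window)
    return out

reconstruction = mean + basis.T @ coeffs   # map coefficients back to a field
```

Each McMC proposal then operates on at most `n_keep` coefficients instead of `nx` grid values, which is what keeps the sampling tractable.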

  3. Evaluation of Aerosol-cloud Interaction in the GISS Model E Using ARM Observations

    NASA Technical Reports Server (NTRS)

    DeBoer, G.; Bauer, S. E.; Toto, T.; Menon, Surabi; Vogelmann, A. M.

    2013-01-01

    Observations from the US Department of Energy's Atmospheric Radiation Measurement (ARM) program are used to evaluate the ability of the NASA GISS ModelE global climate model to reproduce observed interactions between aerosols and clouds. Included in the evaluation are comparisons of basic meteorology and aerosol properties, droplet activation, effective radius parameterizations, and surface-based evaluations of aerosol-cloud interactions (ACI). Differences between the simulated and observed ACI are generally large, but these differences may result partially from the vertical distribution of aerosol in the model, rather than from the representation of physical processes governing the interactions between aerosols and clouds. Compared to the observations, ModelE often features elevated droplet concentrations for a given aerosol concentration, indicating that the activation parameterizations used may be too aggressive. Additionally, parameterizations for effective radius commonly used in models were tested using ARM observations, and no parameterization was clearly superior for the cases reviewed here. This lack of consensus is demonstrated to result in potentially large, statistically significant differences in surface radiative budgets, should one parameterization be chosen over another.

  4. A Testbed for Model Development

    NASA Astrophysics Data System (ADS)

    Berry, J. A.; Van der Tol, C.; Kornfeld, A.

    2014-12-01

    Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to "connect" with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion and experience, this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar-induced chlorophyll fluorescence, OCS exchange, and stomatal behavior at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.

  5. Impacts of subgrid-scale orography parameterization on simulated atmospheric fields over Korea using a high-resolution atmospheric forecast model

    NASA Astrophysics Data System (ADS)

    Lim, Kyo-Sun Sunny; Lim, Jong-Myoung; Shin, Hyeyum Hailey; Hong, Jinkyu; Ji, Young-Yong; Lee, Wanno

    2018-06-01

    A substantial over-prediction bias at low-to-moderate wind speeds in the Weather Research and Forecasting (WRF) model has been reported in previous studies. Low-level wind fields play an important role in the dispersion of air pollutants, including radionuclides, in a high-resolution WRF framework. By implementing two subgrid-scale orography parameterizations (Jimenez and Dudhia in J Appl Meteorol Climatol 51:300-316, 2012; Mass and Ovens in WRF model physics: problems, solutions and a new paradigm for progress. Preprints, 2010 WRF Users' Workshop, NCAR, Boulder, Colo. http://www.mmm.ucar.edu/wrf/users/workshops/WS2010/presentations/session%204/4-1_WRFworkshop2010Final.pdf, 2010), we compared the performance of the parameterizations and sought to enhance the forecast skill of low-level wind fields over the central western part of South Korea. Even though both subgrid-scale orography parameterizations significantly alleviated the positive bias in 10-m wind speed, the parameterization by Jimenez and Dudhia showed better forecast skill in wind speed under our modeling configuration. Implementation of the subgrid-scale orography parameterizations in the model did not affect the forecast skill in other meteorological fields, including 10-m wind direction. Our study also highlighted a discrepancy in the definition of "10-m" wind between model physics parameterizations and observations, which can cause overestimated winds in model simulations. The overestimation was larger in stable conditions than in unstable conditions, indicating that the weak diurnal cycle in the model could be attributed to this representation error.

  6. Data error and highly parameterized groundwater models

    USGS Publications Warehouse

    Hill, M.C.

    2008-01-01

    Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.
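The core pitfall described above, a near-perfect fit coexisting with a badly wrong parameter field, is easy to reproduce synthetically; a minimal sketch (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 10, 25              # more parameters than observations
X = rng.standard_normal((n_obs, n_par))
p_true = np.zeros(n_par)
p_true[:3] = [2.0, -1.0, 0.5]      # only three parameters really matter
y = X @ p_true + 0.01 * rng.standard_normal(n_obs)

# Minimum-norm least squares reproduces the observations almost exactly...
p_hat = np.linalg.pinv(X) @ y
fit_residual = np.linalg.norm(X @ p_hat - y)

# ...yet the estimated parameters can differ substantially from the truth:
# a close model fit does not guarantee a credible system representation.
parameter_error = np.linalg.norm(p_hat - p_true)
```

This is why the abstract stresses checking the estimated parameter distributions and independent hydrogeologic information, not model fit alone.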

  7. Evaluating and Improving Wind Forecasts over South China: The Role of Orographic Parameterization in the GRAPES Model

    NASA Astrophysics Data System (ADS)

    Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia

    2018-06-01

    Unresolved small-scale orographic (SSO) drags are parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drags are represented by adding a sink term in the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as the feedbacks to the momentum tendencies on the first model level in planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias has been much alleviated by adopting the SSOP scheme, in addition to reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimation over the complex regions in the southwest of Guangdong.

  8. Domain-averaged snow depth over complex terrain from flat field measurements

    NASA Astrophysics Data System (ADS)

    Helbig, Nora; van Herwijnen, Alec

    2017-04-01

    Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Ground-measured snow depth at individual stations offers an opportunity for data assimilation to improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these ground measurement networks provide a wealth of information, various studies have questioned the representativity of such flat field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easy-to-derive topographic parameters. To derive the two parameterizations we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km, using highly resolved snow depth maps at the peak of winter from two distinct climatic regions, in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor. We evaluated both parameterizations by comparing domain-averaged snow depth derived from nearby flat field measurements with the domain-averaged highly resolved snow depth. This revealed an overall improved performance for the parameterization combining a power-law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest that this parameterization could be used to assimilate flat field snow depth into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.
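The two forms described above can be sketched as follows; the functional shapes and coefficients here are hypothetical stand-ins, as the study fitted its own:

```python
def snow_depth_lapse(hs_flat, z_domain, z_station, lapse=8e-4):
    """Parameterization 1: linear elevation lapse rate (m of snow depth per m
    of elevation difference between the domain mean and the flat-field station).
    The lapse value is illustrative, not the fitted one."""
    return hs_flat + lapse * (z_domain - z_station)

def snow_depth_svf(hs_flat, z_domain, z_station, sky_view, b=1.2, c=0.6):
    """Parameterization 2 (hypothetical form): power-law elevation trend scaled
    by the subgrid sky view factor (sky_view in (0, 1]); lower sky view, i.e.
    more terrain shading, reduces the domain-averaged depth."""
    elevation_trend = (z_domain / z_station) ** b
    return hs_flat * elevation_trend * (c + (1.0 - c) * sky_view)
```

Either function maps a single flat-field measurement plus coarse-cell topographic statistics to a domain-averaged snow depth for assimilation.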

  9. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow fine-tuning for efficiency. The challenge lies in coupling this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages, so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  10. Ice-nucleating particle emissions from photochemically aged diesel and biodiesel exhaust

    NASA Astrophysics Data System (ADS)

    Schill, G. P.; Jathar, S. H.; Kodros, J. K.; Levin, E. J. T.; Galang, A. M.; Friedman, B.; Link, M. F.; Farmer, D. K.; Pierce, J. R.; Kreidenweis, S. M.; DeMott, P. J.

    2016-05-01

    Immersion-mode ice-nucleating particle (INP) concentrations from an off-road diesel engine were measured using a continuous-flow diffusion chamber at -30°C. Both petrodiesel and biodiesel were utilized, and the exhaust was aged up to 1.5 photochemically equivalent days using an oxidative flow reactor. We found that aged and unaged diesel exhaust of both fuels is not likely to contribute to atmospheric INP concentrations at mixed-phase cloud conditions. To explore this further, a new limit-of-detection parameterization for ice nucleation on diesel exhaust was developed. Using a global-chemical transport model, potential black carbon INP (INPBC) concentrations were determined using a current literature INPBC parameterization and the limit-of-detection parameterization. Model outputs indicate that the current literature parameterization likely overemphasizes INPBC concentrations, especially in the Northern Hemisphere. These results highlight the need to integrate new INPBC parameterizations into global climate models as generalized INPBC parameterizations are not valid for diesel exhaust.

  11. Radiative flux and forcing parameterization error in aerosol-free clear skies

    DOE PAGES

    Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...

    2015-07-03

    This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W m-2, while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.

  12. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    NASA Astrophysics Data System (ADS)

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

    A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process, resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar and net longwave fluxes at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with the contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  13. Sensitivity of Glacier Mass Balance Estimates to the Selection of WRF Cloud Microphysics Parameterization in the Indus River Watershed

    NASA Astrophysics Data System (ADS)

    Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.

    2017-12-01

    Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a-1. 
The different schemes also impact the radiative budgets over the glacierized areas. Our results show that glacier MB estimates can differ by up to 45% depending on the chosen cloud microphysics scheme. These findings highlight the need to better account for uncertainties in meteorological inputs into glacier energy and mass balance models.

  14. Final Technical Report for "High-resolution global modeling of the effects of subgrid-scale clouds and turbulence on precipitating cloud systems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent

    2016-11-25

    The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). A MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization (“CLUBB”) within a MMF model. This involved interfacing CLUBB’s clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations. The chief benefit of the project is to provide a MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.

  15. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; ...

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
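
    The core idea of the assumed-PDF method can be illustrated with a toy sketch: sample subgrid values from an assumed PDF and average a nonlinear process rate over the samples, rather than evaluating the rate once at the grid mean. The Gaussian PDF and the quadratic `autoconversion_rate` below are illustrative stand-ins, not the actual equations of the parameterization described in the paper.

```python
import random


def autoconversion_rate(qc, threshold=5e-4, k=1e-3):
    # Toy nonlinear process rate: zero below a cloud-water threshold,
    # quadratic above it (illustrative, not an actual microphysics scheme).
    return k * max(qc - threshold, 0.0) ** 2


def grid_mean_rate(qc_mean, qc_std, n_samples=20000, seed=0):
    """Assumed-PDF step: draw subgrid cloud-water samples from an assumed
    PDF (here Gaussian, truncated at zero) and average the process rate
    over the samples, Monte Carlo style."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        qc = max(rng.gauss(qc_mean, qc_std), 0.0)
        total += autoconversion_rate(qc)
    return total / n_samples
```

    Evaluating the toy rate at a grid mean sitting exactly at the threshold gives zero, while the subgrid-sampled average is positive: ignoring subgrid variability biases nonlinear process rates.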

  16. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Höft, J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  17. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Hoft, Jan

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  18. A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone

    NASA Astrophysics Data System (ADS)

    Filipot, J.

    2010-12-01

    A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. The transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with wave height, wavelength, and water depth. The parameterization is further tested in the WAVEWATCH III code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep- or shallow-water parameterizations.

  19. Increasing women's aspirations and achievement in science: The effect of role models on implicit cognitions

    NASA Astrophysics Data System (ADS)

    Phelan, Julie E.

    This research investigated the role of implicit science beliefs in the gender gap in science aspirations and achievement, with the goal of testing identification with a female role model as a potential intervention strategy for increasing women's representation in science careers. At Time 1, women's implicit science stereotyping (i.e., associating men more than women with science) was linked to more negative (implicit and explicit) attitudes towards science and less identification with science. For men, stereotypes were either non-significantly or positively related to science attitudes and identification. Time 2 examined the influence of implicit and explicit science cognitions on students' science aspirations and achievement, and found that implicit stereotyping, attitudes, and identification were all unique predictors of science aspirations, but not achievement. Of more importance, Time 2 examined the influence of science role models, and found that identification with a role model of either gender reduced women's implicit science stereotyping and increased their positive attitudes toward science. Implications for decreasing the gender gap in advanced science achievement are discussed.

  20. The Grell-Freitas Convection Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    NASA Technical Reports Server (NTRS)

    Freitas, Saulo R.; Grell, Georg; Molod, Andrea; Thompson, Matthew A.

    2017-01-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale- and aerosol-awareness functionalities. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition between shallow, mid-level, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain in realism in the model simulation of the diurnal cycle of convection over land. Here, we briefly introduce the recent developments, implementation, and preliminary results of this parameterization in the NASA GEOS modeling system.

  1. Thermodynamics, morphology, and kinetics of early-stage self-assembly of π-conjugated oligopeptides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Synthetic oligopeptides containing π-conjugated cores self-assemble into novel materials with attractive electronic and photophysical properties. All-atom, explicit-solvent molecular dynamics simulations of Asp-Phe-Ala-Gly-OPV3-Gly-Ala-Phe-Asp peptides were used to parameterize an implicit solvent model to simulate early-stage self-assembly. Under low-pH conditions, peptides assemble into β-sheet-like stacks with strongly favorable monomer association free energies of ΔF ≈ -25kBT. Aggregation at high pH produces disordered aggregates destabilized by Coulombic repulsion between negatively charged Asp termini (ΔF ≈ -5kBT). In simulations of hundreds of monomers over 70 ns we observe the spontaneous formation of up to undecameric aggregates under low-pH conditions. Modeling assembly as a continuous-time Markov process, we infer transition rates between different aggregate sizes and microsecond relaxation times for early-stage assembly. Our data suggest a hierarchical model of assembly in which peptides coalesce into small clusters over tens of nanoseconds, followed by structural ripening and diffusion-limited aggregation on longer time scales. This work provides new molecular-level understanding of early-stage assembly, and a means to study the impact of peptide sequence and aromatic core chemistry upon the thermodynamics, assembly kinetics, and morphology of the supramolecular aggregates.
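
    The continuous-time Markov treatment of aggregation can be sketched generically: given a trajectory of aggregate sizes sampled at a fixed interval, a transition rate is estimated as the number of observed jumps divided by the total dwell time in the originating state. This is a standard estimator under that assumption, not the authors' actual inference code.

```python
from collections import defaultdict


def estimate_ctmc_rates(trajectory, dt):
    """Estimate continuous-time Markov transition rates between aggregate
    sizes from a trajectory sampled every dt: rate(i -> j) ~ N_ij / T_i,
    where N_ij counts observed i -> j transitions and T_i is the total
    time spent in size i (the final sample's dwell time is ignored)."""
    counts = defaultdict(int)
    dwell = defaultdict(float)
    for a, b in zip(trajectory, trajectory[1:]):
        dwell[a] += dt
        if b != a:
            counts[(a, b)] += 1
    return {ij: n / dwell[ij[0]] for ij, n in counts.items()}
```

    For a toy size trajectory [1, 1, 2, 2, 2, 1] sampled at unit intervals, the estimator returns rate(1→2) = 0.5 and rate(2→1) = 1/3 per unit time.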

  2. Theoretical modeling and experimental validation of a torsional piezoelectric vibration energy harvesting system

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Zhou, Wanlu; Kaluvan, Suresh; Zhang, Haifeng; Zuo, Lei

    2018-04-01

    Vibration energy harvesting has been extensively studied in recent years as a continuous power source for sensor networks and low-power electronics. Torsional vibration is widespread in mechanical engineering; however, it has not yet been well exploited for energy harvesting. This paper presents a theoretical model and an experimental validation of a torsional vibration energy harvesting system composed of a shaft and a shear-mode piezoelectric transducer. The position of the piezoelectric transducer on the surface of the shaft is parameterized by two variables that are optimized to obtain the maximum power output. Depending on its attachment angle, the piezoelectric transducer can work in the d15 (pure shear) mode, the coupled d31 and d33 mode, or the coupled d33, d31, and d15 mode. Approximate expressions for voltage and power are derived from the theoretical model and give predictions in good agreement with analytical solutions. Physical interpretations of the implicit relationship between the power output and the position parameters of the piezoelectric transducer are given based on the derived approximate expressions. The optimal position and angle of the piezoelectric transducer are determined, in which case the transducer works in the coupled mode of d15, d31, and d33.

  3. Thermodynamics, morphology, and kinetics of early-stage self-assembly of π-conjugated oligopeptides

    DOE PAGES

    None, None

    2016-03-22

    Synthetic oligopeptides containing π-conjugated cores self-assemble into novel materials with attractive electronic and photophysical properties. All-atom, explicit-solvent molecular dynamics simulations of Asp-Phe-Ala-Gly-OPV3-Gly-Ala-Phe-Asp peptides were used to parameterize an implicit solvent model to simulate early-stage self-assembly. Under low-pH conditions, peptides assemble into β-sheet-like stacks with strongly favorable monomer association free energies of ΔF ≈ -25kBT. Aggregation at high pH produces disordered aggregates destabilized by Coulombic repulsion between negatively charged Asp termini (ΔF ≈ -5kBT). In simulations of hundreds of monomers over 70 ns we observe the spontaneous formation of up to undecameric aggregates under low-pH conditions. Modeling assembly as a continuous-time Markov process, we infer transition rates between different aggregate sizes and microsecond relaxation times for early-stage assembly. Our data suggest a hierarchical model of assembly in which peptides coalesce into small clusters over tens of nanoseconds, followed by structural ripening and diffusion-limited aggregation on longer time scales. This work provides new molecular-level understanding of early-stage assembly, and a means to study the impact of peptide sequence and aromatic core chemistry upon the thermodynamics, assembly kinetics, and morphology of the supramolecular aggregates.

  4. A Coarse Grained Model for Methylcellulose: Spontaneous Ring Formation at Elevated Temperature

    NASA Astrophysics Data System (ADS)

    Huang, Wenjun; Larson, Ronald

    Methylcellulose (MC) is widely used as a food additive and in pharmaceutical applications, where its thermoreversible gelation behavior plays an important role. To date the gelation mechanism is not well understood, and it therefore attracts great research interest. In this study, we adopted coarse-grained (CG) molecular dynamics simulations to model MC chains, including homopolymers and random copolymers that model commercial METHOCEL A, in an implicit water environment, with each MC monomer modeled as a single bead. The simulations are carried out using the LAMMPS program. We parameterized our CG model using radial distribution functions from atomistic simulations of short MC oligomers, extrapolating the results to long chains. We used the dissociation free energy to validate our CG model against the atomistic model. The CG model captured the effects of monomer substitution type and temperature seen in the atomistic simulations. We applied this CG model to simulate single chains up to 1000 monomers long and obtained persistence lengths close to those determined from experiment. We observed the chain collapse transition for a random copolymer 600 monomers long at 50 °C. The chain collapsed into a stable ring structure with an outer diameter of around 14 nm, which appears to be a precursor to the fibril structure observed in methylcellulose gels by Lodge et al. in recent studies. Our CG model can be extended to other MC derivatives for studying the interaction between these polymers and small molecules, such as hydrophobic drugs.

  5. [Effects of an implicit internal working model on attachment in information processing assessed using Go/No-Go Association Task].

    PubMed

    Fujii, Tsutomu; Uebuchi, Hisashi; Yamada, Kotono; Saito, Masahiro; Ito, Eriko; Tonegawa, Akiko; Uebuchi, Marie

    2015-06-01

    The purposes of the present study were (a) to use both a relational-anxiety Go/No-Go Association Task (GNAT) and an avoidance-of-intimacy GNAT in order to assess an implicit Internal Working Model (IWM) of attachment; and (b) to verify the effects of the measured implicit relational anxiety and implicit avoidance of intimacy on information processing. The implicit IWM measured by the GNAT differed from the explicit IWM measured by questionnaires in its effects on information processing. In particular, in subliminal priming tasks involving others, implicit avoidance of intimacy predicted accelerated response times to negative stimulus words about attachment. Moreover, after subliminal priming with stimulus words about the self, implicit relational anxiety predicted delayed response times to negative stimulus words about attachment.

  6. Design and application of implicit solvent models in biomolecular simulations.

    PubMed

    Kleinjung, Jens; Fraternali, Franca

    2014-04-01

    We review implicit solvent models and their parametrisation by introducing the concepts and recent developments of the most popular models, with a focus on parametrisation via force matching. An overview of recent applications of the solvation energy term in protein dynamics, modelling, design and prediction is given to illustrate the usability and versatility of implicit solvation in reproducing the physical behaviour of biomolecular systems. Limitations of implicit models are discussed through the example of more challenging systems such as nucleic acids and membranes. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. An Algebraic Implicitization and Specialization of Minimum KL-Divergence Models

    NASA Astrophysics Data System (ADS)

    Dukkipati, Ambedkar; Manathara, Joel George

    In this paper we study the representation of KL-divergence minimization, in the cases where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. In particular, we also study the case of the Kullback-Csiszár iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner bases method to compute an implicit representation of minimum KL-divergence models.
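
    The reduction to polynomial equations can be made concrete with a small numerical sketch. For a prior q on {0, ..., n-1} and a mean constraint E[X] = mu, the minimum-KL model (I-projection) is the exponential tilt p_i ∝ q_i t^i, and the moment constraint becomes a polynomial equation in t. The example below solves it by simple bisection rather than Gröbner bases, and the numbers are illustrative, not taken from the paper.

```python
def min_kl_distribution(q, mu, lo=1e-9, hi=100.0, tol=1e-12):
    """I-projection of prior q on {0, ..., n-1} onto {E[X] = mu},
    for 0 < mu < n-1. With an integer sufficient statistic,
    p_i = q_i * t**i / Z(t), and the moment constraint
    g(t) = sum_i q_i t**i (i - mu) = 0 is a polynomial equation
    in t > 0, solved here by bisection."""
    def g(t):
        return sum(qi * t**i * (i - mu) for i, qi in enumerate(q))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    t = 0.5 * (lo + hi)
    w = [qi * t**i for i, qi in enumerate(q)]
    z = sum(w)
    return [wi / z for wi in w]
```

    Projecting the uniform prior on {0, 1, 2} onto E[X] = 1.5 amounts to solving the polynomial t² − t − 3 = 0 for its positive root, after which the tilted weights are normalized.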

  8. Improving and Understanding Climate Models: Scale-Aware Parameterization of Cloud Water Inhomogeneity and Sensitivity of MJO Simulation to Physical Parameters in a Convection Scheme

    NASA Astrophysics Data System (ADS)

    Xie, Xin

    Microphysics and convection parameterizations are two key components in a climate model for simulating realistic climatology and variability of cloud distribution and the cycles of energy and water. When a model has varying grid size, or simulations have to be run at different resolutions, a scale-aware parameterization is desirable so that model parameters do not have to be tuned to a particular grid size. The subgrid variability of cloud hydrometeors is known to impact microphysics processes in climate models and is found to depend strongly on spatial scale. A scale-aware liquid cloud subgrid variability parameterization is derived and implemented in the Community Earth System Model (CESM) in this study using long-term radar-based ground measurements from the Atmospheric Radiation Measurement (ARM) program. When used in the default CESM1 with the finite-volume dynamical core, where a constant liquid inhomogeneity parameter was assumed, the newly developed parameterization reduces the cloud inhomogeneity in high latitudes and increases it in low latitudes. This is due both to the smaller grid size in high latitudes and larger grid size in low latitudes in the longitude-latitude grid of CESM, and to the variation of the stability of the atmosphere. Single-column model and general circulation model (GCM) sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it in low latitudes. The current CESM1 simulation suffers from both a Pacific double-ITCZ precipitation bias and a weak Madden-Julian oscillation (MJO). Previous studies show that convective parameterization with multiple plumes may have the capability to alleviate such biases in a more uniform and physical way. A multiple-plume mass flux convective parameterization is used in the Community Atmospheric Model (CAM) to investigate the sensitivity of MJO simulations. We show that the MJO simulation is sensitive to the entrainment rate specification, and we found that shallow plumes can generate and sustain MJO propagation in the model.

  9. How to assess the impact of a physical parameterization in simulations of moist convection?

    NASA Astrophysics Data System (ADS)

    Grabowski, Wojciech

    2017-04-01

    A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or large-eddy simulation model) consists of a fluid flow solver combined with the required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation will describe a novel modeling methodology, piggybacking, that allows studying the impact of a physical parameterization on cloud dynamics with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
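
    A toy sketch of the piggybacking idea, with invented one-line "schemes": the driver scheme's tendency advances the (here trivial) state, while the passenger scheme is evaluated on the very same states but never fed back, so the two schemes are compared on an identical flow realization. The linear tendencies are purely illustrative.

```python
def piggyback(state0, driver, passenger, steps):
    """Run `steps` updates in which only the driver scheme's tendency
    advances the state; the passenger scheme is diagnosed on the same
    states but has no feedback on the evolution."""
    state = state0
    driver_tend, passenger_tend = [], []
    for _ in range(steps):
        d = driver(state)
        p = passenger(state)       # diagnostic only, never applied
        driver_tend.append(d)
        passenger_tend.append(p)
        state = state + d          # only the driver feeds back
    return driver_tend, passenger_tend


# Invented linear 'microphysics' tendencies for illustration:
drv, psg = piggyback(1.0, lambda s: -0.1 * s, lambda s: -0.2 * s, steps=4)
```

    Because both schemes see identical states, their tendencies differ only through the schemes themselves, which is exactly the separation the traditional parallel-simulation approach cannot guarantee.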

  10. Electron Impact Ionization: A New Parameterization for 100 eV to 1 MeV Electrons

    NASA Technical Reports Server (NTRS)

    Fang, Xiaohua; Randall, Cora E.; Lummerzheim, Dirk; Solomon, Stanley C.; Mills, Michael J.; Marsh, Daniel; Jackman, Charles H.; Wang, Wenbin; Lu, Gang

    2008-01-01

    Low, medium and high energy electrons can penetrate to the thermosphere (90-400 km; 55-240 miles) and mesosphere (50-90 km; 30-55 miles). These precipitating electrons ionize that region of the atmosphere, creating positively charged atoms and molecules and knocking off other negatively charged electrons. The precipitating electrons also create nitrogen-containing compounds along with other constituents. Since the electron precipitation amounts change within minutes, it is necessary to have a rapid method of computing the ionization and production of nitrogen-containing compounds for inclusion in computationally-demanding global models. A new methodology has been developed, which has parameterized a more detailed model computation of the ionizing impact of precipitating electrons over the very large range of 100 eV up to 1,000,000 eV. This new parameterization method is more accurate than a previous parameterization scheme, when compared with the more detailed model computation. Global models at the National Center for Atmospheric Research will use this new parameterization method in the near future.

  11. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    DOE PAGES

    Donahue, Aaron S.; Caldwell, Peter M.

    2018-02-02

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role on the model solution.
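
    The noncommutativity of sequential splitting is easy to demonstrate with a toy state and two idealized "parameterizations"; the processes below are invented for illustration and bear no relation to E3SM's actual schemes.

```python
def apply_processes(state, processes):
    """Sequentially split coupling: each parameterization updates the
    state in turn, so every process feels the ones called before it."""
    for proc in processes:
        state = proc(state)
    return state


# Two invented processes acting on a scalar moisture-like variable q:
condense = lambda q: max(q - 0.2, 0.0)  # removes a fixed excess
mix = lambda q: 0.5 * q + 0.1           # relaxes q toward 0.2

qa = apply_processes(1.0, [condense, mix])  # condense first
qb = apply_processes(1.0, [mix, condense])  # mix first
```

    The two orderings disagree (qa = 0.5 versus qb = 0.4): a one-line analogue of the ordering sensitivity the paper quantifies with its suite of 24 reordered simulations.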

  14. Sensitivity of Pacific Cold Tongue and Double-ITCZ Bias to Convective Parameterization

    NASA Astrophysics Data System (ADS)

    Woelfle, M.; Bretherton, C. S.; Pritchard, M. S.; Yu, S.

    2016-12-01

    Many global climate models struggle to accurately simulate annual mean precipitation and sea surface temperature (SST) fields in the tropical Pacific basin. Precipitation biases are dominated by the double intertropical convergence zone (ITCZ) bias, in which models exhibit precipitation maxima straddling the equator while only a single Northern Hemispheric maximum exists in observations. The major SST bias is the enhancement of the equatorial cold tongue. A series of coupled model simulations are used to investigate the sensitivity of the bias development to convective parameterization. Model components are initialized independently prior to coupling to allow analysis of the transient response of the system directly following coupling. These experiments show precipitation and SST patterns to be highly sensitive to convective parameterization. Simulations in which the deep convective parameterization is disabled, forcing all convection to be resolved by the shallow convection parameterization, showed a degradation in both the cold tongue and double-ITCZ biases as precipitation becomes focused into off-equatorial regions of local SST maxima. Simulations using superparameterization in place of traditional cloud parameterizations showed a reduced cold tongue bias at the expense of additional precipitation biases. The equatorial SST responses to changes in convective parameterization are driven by changes in near-equatorial zonal wind stress. The sensitivity of convection to SST is important in determining the precipitation and wind stress fields. However, differences in convective momentum transport also play a role. While no significant improvement is seen in these simulations of the double-ITCZ, the system's sensitivity to these changes reaffirms that improved convective parameterizations may provide an avenue for improving simulations of tropical Pacific precipitation and SST.

  15. Implicit cognitive aggression among young male prisoners: Association with dispositional and current aggression.

    PubMed

    Ireland, Jane L; Adams, Christine

    2015-01-01

    The current study explores associations between implicit and explicit aggression in young adult male prisoners, seeking to apply the Reflection-Impulsive Model and indicate parity with elements of the General Aggression Model and social cognition. Implicit cognitive aggressive processing is not an area that has been examined among prisoners. Two hundred and sixty-two prisoners completed an implicit cognitive aggression measure (Puzzle Test) and explicit aggression measures, covering current behaviour (DIPC-R) and aggression disposition (AQ). It was predicted that implicit cognitive aggression would predict both dispositional aggression and current engagement in aggressive behaviour. It was also predicted that more impulsive implicit cognitive processing would associate with aggressive behaviour whereas cognitively effortful implicit cognitive processing would not. Implicit aggressive cognitive processing was associated with increased dispositional aggression but not current reports of aggressive behaviour. Impulsive implicit cognitive processing of an aggressive nature predicted increased dispositional aggression whereas more cognitively effortful implicit cognitive aggression did not. The article concludes by outlining the importance of accounting for implicit cognitive processing among prisoners and the need to separate such processing into facets (i.e. impulsive vs. cognitively effortful). Implications for future research and practice in this novel area of study are indicated. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Comments on “A Unified Representation of Deep Moist Convection in Numerical Modeling of the Atmosphere. Part I”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man

    2015-06-01

    Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ ≪ 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 - σ in order to unify the parameterization for the full range of model resolutions so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 - σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
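
    The identity behind this argument can be checked in a few lines. For a top-hat subgrid distribution with convective area fraction sigma, the grid mean is psi_bar = sigma*psi_c + (1 - sigma)*psi_e, so the in-cloud excess psi_c - psi_bar = (1 - sigma)*(psi_c - psi_e) already carries the (1 - sigma) factor; with the environment at rest, the conventional mass-flux form reproduces the exact top-hat flux. The numerical values below are invented for illustration.

```python
# Top-hat subgrid decomposition: convective fraction sigma with values
# (w_c, psi_c), environment with (w_e, psi_e). The exact subgrid flux is
#   w'psi' = sigma*(1 - sigma)*(w_c - w_e)*(psi_c - psi_e)

def eddy_flux(sigma, w_c, w_e, psi_c, psi_e):
    """Exact top-hat subgrid flux; vanishes as sigma -> 0 and sigma -> 1."""
    return sigma * (1.0 - sigma) * (w_c - w_e) * (psi_c - psi_e)

def mass_flux_form(sigma, w_c, psi_c, psi_e):
    """Conventional form M*(psi_c - psi_bar) with M = sigma*w_c (environment
    at rest). Since psi_bar = sigma*psi_c + (1-sigma)*psi_e, the excess
    psi_c - psi_bar = (1 - sigma)*(psi_c - psi_e) carries (1 - sigma)."""
    psi_bar = sigma * psi_c + (1.0 - sigma) * psi_e
    return sigma * w_c * (psi_c - psi_bar)

w_c, psi_c, psi_e = 5.0, 8.0, 2.0   # invented illustrative values
for sigma in (0.05, 0.5, 0.95):
    print(sigma, eddy_flux(sigma, w_c, 0.0, psi_c, psi_e),
          mass_flux_form(sigma, w_c, psi_c, psi_e))   # identical columns
```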

  17. I can do that: the impact of implicit theories on leadership role model effectiveness.

    PubMed

    Hoyt, Crystal L; Burnette, Jeni L; Innella, Audrey N

    2012-02-01

    This research investigates the role of implicit theories in influencing the effectiveness of successful role models in the leadership domain. Across two studies, the authors test the prediction that incremental theorists ("leaders are made") compared to entity theorists ("leaders are born") will respond more positively to being presented with a role model before undertaking a leadership task. In Study 1, measuring people's naturally occurring implicit theories of leadership, the authors showed that after being primed with a role model, incremental theorists reported greater leadership confidence and less anxious-depressed affect than entity theorists following the leadership task. In Study 2, the authors demonstrated the causal role of implicit theories by manipulating participants' theory of leadership ability. They replicated the findings from Study 1 and demonstrated that identification with the role model mediated the relationship between implicit theories and both confidence and affect. In addition, incremental theorists outperformed entity theorists on the leadership task.

  18. Coordinated Parameterization Development and Large-Eddy Simulation for Marine and Arctic Cloud-Topped Boundary Layers

    NASA Technical Reports Server (NTRS)

    Bretherton, Christopher S.

    2002-01-01

    The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two and three dimensional eddy resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type, and thickness as functions of large scale conditions that are predicted by global climate models. The principal achievements of the project were as follows: (1) Development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) Comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall the forecast model did predict most of the major precipitation events and synoptic variability observed over the year of observation of the SHEBA ice camp.

  19. Final Technical Report for "Reducing tropical precipitation biases in CESM"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent

    In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we have created a climate model that contains a unified cloud parameterization (“CLUBB”) and a unified microphysics parameterization (“MG2”). In this model, all cloud types --- including marine stratocumulus, shallow cumulus, and deep cumulus --- are represented with a single equation set. This model improves the representation of convection in the Tropics. The model has been compared with ARM observations. The chief benefit of the project is to provide a climate model that is based on a more theoretically rigorous formulation.

  20. Investigating the predictive validity of implicit and explicit measures of motivation in problem-solving behavioural tasks.

    PubMed

    Keatley, David; Clarke, David D; Hagger, Martin S

    2013-09-01

    Research into the effects of individuals' autonomous motivation on behaviour has traditionally adopted explicit measures and self-reported outcome assessment. Recently, there has been increased interest in the effects of implicit motivational processes underlying behaviour from a self-determination theory (SDT) perspective. The aim of the present research was to provide support for the predictive validity of an implicit measure of autonomous motivation on behavioural persistence on two objectively measurable tasks. SDT and a dual-systems model were adopted as frameworks to explain the unique effects offered by explicit and implicit autonomous motivational constructs on behavioural persistence. In both studies, implicit autonomous motivation significantly predicted unique variance in time spent on each task. Several explicit measures of autonomous motivation also significantly predicted persistence. Results provide support for the proposed model and the inclusion of implicit measures in research on motivated behaviour. In addition, implicit measures of autonomous motivation appear to be better suited to explaining variance in behaviours that are more spontaneous or unplanned. Future implications for research examining implicit motivation from dual-systems models and SDT approaches are outlined. © 2012 The British Psychological Society.

  1. Statistical Hypothesis Testing in Intraspecific Phylogeography: NCPA versus ABC

    PubMed Central

    Templeton, Alan R.

    2009-01-01

    Nested clade phylogeographic analysis (NCPA) and approximate Bayesian computation (ABC) have been used to test phylogeographic hypotheses. Multilocus NCPA tests null hypotheses, whereas ABC discriminates among a finite set of alternatives. The interpretive criteria of NCPA are explicit and allow complex models to be built from simple components. The interpretive criteria of ABC are ad hoc and require the specification of a complete phylogeographic model. The conclusions from ABC are often influenced by implicit assumptions arising from the many parameters needed to specify a complex model. These complex models confound many assumptions so that biological interpretations are difficult. Sampling error is accounted for in NCPA, but ABC ignores important sources of sampling error, creating pseudo-statistical power. NCPA generates the full sampling distribution of its statistics, but ABC only yields local probabilities, which in turn make it impossible to distinguish between a good-fitting model, a non-informative model, and an over-determined model. Both NCPA and ABC use approximations, but the convergences of the approximations used in NCPA are well defined whereas those in ABC are not. NCPA can analyze a large number of locations, but ABC cannot. Finally, the dimensionality of the tested hypothesis is known in NCPA, but not for ABC. As a consequence, the “probabilities” generated by ABC are not true probabilities and are statistically non-interpretable. Accordingly, ABC should not be used for hypothesis testing, but simulation approaches are valuable when used in conjunction with NCPA or other methods that do not rely on highly parameterized models. PMID:19192182
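
    For readers unfamiliar with ABC, its simplest rejection form is: draw parameters from a prior, simulate data, and keep draws whose summary statistic falls within a tolerance of the observed one. The sketch below is purely illustrative (a normal-mean toy problem, not a phylogeographic model), and it also shows why the results depend on the chosen summary and tolerance.

```python
import random

random.seed(1)

# "Observed" data: 100 draws from Normal(mu_true, 1); pretend mu is unknown.
mu_true = 3.0
obs = [random.gauss(mu_true, 1.0) for _ in range(100)]
s_obs = sum(obs) / len(obs)                 # summary statistic: sample mean

# Rejection ABC: prior mu ~ Uniform(-10, 10); accept if |s_sim - s_obs| < eps
accepted = []
eps = 0.1
while len(accepted) < 200:
    mu = random.uniform(-10.0, 10.0)
    sim = [random.gauss(mu, 1.0) for _ in range(100)]
    if abs(sum(sim) / len(sim) - s_obs) < eps:
        accepted.append(mu)

posterior_mean = sum(accepted) / len(accepted)
print(posterior_mean)   # close to mu_true
```

The accepted draws approximate the posterior only up to the tolerance eps and the information captured by the summary statistic, which is one root of the interpretability concerns raised above.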

  2. Straddling Interdisciplinary Seams: Working Safely in the Field, Living Dangerously With a Model

    NASA Astrophysics Data System (ADS)

    Light, B.; Roberts, A.

    2016-12-01

    Many excellent proposals for observational work have included language detailing how the proposers will appropriately archive their data and publish their results in peer-reviewed literature so that they may be readily available to the modeling community for parameterization development. While such division of labor may be both practical and inevitable, the assimilation of observational results and the development of observationally-based parameterizations of physical processes require care and feeding. Key questions include: (1) Is an existing parameterization accurate, consistent, and general? If not, it may be ripe for additional physics. (2) Do there exist functional working relationships between human modeler and human observationalist? If not, one or more may need to be initiated and cultivated. (3) If empirical observation and model development are a chicken/egg problem, how, given our lack of prescience and foreknowledge, can we better design observational science plans to meet the eventual demands of model parameterization? (4) Will the addition of new physics "break" the model? If so, then the addition may be imperative. In the context of these questions, we will make retrospective and forward-looking assessments of a now-decade-old numerical parameterization to treat the partitioning of solar energy at the Earth's surface where sea ice is present. While this so called "Delta-Eddington Albedo Parameterization" is currently employed in the widely-used Los Alamos Sea Ice Model (CICE) and appears to be standing the tests of accuracy, consistency, and generality, we will highlight some ideas for its ongoing development and improvement.
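
    The Delta-Eddington treatment mentioned above rests on a standard similarity scaling of the optical properties that truncates the strong forward-scattering peak. The sketch below is the textbook Joseph-Wiscombe-Weinman (1976) scaling with truncation fraction f = g**2, not the full CICE implementation.

```python
def delta_eddington_scale(tau, omega, g):
    """Joseph-Wiscombe-Weinman (1976) delta-Eddington similarity scaling.

    Truncates the forward-scattering peak (fraction f = g**2) and rescales
    optical depth tau, single-scattering albedo omega, and asymmetry g
    for use in a two-stream (Eddington) solver. Assumes 0 <= g < 1.
    """
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s

# Isotropic scattering (g = 0) is left unchanged:
print(delta_eddington_scale(1.0, 0.9, 0.0))
# A strongly forward-peaked phase function is heavily rescaled:
print(delta_eddington_scale(1.0, 0.9, 0.85))
```

The scaled medium is optically thinner with a much less peaked phase function, which is what makes the two-stream closure accurate for snow and sea ice.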

  3. Learning non-local dependencies.

    PubMed

    Kuhn, Gustav; Dienes, Zoltán

    2008-01-01

    This paper addresses the nature of the temporary storage buffer used in implicit or statistical learning. Kuhn and Dienes [Kuhn, G., and Dienes, Z. (2005). Implicit learning of nonlocal musical rules: implicitly learning more than chunks. Journal of Experimental Psychology-Learning Memory and Cognition, 31(6) 1417-1432] showed that people could implicitly learn a musical rule that was solely based on non-local dependencies. These results seriously challenge models of implicit learning that assume knowledge merely takes the form of linking adjacent elements (chunking). We compare two models that use a buffer to allow learning of long distance dependencies, the Simple Recurrent Network (SRN) and the memory buffer model. We argue that these models - as models of the mind - should not be evaluated simply by fitting them to human data but by determining the characteristic behaviour of each model. Simulations showed for the first time that the SRN could rapidly learn non-local dependencies. However, the characteristic performance of the memory buffer model rather than the SRN more closely matched how people came to like different musical structures. We conclude that the SRN is more powerful than previous demonstrations have shown, but its flexible learned buffer does not explain people's implicit learning (at least, the affective learning of musical structures) as well as fixed memory buffer models do.

  4. Surveying implicit solvent models for estimating small molecule absolute hydration free energies

    PubMed Central

    Knight, Jennifer L.

    2011-01-01

    Implicit solvent models are powerful tools in accounting for the aqueous environment at a fraction of the computational expense of explicit solvent representations. Here, we compare the ability of common implicit solvent models (TC, OBC, OBC2, GBMV, GBMV2, GBSW, GBSW/MS, GBSW/MS2 and FACTS) to reproduce experimental absolute hydration free energies for a series of 499 small neutral molecules that are modeled using AMBER/GAFF parameters and AM1-BCC charges. Given optimized surface tension coefficients for scaling the surface area term in the nonpolar contribution, most implicit solvent models demonstrate reasonable agreement with extensive explicit solvent simulations (average difference 1.0-1.7 kcal/mol and R2=0.81-0.91) and with experimental hydration free energies (average unsigned errors=1.1-1.4 kcal/mol and R2=0.66-0.81). Chemical classes of compounds are identified that need further optimization of their ligand force field parameters and others that require improvement in the physical parameters of the implicit solvent models themselves. More sophisticated nonpolar models are also likely necessary to more effectively represent the underlying physics of solvation and take the quality of hydration free energies estimated from implicit solvent models to the next level. PMID:21735452
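
    The "optimized surface tension coefficients" above refer to the common nonpolar decomposition dG_np ≈ gamma * SASA + b: given a model's polar (GB) energies and reference hydration free energies, gamma and b follow from ordinary least squares. The sketch below uses invented placeholder numbers, not data from the study.

```python
import numpy as np

# Nonpolar solvation is commonly modeled as dG_np = gamma * SASA + b.
# Fit gamma (surface tension coefficient) and b so the implicit-solvent
# total dG_pol + gamma*SASA + b best matches reference values.
# All numbers below are invented placeholders for illustration.

sasa   = np.array([150.0, 220.0, 310.0, 400.0, 480.0])  # A^2
dG_pol = np.array([-3.1, -5.0, -2.2, -6.4, -1.8])       # kcal/mol (GB term)
dG_ref = np.array([-2.0, -3.6, 0.5, -3.4, 2.6])         # kcal/mol (reference)

# Least squares on dG_ref - dG_pol = gamma * SASA + b
A = np.column_stack([sasa, np.ones_like(sasa)])
(gamma, b), *_ = np.linalg.lstsq(A, dG_ref - dG_pol, rcond=None)
print(gamma, b)   # kcal/mol/A^2 and kcal/mol
```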

  5. Evaluation of DNA Force Fields in Implicit Solvation

    PubMed Central

    Gaillard, Thomas; Case, David A.

    2011-01-01

    DNA structural deformations and dynamics are crucial to its interactions in the cell. Theoretical simulations are essential tools to explore the structure, dynamics, and thermodynamics of biomolecules in a systematic way. Molecular mechanics force fields for DNA have benefited from constant improvements during the last decades. Several studies have evaluated and compared available force fields when the solvent is modeled by explicit molecules. On the other hand, few systematic studies have assessed the quality of duplex DNA models when implicit solvation is employed. The appeal of implicit solvent modeling lies in the substantial gain in simulation performance and conformational sampling speed. In this study, the respective influences of the force field and the implicit solvation model choice on DNA simulation quality are evaluated. To this end, extensive implicit solvent duplex DNA simulations are performed, attempting to reach both conformational and sequence diversity convergence. Structural parameters are extracted from simulations and statistically compared to available experimental and explicit solvation simulation data. Our results quantitatively expose the respective strengths and weaknesses of the different DNA force fields and implicit solvation models studied. This work can lead to the suggestion of improvements to current DNA theoretical models. PMID:22043178

  6. Modeling stimulus variation in three common implicit attitude tasks.

    PubMed

    Wolsiefer, Katie; Westfall, Jacob; Judd, Charles M

    2017-08-01

    We explored the consequences of ignoring the sampling variation due to stimuli in the domain of implicit attitudes. A large literature in psycholinguistics has examined the statistical treatment of random stimulus materials, but the recommendations from this literature have not been applied to the social psychological literature on implicit attitudes. This is partly because of inherent complications in applying crossed random-effect models to some of the most common implicit attitude tasks, and partly because no work to date has demonstrated that random stimulus variation is in fact consequential in implicit attitude measurement. We addressed this problem by laying out statistically appropriate and practically feasible crossed random-effect models for three of the most commonly used implicit attitude measures-the Implicit Association Test, affect misattribution procedure, and evaluative priming task-and then applying these models to large datasets (average N = 3,206) that assess participants' implicit attitudes toward race, politics, and self-esteem. We showed that the test statistics from the traditional analyses are substantially (about 60 %) inflated relative to the more-appropriate analyses that incorporate stimulus variation. Because all three tasks used the same stimulus words and faces, we could also meaningfully compare the relative contributions of stimulus variation across the tasks. In an appendix, we give syntax in R, SAS, and SPSS for fitting the recommended crossed random-effects models to data from all three tasks, as well as instructions on how to structure the data file.
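
    The inflation reported above can be reproduced in miniature with a null simulation: stimuli carry random effects shared by every participant, and a by-participant analysis that ignores this sharing rejects far more often than its nominal 5% level. All values below are invented for illustration; the study's actual models are crossed random-effects models fit to real task data.

```python
import random, statistics

random.seed(7)

def experiment(n_subj=20, n_stim=10, sd_stim=0.5, sd_noise=1.0):
    """One null experiment: the true effect is 0, but stimuli have random
    effects shared by all participants. Returns the by-participant t
    statistic, which wrongly treats participant means as independent."""
    stim = [random.gauss(0.0, sd_stim) for _ in range(n_stim)]
    means = []
    for _ in range(n_subj):
        trials = [s + random.gauss(0.0, sd_noise) for s in stim]
        means.append(sum(trials) / n_stim)
    m = statistics.mean(means)
    se = statistics.stdev(means) / n_subj ** 0.5
    return m / se

# Under the null, |t| > 2.09 (df = 19) should occur about 5% of the time.
false_pos = sum(abs(experiment()) > 2.09 for _ in range(500)) / 500
print(false_pos)   # well above the nominal 0.05
```

The shared stimulus sample shifts every participant's mean together, so the by-participant standard error understates the true uncertainty; a crossed random-effects model removes this inflation by modeling the stimulus variance explicitly.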

  7. Assessing the performance of wave breaking parameterizations in shallow waters in spectral wave models

    NASA Astrophysics Data System (ADS)

    Lin, Shangfei; Sheng, Jinyu

    2017-12-01

    Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave-breaking process in ocean surface wave models. The performance of six commonly-used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are their representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations have reasonable performance in parameterizing depth-induced wave breaking in shallow waters, but with their own limitations and drawbacks. The widely-used parameterization suggested by Battjes and Janssen (1978, BJ78) has the drawback of underpredicting the SWHs in locally-generated wave conditions and overpredicting them in remotely-generated wave conditions over flat bottoms. The drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 has relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization with a dependence of the breaker index on the normalized water depth in deep waters similar to SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance with an average scatter index of ∼8.2% in comparison with the three best performing existing parameterizations with the average scatter index between 9.2% and 13.6%.
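
    In BJ78 the fraction of breaking waves Q_b is defined implicitly by (1 - Q_b)/ln(Q_b) = -(H_rms/H_max)**2, with H_max = gamma*h set by the breaker index gamma. The sketch below solves this transcendental equation by bisection; it is a simplified illustration of the scheme's core quantity, not production wave-model code.

```python
import math

def breaking_fraction(h_rms, h_max, tol=1e-12):
    """Fraction of breaking waves Q_b from Battjes & Janssen (1978),
    defined implicitly by (1 - Q_b)/ln(Q_b) = -(H_rms/H_max)**2.
    Solved by bisection on f(q) = 1 - q + b2*ln(q), whose physical
    root lies in (0, 1) whenever H_rms < H_max.
    """
    b2 = (h_rms / h_max) ** 2
    if b2 <= 0.0:
        return 0.0
    if b2 >= 1.0:          # saturated surf zone: all waves breaking
        return 1.0
    f = lambda q: 1.0 - q + b2 * math.log(q)
    lo, hi = math.exp(-1.0 / b2), 1.0 - 1e-12   # f(lo) <= 0 < f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for ratio in (0.3, 0.5, 0.7, 0.9):
    print(ratio, breaking_fraction(ratio, 1.0))   # Q_b grows with H_rms/H_max
```

The lower bracket exp(-1/b2) is chosen because f there evaluates to exactly -exp(-1/b2) < 0, so the bisection bracket is valid for any 0 < b2 < 1.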

  8. Parameterization guidelines and considerations for hydrologic models

    Treesearch

    R.W. Malone; G. Yagow; C. Baffaut; M.W. Gitau; Z. Qi; Devendra Amatya; P.B. Parajuli; J.V. Bonta; T.R. Green

    2015-01-01

     Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) are important and difficult tasks. An exponential...

  9. Sensitivity of single column model simulations of Arctic springtime clouds to different cloud cover and mixed phase cloud parameterizations

    NASA Astrophysics Data System (ADS)

    Zhang, Junhua; Lohmann, Ulrike

    2003-08-01

    The single column model of the Canadian Centre for Climate Modeling and Analysis (CCCma) climate model is used to simulate Arctic spring cloud properties observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. The model is driven by European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data constrained by rawinsonde observations. Five cloud parameterizations, including three statistical and two explicit schemes, are compared and the sensitivity to mixed phase cloud parameterizations is studied. Using the original mixed phase cloud parameterization of the model, the statistical cloud schemes produce more cloud cover, cloud water, and precipitation than the explicit schemes and in general agree better with observations. The mixed phase cloud parameterization from ECMWF decreases the initial saturation specific humidity threshold of cloud formation. This improves the simulated cloud cover in the explicit schemes and reduces the difference between the different cloud schemes. On the other hand, because the ECMWF mixed phase cloud scheme does not consider the Bergeron-Findeisen process, fewer ice crystals are formed. This leads to a higher liquid water path and less precipitation than observed.

  10. Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Chou, Ming-Dah

    1994-01-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While computationally efficient, the scheme computes clear-sky fluxes and cooling rates very accurately from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed, the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.

  11. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    USDA-ARS?s Scientific Manuscript database

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...
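
    Each ADI half-step reduces the 2-D implicit problem to a set of independent tridiagonal systems. Serially these are solved with the Thomas algorithm, whose sequential recurrence is exactly what Parallel Cyclic Reduction replaces to expose GPU parallelism. A sketch of the serial building block (not the CCHE2D code):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].

    a[0] and c[-1] are unused. O(n) forward elimination plus back
    substitution; this sequential recurrence is what PCR parallelizes.
    """
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D Poisson-like line system: -x[i-1] + 2*x[i] - x[i+1] = 1
n = 5
a, b, c = [-1.0] * n, [2.0] * n, [-1.0] * n
print(thomas(a, b, c, [1.0] * n))
```

In an ADI sweep one such solve is performed per grid row (then per grid column); since the rows are independent, a GPU can run them concurrently and use PCR within each line.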

  12. Testing the Use of Implicit Solvent in the Molecular Dynamics Modelling of DNA Flexibility

    NASA Astrophysics Data System (ADS)

    Mitchell, J.; Harris, S.

    DNA flexibility controls packaging, looping and in some cases sequence specific protein binding. Molecular dynamics simulations carried out with a computationally efficient implicit solvent model are potentially a powerful tool for studying larger DNA molecules than can be currently simulated when water and counterions are represented explicitly. In this work we compare DNA flexibility at the base pair step level modelled using an implicit solvent model to that previously determined from explicit solvent simulations and database analysis. Although much of the sequence dependent behaviour is preserved in implicit solvent, the DNA is considerably more flexible when the approximate model is used. In addition we test the ability of the implicit solvent to model stress induced DNA disruptions by simulating a series of DNA minicircle topoisomers which vary in size and superhelical density. When compared with previously run explicit solvent simulations, we find that while the levels of DNA denaturation are similar using both computational methodologies, the specific structural form of the disruptions is different.

  13. Parameterized reduced-order models using hyper-dual numbers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
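
    Hyper-dual numbers carry two infinitesimal parts with e1**2 = e2**2 = 0 but e1*e2 != 0, so a single function evaluation yields exact first and second derivatives with no truncation or subtractive-cancellation error. A minimal sketch of the arithmetic (polynomials only; not the report's implementation):

```python
class HyperDual:
    """x = a + b*e1 + c*e2 + d*e1*e2 with e1**2 = e2**2 = 0, e1*e2 != 0.

    Evaluating f(HyperDual(x, 1, 1, 0)) gives f(x) in .a, f'(x) in .b
    (and .c), and f''(x) in .d -- exact to machine precision.
    """
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(
            self.a * o.a,
            self.a * o.b + self.b * o.a,
            self.a * o.c + self.c * o.a,
            self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a,
        )

    __rmul__ = __mul__

# f(x) = x**3 + 2*x at x = 2: f = 12, f' = 14, f'' = 12
x = HyperDual(2.0, 1.0, 1.0, 0.0)
y = x * x * x + 2 * x
print(y.a, y.b, y.d)
```

Unlike finite differences, no step size is chosen, so the derivatives used to parameterize the ROM are exact rather than approximated.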

  14. Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model

    EPA Science Inventory

    This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.

  15. Modelling the effects of climate change on the distribution and production of marine fishes: accounting for trophic interactions in a dynamic bioclimate envelope model.

    PubMed

    Fernandes, Jose A; Cheung, William W L; Jennings, Simon; Butenschön, Momme; de Mora, Lee; Frölicher, Thomas L; Barange, Manuel; Grant, Alastair

    2013-08-01

    Climate change has already altered the distribution of marine fishes. Future predictions of fish distributions and catches based on bioclimate envelope models are available, but to date they have not considered interspecific interactions. We address this by combining the species-based Dynamic Bioclimate Envelope Model (DBEM) with a size-based trophic model. The new approach provides spatially and temporally resolved predictions of changes in species' size, abundance and catch potential that account for the effects of ecological interactions. Predicted latitudinal shifts are, on average, reduced by 20% when species interactions are incorporated, compared to DBEM predictions, with pelagic species showing the greatest reductions. Goodness-of-fit of biomass data from fish stock assessments in the North Atlantic between 1991 and 2003 is improved slightly by including species interactions. The differences between predictions from the two models may be relatively modest because, at the North Atlantic basin scale, (i) predators and competitors may respond to climate change together; (ii) existing parameterization of the DBEM might implicitly incorporate trophic interactions; and/or (iii) trophic interactions might not be the main driver of responses to climate. Future analyses using ecologically explicit models and data will improve understanding of the effects of inter-specific interactions on responses to climate change, and better inform managers about plausible ecological and fishery consequences of a changing environment. © 2013 John Wiley & Sons Ltd.

  16. Modeling canopy-induced turbulence in the Earth system: a unified parameterization of turbulent exchange within plant canopies and the roughness sublayer (CLM-ml v0)

    NASA Astrophysics Data System (ADS)

    Bonan, Gordon B.; Patton, Edward G.; Harman, Ian N.; Oleson, Keith W.; Finnigan, John J.; Lu, Yaqiong; Burakowski, Elizabeth A.

    2018-04-01

    Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0) to test if this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5) at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different compared with profiles obtained using Monin-Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but the improvements are related less systematically to the roughness sublayer parameterization in these canopies. The multilayer canopy with the roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.

  17. An Initial Investigation of the Effects of Turbulence Models on the Convergence of the RK/Implicit Scheme

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Rossow, C.-C.

    2008-01-01

    A three-stage Runge-Kutta (RK) scheme with multigrid and an implicit preconditioner has been shown to be an effective solver for the fluid dynamic equations. This scheme has been applied to both the compressible and essentially incompressible Reynolds-averaged Navier-Stokes (RANS) equations using the algebraic turbulence model of Baldwin and Lomax (BL). In this paper we focus on the convergence of the RK/implicit scheme when the effects of turbulence are represented by either the Spalart-Allmaras model or the Wilcox k-ω model, which are frequently used models in practical fluid dynamic applications. Convergence behavior of the scheme with these turbulence models and the BL model are directly compared. For this initial investigation we solve the flow equations and the partial differential equations of the turbulence models in an indirectly coupled manner. With this approach we examine the convergence behavior of each system. Both point and line symmetric Gauss-Seidel are considered for approximating the inverse of the implicit operator of the flow solver. To solve the turbulence equations we use a diagonally dominant alternating direction implicit (DDADI) scheme. Computational results are presented for three airfoil flow cases and comparisons are made with experimental data. We demonstrate that the two-dimensional RANS equations and transport-type equations for turbulence modeling can be efficiently solved with an indirectly coupled algorithm that uses the RK/implicit scheme for the flow equations.
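Point symmetric Gauss-Seidel, mentioned above as an approximation to the inverse of the implicit operator, amounts to one forward and one backward sweep per application. A minimal sketch on a generic linear system Ax = b (illustrative only; the flow solver applies such sweeps to its own implicit operator rather than a stored matrix):

```python
import numpy as np

def sym_gauss_seidel(A, b, x, sweeps=1):
    """Approximate the solution of A x = b by symmetric Gauss-Seidel:
    each sweep is one forward pass followed by one backward pass,
    updating unknowns in place with the latest available values.
    """
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):            # forward pass
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        for i in reversed(range(n)):  # backward pass
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Diagonally dominant test system: sweeps converge rapidly.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = sym_gauss_seidel(A, b, np.zeros(3), sweeps=20)
```

In the RK/implicit context only one or two such sweeps would be used per stage, since the goal is a cheap approximate inverse for preconditioning, not an exact solve.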

  18. A Physical Parameterization of Snow Albedo for Use in Climate Models.

    NASA Astrophysics Data System (ADS)

    Marshall, Susan Elaine

    The albedo of a natural snowcover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically-based parameterization which is accurate (within ±3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal cycle, fixed hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine.
However, this parameterization offers a greater predictability for climate change experiments outside the range of current snow conditions because it is physically-based and not tuned to current empirical results.
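The interface such a subroutine exposes (grain size in, two-band and broadband albedos out) can be sketched as follows. This is a toy with invented coefficients, not the SNOALB formulas; it only mimics the qualitative behavior described above, where visible albedo stays high while near-infrared albedo drops as grains grow.

```python
import math

def snow_albedo(grain_radius_um, vis_weight=0.5):
    """Toy two-band snow albedo (NOT the SNOALB parameterization).

    grain_radius_um : snow grain radius in microns, the one extra
                      parameter a SNOALB-style scheme requires.
    vis_weight      : fraction of solar irradiance in the visible band
                      (depends on where the 0.7 or 0.9 micron split falls).
    All coefficients below are invented for illustration only.
    """
    r = math.sqrt(grain_radius_um)
    alb_vis = max(0.0, 0.98 - 0.003 * r)   # visible: weak grain-size dependence
    alb_nir = max(0.0, 0.75 - 0.020 * r)   # near-IR: strong grain-size dependence
    alb_solar = vis_weight * alb_vis + (1.0 - vis_weight) * alb_nir
    return alb_vis, alb_nir, alb_solar

fresh = snow_albedo(50.0)    # small grains: new snow, high albedo
aged = snow_albedo(1000.0)   # large grains: old snow, low albedo
```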

  19. The Separate Physics and Dynamics Experiment (SPADE) framework for determining resolution awareness: A case study of microphysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, William I.; Ma, Po-Lun; Xiao, Heng

    2013-08-29

    The ability to use multi-resolution dynamical cores for weather and climate modeling is pushing the atmospheric community towards developing scale-aware or, more specifically, resolution-aware parameterizations that will function properly across a range of grid spacings. Determining the resolution dependence of specific model parameterizations is difficult due to strong resolution dependencies in many pieces of the model. This study presents the Separate Physics and Dynamics Experiment (SPADE) framework that can be used to isolate the resolution-dependent behavior of specific parameterizations without conflating resolution dependencies from other portions of the model. To demonstrate the SPADE framework, the resolution dependence of the Morrison microphysics from the Weather Research and Forecasting model and the Morrison-Gettelman microphysics from the Community Atmosphere Model are compared for grid spacings spanning the cloud modeling gray zone. It is shown that the Morrison scheme has stronger resolution dependence than Morrison-Gettelman, and that the ability of Morrison-Gettelman to use partial cloud fractions is not the primary reason for this difference. This study also discusses how to frame the issue of resolution dependence, the meaning of which has often been assumed, but not clearly expressed, in the atmospheric modeling community. It is proposed that parameterization resolution dependence can be expressed in terms of "resolution awareness of the first type," RA1, which implies that the parameterization behavior converges towards observations with increasing resolution, or as "resolution awareness of the second type," RA2, which requires that the parameterization reproduces the same behavior across a range of grid spacings when compared at a given coarser resolution. RA2 behavior is considered the ideal, but brings with it serious implications due to limitations of parameterizations to accurately estimate reality with coarse grid spacing.
The type of resolution awareness developers should target depends upon the particular modeler's application.

  20. Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2015-12-01

    Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this, turbulence and convection in the atmosphere have to be parameterized - i.e., equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently a variety of models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem, however, is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, which is not only in itself a multi-scale parameterization but is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.
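The ED and MF pieces combine into a single vertical flux. The standard EDMF decomposition, w'φ' = -K ∂φ/∂z + M(φ_u - φ̄), can be sketched numerically with made-up profiles (illustrative only; an operational scheme would diagnose K, M and the updraft properties from the model state):

```python
import numpy as np

def edmf_flux(z, phi_mean, phi_updraft, K, M):
    """EDMF decomposition of a vertical turbulent flux:
        flux = -K * d(phi_mean)/dz          (eddy diffusivity: small eddies)
               + M * (phi_updraft - phi_mean)  (mass flux: organized plumes)
    All arrays are defined on the heights z.
    """
    dphi_dz = np.gradient(phi_mean, z)
    return -K * dphi_dz + M * (phi_updraft - phi_mean)

# Invented boundary-layer profiles for illustration.
z = np.linspace(0.0, 1000.0, 11)        # height (m)
theta = 300.0 + 0.003 * z               # mean potential temperature (K)
theta_up = theta + 0.5                  # updraft excess of 0.5 K
K = np.full_like(z, 50.0)               # eddy diffusivity (m^2/s)
M = np.full_like(z, 0.05)               # kinematic mass flux (m/s)
flux = edmf_flux(z, theta, theta_up, K, M)
```

With these constants the ED term carries -0.15 K m/s (down-gradient) and the MF term +0.025 K m/s (counter-gradient plume transport), illustrating how the MF part can transport heat against the local gradient, which pure eddy diffusion cannot.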

  1. A Metacognitive Approach to "Implicit" and "Explicit" Evaluations: Comment on Gawronski and Bodenhausen (2006)

    ERIC Educational Resources Information Center

    Petty, Richard E.; Brinol, Pablo

    2006-01-01

    Comments on the article by B. Gawronski and G. V. Bodenhausen (see record 2006-10465-003). A metacognitive model (MCM) is presented to describe how automatic (implicit) and deliberative (explicit) measures of attitudes respond to change attempts. The model assumes that contemporary implicit measures tap quick evaluative associations, whereas…

  2. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    NASA Astrophysics Data System (ADS)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-01

    Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (IP, IS and ρ″), velocity-impedance-I (α′, β′ and IP′), and velocity-impedance-II (α″, β″ and IS′). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations.
Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.

  3. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    DOE PAGES

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-06

    We report that seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (IP, IS and ρ″), velocity-impedance-I (α′, β′ and IP′), and velocity-impedance-II (α″, β″ and IS′). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations.
Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. Finally, the heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.

  4. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    We report that seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (IP, IS and ρ″), velocity-impedance-I (α′, β′ and IP′), and velocity-impedance-II (α″, β″ and IS′). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations.
Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. Finally, the heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.

  5. Parameterization of plume chemistry into large-scale atmospheric models: Application to aircraft NOx emissions

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.

    2009-10-01

    A method is presented to parameterize the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources into large-scale models. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on the atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important and operate via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact on atmospheric ozone of aircraft NOx emissions are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere with the largest effects in the north Atlantic flight corridor when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization to transport emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during the plume dissipation. 
Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.
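The fuel-tracer bookkeeping described above can be sketched in a few lines: emitted NOx enters a plume tracer, decays with the characteristic plume lifetime, and is released to the grid scale with a conversion factor accounting for nonlinear plume chemistry, so total mass is conserved by construction. This is an illustrative sketch, not the actual model code; the lifetime, conversion factor and time step are invented.

```python
def step_plume(plume, grid_nox, emission, dt, tau, conversion=0.9):
    """Advance plume-form NOx one explicit time step (dt << tau assumed).

    plume      : NOx still held at plume concentrations
    grid_nox   : NOx already diluted to the large scale
    emission   : source rate added to the plume this step
    tau        : characteristic plume lifetime during which
                 nonlinear plume chemistry operates
    conversion : fraction of released NOx surviving as NOx
                 (the remainder is assumed converted in-plume)

    Returns (plume, grid_nox, lost); plume + grid_nox + lost is conserved.
    """
    plume += emission * dt
    released = plume * (dt / tau)       # exponential decay, explicit form
    plume -= released
    grid_nox += released * conversion
    lost = released * (1.0 - conversion)
    return plume, grid_nox, lost

plume, grid, lost_total = 0.0, 0.0, 0.0
for _ in range(1000):                   # 1000 steps of 60 s, tau = 1 h
    plume, grid, lost = step_plume(plume, grid, emission=1.0e-3,
                                   dt=60.0, tau=3600.0)
    lost_total += lost
```

The same bookkeeping carries over to other point sources such as ship exhausts, provided tau and the conversion rates are re-derived for that plume type.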

  6. Multiscale Modeling of Grain-Boundary Fracture: Cohesive Zone Models Parameterized From Atomistic Simulations

    NASA Technical Reports Server (NTRS)

    Glaessgen, Edward H.; Saether, Erik; Phillips, Dawn R.; Yamakov, Vesselin

    2006-01-01

    A multiscale modeling strategy is developed to study grain boundary fracture in polycrystalline aluminum. Atomistic simulation is used to model fundamental nanoscale deformation and fracture mechanisms and to develop a constitutive relationship for separation along a grain boundary interface. The nanoscale constitutive relationship is then parameterized within a cohesive zone model to represent variations in grain boundary properties. These variations arise from the presence of vacancies, interstitials, and other defects in addition to deviations in grain boundary angle from the baseline configuration considered in the molecular dynamics simulation. The parameterized cohesive zone models are then used to model grain boundaries within finite element analyses of aluminum polycrystals.
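A cohesive zone model of this kind is usually expressed as a traction-separation law. A bilinear form is a common choice and makes the parameterization concrete: the atomistic simulations supply the cohesive strength and critical separations for each grain boundary configuration. The sketch below is illustrative; the parameter values are placeholders, not numbers extracted from the study.

```python
def bilinear_traction(delta, delta_0=1.0e-9, delta_f=1.0e-8, t_max=2.0e9):
    """Bilinear traction-separation law for a cohesive zone.

    Traction (Pa) rises linearly to t_max at separation delta_0 (m),
    then softens linearly to zero at the critical separation delta_f,
    beyond which the interface is fully fractured. In a multiscale
    workflow, delta_0, delta_f and t_max would be fit per grain
    boundary from atomistic results; the defaults here are invented.
    """
    if delta <= 0.0:
        return 0.0
    if delta < delta_0:
        return t_max * delta / delta_0          # elastic loading branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta_0)  # softening
    return 0.0                                   # complete separation

peak = bilinear_traction(1.0e-9)   # at delta_0: the cohesive strength
half = bilinear_traction(5.5e-9)   # midway down the softening branch
```

The area under the curve is the work of separation, which is the quantity a finite element cohesive element dissipates as a crack advances along the grain boundary.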

  7. Improvement of the GEOS-5 AGCM upon Updating the Air-Sea Roughness Parameterization

    NASA Technical Reports Server (NTRS)

    Garfinkel, C. I.; Molod, A.; Oman, L. D.; Song, I.-S.

    2011-01-01

    The impact of an air-sea roughness parameterization over the ocean that more closely matches recent observations of air-sea exchange is examined in the NASA Goddard Earth Observing System, version 5 (GEOS-5) atmospheric general circulation model. Surface wind biases in the GEOS-5 AGCM are decreased by up to 1.2m/s. The new parameterization also has implications aloft as improvements extend into the stratosphere. Many other GCMs (both for operational weather forecasting and climate) use a similar class of parameterization for their air-sea roughness scheme. We therefore expect that results from GEOS-5 are relevant to other models as well.

  8. Observational and Modeling Studies of Clouds and the Hydrological Cycle

    NASA Technical Reports Server (NTRS)

    Somerville, Richard C. J.

    1997-01-01

    Our approach involved validating parameterizations directly against measurements from field programs, and using this validation to tune existing parameterizations and to guide the development of new ones. We have used a single-column model (SCM) to make the link between observations and parameterizations of clouds, including explicit cloud microphysics (e.g., prognostic cloud liquid water used to determine cloud radiative properties). Surface and satellite radiation measurements were used to provide an initial evaluation of the performance of the different parameterizations. The results of this evaluation were then used to develop improved cloud and cloud-radiation schemes, which were tested in GCM experiments.

  9. Anisotropic shear dispersion parameterization for ocean eddy transport

    NASA Astrophysics Data System (ADS)

    Reckinger, Scott; Fox-Kemper, Baylor

    2015-11-01

    The effects of mesoscale eddies are universally treated isotropically in global ocean general circulation models. However, observations and simulations demonstrate that the mesoscale processes that the parameterization is intended to represent, such as shear dispersion, are typified by strong anisotropy. We extend the Gent-McWilliams/Redi mesoscale eddy parameterization to include anisotropy and test the effects of varying levels of anisotropy in 1-degree Community Earth System Model (CESM) simulations. Anisotropy has many effects on the simulated climate, including a reduction of temperature and salinity biases, a deepening of the southern ocean mixed-layer depth, impacts on the meridional overturning circulation and ocean energy and tracer uptake, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. A process-based parameterization to approximate the effects of unresolved shear dispersion is also used to set the strength and direction of anisotropy. The shear dispersion parameterization is similar to drifter observations in spatial distribution of diffusivity and high-resolution model diagnosis in the distribution of eddy flux orientation.
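Anisotropic eddy transport replaces the scalar diffusivity with a tensor whose major axis can be aligned with the diagnosed shear. A 2-D sketch of building such a tensor (illustrative only, not the CESM implementation; the angle and the major/minor diffusivities are the quantities a shear-dispersion parameterization would supply):

```python
import math

def anisotropic_diffusivity(k_major, k_minor, angle):
    """2x2 horizontal eddy diffusivity tensor K = R diag(k_major, k_minor) R^T,
    with the major axis rotated by `angle` (radians) from the x-axis.

    k_major / k_minor sets the degree of anisotropy; a ratio of 1
    recovers the usual isotropic Gent-McWilliams/Redi coefficient.
    """
    c, s = math.cos(angle), math.sin(angle)
    kxx = k_major * c * c + k_minor * s * s
    kyy = k_major * s * s + k_minor * c * c
    kxy = (k_major - k_minor) * c * s      # off-diagonal coupling term
    return [[kxx, kxy], [kxy, kyy]]

iso = anisotropic_diffusivity(1000.0, 1000.0, 0.7)   # isotropic limit (m^2/s)
shear = anisotropic_diffusivity(4000.0, 400.0, 0.0)  # major axis along x
```

The tensor is symmetric positive definite whenever both diffusivities are positive, so it can be dropped into the existing diffusion operator without changing its numerical stability properties.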

  10. A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone

    NASA Astrophysics Data System (ADS)

    Filipot, J.-F.; Ardhuin, F.

    2012-11-01

    A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines wave breaking basic physical quantities, namely, the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale. Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep or shallow water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.

  11. The QBO in Two GISS Global Climate Models: 1. Generation of the QBO

    NASA Technical Reports Server (NTRS)

    Rind, David; Jonas, Jeffrey A.; Balachandra, Nambath; Schmidt, Gavin A.; Lean, Judith

    2014-01-01

    The adjustment of parameterized gravity waves associated with model convection and finer vertical resolution has made possible the generation of the quasi-biennial oscillation (QBO) in two Goddard Institute for Space Studies (GISS) models, GISS Middle Atmosphere Global Climate Model III and a climate/middle atmosphere version of Model E2. Both extend from the surface to 0.002 hPa, with 2° × 2.5° resolution and 102 layers. Many realistic features of the QBO are simulated, including magnitude and variability of its period and amplitude. The period itself is affected by the magnitude of parameterized convective gravity wave momentum fluxes and interactive ozone (which also affects the QBO amplitude and variability), among other forcings. Although varying sea surface temperatures affect the parameterized momentum fluxes, neither aspect is responsible for the modeled variation in QBO period. Both the parameterized and resolved waves act to produce the respective easterly and westerly wind descent, although their effect is offset in altitude at each level. The modeled and observed QBO influences on tracers in the stratosphere, such as ozone, methane, and water vapor are also discussed. Due to the link between the gravity wave parameterization and the models' convection, and the dependence on the ozone field, the models may also be used to investigate how the QBO may vary with climate change.

  12. Free-form geometric modeling by integrating parametric and implicit PDEs.

    PubMed

    Du, Haixia; Qin, Hong

    2007-01-01

    Parametric PDE techniques, which use partial differential equations (PDEs) defined over a 2D or 3D parametric domain to model graphical objects and processes, can unify geometric attributes and functional constraints of the models. PDEs can also model implicit shapes defined by level sets of scalar intensity fields. In this paper, we present an approach that integrates parametric and implicit trivariate PDEs to define geometric solid models containing both geometric information and intensity distribution subject to flexible boundary conditions. The integrated formulation of second-order or fourth-order elliptic PDEs permits designers to manipulate PDE objects of complex geometry and/or arbitrary topology through direct sculpting and free-form modeling. We developed a PDE-based geometric modeling system for shape design and manipulation of PDE objects. The integration of implicit PDEs with parametric geometry offers more general and arbitrary shape blending and free-form modeling for objects with intensity attributes than pure geometric models.
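In the simplest second-order case, the elliptic PDEs referenced above reduce to Laplace's equation over the parametric domain, with the boundary conditions acting as the designer's handles: the interior of the surface (or intensity field) is entirely determined by the boundary data. A minimal finite-difference sketch for one scalar field (illustrative of the principle only; the paper's system uses richer fourth-order operators and trivariate domains):

```python
def solve_laplace(boundary, iters=2000):
    """Solve Laplace's equation on a square parametric grid by Jacobi
    iteration: each interior value relaxes to the average of its four
    neighbors, so the converged field is fully determined by the
    boundary values, which stay fixed throughout.
    `boundary` is an n x n grid with boundary entries set.
    """
    n = len(boundary)
    u = [row[:] for row in boundary]
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                    + u[i][j-1] + u[i][j+1])
        u = new
    return u

# Boundary held at 0 on three sides, 1 on the fourth: the interior
# blends smoothly between them, the "free-form" behavior designers exploit.
n = 9
g = [[0.0] * n for _ in range(n)]
for j in range(n):
    g[0][j] = 1.0
u = solve_laplace(g)
```

Editing a boundary curve (or an interior constraint, in the sculpting interface the paper describes) and re-solving is what makes such PDE surfaces interactively deformable.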

  13. The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures

    NASA Technical Reports Server (NTRS)

    Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

    The microphysical parameterization of clouds and rain cells plays a central role in atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the rain drop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that in general all but two parameterizations produce calculated T(sub B)'s that fall within the observed clear-air minima and maxima. The exceptions are for parameterizations that enhance the scattering characteristics of frozen hydrometeors.

  14. Can a continuum solvent model reproduce the free energy landscape of a β-hairpin folding in water?

    NASA Astrophysics Data System (ADS)

    Zhou, Ruhong; Berne, Bruce J.

    2002-10-01

    The folding free energy landscape of the C-terminal β-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with an explicit solvent model. The OPLSAA force field is used for the β-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native β-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this β-hairpin. Furthermore, the β-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and ≈80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields.
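
    The replica-exchange sampling used above periodically attempts to swap conformations between temperature levels with a Metropolis criterion. A minimal generic sketch of that acceptance rule (not the authors' code; function name illustrative):

```python
import math
import random

def swap_accepted(beta_i, beta_j, e_i, e_j, rng=random.random):
    """Standard replica-exchange Metropolis criterion for swapping
    configurations between replicas with inverse temperatures beta_i,
    beta_j and potential energies e_i, e_j."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    p = min(1.0, math.exp(delta))
    return rng() < p, p

# A swap that moves the higher-energy configuration to the warmer
# replica is always accepted:
_, p = swap_accepted(beta_i=1.0, beta_j=0.5, e_i=2.0, e_j=1.0)
print(p)  # 1.0
```

    The exponential factor preserves detailed balance in the extended ensemble, which is what lets the method cross free-energy barriers that trap a single-temperature simulation.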

  17. Constraints to Dark Energy Using PADE Parameterizations

    NASA Astrophysics Data System (ADS)

    Rezaei, M.; Malekjani, M.; Basilakos, S.; Mehrabi, A.; Mota, D. F.

    2017-07-01

    We put constraints on dark energy (DE) properties using the PADE parameterization, and compare the results to the corresponding constraints from the Chevallier-Polarski-Linder (CPL) and ΛCDM models, at both the background and the perturbation levels. The DE equation-of-state parameter of the models is derived following the mathematical treatment of the PADE expansion. Unlike the CPL parameterization, the PADE approximation provides different forms of the equation-of-state parameter that avoid the divergence in the far future. Initially we perform a likelihood analysis in order to put constraints on the model parameters using solely background expansion data, and we find that all parameterizations are consistent with each other. Then, combining the expansion and the growth rate data, we test the viability of the PADE parameterizations and compare them with the CPL and ΛCDM models, respectively. Specifically, we find that the growth rate of the current PADE parameterizations is lower than that of the ΛCDM model at low redshifts, while the differences among the models are negligible at high redshifts. In this context, we provide for the first time a growth index of linear matter perturbations in PADE cosmologies. Considering that DE is homogeneous, we recover the well-known asymptotic value of the growth index, namely γ∞ = 3(w∞ − 1)/(6w∞ − 5), while in the case of clustered DE we obtain γ∞ ≃ 3w∞(3w∞ − 5)/[(6w∞ − 5)(3w∞ − 1)]. Finally, we generalize the growth index analysis to the case where γ is allowed to vary with redshift, and we find that the form of γ(z) in the PADE parameterization extends that of the CPL and ΛCDM cosmologies, respectively.
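
    The asymptotic growth-index limits quoted in the abstract are simple rational functions of w∞ and can be checked numerically. A quick sketch using the formulas as given (function names illustrative):

```python
def gamma_inf_homogeneous(w):
    """Asymptotic growth index for homogeneous dark energy:
    gamma = 3(w - 1) / (6w - 5)."""
    return 3.0 * (w - 1.0) / (6.0 * w - 5.0)

def gamma_inf_clustered(w):
    """Asymptotic growth index for clustered dark energy:
    gamma = 3w(3w - 5) / ((6w - 5)(3w - 1))."""
    return 3.0 * w * (3.0 * w - 5.0) / ((6.0 * w - 5.0) * (3.0 * w - 1.0))

# In the cosmological-constant limit w -> -1 the homogeneous case
# recovers the familiar LCDM value 6/11:
print(gamma_inf_homogeneous(-1.0))  # 0.5454545454545454
```

    Note that both expressions reduce to 6/11 at w∞ = −1, so the clustered-DE correction only matters away from the ΛCDM limit.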

  18. The interactions between soil-biosphere-atmosphere land surface model with a multi-energy balance (ISBA-MEB) option in SURFEXv8 - Part 1: Model description

    NASA Astrophysics Data System (ADS)

    Boone, Aaron; Samuelsson, Patrick; Gollvik, Stefan; Napoly, Adrien; Jarlan, Lionel; Brun, Eric; Decharme, Bertrand

    2017-02-01

    Land surface models (LSMs) are pushing towards improved realism owing to an increasing number of observations at the local scale, constantly improving satellite data sets and the associated methodologies to best exploit such data, improved computing resources, and in response to the needs of the user community. As part of this trend in LSM development, there have been ongoing efforts to improve the representation of land surface processes in the interactions between the soil-biosphere-atmosphere (ISBA) LSM within the EXternalized SURFace (SURFEX) model platform. The force-restore approach in ISBA has been replaced in recent years by multi-layer explicit physically based options for sub-surface heat transfer, soil hydrological processes, and the composite snowpack. The representation of vegetation processes in SURFEX has also become much more sophisticated in recent years, including photosynthesis, respiration, and biochemical processes. It became clear that the conceptual limits of the composite soil-vegetation scheme within ISBA had been reached and that there was a need to explicitly separate the canopy vegetation from the soil surface. In response to this issue, a collaboration began in 2008 between the high-resolution limited area model (HIRLAM) consortium and Météo-France with the intention to develop an explicit representation of the vegetation in ISBA under the SURFEX platform. A new parameterization, called the ISBA multi-energy balance (MEB), has been developed in order to address these issues. ISBA-MEB consists of a fully implicit numerical coupling between a multi-layer physically based snowpack model, a variable-layer soil scheme, an explicit litter layer, a bulk vegetation scheme, and the atmosphere. It also includes a feature that permits a coupling transition of the snowpack from the canopy air to the free atmosphere. It shares many of the routines and physics parameterizations with the standard version of ISBA.
This paper is the first of two parts; in part one, the ISBA-MEB model equations, numerical schemes, and theoretical background are presented. In part two (Napoly et al., 2016), which is a separate companion paper, a local scale evaluation of the new scheme is presented along with a detailed description of the new forest litter scheme.

  19. New Parameterizations for Neutral and Ion-Induced Sulfuric Acid-Water Particle Formation in Nucleation and Kinetic Regimes

    NASA Astrophysics Data System (ADS)

    Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna

    2018-01-01

    We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used; this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures of 165-400 K, sulfuric acid concentrations of 10^4-10^13 cm^-3, and relative humidities of 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures of 195-400 K, sulfuric acid concentrations of 10^4-10^16 cm^-3, and relative humidities of 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.
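
    Because fitted parameterizations like these are only reliable inside their stated ranges, a host model would typically guard each call with a range check. A minimal sketch built directly from the validity bounds quoted above (function name hypothetical):

```python
def in_validity_range(temp_k, h2so4_cm3, rh_percent, ion_induced=False):
    """Return True if (T, [H2SO4], RH) falls inside the published
    validity range of the neutral or ion-induced H2SO4-H2O
    particle-formation parameterization."""
    if ion_induced:
        return (195.0 <= temp_k <= 400.0
                and 1e4 <= h2so4_cm3 <= 1e16
                and 1e-5 <= rh_percent <= 100.0)
    return (165.0 <= temp_k <= 400.0
            and 1e4 <= h2so4_cm3 <= 1e13
            and 0.001 <= rh_percent <= 100.0)

print(in_validity_range(220.0, 1e7, 40.0))  # True
print(in_validity_range(150.0, 1e7, 40.0))  # False (below 165 K)
```

    Outside these bounds a model would fall back to a default (e.g., zero nucleation rate) rather than extrapolate the fit.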

  20. Increasing Women's Aspirations and Achievement in Science: The Effect of Role Models on Implicit Cognitions

    ERIC Educational Resources Information Center

    Phelan, Julie E.

    2010-01-01

    This research investigated the role of implicit science beliefs in the gender gap in science aspirations and achievement, with the goal of testing identification with a female role model as a potential intervention strategy for increasing women's representation in science careers. At Time 1, women's implicit science stereotyping (i.e., associating…

  1. Implicit alcohol attitudes predict drinking behaviour over and above intentions and willingness in young adults but willingness is more important in adolescents: Implications for the Prototype Willingness Model.

    PubMed

    Davies, Emma L; Paltoglou, Aspasia E; Foxcroft, David R

    2017-05-01

    Dual process models, such as the Prototype Willingness Model (PWM), propose to account for both intentional and reactive drinking behaviour. Current methods of measuring constructs in the PWM rely on self-report, and thus require a level of conscious deliberation. Implicit measures of attitudes may overcome this limitation and contribute to our understanding of how prototypes and willingness influence alcohol consumption in young people. This study aimed to explore whether implicit alcohol attitudes were related to PWM constructs and whether they would add to the prediction of risky drinking. The study involved a cross-sectional design. The sample included 501 participants from the United Kingdom (M age 18.92; range 11-51; 63% female): 230 school pupils and 271 university students. Participants completed explicit measures of alcohol prototype perceptions, willingness, drunkenness, harms, and intentions. They also completed an implicit measure of alcohol attitudes, using the Implicit Association Test. Implicit alcohol attitudes were only weakly related to the explicit measures. When looking at the whole sample, implicit alcohol attitudes did not add to the prediction of willingness over and above prototype perceptions. However, for university students implicit attitudes added to the prediction of behaviour, over and above intentions and willingness. For school pupils, willingness was a stronger predictor of behaviour than intentions or implicit attitudes. Adding implicit measures to the PWM may contribute to our understanding of the development of alcohol behaviours in young people. Further research could explore how implicit attitudes develop alongside the shift from reactive to planned behaviour. Statement of contribution: What is already known on this subject? Young people's drinking tends to occur in social situations and is driven in part by social reactions within these contexts. 
The Prototype Willingness Model (PWM) attempts to explain such reactive behaviour as the result of social comparison to risk prototypes, which influence willingness to drink, and subsequent behaviour. Evidence also suggests that risky drinking in young people may be influenced by implicit attitudes towards alcohol, which develop with repeated exposure to alcohol over time. One criticism of the PWM is that prototypes and willingness are usually measured using explicit measures which may not adequately capture young people's spontaneous evaluations of prototypes, or their propensity to act without forethought in a social context. What does this study add? This study is novel in exploring the addition of implicit alcohol attitudes to the social reaction pathway in the model in order to understand more about these reactive constructs. Implicit alcohol attitudes added to the prediction of behaviour, over and above intentions and willingness for university students. For school pupils, willingness was a stronger predictor of behaviour than intentions or implicit attitudes. Findings suggest that adding implicit alcohol attitudes into the PWM might be able to explain the shift from reactive to intentional drinking behaviours with age and experience. © 2016 The British Psychological Society.

  2. The Impact of Parameterized Convection on Climatological Precipitation in Atmospheric Global Climate Models

    NASA Astrophysics Data System (ADS)

    Maher, Penelope; Vallis, Geoffrey K.; Sherwood, Steven C.; Webb, Mark J.; Sansom, Philip G.

    2018-04-01

    Convective parameterizations are widely believed to be essential for realistic simulations of the atmosphere. However, their deficiencies also result in model biases. The role of convection schemes in modern atmospheric models is examined using Selected Process On/Off Klima Intercomparison Experiment simulations without parameterized convection and forced with observed sea surface temperatures. Convection schemes are not required for reasonable climatological precipitation. However, they are essential for reasonable daily precipitation and constraining extreme daily precipitation that otherwise develops. Systematic effects on lapse rate and humidity are likewise modest compared with the intermodel spread. Without parameterized convection Kelvin waves are more realistic. An unexpectedly large moist Southern Hemisphere storm track bias is identified. This storm track bias persists without convection schemes, as does the double Intertropical Convergence Zone and excessive ocean precipitation biases. This suggests that model biases originate from processes other than convection or that convection schemes are missing key processes.

  3. Assessing the CAM5 Physics Suite in the WRF-Chem Model: Implementation, Resolution Sensitivity, and a First Evaluation for a Regional Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Po-Lun; Rasch, Philip J.; Fast, Jerome D.

    A suite of physical parameterizations (deep and shallow convection, turbulent boundary layer, aerosols, cloud microphysics, and cloud fraction) from the global climate model Community Atmosphere Model version 5.1 (CAM5) has been implemented in the regional model Weather Research and Forecasting with chemistry (WRF-Chem). A downscaling modeling framework with consistent physics has also been established in which both global and regional simulations use the same emissions and surface fluxes. The WRF-Chem model with the CAM5 physics suite is run at multiple horizontal resolutions over a domain encompassing the northern Pacific Ocean, northeast Asia, and northwest North America for April 2008, when the ARCTAS, ARCPAC, and ISDAC field campaigns took place. These simulations are evaluated against field campaign measurements, satellite retrievals, and ground-based observations, and are compared with simulations that use a set of common WRF-Chem parameterizations. This manuscript describes the implementation of the CAM5 physics suite in WRF-Chem, provides an overview of the modeling framework and an initial evaluation of the simulated meteorology, clouds, and aerosols, and quantifies the resolution dependence of the cloud and aerosol parameterizations. We demonstrate that some of the CAM5 biases, such as high estimates of cloud susceptibility to aerosols and the underestimation of aerosol concentrations in the Arctic, can be reduced simply by increasing horizontal resolution. We also show that the CAM5 physics suite performs similarly to a set of parameterizations commonly used in WRF-Chem, but produces higher ice and liquid water condensate amounts and near-surface black carbon concentrations. Further evaluations that use other mesoscale model parameterizations and perform other case studies are needed to infer whether one parameterization consistently produces results closer to observations.

  4. Explicit and Implicit Stigma of Mental Illness as Predictors of the Recovery Attitudes of Assertive Community Treatment Practitioners.

    PubMed

    Stull, Laura G; McConnell, Haley; McGrew, John; Salyers, Michelle P

    2017-01-01

    While explicit negative stereotypes of mental illness are well established as barriers to recovery, implicit attitudes also may negatively impact outcomes. The current study is unique in its focus on both explicit and implicit stigma as predictors of recovery attitudes of mental health practitioners. Assertive Community Treatment practitioners (n = 154) from 55 teams completed online measures of stigma, recovery attitudes, and an Implicit Association Test (IAT). Three of four explicit stigma variables (perceptions of blameworthiness, helplessness, and dangerousness) and all three implicit stigma variables were associated with lower recovery attitudes. In a multivariate, hierarchical model, however, implicit stigma did not explain additional variance in recovery attitudes. In the overall model, perceptions of dangerousness and implicitly associating mental illness with "bad" were significant individual predictors of lower recovery attitudes. The current study demonstrates a need for interventions to lower explicit stigma, particularly perceptions of dangerousness, to increase mental health providers' expectations for recovery. The extent to which implicit and explicit stigma differentially predict outcomes, including recovery attitudes, needs further research.
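
    IAT responses such as those collected here are commonly summarized with a D-score: the difference in mean response latency between incompatible and compatible pairing blocks, scaled by the pooled standard deviation of all trials. A simplified sketch of that scoring idea (not the exact algorithm used in this study):

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D-score: difference of block mean latencies (ms)
    divided by the standard deviation of all trials pooled together.
    Positive values indicate slower responses in the incompatible block."""
    pooled_sd = stdev(list(compatible_ms) + list(incompatible_ms))
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Slower latencies when "mental illness" is paired with "good"
# (the incompatible block) yield a positive D-score:
d = iat_d_score([650, 700, 620], [820, 900, 870])
print(d > 0)  # True
```

    The published scoring algorithm adds trial-level filtering and error penalties; the sketch above only captures the core standardized-difference logic.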

  5. SENSITIVITY OF THE NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION MULTILAYER MODEL TO INSTRUMENT ERROR AND PARAMETERIZATION UNCERTAINTY

    EPA Science Inventory

    The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...

  6. Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil

    USDA-ARS?s Scientific Manuscript database

    The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...

  7. The effects of surface evaporation parameterizations on climate sensitivity to solar constant variations

    NASA Technical Reports Server (NTRS)

    Chou, S.-H.; Curran, R. J.; Ohring, G.

    1981-01-01

    The effects of two different evaporation parameterizations on the sensitivity of simulated climate to solar constant variations are investigated by using a zonally averaged climate model. One parameterization is a nonlinear formulation in which the evaporation is nonlinearly proportional to the sensible heat flux, with the Bowen ratio determined by the predicted vertical temperature and humidity gradients near the earth's surface (model A). The other is the formulation of Saltzman (1968) with the evaporation linearly proportional to the sensible heat flux (model B). The computed climates of models A and B are in good agreement except for the energy partition between sensible and latent heat at the earth's surface. The difference in evaporation parameterizations causes a difference in the response of temperature lapse rate to solar constant variations and a difference in the sensitivity of longwave radiation to surface temperature which leads to a smaller sensitivity of surface temperature to solar constant variations in model A than in model B. The results of model A are qualitatively in agreement with those of the general circulation model calculations of Wetherald and Manabe (1975).
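
    The Bowen-ratio coupling described for model A can be sketched as follows: the ratio of sensible to latent heat flux is estimated from near-surface temperature and humidity gradients, and the available surface energy is then partitioned accordingly. This is a generic illustration with assumed constants and variable names, not the model's actual formulation:

```python
CP = 1004.0   # specific heat of air at constant pressure, J kg^-1 K^-1
LV = 2.5e6    # latent heat of vaporization, J kg^-1

def bowen_ratio(d_temp, d_q):
    """Bowen ratio B = H/LE estimated from vertical gradients of
    temperature (K) and specific humidity (kg/kg) near the surface."""
    return (CP * d_temp) / (LV * d_q)

def partition_fluxes(available_energy, b):
    """Split available surface energy (W m^-2) into sensible (H) and
    latent (LE) heat flux so that H/LE = B and H + LE = available_energy."""
    le = available_energy / (1.0 + b)
    return b * le, le

b = bowen_ratio(d_temp=2.0, d_q=0.002)
h, le = partition_fluxes(500.0, b)
print(round(h + le, 6))  # 500.0
```

    In such a scheme the surface temperature feeds back on both gradients, which is why the choice of evaporation formulation changes the lapse-rate response discussed in the abstract.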

  8. Current state of aerosol nucleation parameterizations for air-quality and climate modeling

    NASA Astrophysics Data System (ADS)

    Semeniuk, Kirill; Dastoor, Ashu

    2018-04-01

    Aerosol nucleation parameterization models commonly used in 3-D air quality and climate models have serious limitations. These include variants based on classical nucleation theory, empirical models, and other formulations. Recent work based on detailed and extensive laboratory measurements and improved quantum chemistry computation has substantially advanced the state of nucleation parameterizations. In terms of inorganic nucleation involving BHN and THN, including ion effects, these new models should be considered worthwhile replacements for the old ones. However, the contribution of organic species to nucleation remains poorly quantified. New particle formation includes a distinct post-nucleation growth regime that is characterized by a strong Kelvin curvature effect and is thus dependent on the availability of very low volatility organic species or sulfuric acid. There have been advances in the understanding of the multiphase chemistry of biogenic and anthropogenic organic compounds which help overcome the initial aerosol growth barrier. Implementing the processes that influence new particle formation in 3-D models is challenging, and comprehensive parameterizations are lacking. This review considers the existing models and recent innovations.

  9. The relationship between a deformation-based eddy parameterization and the LANS-α turbulence model

    NASA Astrophysics Data System (ADS)

    Bachman, Scott D.; Anstey, James A.; Zanna, Laure

    2018-06-01

    A recent class of ocean eddy parameterizations proposed by Porta Mana and Zanna (2014) and Anstey and Zanna (2017) modeled the large-scale flow as a non-Newtonian fluid whose subgridscale eddy stress is a nonlinear function of the deformation. This idea, while largely new to ocean modeling, has a history in turbulence modeling dating at least back to Rivlin (1957). The new class of parameterizations results in equations that resemble the Lagrangian-averaged Navier-Stokes-α model (LANS-α, e.g., Holm et al., 1998a). In this note we employ basic tensor mathematics to highlight the similarities between these turbulence models using component-free notation. We extend the Anstey and Zanna (2017) parameterization, which was originally presented in 2D, to 3D, and derive variants of this closure that arise when the full non-Newtonian stress tensor is used. Despite the mathematical similarities between the non-Newtonian and LANS-α models which might provide insight into numerical implementation, the input and dissipation of kinetic energy between these two turbulent models differ.

  10. The "Grey Zone" cold air outbreak global model intercomparison: A cross evaluation using large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Tomassini, Lorenzo; Field, Paul R.; Honnert, Rachel; Malardel, Sylvie; McTaggart-Cowan, Ron; Saitou, Kei; Noda, Akira T.; Seifert, Axel

    2017-03-01

    A stratocumulus-to-cumulus transition as observed in a cold air outbreak over the North Atlantic Ocean is compared in global climate and numerical weather prediction models and a large-eddy simulation model as part of the Working Group on Numerical Experimentation "Grey Zone" project. The focus of the project is to investigate to what degree current convection and boundary layer parameterizations behave in a scale-adaptive manner in situations where the model resolution approaches the scale of convection. Global model simulations were performed at a wide range of resolutions, with convective parameterizations turned on and off. The models successfully simulate the transition between the observed boundary layer structures, from a well-mixed stratocumulus to a deeper, partly decoupled cumulus boundary layer. There are indications that surface fluxes are generally underestimated. The amounts of both cloud liquid water and cloud ice, and likely precipitation, are under-predicted, suggesting deficiencies in the strength of vertical mixing in shear-dominated boundary layers. Regulation by precipitation and mixed-phase cloud microphysical processes also plays an important role in this case. With convection parameterizations switched on, the profiles of atmospheric liquid water and cloud ice are essentially resolution-insensitive. This, however, does not imply that convection parameterizations are scale-aware. Even at the highest resolutions considered here, simulations with convective parameterizations do not converge toward the results of convection-off experiments. Convection and boundary layer parameterizations strongly interact, suggesting the need for a unified treatment of convective and turbulent mixing when addressing scale-adaptivity.

  11. The Secondary Organic Aerosol Processor (SOAP v1.0) model: a unified model with different ranges of complexity based on the molecular surrogate approach

    NASA Astrophysics Data System (ADS)

    Couvidat, F.; Sartelet, K.

    2015-04-01

    In this paper the Secondary Organic Aerosol Processor (SOAP v1.0) model is presented. This model determines the partitioning of organic compounds between the gas and particle phases. It is designed to be modular with different user options depending on the computation time and the complexity required by the user. This model is based on the molecular surrogate approach, in which each surrogate compound is associated with a molecular structure to estimate some properties and parameters (hygroscopicity, absorption into the aqueous phase of particles, activity coefficients and phase separation). Each surrogate can be hydrophilic (condenses only into the aqueous phase of particles), hydrophobic (condenses only into the organic phases of particles) or both (condenses into both the aqueous and the organic phases of particles). Activity coefficients are computed with the UNIFAC (UNIversal Functional group Activity Coefficient; Fredenslund et al., 1975) thermodynamic model for short-range interactions and with the Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients (AIOMFAC) parameterization for medium- and long-range interactions between electrolytes and organic compounds. Phase separation is determined by Gibbs energy minimization. The user can choose between an equilibrium representation and a dynamic representation of organic aerosols (OAs). In the equilibrium representation, compounds in the particle phase are assumed to be at equilibrium with the gas phase. However, recent studies show that the organic aerosol is not at equilibrium with the gas phase because the organic phases could be semi-solid (very viscous liquid phase). The condensation-evaporation of organic compounds could then be limited by the diffusion in the organic phases due to the high viscosity. 
    An implicit dynamic representation of secondary organic aerosols (SOAs) is available in SOAP, with OAs divided into layers, the first layer being at the center of the particle (slowly reaching equilibrium) and the final layer being near the interface with the gas phase (quickly reaching equilibrium). Although this dynamic implicit representation is a simplified approach to modeling condensation-evaporation with a low number of layers and a short CPU (central processing unit) time, it shows good agreement with an explicit representation of condensation-evaporation (no significant differences after a few hours of condensation).
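
    The equilibrium gas-particle partitioning that SOAP's equilibrium mode computes can be sketched with the standard absorptive-partitioning relation; the saturation concentrations and absorbing organic aerosol mass below are illustrative values, not SOAP parameters or code:

```python
# Sketch of equilibrium absorptive gas-particle partitioning
# (Pankow-type), the concept behind equilibrium SOA modules like SOAP.
# c_star (saturation concentration, ug/m3) and c_oa (absorbing organic
# aerosol mass, ug/m3) are illustrative values only.

def particle_fraction(c_star, c_oa):
    """Equilibrium fraction of a compound residing in the particle phase."""
    return 1.0 / (1.0 + c_star / c_oa)

c_oa = 10.0  # ug/m3 of absorbing organic aerosol
for c_star in (0.1, 1.0, 10.0, 100.0):
    xi = particle_fraction(c_star, c_oa)
    print(f"C* = {c_star:6.1f} ug/m3 -> particle fraction = {xi:.3f}")
```

    Low-volatility compounds (small C*) end up almost entirely in the particle phase, while volatile ones stay mostly in the gas phase; a dynamic (layered) representation relaxes toward these same equilibrium fractions over time.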

  12. Maximally localized Wannier functions in LaMnO3 within PBE + U, hybrid functionals and partially self-consistent GW: an efficient route to construct ab initio tight-binding parameters for eg perovskites.

    PubMed

    Franchini, C; Kováčik, R; Marsman, M; Murthy, S Sathyanarayana; He, J; Ederer, C; Kresse, G

    2012-06-13

    Using the newly developed VASP2WANNIER90 interface we have constructed maximally localized Wannier functions (MLWFs) for the e_g states of the prototypical Jahn-Teller magnetic perovskite LaMnO3 at different levels of approximation for the exchange-correlation kernel. These include conventional density functional theory (DFT) with and without the additional on-site Hubbard U term, hybrid DFT and partially self-consistent GW. By suitably mapping the MLWFs onto an effective e_g tight-binding (TB) Hamiltonian we have computed a complete set of TB parameters which should serve as guidance for more elaborate treatments of correlation effects in effective Hamiltonian-based approaches. The method-dependent changes of the calculated TB parameters and their interplay with the electron-electron (el-el) interaction term are discussed and interpreted. We discuss two alternative model parameterizations: one in which the effects of the el-el interaction are implicitly incorporated in the otherwise 'noninteracting' TB parameters, and a second where we include an explicit mean-field el-el interaction term in the TB Hamiltonian. Both models yield a set of tabulated TB parameters which provide the band dispersion in excellent agreement with the underlying ab initio and MLWF bands.

  13. Impact of APEX parameterization and soil data on runoff, sediment, and nutrients transport assessment

    USDA-ARS?s Scientific Manuscript database

    Hydrological models have become essential tools for environmental assessments. This study’s objective was to evaluate a best professional judgment (BPJ) parameterization of the Agricultural Policy and Environmental eXtender (APEX) model with soil-survey data against the calibrated model with either ...

  14. Parameterization guidelines and considerations for hydrologic models

    USDA-ARS?s Scientific Manuscript database

    Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) is an important and difficult task. An exponential increase in literature has been devoted to the use and develo...

  15. Parameterization of norfolk sandy loam properties for stochastic modeling of light in-wheel motor UGV

    USDA-ARS?s Scientific Manuscript database

    To accurately develop a mathematical model for an In-Wheel Motor Unmanned Ground Vehicle (IWM UGV) on soft terrain, parameterization of terrain properties is essential to stochastically model tire-terrain interaction for each wheel independently. Operating in off-road conditions requires paying clos...

  16. Summary of Cumulus Parameterization Workshop

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Starr, David OC.; Hou, Arthur; Newman, Paul; Sud, Yogesh

    2002-01-01

    A workshop on cumulus parameterization took place at the NASA Goddard Space Flight Center from December 3-5, 2001. The major objectives of this workshop were (1) to review the problem of representation of moist processes in large-scale models (mesoscale models, Numerical Weather Prediction models and Atmospheric General Circulation Models), (2) to review the state-of-the-art in cumulus parameterization schemes, and (3) to discuss the need for future research and applications. There were a total of 31 presentations and about 100 participants from the United States, Japan, the United Kingdom, France and South Korea. The specific presentations and discussions during the workshop are summarized in this paper.

  17. The Role of Implicit Negative Feedback in SLA: Models and Recasts in Japanese and Spanish.

    ERIC Educational Resources Information Center

    Long, Michael; Inagaki, Shunji; Ortega, Lourdes

    1998-01-01

    Two experiments were conducted to assess the relative utility of models and recasts in second-language (L2) Japanese and Spanish. Using a pretest, posttest, control-group design, each study provided evidence of adults' ability to learn from implicit negative feedback; in one case, support for the notion that reactive implicit negative feedback can be more…

  18. Applications of Dweck's Model of Implicit Theories to Teachers' Self-Efficacy and Emotional Experiences

    ERIC Educational Resources Information Center

    Williams, Alexis Ymon

    2012-01-01

    The current study explored Dweck's (1999; Dweck & Leggett, 1988) model of implicit theories in the context of teaching in order to establish its usefulness for describing teachers' beliefs about students' ability and social behavior. Further it sought to explain the connections between teachers' implicit beliefs and their…

  19. Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?

    DOE PAGES

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...

    2016-10-20

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities. Resolved vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  20. Numerical Study of the Role of Shallow Convection in Moisture Transport and Climate

    NASA Technical Reports Server (NTRS)

    Seaman, Nelson L.; Stauffer, David R.; Munoz, Ricardo C.

    2001-01-01

    The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins of the Southern Great Plains (SGP) using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. At the beginning of the study, it was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having high-quality parameterizations for the key physical processes controlling the water cycle. These included a detailed land-surface parameterization (the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) sub-model of Wetzel and Boone), an advanced boundary-layer parameterization (the 1.5-order turbulent kinetic energy (TKE) predictive scheme of Shafran et al.), and a more complete shallow convection parameterization (the hybrid-closure scheme of Deng et al.) than are available in most current models. PLACE is a product of researchers working at NASA's Goddard Space Flight Center in Greenbelt, MD. The TKE and shallow-convection schemes are the result of model development at Penn State. The long-range goal is to develop an integrated suite of physical sub-models that can be used for regional and perhaps global climate studies of the water budget. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the SGP. These schemes have been tested extensively through the course of this study and the latter two have been improved significantly as a consequence.

  1. Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities. Resolved vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  2. Parameterized reduced order models from a single mesh using hyper-dual numbers

    NASA Astrophysics Data System (ADS)

    Brake, M. R. W.; Fike, J. A.; Topping, S. D.

    2016-06-01

    In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is two-fold: the derivatives are accurate to machine precision, and only a single mesh of the system of interest needs to be generated. The theory is applied to a stepped beam system to demonstrate proof of concept. The results demonstrate that the hyper-dual number multivariate parameterization of geometric variations, which is largely neglected in the literature, is accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the ability to create a parameterized reduced order model from a single mesh is expected to dramatically reduce the time needed to analyze multiple realizations of a component's possible geometry.
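
    The hyper-dual-number differentiation underlying this approach can be illustrated with a minimal implementation. This sketch supports only addition and multiplication (enough for a polynomial example) and is not the authors' code:

```python
# Minimal hyper-dual number: x = a + b*e1 + c*e2 + d*e1*e2,
# with e1^2 = e2^2 = 0 and e1*e2 != 0.  Evaluating f(a + e1 + e2)
# yields f(a) along with f'(a) and f''(a) exactly, with no truncation
# or subtractive-cancellation error (unlike finite differences).

class HyperDual:
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b,
                         self.c + o.c, self.d + o.d)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(
            self.a * o.a,
            self.a * o.b + self.b * o.a,
            self.a * o.c + self.c * o.a,
            self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a,
        )
    __rmul__ = __mul__

def derivatives(f, x):
    """Return f(x), f'(x), f''(x) from a single hyper-dual evaluation."""
    r = f(HyperDual(x, 1.0, 1.0, 0.0))
    return r.a, r.b, r.d

f = lambda x: x * x * x            # f(x) = x^3
val, d1, d2 = derivatives(f, 2.0)  # f(2) = 8, f'(2) = 12, f''(2) = 12
```

    A finite-difference second derivative of the same function would degrade as the step shrinks below roundoff; the hyper-dual evaluation has no step size at all, which is why a single mesh suffices for the Taylor-series expansions described above.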

  3. Amplification of intrinsic emittance due to rough metal cathodes: Formulation of a parameterization model

    NASA Astrophysics Data System (ADS)

    Charles, T. K.; Paganin, D. M.; Dowd, R. T.

    2016-08-01

    Intrinsic emittance is often the limiting factor for brightness in fourth-generation light sources, and as such a good understanding of the factors affecting intrinsic emittance is essential in order to be able to decrease it. Here we present a parameterization model describing the proportional increase in emittance induced by cathode surface roughness. One major benefit of the parameterization approach presented here is that it takes the complexity of a Monte Carlo model and reduces the results to a straightforward empirical model. The resulting models describe the proportional increase in transverse momentum introduced by surface roughness, and are applicable to various metal types, photon wavelengths, applied electric fields, and cathode surface terrains. The analysis includes the increase in emittance due to changes in the electric field induced by roughness, as well as the increase in transverse momentum resulting from the spatially varying surface normal. We also compare the results of the Parameterization Model to an Analytical Model which employs various approximations to produce a more compact expression, at the cost of a reduction in accuracy.

  4. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
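
    The idea behind these IMEX splittings can be shown with first-order IMEX Euler on a model stiff ODE: the stiff linear term (standing in for the fast acoustic dynamics) is advanced implicitly, while the nonstiff forcing is advanced explicitly. The test problem and step size are illustrative and not taken from the Gardner et al. study:

```python
import math

# IMEX (implicit-explicit) Euler on a stiff Prothero-Robinson-type
# problem:  y' = -k*(y - cos t) - sin t,  y(0) = 1,  exact y = cos t.
# The stiff term -k*y is treated implicitly (one linear solve per step);
# the remaining forcing is explicit.  A fully explicit Euler step would
# require h < 2/k for stability; the IMEX step does not.

k, h, t_end = 1000.0, 0.01, 1.0
y, t = 1.0, 0.0
while t < t_end - 1e-12:
    # y_{n+1} = y_n + h*f_E(t_n) + h*f_I(y_{n+1}),
    # with f_E(t) = k*cos(t) - sin(t) and f_I(y) = -k*y.
    y = (y + h * (k * math.cos(t) - math.sin(t))) / (1.0 + h * k)
    t += h

error = abs(y - math.cos(t_end))  # small despite h*k = 10 >> stability limit
```

    The same trade-off discussed in the abstract appears here in miniature: the implicit solve adds per-step cost, but it removes the acoustic-type step-size restriction that would otherwise dominate.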

  5. Relativistic three-dimensional Lippmann-Schwinger cross sections for space radiation applications

    NASA Astrophysics Data System (ADS)

    Werneth, C. M.; Xu, X.; Norman, R. B.; Maung, K. M.

    2017-12-01

    Radiation transport codes require accurate nuclear cross sections to compute particle fluences inside shielding materials. The Tripathi semi-empirical reaction cross section, which includes over 60 parameters tuned to nucleon-nucleus (NA) and nucleus-nucleus (AA) data, has been used in many of the world's best-known transport codes. Although this parameterization fits well to reaction cross section data, the predictive capability of any parameterization is questionable when it is used beyond the range of the data to which it was tuned. Using uncertainty analysis, it is shown that a relativistic three-dimensional Lippmann-Schwinger (LS3D) equation model based on Multiple Scattering Theory (MST), which uses five parameterizations (three fundamental parameterizations to nucleon-nucleon (NN) data and two nuclear charge density parameterizations), predicts NA and AA reaction cross sections as well as the Tripathi cross section parameterization for reactions in which the kinetic energy of the projectile in the laboratory frame (T_Lab) is greater than 220 MeV/n. The relativistic LS3D model has the additional advantage of being able to predict highly accurate total and elastic cross sections. Consequently, it is recommended that the relativistic LS3D model be used for space radiation applications in which T_Lab > 220 MeV/n.

  6. Trends and fluctuations in the severity of interstate wars

    PubMed Central

    Clauset, Aaron

    2018-01-01

    Since 1945, there have been relatively few large interstate wars, especially compared to the preceding 30 years, which included both World Wars. This pattern, sometimes called the long peace, is highly controversial. Does it represent an enduring trend caused by a genuine change in the underlying conflict-generating processes? Or is it consistent with a highly variable but otherwise stable system of conflict? Using the empirical distributions of interstate war sizes and onset times from 1823 to 2003, we parameterize stationary models of conflict generation that can distinguish trends from statistical fluctuations in the statistics of war. These models indicate that both the long peace and the period of great violence that preceded it are not statistically uncommon patterns in realistic but stationary conflict time series. This fact does not detract from the importance of the long peace or the proposed mechanisms that explain it. However, the models indicate that the postwar pattern of peace would need to endure at least another 100 to 140 years to become a statistically significant trend. This fact places an implicit upper bound on the magnitude of any change in the true likelihood of a large war after the end of the Second World War. The historical patterns of war thus seem to imply that the long peace may be substantially more fragile than proponents believe, despite recent efforts to identify mechanisms that reduce the likelihood of interstate wars. PMID:29507877

  7. Shortwave radiation parameterization scheme for subgrid topography

    NASA Astrophysics Data System (ADS)

    Helbig, N.; Löwe, H.

    2012-02-01

    Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered. The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.
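
    The geometric ingredients such a scheme averages over subgrid terrain can be sketched for a single uniform slope. For a slope of angle beta under the isotropic-diffuse assumption, the sky-view and terrain-view factors take the standard textbook forms below; this is illustrative geometry, not the fitted subgrid parameterization of Helbig and Löwe:

```python
import math

# Shortwave flux components over a uniform slope of angle beta (radians)
# under the isotropic-diffuse assumption.  These are the standard
# ingredients (sky-view factor, terrain-view factor) that subgrid schemes
# average over an ensemble of slopes; the coefficients of the actual
# parameterization are not reproduced here.

def sky_view_factor(beta):
    """Fraction of the sky hemisphere visible from the slope."""
    return (1.0 + math.cos(beta)) / 2.0

def terrain_view_factor(beta):
    """Fraction of the hemisphere occupied by surrounding terrain."""
    return 1.0 - sky_view_factor(beta)   # = (1 - cos(beta)) / 2

def slope_radiation(direct, diffuse, albedo, beta, cos_incidence):
    """Direct + sky-diffuse + single isotropic terrain-reflected flux."""
    global_horiz = direct + diffuse  # crude horizontal global flux
    return (direct * cos_incidence
            + diffuse * sky_view_factor(beta)
            + global_horiz * albedo * terrain_view_factor(beta))
```

    On flat terrain (beta = 0) the terrain-view factor vanishes and the expression collapses to direct plus diffuse, which is the consistency check any such scheme must satisfy.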

  8. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    NASA Astrophysics Data System (ADS)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

    A sensitivity analysis of diverse WRF model physical parameterization schemes is carried out over the life cycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed with the WRF model to assess the sensitivity of the development and intensification of the STC to its various parameterization schemes. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus scheme had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to identify the parameterization schemes that best categorize the STC structure, a verification using Cyclone Phase Space was performed; again, the combinations including the Tiedtke cumulus scheme were the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

  9. Comprehensive assessment of parameterization methods for estimating clear-sky surface downward longwave radiation

    NASA Astrophysics Data System (ADS)

    Guo, Yamin; Cheng, Jie; Liang, Shunlin

    2018-02-01

    Surface downward longwave radiation (SDLR) is a key variable for calculating the earth's surface radiation budget. In this study, we evaluated seven widely used clear-sky parameterization methods using ground measurements collected from 71 globally distributed fluxnet sites. The Bayesian model averaging (BMA) method was also introduced to obtain a multi-model ensemble estimate. As a whole, the parameterization method of Carmona et al. (2014) performs the best, with an average BIAS, RMSE, and R² of −0.11 W/m², 20.35 W/m², and 0.92, respectively, followed by the parameterization methods of Idso (1981), Prata (1996), Brunt (1932), and Brutsaert (1975). The accuracy of the BMA is close to that of the parameterization method of Carmona et al. (2014) and comparable to that of the parameterization method of Idso (1981). The advantage of the BMA is that it achieves balanced results compared to the integrated single parameterization methods. To fully assess the performance of the parameterization methods, the effects of climate type, land cover, and surface elevation were also investigated. The five parameterization methods and the BMA all failed over land with a tropical climate type, with high water vapor, and performed poorly over forest, wetland, and ice. These methods achieved better results over desert, bare land, cropland, and grass, and had acceptable accuracies for sites at different elevations, except for the parameterization method of Carmona et al. (2014) over high-elevation sites. Thus, a method that can be successfully applied everywhere does not exist.
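
    As a concrete example, the Brutsaert (1975) scheme evaluated here estimates clear-sky SDLR from screen-level vapour pressure and air temperature via a parameterized atmospheric emissivity. The formula is the published one; the sample inputs are illustrative:

```python
# Brutsaert (1975) clear-sky downward longwave parameterization:
#   emissivity  eps = 1.24 * (e_a / T_a)**(1/7)
#   SDLR        L   = eps * sigma * T_a**4
# with vapour pressure e_a in hPa and air temperature T_a in K.
# The sample inputs below are illustrative, not from the study's sites.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

def sdlr_brutsaert(e_a_hpa, t_a_k):
    eps = 1.24 * (e_a_hpa / t_a_k) ** (1.0 / 7.0)
    return eps * SIGMA * t_a_k ** 4

flux = sdlr_brutsaert(10.0, 288.0)  # typical mid-latitude clear-sky case
```

    The emissivity's weak 1/7-power dependence on humidity is what makes such schemes struggle in the very moist tropical conditions noted above, where the effective emissivity saturates.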

  10. A scheme for parameterizing ice cloud water content in general circulation models

    NASA Technical Reports Server (NTRS)

    Heymsfield, Andrew J.; Donner, Leo J.

    1989-01-01

    A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given and the aircraft flights used to characterize the ice mass distribution in deep ice clouds is discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.

  11. KECSA-Movable Type Implicit Solvation Model (KMTISM)

    PubMed Central

    2015-01-01

    Computation of the solvation free energy for chemical and biological processes has long been of significant interest. The key challenges to effective solvation modeling center on the choice of potential function and configurational sampling. Herein, an energy sampling approach termed the “Movable Type” (MT) method, and a statistical energy function for solvation modeling, “Knowledge-based and Empirical Combined Scoring Algorithm” (KECSA) are developed and utilized to create an implicit solvation model: KECSA-Movable Type Implicit Solvation Model (KMTISM) suitable for the study of chemical and biological systems. KMTISM is an implicit solvation model, but the MT method performs energy sampling at the atom pairwise level. For a specific molecular system, the MT method collects energies from prebuilt databases for the requisite atom pairs at all relevant distance ranges, which by its very construction encodes all possible molecular configurations simultaneously. Unlike traditional statistical energy functions, KECSA converts structural statistical information into categorized atom pairwise interaction energies as a function of the radial distance instead of a mean force energy function. Within the implicit solvent model approximation, aqueous solvation free energies are then obtained from the NVT ensemble partition function generated by the MT method. Validation is performed against several subsets selected from the Minnesota Solvation Database v2012. Results are compared with several solvation free energy calculation methods, including a one-to-one comparison against two commonly used classical implicit solvation models: MM-GBSA and MM-PBSA. Comparison against a quantum mechanics based polarizable continuum model is also discussed (Cramer and Truhlar’s Solvation Model 12). PMID:25691832

  12. Accuracy of parameterized proton range models; A comparison

    NASA Astrophysics Data System (ADS)

    Pettersen, H. E. S.; Chaar, M.; Meric, I.; Odland, O. H.; Sølie, J. R.; Röhrich, D.

    2018-03-01

    An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four different parameterization models for proton range in water: two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables. In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy-loss curve is best reproduced with the differentiated Bragg-Kleeman equation.
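
    One of the analytical range-energy models in this comparison, the Bragg-Kleeman rule, can be written down directly. The constants below are commonly quoted fit values for protons in water, not the fitted values from this paper:

```python
# Bragg-Kleeman rule for proton range in water:  R = alpha * E**p
# alpha ~ 0.0022 cm/MeV^p and p ~ 1.77 are commonly quoted fit values
# for water (illustrative; not the constants fitted in this study).

ALPHA, P = 0.0022, 1.77

def range_cm(energy_mev):
    """Range in water (cm) of a proton with the given kinetic energy."""
    return ALPHA * energy_mev ** P

def energy_from_range(r_cm):
    """Invert the rule: kinetic energy (MeV) from residual range."""
    return (r_cm / ALPHA) ** (1.0 / P)

r = range_cm(150.0)  # a 150 MeV proton stops after roughly 15-16 cm
```

    Differentiating R = alpha * E**p with respect to depth gives the closed-form energy-loss curve mentioned in the conclusion, which is why the differentiated form reproduces the Bragg-curve shape well.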

  13. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  14. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    PubMed

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Optimal implicit 2-D finite differences to model wave propagation in poroelastic media

    NASA Astrophysics Data System (ADS)

    Itzá, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.

    2016-08-01

Numerical modeling of seismic waves in heterogeneous porous reservoir rocks is an important tool for the interpretation of seismic surveys in reservoir engineering. We apply globally optimal implicit staggered-grid finite differences (FD) to model 2-D wave propagation in heterogeneous poroelastic media in a low-frequency range (<10 kHz). We validate the numerical solution against an analytical transient solution, obtaining clear seismic wavefields that include the fast P wave and the slow P and S waves (for a porous medium saturated with fluid). The numerical dispersion and stability conditions are derived using von Neumann analysis, showing that over a wide range of porous materials the Courant condition governs the stability and that this optimal implicit scheme improves on the stability of explicit schemes. High-order explicit FD can be replaced by lower-order optimal implicit FD, maintaining accuracy at a reduced computational cost. Here, we compute weights for the optimal implicit FD scheme to attain an accuracy of γ = 10⁻⁸. The implicit spatial differentiation involves solving tridiagonal linear systems of equations through Thomas' algorithm.
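The tridiagonal solves at the heart of the implicit differentiation can be sketched with the classic Thomas algorithm. The version below is a minimal illustration (the array naming convention and the diagonally dominant test system are our own, not the paper's code):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (all 1-D arrays of length n).
    a[0] and c[-1] are ignored. Assumes no pivoting is needed, which
    holds for the diagonally dominant systems implicit FD typically yields."""
    n = len(b)
    cp = np.empty(n)  # modified super-diagonal
    dp = np.empty(n)  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # forward elimination
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # back substitution
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The algorithm runs in O(n) time and storage, which is why compact/implicit FD schemes remain cheap despite requiring a linear solve per derivative evaluation.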

  16. Subgrid-scale parameterization and low-frequency variability: a response theory approach

    NASA Astrophysics Data System (ADS)

    Demaeyer, Jonathan; Vannitsem, Stéphane

    2016-04-01

Weather and climate models are limited in the range of spatial and temporal scales they can resolve. However, because of the huge range of space and time scales involved in Earth system dynamics, the effects of many subgrid processes must be parameterized. These parameterizations affect forecasts and projections, and they can also affect the low-frequency variability present in the system (such as that associated with ENSO or the NAO). An important question is therefore what impact stochastic parameterizations have on the low-frequency variability generated by the system and its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on Ruelle's response theory, proposed in Wouters and Lucarini (2012). We test this approach in a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), in which part of the atmospheric modes is considered unresolved. A natural separation of the phase space into a slow invariant set and its fast complement allows an analytical derivation of the different terms involved in the parameterization, namely the average, fluctuation, and long-memory terms. Applying it to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach to scale separation opens new avenues for subgrid-scale parameterization in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03003.

  17. Impacts of spectral nudging on the simulated surface air temperature in summer compared with the selection of shortwave radiation and land surface model physics parameterization in a high-resolution regional atmospheric model

    NASA Astrophysics Data System (ADS)

    Park, Jun; Hwang, Seung-On

    2017-11-01

The impact of a spectral nudging technique on the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulations spanning combinations of two shortwave radiation and four land surface model schemes, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the simulated regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations helps obtain more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than that from using sophisticated shortwave radiation and land surface model parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain by preserving consistency with the large-scale analysis forcing over the regional domain. This in turn helps the two physical parameterizations produce small-scale features closer to observations, leading to a better representation of the surface air temperature in the high-resolution downscaled climate.

  18. Resolution-dependent behavior of subgrid-scale vertical transport in the Zhang-McFarlane convection parameterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Heng; Gustafson, Jr., William I.; Hagos, Samson M.

    2015-04-18

To better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution dependence of subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km².

  19. A Dual-Process Approach to the Role of Mother's Implicit and Explicit Attitudes toward Their Child in Parenting Models

    ERIC Educational Resources Information Center

    Sturge-Apple, Melissa L.; Rogge, Ronald D.; Skibo, Michael A.; Peltz, Jack S.; Suor, Jennifer H.

    2015-01-01

Extending dual-process frameworks of cognition to a novel domain, the present study examined how mothers' explicit and implicit attitudes about their child may operate in models of parenting. To assess implicit attitudes, two separate studies were conducted using the same child-focused Go/No-go Association Task (GNAT-Child). In Study 1, model…

  20. Investigating the predictive validity of implicit and explicit measures of motivation on condom use, physical activity and healthy eating.

    PubMed

    Keatley, David; Clarke, David D; Hagger, Martin S

    2012-01-01

    The literature on health-related behaviours and motivation is replete with research involving explicit processes and their relations with intentions and behaviour. Recently, interest has been focused on the impact of implicit processes and measures on health-related behaviours. Dual-systems models have been proposed to provide a framework for understanding the effects of explicit or deliberative and implicit or impulsive processes on health behaviours. Informed by a dual-systems approach and self-determination theory, the aim of this study was to test the effects of implicit and explicit motivation on three health-related behaviours in a sample of undergraduate students (N = 162). Implicit motives were hypothesised to predict behaviour independent of intentions while explicit motives would be mediated by intentions. Regression analyses indicated that implicit motivation predicted physical activity behaviour only. Across all behaviours, intention mediated the effects of explicit motivational variables from self-determination theory. This study provides limited support for dual-systems models and the role of implicit motivation in the prediction of health-related behaviour. Suggestions for future research into the role of implicit processes in motivation are outlined.

  1. High Resolution Electro-Optical Aerosol Phase Function Database PFNDAT2006

    DTIC Science & Technology

    2006-08-01

    snow models use the gamma distribution (equation 12) with m = 0. 3.4.1 Rain Model The most widely used analytical parameterization for raindrop size ...Uijlenhoet and Stricker (22), as the result of an analytical derivation based on a theoretical parameterization for the raindrop size distribution ...6 2.2 Particle Size Distribution Models

  2. Develop and Test Coupled Physical Parameterizations and Tripolar Wave Model Grid: NAVGEM / WaveWatch III / HYCOM

    DTIC Science & Technology

    2013-09-30

    Tripolar Wave Model Grid: NAVGEM / WaveWatch III / HYCOM W. Erick Rogers Naval Research Laboratory, Code 7322 Stennis Space Center, MS 39529...Parameterizations and Tripolar Wave Model Grid: NAVGEM / WaveWatch III / HYCOM 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6

  3. The explicit and implicit dance in psychoanalytic change.

    PubMed

    Fosshage, James L

    2004-02-01

How the implicit/non-declarative and explicit/declarative cognitive domains interact is centrally important in the consideration of effecting change within the psychoanalytic arena. Stern et al. (1998) declare that long-lasting change occurs in the domain of implicit relational knowledge. In the view of this author, the implicit and explicit domains are intricately intertwined in an interactive dance within a psychoanalytic process. In the author's view, a spirit of inquiry (Lichtenberg, Lachmann & Fosshage 2002) serves as the foundation of the psychoanalytic process. Analyst and patient strive to explore, understand and communicate and, thereby, create a 'spirit' of interaction that contributes, through gradual incremental learning, to new implicit relational knowledge. This spirit, as part of the implicit relational interaction, is a cornerstone of the analytic relationship. The 'inquiry' more directly brings explicit/declarative processing to the foreground in the joint attempt to explore and understand. The spirit of inquiry in the psychoanalytic arena highlights both the autobiographical scenarios of the explicit memory system and the mental models of the implicit memory system as each contributes to a sense of self, other, and self with other. This process facilitates the extrication and suspension of the old models, so that new models based on current relational experience can be gradually integrated into both memory systems for lasting change.

  4. Assessment of implicit health attitudes: a multitrait-multimethod approach and a comparison between patients with hypochondriasis and patients with anxiety disorders.

    PubMed

    Weck, Florian; Höfling, Volkmar

    2015-01-01

    Two adaptations of the Implicit Association Task were used to assess implicit anxiety (IAT-Anxiety) and implicit health attitudes (IAT-Hypochondriasis) in patients with hypochondriasis (n = 58) and anxiety patients (n = 71). Explicit anxieties and health attitudes were assessed using questionnaires. The analysis of several multitrait-multimethod models indicated that the low correlation between explicit and implicit measures of health attitudes is due to the substantial methodological differences between the IAT and the self-report questionnaire. Patients with hypochondriasis displayed significantly more dysfunctional explicit and implicit health attitudes than anxiety patients, but no differences were found regarding explicit and implicit anxieties. The study demonstrates the specificity of explicit and implicit dysfunctional health attitudes among patients with hypochondriasis.

  5. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    Discrete earth models are commonly represented by uniform structured grids. In order to ensure accurate numerical description of all wave components propagating through these uniform grids, the grid size must be determined by the slowest velocity of the entire model. Consequently, high velocity areas are always oversampled, which inevitably increases the computational cost. A practical solution to this problem is to use nonuniform grids. We propose a nonuniform grid implicit spatial finite difference method which utilizes nonuniform grids to obtain high efficiency and relies on implicit operators to achieve high accuracy. We present a simple way of deriving implicit finite difference operators of arbitrary stencil widths on general nonuniform grids for the first and second derivatives and, as a demonstration example, apply these operators to the pseudo-acoustic wave equation in tilted transversely isotropic (TTI) media. We propose an efficient gridding algorithm that can be used to convert uniformly sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced efficiency, compared to uniform grid explicit finite difference implementations.
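As a rough illustration of how finite difference operators can be derived on nonuniform points, the sketch below computes explicit FD weights of arbitrary stencil width by matching Taylor-series terms (a Vandermonde-type linear system). The paper's implicit operators generalize this idea by also weighting derivative values at neighboring nodes; the function name and interface here are our own:

```python
import numpy as np
from math import factorial

def fd_weights(x, x0, m):
    """Return weights w such that sum(w[i] * f(x[i])) ≈ f^(m)(x0),
    for an arbitrary (possibly nonuniform) set of stencil points x.
    Derived by requiring sum_i w_i (x_i - x0)^k / k! = delta_{k,m}
    for k = 0 .. len(x)-1 (Taylor-term matching)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # rows of the Vandermonde-type system, one per Taylor order k
    A = np.array([(x - x0) ** k / factorial(k) for k in range(n)])
    rhs = np.zeros(n)
    rhs[m] = 1.0
    return np.linalg.solve(A, rhs)
```

On a uniform 3-point stencil this recovers the familiar central differences; on nonuniform points it yields the corresponding unequal weights automatically, which is the building block a nonuniform-grid scheme needs.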

  6. Stochastic parameterization of shallow cumulus convection estimated from high-resolution model data

    NASA Astrophysics Data System (ADS)

    Dorrestijn, Jesse; Crommelin, Daan T.; Siebesma, A. Pier.; Jonker, Harm J. J.

    2013-02-01

    In this paper, we report on the development of a methodology for stochastic parameterization of convective transport by shallow cumulus convection in weather and climate models. We construct a parameterization based on Large-Eddy Simulation (LES) data. These simulations resolve the turbulent fluxes of heat and moisture and are based on a typical case of non-precipitating shallow cumulus convection above sea in the trade-wind region. Using clustering, we determine a finite number of turbulent flux pairs for heat and moisture that are representative for the pairs of flux profiles observed in these simulations. In the stochastic parameterization scheme proposed here, the convection scheme jumps randomly between these pre-computed pairs of turbulent flux profiles. The transition probabilities are estimated from the LES data, and they are conditioned on the resolved-scale state in the model column. Hence, the stochastic parameterization is formulated as a data-inferred conditional Markov chain (CMC), where each state of the Markov chain corresponds to a pair of turbulent heat and moisture fluxes. The CMC parameterization is designed to emulate, in a statistical sense, the convective behaviour observed in the LES data. The CMC is tested in single-column model (SCM) experiments. The SCM is able to reproduce the ensemble spread of the temperature and humidity that was observed in the LES data. Furthermore, there is a good similarity between time series of the fractions of the discretized fluxes produced by SCM and observed in LES.
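The data-inferred conditional Markov chain can be sketched in a few lines. In the sketch below the transition matrices are random placeholders and all sizes are toy values (in the paper the probabilities are estimated from transition counts in the LES data, conditioned on the resolved-scale state):

```python
import numpy as np

rng = np.random.default_rng(42)

n_states = 4  # number of pre-computed turbulent flux-profile pairs
n_cond = 3    # number of bins for the discretized resolved-scale state

# Placeholder transition matrices P[c, i, j]: probability of jumping from
# flux-pair i to flux-pair j, given resolved-state bin c. Dirichlet rows
# guarantee each row is a valid probability distribution.
P = rng.dirichlet(np.ones(n_states), size=(n_cond, n_states))

def step(state, cond_bin):
    """Advance the conditional Markov chain by one model time step."""
    return rng.choice(n_states, p=P[cond_bin, state])

# toy sequence of resolved-state bins, as a single-column model would supply
state = 0
trajectory = [state]
for cond in [0, 1, 2, 2, 1, 0]:
    state = step(state, cond)
    trajectory.append(state)
```

At each step the scheme emits the pair of turbulent heat and moisture flux profiles attached to the current state, so the parameterization reproduces the LES flux statistics rather than any single deterministic closure.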

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, Hannah C.; Houze, Robert A.

To equitably compare the spatial pattern of ice microphysical processes produced by three microphysical parameterizations with each other, observations, and theory, simulations of tropical oceanic mesoscale convective systems (MCSs) in the Weather Research and Forecasting (WRF) model were forced to develop the same mesoscale circulations as observations by assimilating radial velocity data from a Doppler radar. The same general layering of microphysical processes was found in observations and simulations, with deposition anywhere above the 0°C level, aggregation at and above the 0°C level, melting at and below the 0°C level, and riming near the 0°C level. Thus, this study is consistent with the layered ice microphysical pattern portrayed in previous conceptual models and indicated by dual-polarization radar data. Spatial variability of riming in the simulations suggests that riming in the midlevel inflow is related to convective-scale vertical velocity perturbations. Finally, this study sheds light on limitations of current generally available bulk microphysical parameterizations. In each parameterization, the layers in which aggregation and riming took place were generally too thick and the frequency of riming was generally too high compared to the observations and theory. Additionally, none of the parameterizations produced similar details in every microphysical spatial pattern. Discrepancies in the patterns of microphysical processes between parameterizations likely factor into creating substantial differences in model reflectivity patterns. It is concluded that improved parameterizations of ice-phase microphysics will be essential to obtain reliable, consistent model simulations of tropical oceanic MCSs.

  8. Uncertainty and feasibility of dynamical downscaling for modeling tropical cyclones for storm surge simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Taraphdar, Sourav; Wang, Taiping

This paper presents a modeling study conducted to evaluate the uncertainty of a regional model in simulating hurricane wind and pressure fields, and the feasibility of driving coastal storm surge simulation using an ensemble of regional model outputs produced by 18 combinations of three convection schemes and six microphysics parameterizations, using Hurricane Katrina as a test case. Simulated wind and pressure fields were compared to observed H*Wind data for Hurricane Katrina, and simulated storm surge was compared to observed high-water marks on the northern coast of the Gulf of Mexico. The ensemble modeling analysis demonstrated that the regional model was able to reproduce the characteristics of Hurricane Katrina with reasonable accuracy and can be used to drive the coastal ocean model for simulating coastal storm surge. Results indicated that the regional model is sensitive to both convection and microphysics parameterizations, which simulate moist processes closely linked to the tropical cyclone dynamics that influence hurricane development and intensification. The Zhang and McFarlane (ZM) convection scheme and the Lim and Hong (WDM6) microphysics parameterization are the most skillful in simulating Hurricane Katrina's maximum wind speed and central pressure, among the three convection and six microphysics parameterizations. Error statistics of simulated maximum water levels were calculated for a baseline simulation with H*Wind forcing and the 18 ensemble simulations driven by the regional model outputs. The storm surge model produced the overall best results in simulating the maximum water levels using wind and pressure fields generated with the ZM convection scheme and the WDM6 microphysics parameterization.

  9. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-01-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory works. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the resolution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small. The model also reproduces fairly well the polar ozone variability, with notably the formation of "ozone holes" in the southern hemisphere with amplitudes and seasonal evolutions that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone contents inside the polar vortex of the southern hemisphere over longer periods in spring time. 
It is concluded that for the study of climatic scenarios or the assimilation of ozone data, the present parameterization gives an interesting alternative to the introduction of detailed and computationally costly chemical schemes into general circulation models.
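A linear ozone parameterization of this kind reduces to a first-order tendency in a few prognostic quantities. The sketch below follows the general Cariolle-type form (linear terms in the ozone mixing ratio, temperature, and overhead ozone column), but the coefficient names and values are purely illustrative, not the scheme's actual coefficient tables:

```python
def ozone_tendency(r, T, col, coef):
    """Schematic Cariolle-type linear ozone photochemistry tendency.
    r: ozone mixing ratio; T: temperature; col: overhead ozone column.
    coef packs a production term, linear sensitivities, and the reference
    state around which the 2-D chemistry model was linearized."""
    A1, A2, r0, A4, T0, A6, col0 = coef
    return A1 + A2 * (r - r0) + A4 * (T - T0) + A6 * (col - col0)

# With only the ozone term active, the scheme is a pure relaxation toward
# the reference mixing ratio r0 with timescale tau = -1/A2 (here 5 time units).
coef = (0.0, -1.0 / 5.0, 2.0e-6, 0.0, 220.0, 0.0, 300.0)
r, dt = 3.0e-6, 0.1
for _ in range(1000):  # forward-Euler integration of the continuity equation
    r += dt * ozone_tendency(r, 220.0, 300.0, coef)
```

The cheapness of evaluating one such linear expression per grid cell, versus integrating a full chemical mechanism, is exactly the trade-off the abstract's conclusion refers to.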

  10. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.

    PubMed

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

Driven by recent advances in medical imaging, image segmentation, and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
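The iterative tuning idea can be sketched under the common approximation that conduction velocity scales with the square root of the bulk conductivity; the update rule, function names, and the toy stand-in "simulator" below are a hypothetical illustration, not the paper's implementation:

```python
import math

def tune_conductivity(v_target, simulate_cv, sigma0=0.1, tol=1e-6, max_iter=50):
    """Iteratively rescale a bulk conductivity until the simulated
    conduction velocity matches v_target. Uses the approximate scaling
    CV ∝ sqrt(sigma), so each iteration applies
        sigma <- sigma * (v_target / v_simulated)**2.
    simulate_cv(sigma) stands in for a full bidomain/monodomain solve."""
    sigma = sigma0
    for _ in range(max_iter):
        v = simulate_cv(sigma)
        if abs(v - v_target) < tol:
            break
        sigma *= (v_target / v) ** 2
    return sigma

# Toy "simulator": CV = 2 * sqrt(sigma). A real workflow would run a
# propagation simulation on a strand of tissue and measure the velocity.
result = tune_conductivity(0.6, lambda s: 2.0 * math.sqrt(s))
```

Because the sqrt scaling is only approximate for real tissue models, the loop typically needs a handful of simulations rather than one, but the same fixed-point update applies.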

  11. The application of depletion curves for parameterization of subgrid variability of snow

    Treesearch

    C. H. Luce; D. G. Tarboton

    2004-01-01

    Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snowcovered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...
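A depletion-curve parameterization of this kind amounts to a mapping from normalized element-average snow water equivalent to fractional snow-covered area. The sketch below is a minimal illustration; the square-root curve is a placeholder, not the curve fitted in the study:

```python
def snow_covered_fraction(swe_avg, swe_max, curve=lambda w: w ** 0.5):
    """Depletion curve: map element-average snow water equivalent,
    normalized by its element maximum for the season, to the fraction
    of the element covered by snow. The default square-root curve is
    purely illustrative; in practice the curve is derived from the
    subgrid distribution of snow within the element."""
    if swe_max <= 0.0 or swe_avg <= 0.0:
        return 0.0
    w = min(swe_avg / swe_max, 1.0)  # normalized SWE in [0, 1]
    return curve(w)
```

Melt computed for the snow-covered fraction is then area-weighted back to the element average, which is how a single curve captures the effect of subgrid snowpack heterogeneity.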

  12. A review of recent research on improvement of physical parameterizations in the GLA GCM

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Walker, G. K.

    1990-01-01

A systematic assessment of the effects of a series of improvements in physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with the earlier slab soil hydrology GCM (SSH-GCM) simulations. In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. Perpetual July simulations with and without interactive soil moisture show that 30- to 40-day oscillations may be a natural mode of the simulated earth-atmosphere system.

  13. Exploring the potential of machine learning to break deadlock in convection parameterization

    NASA Astrophysics Data System (ADS)

    Pritchard, M. S.; Gentine, P.

    2017-12-01

We explore the potential of modern machine learning tools (via TensorFlow) to replace the parameterization of deep convection in climate models. Our strategy begins by generating a large (~1 Tb) training dataset from time-step-level (30-min) output harvested from a one-year integration of a zonally symmetric, uniform-SST aquaplanet configuration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables to afford 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of the desired outputs of SP, i.e. CRM-mean convective heating and moistening profiles. The sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation is discussed, as well as results from pilot tests of the neural network operating inline within SPCAM as a replacement for the (super)parameterization of convection.
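The emulation strategy can be caricatured in a few lines: fit a regression from coarse-grained inputs to target tendencies and watch the training loss fall. The sketch below uses a plain linear model trained by gradient descent on synthetic data, standing in for the authors' TensorFlow neural network and terabyte-scale SPCAM dataset (all sizes and the hidden linear map are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the training set: rows are coarse-grained thermodynamic
# profiles, targets are "CRM-mean heating" profiles generated from a hidden
# linear map plus noise. Purely illustrative, not SPCAM output.
n_samples, n_in, n_out = 512, 10, 10
X = rng.normal(size=(n_samples, n_in))
W_true = rng.normal(size=(n_in, n_out))
Y = X @ W_true + 0.01 * rng.normal(size=(n_samples, n_out))

# Single-layer "network" trained by gradient descent on mean-squared error.
W = np.zeros((n_in, n_out))
lr = 0.01
losses = []
for _ in range(200):
    pred = X @ W
    err = pred - Y
    losses.append(np.mean(err ** 2))
    W -= lr * (X.T @ err) / n_samples  # MSE gradient step
```

A real emulator replaces the linear map with a deep network and must then be tested inline in the host model, where feedback with the resolved dynamics can expose errors that offline loss curves never show.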

  14. A subdivision-based parametric deformable model for surface extraction and statistical shape modeling of the knee cartilages

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

    2006-03-01

Subdivision surfaces and parameterization are desirable for many algorithms that are commonly used in Medical Image Analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology-preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including: shape-based interpolation, subdivision remeshing, and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.

  15. Perspectives on using implicit type constitutive relations in the modelling of the behaviour of non-Newtonian fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janečka, Adam, E-mail: janecka@karlin.mff.cuni.cz; Průša, Vít, E-mail: prusv@karlin.mff.cuni.cz

    2015-04-28

We discuss the benefits of using the so-called implicit type constitutive relations introduced by K. R. Rajagopal, J. Fluid Mech. 550, 243-249 (2006) and K. R. Rajagopal, Appl. Math. 48, 279-319 (2003) in the description of the behaviour of non-Newtonian fluids. In particular, we focus on the benefits of using the implicit type constitutive relations in the mathematical modelling of fluids in which the shear stress/shear rate dependence is given by an S-shaped curve, and in modelling of fluids that exhibit nonzero normal stress differences. We also discuss a thermodynamical framework that allows one to cope with the implicit type constitutive relations.

  16. Generalized ocean color inversion model for retrieving marine inherent optical properties.

    PubMed

    Werdell, P Jeremy; Franz, Bryan A; Bailey, Sean W; Feldman, Gene C; Boss, Emmanuel; Brando, Vittorio E; Dowell, Mark; Hirata, Takafumi; Lavender, Samantha J; Lee, ZhongPing; Loisel, Hubert; Maritorena, Stéphane; Mélin, Fréderic; Moore, Timothy S; Smyth, Timothy J; Antoine, David; Devred, Emmanuel; d'Andon, Odile Hembise Fanton; Mangin, Antoine

    2013-04-01

Ocean color measured from satellites provides daily, global estimates of marine inherent optical properties (IOPs). Semi-analytical algorithms (SAAs) provide one mechanism for inverting the color of the water observed by the satellite into IOPs. While numerous SAAs exist, most are similarly constructed and few are appropriately parameterized for all water masses for all seasons. To initiate community-wide discussion of these limitations, NASA organized two workshops that deconstructed SAAs to identify similarities and uniqueness and to progress toward consensus on a unified SAA. This effort resulted in the development of the generalized IOP (GIOP) model software that allows for the construction of different SAAs at runtime by selection from an assortment of model parameterizations. As such, GIOP permits isolation and evaluation of specific modeling assumptions, construction of SAAs, development of regionally tuned SAAs, and execution of ensemble inversion modeling. Working groups associated with the workshops proposed a preliminary default configuration for GIOP (GIOP-DC), with alternative model parameterizations and features defined for subsequent evaluation. In this paper, we: (1) describe the theoretical basis of GIOP; (2) present GIOP-DC and verify its comparable performance to other popular SAAs using both in situ and synthetic data sets; and, (3) quantify the sensitivities of their output to their parameterization. We use the latter to develop a hierarchical sensitivity of SAAs to various model parameterizations, to identify components of SAAs that merit focus in future research, and to provide material for discussion on algorithm uncertainties and future ensemble applications.

  17. Generalized Ocean Color Inversion Model for Retrieving Marine Inherent Optical Properties

    NASA Technical Reports Server (NTRS)

Werdell, P. Jeremy; Franz, Bryan A.; Bailey, Sean W.; Feldman, Gene C.; Boss, Emmanuel; Brando, Vittorio E.; Dowell, Mark; Hirata, Takafumi; Lavender, Samantha J.; Lee, ZhongPing; et al.

    2013-01-01

    Ocean color measured from satellites provides daily, global estimates of marine inherent optical properties (IOPs). Semi-analytical algorithms (SAAs) provide one mechanism for inverting the color of the water observed by the satellite into IOPs. While numerous SAAs exist, most are similarly constructed and few are appropriately parameterized for all water masses for all seasons. To initiate community-wide discussion of these limitations, NASA organized two workshops that deconstructed SAAs to identify similarities and uniqueness and to progress toward consensus on a unified SAA. This effort resulted in the development of the generalized IOP (GIOP) model software that allows for the construction of different SAAs at runtime by selection from an assortment of model parameterizations. As such, GIOP permits isolation and evaluation of specific modeling assumptions, construction of SAAs, development of regionally tuned SAAs, and execution of ensemble inversion modeling. Working groups associated with the workshops proposed a preliminary default configuration for GIOP (GIOP-DC), with alternative model parameterizations and features defined for subsequent evaluation. In this paper, we: (1) describe the theoretical basis of GIOP; (2) present GIOP-DC and verify its comparable performance to other popular SAAs using both in situ and synthetic data sets; and, (3) quantify the sensitivities of their output to their parameterization. We use the latter to develop a hierarchical sensitivity of SAAs to various model parameterizations, to identify components of SAAs that merit focus in future research, and to provide material for discussion on algorithm uncertainties and future ensemble applications.

  18. Parameterization Interactions in Global Aquaplanet Simulations

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João.

    2018-02-01

    Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.

  19. Brain Surface Conformal Parameterization Using Riemann Surface Structure

    PubMed Central

    Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung

    2011-01-01

    In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
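
    The remark that PDEs become easy to solve on the flat parameter domain can be illustrated with the simplest such problem, Laplace's equation with Dirichlet boundary values on a square grid (a stand-in for one flattened patch; the grid size and boundary data below are arbitrary):

```python
def solve_dirichlet(n=16, top=1.0, iters=4000):
    """Solve Laplace's equation on an n x n grid with value `top` on the
    top edge and 0 on the other three edges, by Jacobi iteration."""
    u = [[0.0] * n for _ in range(n)]
    for j in range(n):
        u[0][j] = top  # fixed Dirichlet data on the top boundary
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # each interior value relaxes to the mean of its neighbors
                new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
        u = new
    return u
```

    By the maximum principle, interior values stay between the boundary extremes and decay smoothly away from the heated edge, mirroring the smooth, stable solutions the abstract describes.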

  20. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    PubMed

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  1. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
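
    The quantity being compared follows directly from the difference in solvation free energies in the two solvents. A minimal sketch of that conversion (the free energies passed in below are made-up illustrative numbers, not SAMPL5 results):

```python
import math

GAS_CONST = 1.987204e-3  # kcal/(mol*K)

def log_p(dg_water, dg_cyclohexane, temperature=298.15):
    """log10 partition coefficient (cyclohexane/water) from solvation
    free energies in kcal/mol (more negative = more favorable)."""
    dg_transfer = dg_cyclohexane - dg_water  # water -> cyclohexane
    return -dg_transfer / (GAS_CONST * temperature * math.log(10))
```

    A solute solvated more favorably in water than in cyclohexane gives a negative log P, i.e. it prefers the aqueous phase.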

  2. Radar and microphysical characteristics of convective storms simulated from a numerical model using a new microphysical parameterization

    NASA Technical Reports Server (NTRS)

    Ferrier, Brad S.; Tao, Wei-Kuo; Simpson, Joanne

    1991-01-01

    The basic features of a new and improved bulk-microphysical parameterization capable of simulating the hydrometeor structure of convective systems in all types of large-scale environments (with minimal adjustment of coefficients) are studied. Reflectivities simulated from the model are compared with radar observations of an intense midlatitude convective system. Simulated reflectivities from the novel four-class ice scheme, together with the parameterized rain distribution at 105 min, are illustrated. Preliminary results indicate that this new ice scheme works efficiently in simulating midlatitude continental storms.

  3. Comparison of different objective functions for parameterization of simple respiration models

    Treesearch

    M.T. van Wijk; B. van Putten; D.Y. Hollinger; A.D. Richardson

    2008-01-01

    The eddy covariance measurements of carbon dioxide fluxes collected around the world offer a rich source for detailed data analysis. Simple, aggregated models are attractive tools for gap filling, budget calculation, and upscaling in space and time. Key in the application of these models is their parameterization and a robust estimate of the uncertainty and reliability...

  4. Implicit and explicit attitudes towards conventional and complementary and alternative medicine treatments: Introduction of an Implicit Association Test.

    PubMed

    Green, James A; Hohmann, Cynthia; Lister, Kelsi; Albertyn, Riani; Bradshaw, Renee; Johnson, Christine

    2016-06-01

    This study examined associations between anticipated future health behaviour and participants' attitudes. Three Implicit Association Tests were developed to assess safety, efficacy and overall attitude. They were used to examine preference associations between conventional versus complementary and alternative medicine among 186 participants. A structural equation model suggested only a single implicit association, rather than three separate domains. However, this single implicit association predicted additional variance in anticipated future use of complementary and alternative medicine beyond explicit attitudes. Implicit measures should give further insight into motivation for complementary and alternative medicine use.

  5. Highly parameterized model calibration with cloud computing: an example of regional flow model calibration in northeast Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.

    2014-05-01

    Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.
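
    The pattern behind PEST-style parallelization is many independent forward-model runs per iteration, farmed out to workers. A hedged sketch with a thread pool standing in for the EC2 nodes; the linear "forward model" and the observations are placeholders, not the Alberta model:

```python
from concurrent.futures import ThreadPoolExecutor

# hypothetical (x, observed head) pairs for a toy linear forward model
OBS = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9)]

def objective(params):
    """Sum of squared residuals between observations and head = p0 + p1*x."""
    p0, p1 = params
    return sum((h - (p0 + p1 * x)) ** 2 for x, h in OBS)

def evaluate_candidates(param_sets, workers=4):
    """Evaluate many candidate parameter sets concurrently, as a
    calibration tool does when distributing runs across compute nodes."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(objective, param_sets))
```

    In a real setup each evaluation is a full model run, so wall-clock time scales roughly with (runs per iteration) / (number of nodes), which is why hundreds of nodes become worthwhile for 450 parameters.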

  6. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
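
    The Bayesian machinery behind such a scheme can be sketched with a Metropolis sampler constraining the parameters of a hypothetical power-law process rate r = a*q^b against observations. All numbers and the power-law form are illustrative, not from BOSS:

```python
import math
import random

def log_posterior(theta, obs, sigma=0.1):
    """Gaussian log-likelihood of observed rates under r = a * q**b,
    with a flat prior restricted to a > 0 (a hypothetical choice)."""
    a, b = theta
    if a <= 0.0:
        return -math.inf
    ll = 0.0
    for q, r in obs:
        ll -= 0.5 * ((r - a * q ** b) / sigma) ** 2
    return ll

def metropolis(obs, n_steps=2000, step=0.05, seed=0):
    """Random-walk Metropolis sampling of (a, b)."""
    rng = random.Random(seed)
    theta = (1.0, 1.0)
    lp = log_posterior(theta, obs)
    chain = []
    for _ in range(n_steps):
        prop = (theta[0] + rng.gauss(0.0, step),
                theta[1] + rng.gauss(0.0, step))
        lp_prop = log_posterior(prop, obs)
        # accept with probability min(1, exp(lp_prop - lp))
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain
```

    In BOSS the same idea is applied jointly to parameter values and to discrete structural choices (e.g. number of predicted moments), which is what lets both kinds of uncertainty be constrained at once.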

  7. Climate and the equilibrium state of land surface hydrology parameterizations

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation for such climatic conditions serves as a measure of the land surface parameterization state under a given forcing. The equilibrium value of this variable for alternate parameterizations of land surface hydrology are determined as a function of climate and the sensitivity of the surface to shifts and changes in climatic forcing are estimated.
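
    The equilibrium idea can be made concrete with a toy bucket model in which evaporation scales linearly with relative saturation s and runoff removes a fraction s^c of precipitation, so equilibrium solves P = Ep*s + P*s^c. The functional forms are illustrative, not those of any particular GCM land scheme:

```python
def equilibrium_saturation(precip, pot_evap, c=4.0, tol=1e-8):
    """Equilibrium relative soil saturation s* in [0, 1] solving
    precip = pot_evap * s + precip * s**c by bisection."""
    def imbalance(s):
        # positive when moisture input exceeds output at saturation s
        return precip - pot_evap * s - precip * s ** c
    lo, hi = 0.0, 1.0  # imbalance(0) > 0 and imbalance(1) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    As in the abstract, the equilibrium saturation shifts with the climatic forcing: raising potential evaporation relative to precipitation dries the equilibrium state.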

  8. Age effects on explicit and implicit memory

    PubMed Central

    Ward, Emma V.; Berry, Christopher J.; Shanks, David R.

    2013-01-01

    It is well-documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favors the single-system view. Implications for the memory systems debate are discussed. PMID:24065942

  9. Explicit and Implicit Processes Constitute the Fast and Slow Processes of Sensorimotor Learning.

    PubMed

    McDougle, Samuel D; Bond, Krista M; Taylor, Jordan A

    2015-07-01

    A popular model of human sensorimotor learning suggests that a fast process and a slow process work in parallel to produce the canonical learning curve (Smith et al., 2006). Recent evidence supports the subdivision of sensorimotor learning into explicit and implicit processes that simultaneously subserve task performance (Taylor et al., 2014). We set out to test whether these two accounts of learning processes are homologous. Using a recently developed method to assay explicit and implicit learning directly in a sensorimotor task, along with a computational modeling analysis, we show that the fast process closely resembles explicit learning and the slow process approximates implicit learning. In addition, we provide evidence for a subdivision of the slow/implicit process into distinct manifestations of motor memory. We conclude that the two-state model of motor learning is a close approximation of sensorimotor learning, but it is unable to describe adequately the various implicit learning operations that forge the learning curve. Our results suggest that a wider net be cast in the search for the putative psychological mechanisms and neural substrates underlying the multiplicity of processes involved in motor learning.

  10. Explicit and Implicit Processes Constitute the Fast and Slow Processes of Sensorimotor Learning

    PubMed Central

    Bond, Krista M.; Taylor, Jordan A.

    2015-01-01

    A popular model of human sensorimotor learning suggests that a fast process and a slow process work in parallel to produce the canonical learning curve (Smith et al., 2006). Recent evidence supports the subdivision of sensorimotor learning into explicit and implicit processes that simultaneously subserve task performance (Taylor et al., 2014). We set out to test whether these two accounts of learning processes are homologous. Using a recently developed method to assay explicit and implicit learning directly in a sensorimotor task, along with a computational modeling analysis, we show that the fast process closely resembles explicit learning and the slow process approximates implicit learning. In addition, we provide evidence for a subdivision of the slow/implicit process into distinct manifestations of motor memory. We conclude that the two-state model of motor learning is a close approximation of sensorimotor learning, but it is unable to describe adequately the various implicit learning operations that forge the learning curve. Our results suggest that a wider net be cast in the search for the putative psychological mechanisms and neural substrates underlying the multiplicity of processes involved in motor learning. PMID:26134640
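
    The two-state model referenced above (Smith et al., 2006) is compact enough to state directly: each state keeps a retained fraction A of its value and learns from the residual error at rate B, with the fast process forgetting quickly (low A_f) and learning quickly (high B_f). The parameter values below are illustrative numbers in the published ballpark, not fits to any dataset:

```python
def two_state_learning(n_trials=100, perturbation=1.0,
                       A_f=0.59, B_f=0.21, A_s=0.992, B_s=0.02):
    """Simulate net adaptation to a constant perturbation under the
    two-state (fast + slow) model of sensorimotor learning."""
    x_f = x_s = 0.0
    adaptation = []
    for _ in range(n_trials):
        x = x_f + x_s              # net adaptation expressed this trial
        error = perturbation - x   # residual error drives both states
        x_f = A_f * x_f + B_f * error
        x_s = A_s * x_s + B_s * error
        adaptation.append(x)
    return adaptation
```

    Early trials are dominated by the fast state and later retention by the slow state, producing the canonical saturating learning curve the abstracts compare against explicit and implicit processes.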

  11. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for carbon cycle studies

    USGS Publications Warehouse

    He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be potentially representative of the performance of species-level simulations while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
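
    The cover-weighted lifting of species parameters to the PFT or biome level can be sketched directly (species names, values, and cover percentages below are hypothetical):

```python
def aggregate_parameter(species_values, percent_cover):
    """Cover-weighted mean of a species-level parameter, as used to
    build PFT- or biome-level parameterizations."""
    total = sum(percent_cover[s] for s in species_values)
    return sum(species_values[s] * percent_cover[s]
               for s in species_values) / total
```

    A dominant species' value swamps the weighted mean, which is exactly the information-loss effect the abstract describes for minority species within a PFT or biome.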

  12. Evaluation of Extratropical Cyclone Precipitation in the North Atlantic Basin: An analysis of ERA-Interim, WRF, and two CMIP5 models.

    PubMed

    Booth, James F; Naud, Catherine M; Willison, Jeff

    2018-03-01

    The representation of extratropical cyclone (ETC) precipitation in general circulation models (GCMs) and a Weather Research and Forecasting (WRF) model is analyzed. This work considers the link between ETC precipitation and dynamical strength and tests if parameterized convection affects this link for ETCs in the North Atlantic Basin. Lagrangian cyclone tracks of ETCs in ERA-Interim reanalysis (ERAI), the GISS and GFDL CMIP5 models, and WRF with two horizontal resolutions are utilized in a compositing analysis. The 20-km resolution WRF model generates stronger ETCs based on surface wind speed and cyclone precipitation. The GCMs and ERAI generate similar composite means and distributions for cyclone precipitation rates, but GCMs generate weaker cyclone surface winds than ERAI. The amount of cyclone precipitation generated by the convection scheme differs significantly across the datasets, with GISS generating the most, followed by ERAI and then GFDL. The models and reanalysis generate relatively more parameterized convective precipitation when the total cyclone-averaged precipitation is smaller. This is partially due to the contribution of parameterized convective precipitation occurring more often late in the ETC life cycle. For reanalysis and models, precipitation increases with both cyclone moisture and surface wind speed, and this is true if the contribution from the parameterized convection scheme is larger or not. This work shows that these different models generate similar total ETC precipitation despite large differences in the parameterized convection, and these differences do not cause unexpected behavior in ETC precipitation sensitivity to cyclone moisture or surface wind speed.

  13. A cloudy planetary boundary layer oscillation arising from the coupling of turbulence with precipitation in climate simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, X.; Klein, S. A.; Ma, H. -Y.

    The Community Atmosphere Model (CAM) adopts the Cloud Layers Unified By Binormals (CLUBB) scheme and an updated microphysics (MG2) scheme for a more unified treatment of cloud processes. This makes interactions between parameterizations tighter and more explicit. In this study, a cloudy planetary boundary layer (PBL) oscillation related to the interaction between CLUBB and MG2 is identified in CAM. This highlights the need for consistency between the coupled subgrid processes in climate model development. This oscillation occurs most often in the marine cumulus cloud regime. The oscillation occurs only if the modeled PBL is strongly decoupled and precipitation evaporates below the cloud. Two aspects of the parameterized coupling assumptions between CLUBB and MG2 schemes cause the oscillation: (1) a parameterized relationship between rain evaporation and CLUBB's subgrid spatial variance of moisture and heat that induces an extra cooling in the lower PBL and (2) rain evaporation which happens at too low an altitude because of the precipitation fraction parameterization in MG2. Either one of these two conditions can overly stabilize the PBL and reduce the upward moisture transport to the cloud layer so that the PBL collapses. Global simulations prove that turning off the evaporation-variance coupling and improving the precipitation fraction parameterization effectively reduces the cloudy PBL oscillation in marine cumulus clouds. By evaluating the causes of the oscillation in CAM, we have identified the PBL processes that should be examined in models having similar oscillations. This study may draw the attention of the modeling and observational communities to the issue of coupling between parameterized physical processes.

  14. A cloudy planetary boundary layer oscillation arising from the coupling of turbulence with precipitation in climate simulations

    DOE PAGES

    Zheng, X.; Klein, S. A.; Ma, H. -Y.; ...

    2017-08-24

    The Community Atmosphere Model (CAM) adopts the Cloud Layers Unified By Binormals (CLUBB) scheme and an updated microphysics (MG2) scheme for a more unified treatment of cloud processes. This makes interactions between parameterizations tighter and more explicit. In this study, a cloudy planetary boundary layer (PBL) oscillation related to the interaction between CLUBB and MG2 is identified in CAM. This highlights the need for consistency between the coupled subgrid processes in climate model development. This oscillation occurs most often in the marine cumulus cloud regime. The oscillation occurs only if the modeled PBL is strongly decoupled and precipitation evaporates below the cloud. Two aspects of the parameterized coupling assumptions between CLUBB and MG2 schemes cause the oscillation: (1) a parameterized relationship between rain evaporation and CLUBB's subgrid spatial variance of moisture and heat that induces an extra cooling in the lower PBL and (2) rain evaporation which happens at too low an altitude because of the precipitation fraction parameterization in MG2. Either one of these two conditions can overly stabilize the PBL and reduce the upward moisture transport to the cloud layer so that the PBL collapses. Global simulations prove that turning off the evaporation-variance coupling and improving the precipitation fraction parameterization effectively reduces the cloudy PBL oscillation in marine cumulus clouds. By evaluating the causes of the oscillation in CAM, we have identified the PBL processes that should be examined in models having similar oscillations. This study may draw the attention of the modeling and observational communities to the issue of coupling between parameterized physical processes.

  15. Medical School Experiences Associated with Change in Implicit Racial Bias Among 3547 Students: A Medical Student CHANGES Study Report.

    PubMed

    van Ryn, Michelle; Hardeman, Rachel; Phelan, Sean M; Burgess, Diana J; Dovidio, John F; Herrin, Jeph; Burke, Sara E; Nelson, David B; Perry, Sylvia; Yeazel, Mark; Przedworski, Julia M

    2015-12-01

    Physician implicit (unconscious, automatic) bias has been shown to contribute to racial disparities in medical care. The impact of medical education on implicit racial bias is unknown. To examine the association between change in student implicit racial bias towards African Americans and student reports on their experiences with 1) formal curricula related to disparities in health and health care, cultural competence, and/or minority health; 2) informal curricula including racial climate and role model behavior; and 3) the amount and favorability of interracial contact during school. Prospective observational study involving Web-based questionnaires administered during first (2010) and last (2014) semesters of medical school. A total of 3547 students from a stratified random sample of 49 U.S. medical schools. Change in implicit racial attitudes as assessed by the Black-White Implicit Association Test administered during the first semester and again during the last semester of medical school. In multivariable modeling, having completed the Black-White Implicit Association Test during medical school remained a statistically significant predictor of decreased implicit racial bias (-5.34, p ≤ 0.001: mixed effects regression with random intercept across schools). Students' self-assessed skills regarding providing care to African American patients had a borderline association with decreased implicit racial bias (-2.18, p = 0.056). Having heard negative comments from attending physicians or residents about African American patients (3.17, p = 0.026) and having had unfavorable vs. very favorable contact with African American physicians (18.79, p = 0.003) were statistically significant predictors of increased implicit racial bias. Medical school experiences in all three domains were independently associated with change in student implicit racial attitudes. These findings are notable given that even small differences in implicit racial attitudes have been shown to affect behavior and that implicit attitudes are developed over a long period of repeated exposure and are difficult to change.

  16. Modeling particle nucleation and growth over northern California during the 2010 CARES campaign

    NASA Astrophysics Data System (ADS)

    Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.

    2015-11-01

    Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the Weather Research and Forecasting coupled with chemistry (WRF-Chem) regional model using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4, while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated by up to a factor of 2.5 the total particle number concentration for particle diameters greater than 10 nm. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapor parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based on only H2SO4. At the CARES urban ground site, peak nucleation rates are predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times as well as lower boundary-layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. We found that newly formed particles could explain up to 20-30 % of predicted cloud condensation nuclei at 0.5 % supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20 size bin simulation, while reducing the associated computational cost by ~ 36 %.
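
    Empirical nucleation parameterizations of the kind compared here are typically one-line functions of precursor gas concentrations: activation-type (linear in H2SO4), kinetic-type (quadratic in H2SO4), and a combined H2SO4-organic form. A hedged sketch; the coefficient values are placeholders and this is not the WRF-Chem implementation:

```python
def nucleation_rate(h2so4, org=0.0, scheme="activation",
                    A=2.0e-6, K=1.0e-12, k_org=1.0e-13):
    """New-particle formation rate J (cm^-3 s^-1) from gas-phase
    concentrations (cm^-3) under three empirical functional forms."""
    if scheme == "activation":   # J = A * [H2SO4]
        return A * h2so4
    if scheme == "kinetic":      # J = K * [H2SO4]^2
        return K * h2so4 ** 2
    if scheme == "organic":      # J = k_org * [H2SO4] * [org]
        return k_org * h2so4 * org
    raise ValueError("unknown scheme: " + scheme)
```

    The organic form's dependence on a second precursor is what shifts its diurnal peak toward times of high organic emissions, the behavior the abstract reports at 08:00 and 16:00 PST.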

  17. Components of Implicit Stigma against Mental Illness among Chinese Students

    PubMed Central

    Wang, Xiaogang; Huang, Xiting; Jackson, Todd; Chen, Ruijun

    2012-01-01

    Although some research has examined negative automatic aspects of attitudes toward mental illness via relatively indirect measures among Western samples, it is unclear whether negative attitudes can be automatically activated in individuals from non-Western countries. This study attempted to validate results from Western samples with Chinese college students. We first examined the three-component model of implicit stigma (negative cognition, negative affect, and discriminatory tendencies) toward mental illness with the Single Category Implicit Association Test (SC-IAT). We also explored the relationship between explicit and implicit stigma among 56 Chinese college students. In the three separate SC-IATs and the combined SC-IAT, automatic associations between mental illness and negative descriptors were stronger than those with positive descriptors, and the implicit effects of the cognitive and affective SC-IATs were significant. Explicit and implicit measures of stigma toward mental illness were unrelated. In our sample, women's overall attitudes toward mental illness were more negative than men's, but no gender differences were found for explicit measures. These findings suggest that implicit stigma toward mental illness exists in Chinese students and provide some support for the three-component model of implicit stigma toward mental illness. Future studies that focus on automatic components of stigmatization and stigma reduction in China are warranted. PMID:23029366

  18. Modeling the interplay between sea ice formation and the oceanic mixed layer: Limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-02-01

    The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.
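The essential mechanics of such a brine rejection parameterization are simple: instead of releasing all rejected salt into the top ocean cell, the flux is spread over the mixed layer with a depth-increasing weight. A minimal sketch (the power-law weighting exponent n is a hypothetical choice for illustration; the profile actually used in NEMO-LIM3 may differ):

```python
import numpy as np

def distribute_brine(salt_flux, dz, mld, n=5.0):
    """Distribute a surface salt flux over the mixed layer.

    salt_flux : rejected-salt flux at the surface (e.g. kg m^-2 s^-1)
    dz        : array of layer thicknesses (m), surface first
    mld       : mixed layer depth (m)
    n         : shape exponent of the weighting profile w(z) ~ z^n
    Returns per-layer salinity-source terms whose vertical integral
    equals the surface flux (salt is conserved).
    """
    z_bot = np.cumsum(dz)
    z_mid = z_bot - 0.5 * dz
    w = np.where(z_mid < mld, z_mid ** n, 0.0)   # zero below the mixed layer
    w /= w.sum()                                  # normalize: conserve total salt
    return salt_flux * w / dz                     # tendency per unit thickness

dz = np.full(10, 10.0)                   # ten 10 m layers
tend = distribute_brine(1e-5, dz, mld=50.0)
```

With the surface-release scheme the whole flux enters layer 0 and triggers grid-scale convection; with the distributed scheme most of the salt is placed near the mixed layer base, mimicking the plume penetration that occurs below leads.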

  19. Modelling the interplay between sea ice formation and the oceanic mixed layer: limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-04-01

    The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.

  20. Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model

    NASA Astrophysics Data System (ADS)

    Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.

    2017-10-01

    We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve properties analogous to those of the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
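The tridiagonal systems produced by each ADI half-step can be solved in O(N) with the Thomas algorithm; a minimal sketch applied to an implicit 1D diffusion step (an illustration of the linear-algebra kernel, not the authors' implementation):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (all length n; a[0] and
    c[-1] are unused). Forward elimination, then back substitution."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# One implicit diffusion half-step: (I - r*D2) u_new = u_old, r = dt/dx^2
n, r = 50, 10.0
a = np.full(n, -r)
b = np.full(n, 1.0 + 2.0 * r)
c = np.full(n, -r)
u_old = np.sin(0.1 * np.arange(n))
u_new = thomas(a, b, c, u_old)
```

Because each sweep is linear in the number of unknowns, large implicit time steps cost little more than explicit ones, which is what makes the ADI splitting attractive.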

  1. Conditioning 3D object-based models to dense well data

    NASA Astrophysics Data System (ADS)

    Wang, Yimin C.; Pyrcz, Michael J.; Catuneanu, Octavian; Boisvert, Jeff B.

    2018-06-01

    Object-based stochastic simulation models are used to generate categorical variable models with a realistic representation of complicated reservoir heterogeneity. A limitation of object-based modeling is the difficulty of conditioning to dense data. One method to achieve data conditioning is to apply optimization techniques. Optimization algorithms can utilize an objective function measuring the conditioning level of each object while also considering the geological realism of the object. Here, an objective function is optimized with implicit filtering which considers constraints on object parameters. Thousands of objects conditioned to data are generated and stored in a database. A set of objects are selected with linear integer programming to generate the final realization and honor all well data, proportions and other desirable geological features. Although any parameterizable object can be considered, objects from fluvial reservoirs are used to illustrate the ability to simultaneously condition multiple types of geologic features. Channels, levees, crevasse splays and oxbow lakes are parameterized based on location, path, orientation and profile shapes. Functions mimicking natural river sinuosity are used for the centerline model. Channel stacking pattern constraints are also included to enhance the geological realism of object interactions. Spatial layout correlations between different types of objects are modeled. Three case studies demonstrate the flexibility of the proposed optimization-simulation method. These examples include multiple channels with high sinuosity, as well as fragmented channels affected by limited preservation. In all cases the proposed method reproduces input parameters for the object geometries and matches the dense well constraints. The proposed methodology expands the applicability of object-based simulation to complex and heterogeneous geological environments with dense sampling.
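Channel centerlines "mimicking natural river sinuosity" are often built from the sine-generated curve of meander theory, in which the direction angle varies sinusoidally along arc length; a minimal sketch (parameter values are illustrative, and this is one plausible choice of centerline function rather than the authors' specific one):

```python
import numpy as np

def sine_generated_centerline(n_pts=200, omega=1.9, wavelength=100.0, ds=1.0):
    """Sine-generated curve: direction angle theta(s) = omega * sin(2*pi*s/L).
    omega (radians) controls sinuosity: larger omega -> more sinuous channel.
    Returns x, y coordinates of the centerline sampled every ds along arc length."""
    s = np.arange(n_pts) * ds
    theta = omega * np.sin(2.0 * np.pi * s / wavelength)
    x = np.cumsum(ds * np.cos(theta))
    y = np.cumsum(ds * np.sin(theta))
    return x, y

x, y = sine_generated_centerline()
# Sinuosity = arc length / straight-line (chord) distance between endpoints
arc = (len(x) - 1) * 1.0
chord = np.hypot(x[-1] - x[0], y[-1] - y[0])
sinuosity = arc / chord
```

During conditioning, parameters such as omega, wavelength, and the channel origin become the adjustable quantities that the implicit-filtering optimizer perturbs until the object honors the well data.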

  2. Sulfur Solubility In Silicate Melts: A Thermochemical Model

    NASA Astrophysics Data System (ADS)

    Moretti, R.; Ottonello, G.

    A thermochemical model for calculating the sulfur solubility of simple and complex silicate melts has been developed in the framework of the Toop-Samis polymeric approach combined with a Flood-Grjotheim theoretical treatment of silicate slags [1,2]. The model allows one to compute the sulfide and sulfate content of silicate melts whenever the fugacity of gaseous sulfur is provided. "Electrically equivalent ion fractions" are needed to weigh the contributions of the various disproportionation reactions of the type: MO(melt) + 1/2 S2(gas) = MS(melt) + 1/2 O2(gas) (1) and MO(melt) + 1/2 S2(gas) + 3/2 O2(gas) = MSO4(melt) (2). Equations 1 and 2 account for the oxide-sulfide and oxide-sulfate disproportionation in silicate melts. Electrically equivalent ion fractions are computed, in a fused salt Temkin notation, over the appropriate matrices (anionic and cationic). The extent of these matrices is calculated in the framework of a polymeric model previously developed [1,2,3] and based on a parameterization of the acid-base properties of melts. No adjustable parameters are used, and model activities follow the Raoultian behavior implicit in the ion matrix solution of the Temkin notation. The model is based on a large body of data available in the literature and displays a high heuristic capability with virtually no compositional limits, as long as the structural role assigned to each oxide holds. REFERENCES: [1] Ottonello G., Moretti R., Marini L. and Vetuschi Zuccolini M. (2001), Chem. Geol., 174, 157-179. [2] Moretti R. (2002) PhD Thesis, University of Pisa. [3] Ottonello G. (2001) J. Non-Cryst. Solids, 282, 72-85.

  3. Microbial processes in marine ecosystem models: state of the art and future prospective

    NASA Astrophysics Data System (ADS)

    Polimene, L.; Butenschon, M.; Blackford, J.; Allen, I.

    2012-12-01

    Heterotrophic bacteria play a key role in marine biogeochemistry as the main consumers of dissolved organic matter (DOM) and the main producers of carbon dioxide (CO2) through respiration. Quantifying the carbon and energy fluxes within bacteria (i.e., production, respiration, overflow metabolism, etc.) is therefore crucial for the assessment of the global ocean carbon and nutrient cycles. Consequently, the description of bacterial dynamics in ecosystem models is a key (although challenging) issue that cannot be overlooked if we want to properly simulate the marine environment. We present an overview of the microbial processes described in the European Regional Seas Ecosystem Model (ERSEM), a state-of-the-art biogeochemical model resolving carbon and nutrient cycles (N, P, Si and Fe) within the lower trophic levels (up to mesozooplankton) of the marine ecosystem. The description of the theoretical assumptions and philosophy underpinning the ERSEM bacteria sub-model will be followed by the presentation of some case studies highlighting the relevance of resolving microbial processes in the simulation of ecosystem dynamics at a local scale. Recent results concerning the implementation of ERSEM on a global ocean domain will also be presented. This latter exercise includes a comparison between simulations carried out with the full bacteria sub-model and simulations carried out with an implicit parameterization of bacterial activity. The results strongly underline the importance of explicitly resolved bacteria in the simulation of global carbon fluxes. Finally, a summary of future developments, along with issues still open on the topic, will be presented and discussed.

  4. ARM - Midlatitude Continental Convective Clouds

    DOE Data Explorer

    Jensen, Mike; Bartholomew, Mary Jane; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos

    2012-01-19

    Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital to improving current and future simulations of Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and application to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have traditionally been used by modelers for evaluating and improving parameterization schemes.

  5. ARM - Midlatitude Continental Convective Clouds (comstock-hvps)

    DOE Data Explorer

    Jensen, Mike; Comstock, Jennifer; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos

    2012-01-06

    Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital to improving current and future simulations of Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and application to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have traditionally been used by modelers for evaluating and improving parameterization schemes.

  6. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations

    PubMed Central

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2014-01-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques that aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios. PMID:24729986
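The iterative idea can be sketched using the fact that, in cable-type models, conduction velocity scales roughly with the square root of bulk conductivity, suggesting the fixed-point update sigma <- sigma * (v_target / v_simulated)^2. A minimal sketch with a stand-in simulator (the power-law `simulate_velocity` is a hypothetical surrogate; in practice each evaluation would run a monodomain/bidomain simulation):

```python
def simulate_velocity(sigma, k=0.06):
    """Stand-in for a full tissue simulation: assumes v = k * sqrt(sigma),
    with v in m/s and sigma a bulk conductivity. k is illustrative."""
    return k * sigma ** 0.5

def tune_conductivity(v_target, sigma=1.0, tol=1e-6, max_iter=50):
    """Iteratively adjust bulk conductivity until the simulated
    conduction velocity matches the prescribed target."""
    for _ in range(max_iter):
        v = simulate_velocity(sigma)
        if abs(v - v_target) < tol:
            break
        sigma *= (v_target / v) ** 2   # exploit v ~ sqrt(sigma)
    return sigma

sigma_fit = tune_conductivity(v_target=0.05)
```

Once the bulk values are fixed, prescribed anisotropy ratios split them into the individual intracellular and extracellular bidomain conductivities, as the abstract describes.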

  7. A method for coupling a parameterization of the planetary boundary layer with a hydrologic model

    NASA Technical Reports Server (NTRS)

    Lin, J. D.; Sun, Shu Fen

    1986-01-01

    Deardorff's parameterization of the planetary boundary layer is adapted to drive a hydrologic model. The method converts the atmospheric conditions measured at the anemometer height at one site to the mean values in the planetary boundary layer; it then uses the planetary boundary layer parameterization and the hydrologic variables to calculate the fluxes of momentum, heat and moisture at the atmosphere-land interface for a different site. A simplified hydrologic model is used for a simulation study of soil moisture and ground temperature on three different land surface covers. The results indicate that this method can be used to drive a spatially distributed hydrologic model by using observed data available at a meteorological station located on or nearby the site.
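The atmosphere-land coupling described here ultimately reduces to bulk-aerodynamic formulas for the interface fluxes; a minimal sketch of the sensible and latent heat flux calculation (the transfer coefficients are illustrative round numbers, not Deardorff's values):

```python
RHO_AIR = 1.2     # air density, kg m^-3
CP_AIR = 1004.0   # specific heat of air at constant pressure, J kg^-1 K^-1
LV = 2.5e6        # latent heat of vaporization, J kg^-1

def surface_fluxes(u_mean, t_air, t_surf, q_air, q_surf,
                   c_h=1.5e-3, c_e=1.5e-3):
    """Bulk-aerodynamic sensible (H) and latent (LE) heat fluxes, W m^-2.
    u_mean: mean wind speed (m/s); t_*: temperatures (K);
    q_*: specific humidities (kg/kg). Positive upward."""
    h = RHO_AIR * CP_AIR * c_h * u_mean * (t_surf - t_air)
    le = RHO_AIR * LV * c_e * u_mean * (q_surf - q_air)
    return h, le

h, le = surface_fluxes(u_mean=5.0, t_air=293.0, t_surf=298.0,
                       q_air=0.008, q_surf=0.012)
```

In the coupled scheme, the hydrologic model supplies the surface temperature and humidity, while the boundary layer parameterization supplies the mean wind and air-state values that replace single-site anemometer measurements.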

  8. Mechanisms controlling primary and new production in a global ecosystem model - Part I: Validation of the biological simulation

    NASA Astrophysics Data System (ADS)

    Popova, E. E.; Coward, A. C.; Nurser, G. A.; de Cuevas, B.; Fasham, M. J. R.; Anderson, T. R.

    2006-12-01

    A global general circulation model coupled to a simple six-compartment ecosystem model is used to study the extent to which global variability in primary and export production can be realistically predicted on the basis of advanced parameterizations of upper mixed layer physics, without recourse to introducing extra complexity in model biology. The "K profile parameterization" (KPP) scheme employed, combined with 6-hourly external forcing, is able to capture short-term periodic and episodic events such as diurnal cycling and storm-induced deepening. The model realistically reproduces various features of global ecosystem dynamics that have been problematic in previous global modelling studies, using a single generic parameter set. The realistic simulation of deep convection in the North Atlantic, and lack of it in the North Pacific and Southern Oceans, leads to good predictions of chlorophyll and primary production in these contrasting areas. Realistic levels of primary production are predicted in the oligotrophic gyres due to high frequency external forcing of the upper mixed layer (accompanying paper Popova et al., 2006) and novel parameterizations of zooplankton excretion. Good agreement is shown between model and observations at various JGOFS time series sites: BATS, KERFIX, Papa and HOT. One exception is the northern North Atlantic where lower grazing rates are needed, perhaps related to the dominance of mesozooplankton there. The model is therefore not globally robust in the sense that additional parameterizations are needed to realistically simulate ecosystem dynamics in the North Atlantic. Nevertheless, the work emphasises the need to pay particular attention to the parameterization of mixed layer physics in global ocean ecosystem modelling as a prerequisite to increasing the complexity of ecosystem models.

  9. Variational Implicit Solvation with Solute Molecular Mechanics: From Diffuse-Interface to Sharp-Interface Models.

    PubMed

    Li, Bo; Zhao, Yanxiang

    2013-01-01

    Central in a variational implicit-solvent description of biomolecular solvation is an effective free-energy functional of the solute atomic positions and the solute-solvent interface (i.e., the dielectric boundary). The free-energy functional couples together the solute molecular mechanical interaction energy, the solute-solvent interfacial energy, the solute-solvent van der Waals interaction energy, and the electrostatic energy. In recent years, the sharp-interface version of the variational implicit-solvent model has been developed and used for numerical computations of molecular solvation. In this work, we propose a diffuse-interface version of the variational implicit-solvent model with solute molecular mechanics. We also analyze both the sharp-interface and diffuse-interface models. We prove the existence of free-energy minimizers and obtain their bounds. We also prove the convergence of the diffuse-interface model to the sharp-interface model in the sense of Γ-convergence. We further discuss properties of sharp-interface free-energy minimizers, the boundary conditions and the coupling of the Poisson-Boltzmann equation in the diffuse-interface model, and the convergence of forces from diffuse-interface to sharp-interface descriptions. Our analysis relies on previous work on the problem of minimizing surface area and on our observations on the coupling between solute molecular mechanical interactions and the continuum solvent. Our studies rigorously justify the self-consistency of the proposed diffuse-interface variational models of implicit solvation.
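The coupling of the four energy terms can be written schematically as a single functional of the solute positions x and the interface Γ (a sketch in generic notation, not the authors' exact formulation):

```latex
G[\Gamma, \mathbf{x}] \;=\; E_{\mathrm{mm}}(\mathbf{x})
  \;+\; \gamma_0 \,\mathrm{Area}(\Gamma)
  \;+\; \rho_{\mathrm{w}} \int_{\Omega_{\mathrm{w}}(\Gamma)} U_{\mathrm{vdW}}(\mathbf{x}, \mathbf{r})\,\mathrm{d}\mathbf{r}
  \;+\; E_{\mathrm{elec}}[\Gamma, \mathbf{x}]
```

where the terms are, in order, the solute molecular mechanical energy, the interfacial energy (surface tension times area), the solute-solvent van der Waals energy integrated over the solvent region, and the electrostatic energy. In the diffuse-interface version, the sharp boundary Γ is replaced by a smooth phase field, and Γ-convergence links the two descriptions.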

  10. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-01-01

    Studies aimed at increasing the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.
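The stiffness issue driving the implicit-over-explicit preference can be seen in the scalar test equation dT/dt = -λT: forward (explicit) Euler is stable only for Δt < 2/λ, while backward (implicit) Euler is unconditionally stable. A minimal sketch:

```python
def forward_euler(lam, dt, t_end, T0=1.0):
    """Explicit update T_{n+1} = (1 - lam*dt) * T_n for dT/dt = -lam*T.
    Unstable (|1 - lam*dt| > 1) once dt exceeds 2/lam."""
    T, t = T0, 0.0
    while t < t_end - 1e-12:
        T *= (1.0 - lam * dt)
        t += dt
    return T

def backward_euler(lam, dt, t_end, T0=1.0):
    """Implicit update T_{n+1} = T_n / (1 + lam*dt): the amplification
    factor is always below 1, so any dt > 0 is stable."""
    T, t = T0, 0.0
    while t < t_end - 1e-12:
        T /= (1.0 + lam * dt)
        t += dt
    return T

lam = 100.0   # stiff decay rate (fast thermal mode)
dt = 0.05     # exceeds the explicit stability limit 2/lam = 0.02
explicit = forward_euler(lam, dt, 1.0)    # diverges
implicit = backward_euler(lam, dt, 1.0)   # decays toward zero
```

Insulated metal structures combine very fast modes (thin conductive skins) with slow ones (insulation), which is exactly the regime where explicit steps become prohibitively small and variable-step implicit integrators like GEARIB pay off.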

  11. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-08-01

    Studies aimed at increasing the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.

  12. Implicit and explicit ethnocentrism: revisiting the ideologies of prejudice.

    PubMed

    Cunningham, William A; Nezlek, John B; Banaji, Mahzarin R

    2004-10-01

    Two studies investigated relationships among individual differences in implicit and explicit prejudice, right-wing ideology, and rigidity in thinking. The first study examined these relationships focusing on White Americans' prejudice toward Black Americans. The second study provided the first test of implicit ethnocentrism and its relationship to explicit ethnocentrism by studying the relationship between attitudes toward five social groups. Factor analyses found support for both implicit and explicit ethnocentrism. In both studies, mean explicit attitudes toward out groups were positive, whereas implicit attitudes were negative, suggesting that implicit and explicit prejudices are distinct; however, in both studies, implicit and explicit attitudes were related (r = .37, .47). Latent variable modeling indicates a simple structure within this ethnocentric system, with variables organized in order of specificity. These results lead to the conclusion that (a) implicit ethnocentrism exists and (b) it is related to and distinct from explicit ethnocentrism.

  13. Self-Regulation and Implicit Attitudes Toward Physical Activity Influence Exercise Behavior.

    PubMed

    Padin, Avelina C; Emery, Charles F; Vasey, Michael; Kiecolt-Glaser, Janice K

    2017-08-01

    Dual-process models of health behavior posit that implicit and explicit attitudes independently drive healthy behaviors. Prior evidence indicates that implicit attitudes may be related to weekly physical activity (PA) levels, but the extent to which self-regulation attenuates this link remains unknown. This study examined the associations between implicit attitudes and self-reported PA during leisure time among 150 highly active young adults and evaluated the extent to which effortful control (one aspect of self-regulation) moderated this relationship. Results indicated that implicit attitudes toward exercise were unrelated to average workout length among individuals with higher effortful control. However, those with lower effortful control and more negative implicit attitudes reported shorter average exercise sessions compared with those with more positive attitudes. Implicit and explicit attitudes were unrelated to total weekly PA. A combination of poorer self-regulation and negative implicit attitudes may leave individuals vulnerable to mental and physical health consequences of low PA.

  14. A Speculative Study on Negative-Dimensional Potential and Wave Problems by Implicit Calculus Modeling Approach

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Wang, Fajie

    Based on the implicit calculus equation modeling approach, this paper proposes a speculative concept of potential and wave operators on negative dimensionality. Unlike standard partial differential equation (PDE) modeling, the implicit calculus modeling approach does not require an explicit expression of the PDE governing equation. Instead, the fundamental solution of the physical problem is used to implicitly define the differential operator and to carry out the simulation in conjunction with appropriate boundary conditions. In this study, we conjecture an extension of the fundamental solutions of the standard Laplace and Helmholtz equations to negative dimensionality. Then, using the singular boundary method, a recent boundary discretization technique, we investigate potential and wave problems using the fundamental solution on negative dimensionality. Numerical experiments reveal that the physical behaviors on negative dimensionality may differ from those on positive dimensionality. This speculative study might open an unexplored territory for research.
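The conjectured extension rests on the dimension-dependent form of the Laplace fundamental solution, whose radial part for d ≠ 2 is proportional to r^(2-d), a formula that can be evaluated formally at negative d. A minimal sketch (the dimension-dependent normalization constant is omitted for simplicity):

```python
import math

def laplace_fundamental(r, d):
    """Radial part of the Laplace fundamental solution in dimension d.
    For d == 2 it is logarithmic; otherwise proportional to r^(2-d).
    Evaluating d < 0 is the formal extension explored in this study."""
    if d == 2:
        return -math.log(r) / (2.0 * math.pi)
    return r ** (2 - d)

# On positive dimensionality (d = 3) the solution decays with distance;
# formally continuing to d = -1 it grows like r^3 instead.
decays = laplace_fundamental(2.0, 3) < laplace_fundamental(1.0, 3)
grows = laplace_fundamental(2.0, -1) > laplace_fundamental(1.0, -1)
```

This sign flip in the radial behavior is one simple way to see why the numerical experiments find qualitatively different physics on negative dimensionality.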

  15. A new fractional snow-covered area parameterization for the Community Land Model and its effect on the surface energy balance

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Lawrence, D. M.

    2011-11-01

    One function of the Community Land Model (CLM4) is the determination of surface albedo in the Community Earth System Model (CESM1). Because the typical spatial scales of CESM1 simulations are large compared to the scales of variability of surface properties such as snow cover and vegetation, unresolved surface heterogeneity is parameterized. Fractional snow-covered area, or snow-covered fraction (SCF), within a CLM4 grid cell is parameterized as a function of grid cell mean snow depth and snow density. This parameterization is based on an analysis of monthly averaged SCF and snow depth that showed a seasonal shift in the snow depth-SCF relationship. In this paper, we show that this shift is an artifact of the monthly sampling and that the current parameterization does not reflect the relationship observed between snow depth and SCF at the daily time scale. We demonstrate that the snow depth analysis used in the original study exhibits a bias toward early melt when compared to satellite-observed SCF. This bias results in a tendency to overestimate SCF as a function of snow depth. Using a more consistent, higher spatial and temporal resolution snow depth analysis reveals a clear hysteresis between snow accumulation and melt seasons. Here, a new SCF parameterization based on snow water equivalent is developed to capture the observed seasonal snow depth-SCF evolution. The effects of the new SCF parameterization on the surface energy budget are described. In CLM4, surface energy fluxes are calculated assuming a uniform snow cover. To more realistically simulate environments having patchy snow cover, we modify the model by computing the surface fluxes separately for snow-free and snow-covered fractions of a grid cell. In this configuration, the form of the parameterized snow depth-SCF relationship is shown to greatly affect the surface energy budget. 
The direct exposure of the snow-free surfaces to the atmosphere leads to greater heat loss from the ground during autumn and greater heat gain during spring. The net effect is to reduce annual mean soil temperatures by up to 3°C in snow-affected regions.
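A widely used functional form for this kind of SCF parameterization (Niu and Yang, 2007) expresses snow-covered fraction as a hyperbolic tangent of snow depth, with snow density entering through a melting-factor exponent; a minimal sketch (parameter values are commonly quoted defaults, not necessarily those of the new SWE-based scheme described above):

```python
import math

RHO_NEW = 100.0   # density of fresh snow, kg m^-3

def snow_cover_fraction(depth, rho_snow, z0g=0.01, m=1.0):
    """Niu-Yang-type SCF: tanh( h / (2.5 * z0g * (rho/rho_new)^m) ).
    depth    : grid-cell mean snow depth (m)
    rho_snow : snow density (kg m^-3); denser (older) snow -> lower SCF
    z0g      : bare-ground roughness length (m)
    m        : melting-factor exponent controlling the hysteresis strength
    """
    if depth <= 0.0:
        return 0.0
    scale = 2.5 * z0g * (rho_snow / RHO_NEW) ** m
    return math.tanh(depth / scale)

# Fresh shallow snow covers more of the cell than equally deep, denser old snow,
# which is the accumulation/melt-season asymmetry the parameterization encodes:
scf_fresh = snow_cover_fraction(0.05, 100.0)
scf_old = snow_cover_fraction(0.05, 300.0)
```

With separate flux calculations for the snow-free and snow-covered fractions, the value returned here directly weights the two surface energy budgets, which is why the shape of the depth-SCF curve matters so much.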

  16. A new fractional snow-covered area parameterization for the Community Land Model and its effect on the surface energy balance

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Lawrence, D. M.

    2012-11-01

    One function of the Community Land Model (CLM4) is the determination of surface albedo in the Community Earth System Model (CESM1). Because the typical spatial scales of CESM1 simulations are large compared to the scales of variability of surface properties such as snow cover and vegetation, unresolved surface heterogeneity is parameterized. Fractional snow-covered area, or snow-covered fraction (SCF), within a CLM4 grid cell is parameterized as a function of grid cell mean snow depth and snow density. This parameterization is based on an analysis of monthly averaged SCF and snow depth that showed a seasonal shift in the snow depth-SCF relationship. In this paper, we show that this shift is an artifact of the monthly sampling and that the current parameterization does not reflect the relationship observed between snow depth and SCF at the daily time scale. We demonstrate that the snow depth analysis used in the original study exhibits a bias toward early melt when compared to satellite-observed SCF. This bias results in a tendency to overestimate SCF as a function of snow depth. Using a more consistent, higher spatial and temporal resolution snow depth analysis reveals a clear hysteresis between snow accumulation and melt seasons. Here, a new SCF parameterization based on snow water equivalent is developed to capture the observed seasonal snow depth-SCF evolution. The effects of the new SCF parameterization on the surface energy budget are described. In CLM4, surface energy fluxes are calculated assuming a uniform snow cover. To more realistically simulate environments having patchy snow cover, we modify the model by computing the surface fluxes separately for snow-free and snow-covered fractions of a grid cell. In this configuration, the form of the parameterized snow depth-SCF relationship is shown to greatly affect the surface energy budget. 
The direct exposure of the snow-free surfaces to the atmosphere leads to greater heat loss from the ground during autumn and greater heat gain during spring. The net effect is to reduce annual mean soil temperatures by up to 3°C in snow-affected regions.

  17. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization, and use of the Karhunen-Loève transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.

  18. Vertical structure of mean cross-shore currents across a barred surf zone

    USGS Publications Warehouse

    Haines, John W.; Sallenger, Asbury H.

    1994-01-01

    Mean cross-shore currents observed across a barred surf zone are compared to model predictions. The model is based on a simplified momentum balance with a turbulent boundary layer at the bed. Turbulent exchange is parameterized by an eddy viscosity formulation, with the eddy viscosity A_v independent of time and the vertical coordinate. Mean currents result from gradients due to wave breaking and shoaling, and the presence of a mean setup of the free surface. Descriptions of the wave field are provided by the wave transformation model of Thornton and Guza [1983]. The wave transformation model adequately reproduces the observed wave heights across the surf zone. The mean current model successfully reproduces the observed cross-shore flows. Both observations and predictions show predominantly offshore flow with onshore flow restricted to a relatively thin surface layer. Successful application of the mean flow model requires an eddy viscosity which varies horizontally across the surf zone. Attempts are made to parameterize this variation with some success. The data do not discriminate among the alternative parameterizations proposed. The overall variability in eddy viscosity suggested by the model fitting should be resolvable by field measurements of the turbulent stresses. Consistent shortcomings of the parameterizations, and the overall modeling effort, suggest avenues for further development and data collection.

  19. An implicit dispersive transport algorithm for the US Geological Survey MOC3D solute-transport model

    USGS Publications Warehouse

    Kipp, K.L.; Konikow, Leonard F.; Hornberger, G.Z.

    1998-01-01

    This report documents an extension to the U.S. Geological Survey MOC3D transport model that incorporates an implicit-in-time difference approximation for the dispersive transport equation, including source/sink terms. The original MOC3D transport model (Version 1) uses the method of characteristics to solve the transport equation on the basis of the velocity field. The original MOC3D solution algorithm incorporates particle tracking to represent advective processes and an explicit finite-difference formulation to calculate dispersive fluxes. The new implicit procedure eliminates several stability criteria required for the previous explicit formulation. This allows much larger transport time increments to be used in dispersion-dominated problems. The decoupling of advective and dispersive transport in MOC3D, however, is unchanged. With the implicit extension, the MOC3D model is upgraded to Version 2. A description of the numerical method of the implicit dispersion calculation, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. Version 2 of MOC3D was evaluated for the same set of problems used for verification of Version 1. These test results indicate that the implicit calculation of Version 2 matches the accuracy of Version 1, yet is more efficient than the explicit calculation for transport problems that are characterized by a grid Peclet number less than about 1.0.
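    The stability gain from an implicit treatment of dispersion can be illustrated with a minimal backward-Euler step for one-dimensional dispersion. This is a generic sketch, not MOC3D's actual finite-difference code; grid names and boundary choices are assumptions for illustration.

```python
def implicit_dispersion_step(conc, D, dx, dt):
    """One backward-Euler (implicit) step of 1D dispersion,
    dc/dt = D * d2c/dx2, with zero-flux boundaries.

    Being implicit, the step is stable even when dt far exceeds the
    explicit limit dt <= dx**2 / (2*D). The tridiagonal system
    (I - dt*D*L) c_new = c_old is solved with the Thomas algorithm.
    """
    n = len(conc)
    r = D * dt / dx**2
    sub = [-r] * n               # sub-diagonal
    diag = [1.0 + 2.0 * r] * n   # main diagonal
    sup = [-r] * n               # super-diagonal
    diag[0] = diag[-1] = 1.0 + r # zero-flux boundary rows
    d = list(conc)               # right-hand side (copy; input untouched)
    # Thomas algorithm: forward elimination ...
    for i in range(1, n):
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        d[i] -= m * d[i - 1]
    # ... and back substitution
    x = [0.0] * n
    x[-1] = d[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - sup[i] * x[i + 1]) / diag[i]
    return x
```

    With zero-flux boundaries the discrete operator conserves mass exactly, which makes a convenient sanity check when taking time steps far beyond the explicit stability limit.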

  20. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    NASA Astrophysics Data System (ADS)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. All too often, boundary-layer parameterizations are still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. 
This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.

  1. Implicit and explicit self-esteem and their reciprocal relationship with symptoms of depression and social anxiety: a longitudinal study in adolescents.

    PubMed

    van Tuijl, Lonneke A; de Jong, Peter J; Sportel, B Esther; de Hullu, Eva; Nauta, Maaike H

    2014-03-01

    A negative self-view is a prominent factor in most cognitive vulnerability models of depression and anxiety. Recently, there has been increased attention to differentiate between the implicit (automatic) and the explicit (reflective) processing of self-related evaluations. This longitudinal study aimed to test the association between implicit and explicit self-esteem and symptoms of adolescent depression and social anxiety disorder. Two complementary models were tested: the vulnerability model and the scarring effect model. Participants were 1641 first and second year pupils of secondary schools in the Netherlands. The Rosenberg Self-Esteem Scale, self-esteem Implicit Association Test and Revised Child Anxiety and Depression Scale were completed to measure explicit self-esteem, implicit self-esteem and symptoms of social anxiety disorder (SAD) and major depressive disorder (MDD), respectively, at baseline and two-year follow-up. Explicit self-esteem at baseline was associated with symptoms of MDD and SAD at follow-up. Symptomatology at baseline was not associated with explicit self-esteem at follow-up. Implicit self-esteem was not associated with symptoms of MDD or SAD in either direction. We relied on self-report measures of MDD and SAD symptomatology. Also, findings are based on a non-clinical sample. Our findings support the vulnerability model, and not the scarring effect model. The implications of these findings suggest support of an explicit self-esteem intervention to prevent increases in MDD and SAD symptomatology in non-clinical adolescents.

  2. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  3. Simple liquid models with corrected dielectric constants

    PubMed Central

    Fennell, Christopher J.; Li, Libo; Dill, Ken A.

    2012-01-01

    Molecular simulations often use explicit-solvent models. Sometimes explicit-solvent models can give inaccurate values for basic liquid properties, such as the density, heat capacity, and permittivity, as well as inaccurate values for molecular transfer free energies. Such errors have motivated the development of more complex solvents, such as polarizable models. We describe an alternative here. We give new fixed-charge models of solvents for molecular simulations – water, carbon tetrachloride, chloroform and dichloromethane. Normally, such solvent models are parameterized to agree with experimental values of the neat liquid density and enthalpy of vaporization. Here, in addition to those properties, our parameters are chosen to give the correct dielectric constant. We find that these new parameterizations also happen to give better values for other properties, such as the self-diffusion coefficient. We believe that parameterizing fixed-charge solvent models to fit experimental dielectric constants may provide better and more efficient ways to treat solvents in computer simulations. PMID:22397577

  4. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
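    The core idea of an IMEX splitting (stiff "fast" terms treated implicitly, slow terms explicitly) can be illustrated with a first-order IMEX Euler step. This toy scheme is far simpler than the multi-stage ARK methods studied in the paper and is included only to show why the implicit part removes the fast-wave step-size limit.

```python
def imex_euler(y0, lam_fast, f_slow, dt, nsteps):
    """First-order IMEX Euler for dy/dt = lam_fast*y + f_slow(y).

    The stiff linear ("fast") term is treated implicitly, the slow term
    explicitly:  y_new = y + dt*(f_slow(y) + lam_fast*y_new),  which for
    a scalar linear fast term can be solved in closed form each step.
    """
    y = y0
    for _ in range(nsteps):
        y = (y + dt * f_slow(y)) / (1.0 - dt * lam_fast)
    return y
```

    With lam_fast = -1000 an explicit Euler step would need dt < 0.002 for stability; the IMEX step converges to the correct equilibrium even with dt = 0.1, mirroring in miniature how implicit treatment of acoustic terms permits larger time steps.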

  6. Trends and uncertainties in budburst projections of Norway spruce in Northern Europe.

    PubMed

    Olsson, Cecilia; Olin, Stefan; Lindström, Johan; Jönsson, Anna Maria

    2017-12-01

    Budburst is regulated by temperature conditions, and a warming climate is associated with earlier budburst. A range of phenology models has been developed to assess climate change effects, and they tend to produce different results. This is mainly caused by different model representations of tree physiology processes, selection of observational data for model parameterization, and selection of climate model data to generate future projections. In this study, we applied (i) Bayesian inference to estimate model parameter values to address uncertainties associated with selection of observational data, (ii) selection of climate model data representative of a larger dataset, and (iii) ensembles modeling over multiple initial conditions, model classes, model parameterizations, and boundary conditions to generate future projections and uncertainty estimates. The ensemble projection indicated that the budburst of Norway spruce in northern Europe will on average take place 10.2 ± 3.7 days earlier in 2051-2080 than in 1971-2000, given climate conditions corresponding to RCP 8.5. Three provenances were assessed separately (one early and two late), and the projections indicated that the relationship among provenances will also hold in a warmer climate. Structurally complex models were more likely than simple models to fail in predicting budburst for some combinations of site and year. However, they contributed to the overall picture of current understanding of climate impacts on tree phenology by capturing additional aspects of temperature response, for example, chilling. Model parameterizations based on single sites were more likely to result in model failure than parameterizations based on multiple sites, highlighting that the model parameterization is sensitive to initial conditions and may not perform well under other climate conditions, whether the change is due to a shift in space or over time. 
By addressing a range of uncertainties, this study showed that ensemble modeling provides a more robust impact assessment than would a single phenology model run.

  7. Parameterizing Coefficients of a POD-Based Dynamical System

    NASA Technical Reports Server (NTRS)

    Kalb, Virginia L.

    2010-01-01

    A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. 
Parameter-continuation software can be used on the parameterized dynamical system to derive a bifurcation diagram that accurately predicts the temporal flow behavior.

  8. A General Framework for Thermodynamically Consistent Parameterization and Efficient Sampling of Enzymatic Reactions

    PubMed Central

    Saa, Pedro; Nielsen, Lars K.

    2015-01-01

    Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. Particularly, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The former integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0> ΔGr >-2 kJ/mol), a transition region (-2> ΔGr >-20 kJ/mol) and a constant elasticity region (ΔGr <-20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. 
    In both cases, our approach not only described the kinetic behaviour of these enzymes appropriately, but also provided insight into the particular features underpinning the observed kinetics. Overall, this framework will enable systematic parameterization and sampling of enzymatic reactions. PMID:25874556

  9. Changes in physiological attributes of ponderosa pine from seedling to mature tree

    Treesearch

    Nancy E. Grulke; William A. Retzlaff

    2001-01-01

    Plant physiological models are generally parameterized from many different sources of data, including chamber experiments and plantations, from seedlings to mature trees. We obtained a comprehensive data set for a natural stand of ponderosa pine (Pinus ponderosa Laws.) and used these data to parameterize the physiologically based model, TREGRO....

  10. A fast, parallel algorithm to solve the basic fluvial erosion/transport equations

    NASA Astrophysics Data System (ADS)

    Braun, J.

    2012-04-01

    Quantitative models of landform evolution are commonly based on the solution of a set of equations representing the processes of fluvial erosion, transport and deposition, which leads to predictions of the geometry of the river channel network and its evolution through time. The river network is often regarded as the backbone of any surface processes model (SPM) that might include other physical processes acting at a range of spatial and temporal scales along hill slopes. The basic laws of fluvial erosion require the computation of local (slope) and non-local (drainage area) quantities at every point of a given landscape, a computationally expensive operation that limits the resolution of most SPMs. I present here an algorithm to compute the various components required in the parameterization of fluvial erosion (and transport) and thus solve the basic fluvial geomorphic equation. The algorithm is very efficient because it is O(n) (the number of required arithmetic operations is linearly proportional to the number of nodes defining the landscape) and is fully parallelizable (the computational cost decreases in inverse proportion to the number of processors used to solve the problem). The algorithm is ideally suited for use on the latest multi-core processors. Using this new technique, geomorphic problems can be solved at an unprecedented resolution (typically of the order of 10,000 × 10,000 nodes) while keeping the computational cost reasonable (on the order of 1 s per time step). Furthermore, I will show that the algorithm is applicable to any regular or irregular representation of the landform, and is such that the temporal evolution of the landform can be discretized by a fully implicit time-marching algorithm, making it unconditionally stable. 
I will demonstrate that such an efficient algorithm is ideally suited to produce a fully predictive SPM that links observationally based parameterizations of small-scale processes to the evolution of large-scale features of the landscapes on geological time scales. It can also be used to model surface processes at the continental or planetary scale and be linked to lithospheric or mantle flow models to predict the potential interactions between tectonics driving surface uplift in orogenic areas, mantle flow producing dynamic topography on continental scales and surface processes.
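    The O(n) character of this class of algorithm comes from a single upstream-ordered pass over the river network. Below is a minimal serial sketch of stack-based drainage-area accumulation; it assumes each node's steepest-descent receiver has already been computed, and is an illustration of the idea rather than the paper's parallel implementation.

```python
def drainage_area(receivers, cell_area=1.0):
    """O(n) drainage-area accumulation given each node's receiver
    (its downstream steepest-descent neighbor). A node that drains
    to itself is an outlet (base level).
    """
    n = len(receivers)
    # Invert the receiver relation into donor lists
    donors = [[] for _ in range(n)]
    for node, rcv in enumerate(receivers):
        if rcv != node:
            donors[rcv].append(node)
    # Build an ordering from outlets upstream (the "stack")
    order = [node for node in range(n) if receivers[node] == node]
    i = 0
    while i < len(order):
        order.extend(donors[order[i]])
        i += 1
    # Accumulate area from the most upstream nodes downstream
    area = [cell_area] * n
    for node in reversed(order):
        rcv = receivers[node]
        if rcv != node:
            area[rcv] += area[node]
    return area
```

    Each node is visited a fixed number of times, so the cost grows linearly with the number of nodes, independent of network shape.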

  11. The parameterization of the planetary boundary layer in the UCLA general circulation model - Formulation and results

    NASA Technical Reports Server (NTRS)

    Suarez, M. J.; Arakawa, A.; Randall, D. A.

    1983-01-01

    A planetary boundary layer (PBL) parameterization for general circulation models (GCMs) is presented. It uses a mixed-layer approach in which the PBL is assumed to be capped by discontinuities in the mean vertical profiles. Both clear and cloud-topped boundary layers are parameterized. Particular emphasis is placed on the formulation of the coupling between the PBL and both the free atmosphere and cumulus convection. For this purpose a modified sigma-coordinate is introduced in which the PBL top and the lower boundary are both coordinate surfaces. The use of a bulk PBL formulation with this coordinate is extensively discussed. Results are presented from a July simulation produced by the UCLA GCM. PBL-related variables are shown, to illustrate the various regimes the parameterization is capable of simulating.

  12. A basal stress parameterization for modeling landfast ice

    NASA Astrophysics Data System (ADS)

    Lemieux, Jean-François; Tremblay, L. Bruno; Dupont, Frédéric; Plante, Mathieu; Smith, Gregory C.; Dumont, Dany

    2015-04-01

    Current large-scale sea ice models represent very crudely or are unable to simulate the formation, maintenance and decay of coastal landfast ice. We present a simple landfast ice parameterization representing the effect of grounded ice keels. This parameterization is based on bathymetry data and the mean ice thickness in a grid cell. It is easy to implement and can be used for two-thickness and multithickness category models. Two free parameters are used to determine the critical thickness required for large ice keels to reach the bottom and to calculate the basal stress associated with the weight of the ridge above hydrostatic balance. A sensitivity study was conducted and demonstrates that the parameter associated with the critical thickness has the largest influence on the simulated landfast ice area. A 6 year (2001-2007) simulation with a 20 km resolution sea ice model was performed. The simulated landfast ice areas for regions off the coast of Siberia and for the Beaufort Sea were calculated and compared with data from the National Ice Center. With optimal parameters, the basal stress parameterization leads to a slightly shorter landfast ice season but overall provides a realistic seasonal cycle of the landfast ice area in the East Siberian, Laptev and Beaufort Seas. However, in the Kara Sea, where ice arches between islands are key to the stability of the landfast ice, the parameterization consistently leads to an underestimation of the landfast area.
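    In the spirit of the two-parameter scheme described above, a grounded-keel basal stress term can be sketched as follows. The parameter names, values, and the open-water damping factor are illustrative assumptions, not the paper's tuned formulation.

```python
import math

def basal_stress(mean_thickness, water_depth, ice_conc,
                 k1=8.0, k2=15.0, alpha=20.0):
    """Simplified sketch of a grounded-keel basal stress term.

    k1 sets the critical thickness h_c = water_depth / k1 at which the
    deepest keels reach the bottom; k2 scales the stress with the ice
    above hydrostatic balance; the exponential damps the term at low
    ice concentration. All values are illustrative placeholders.
    """
    h_c = water_depth / k1
    if mean_thickness <= h_c:
        return 0.0  # no keels touch bottom: no basal drag
    return k2 * (mean_thickness - h_c) * math.exp(-alpha * (1.0 - ice_conc))
```

    The sensitivity noted in the abstract is visible here: the simulated landfast area depends strongly on the critical-thickness parameter k1, since it gates whether the term is active at all.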

  13. Fundamental statistical relationships between monthly and daily meteorological variables: Temporal downscaling of weather based on a global observational dataset

    NASA Astrophysics Data System (ADS)

    Sommer, Philipp; Kaplan, Jed

    2016-04-01

    Accurate modelling of large-scale vegetation dynamics, hydrology, and other environmental processes requires meteorological forcing on daily timescales. While meteorological data with high temporal resolution is becoming increasingly available, simulations for the future or distant past are limited by lack of data and poor performance of climate models, e.g., in simulating daily precipitation. To overcome these limitations, we may temporally downscale monthly summary data to a daily time step using a weather generator. Parameterization of such statistical models has traditionally been based on a limited number of observations. Recent developments in the archiving, distribution, and analysis of "big data" datasets provide new opportunities for the parameterization of a temporal downscaling model that is applicable over a wide range of climates. Here we parameterize a WGEN-type weather generator using more than 50 million individual daily meteorological observations, from over 10,000 stations covering all continents, based on the Global Historical Climatology Network (GHCN) and Synoptic Cloud Reports (EECRA) databases. Using the resulting "universal" parameterization and driven by monthly summaries, we downscale mean temperature (minimum and maximum), cloud cover, and total precipitation, to daily estimates. We apply a hybrid gamma-generalized Pareto distribution to calculate daily precipitation amounts, which overcomes much of the inability of earlier weather generators to simulate high amounts of daily precipitation. Our globally parameterized weather generator has numerous applications, including vegetation and crop modelling for paleoenvironmental studies.
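    A hybrid gamma-generalized Pareto wet-day sampler can be sketched as follows. The parameter values and the fixed tail probability are illustrative assumptions, not the fitted GHCN/EECRA parameterization; the point is how a heavy GP tail is grafted onto a gamma body.

```python
import random

def sample_wet_day_precip(shape, scale, tail_u, tail_sigma, tail_xi,
                          tail_prob=0.05, rng=random):
    """Draw one wet-day precipitation amount from a hybrid distribution:
    a gamma body below a threshold tail_u, with a generalized-Pareto (GP)
    tail above it. The GP tail permits heavier extremes than a gamma
    alone. All parameter values here are illustrative placeholders.
    """
    if rng.random() < tail_prob:
        # GP tail via inverse-CDF sampling (xi > 0: heavy tail)
        u = rng.random()
        return tail_u + tail_sigma / tail_xi * ((1.0 - u) ** (-tail_xi) - 1.0)
    # Truncated gamma body: resample until the draw falls below the threshold
    while True:
        x = rng.gammavariate(shape, scale)
        if x <= tail_u:
            return x
```

    In a fitted model the threshold, tail probability, and GP parameters would be estimated per station or climate regime rather than fixed as here.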

  14. Investigating the role of implicit prototypes in the prototype willingness model.

    PubMed

    Howell, Jennifer L; Ratliff, Kate A

    2017-06-01

    One useful theory to predict health behavior is the prototype-willingness model (PWM), which posits that people are more willing to engage in behavior to the extent that they have a positive view of the prototypical person who performs that behavior. The goal of the present research is to test whether adding an implicit measure of prototype favorability might improve explanatory power in the PWM. Two studies examined whether implicit prototype favorability uniquely predicted White women's intentions to engage in healthy sun behavior over the next 3-6 months, and their willingness to engage in risky sun behavior, should the opportunity arise. The results suggested that implicit prototype favorability, particularly implicit prototypes of those who engage in risky UV-related behaviors, uniquely predicted intentions to engage in healthy sun behavior and willingness to engage in risky sun behavior in the PWM.

  15. Moderators of the Relationship between Implicit and Explicit Evaluation

    PubMed Central

    Nosek, Brian A.

    2005-01-01

    Automatic and controlled modes of evaluation sometimes provide conflicting reports of the quality of social objects. This paper presents evidence for four moderators of the relationship between automatic (implicit) and controlled (explicit) evaluations. Implicit and explicit preferences were measured for a variety of object pairs using a large sample. The average correlation was r = .36, and 52 of the 57 object pairs showed a significant positive correlation. Results of multilevel modeling analyses suggested that: (a) implicit and explicit preferences are related, (b) the relationship varies as a function of the objects assessed, and (c) at least four variables moderate the relationship – self-presentation, evaluative strength, dimensionality, and distinctiveness. The variables moderated implicit-explicit correspondence across individuals and accounted for much of the observed variation across content domains. The resulting model of the relationship between automatic and controlled evaluative processes is grounded in personal experience with the targets of evaluation. PMID:16316292

  16. Biconvection flow of Carreau fluid over an upper paraboloid surface: A computational study

    NASA Astrophysics Data System (ADS)

    Khan, Mair; Hussain, Arif; Malik, M. Y.; Salahuddin, T.

    The present article explores the physical characteristics of biconvection effects on the MHD flow of a Carreau nanofluid over the upper horizontal surface of a paraboloid of revolution, including chemical reaction. The Carreau nanofluid model introduces a new parameterization into the governing momentum equation. Using similarity transformations, the governing partial differential equations are converted into ordinary differential equations, which are then solved computationally using an implicit finite-difference method known as the Keller box technique. Numerical solutions are obtained for the velocity, temperature, concentration, friction factor, and local heat and mass transfer coefficients by varying the controlling parameters, i.e. the biconvection parameter, fluid parameter, Weissenberg number, Hartmann number, Prandtl number, Brownian motion parameter, thermophoresis parameter, Lewis number and chemical reaction parameter. The obtained results are discussed via graphs and tables.
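The implicit finite-difference idea behind the Keller box technique can be illustrated on a toy boundary-value problem (this is a minimal sketch of the discretization pattern only, not the Carreau nanofluid system, which couples several nonlinear ODEs):

```python
import numpy as np

# Solve f'' = f with f(0) = 0, f(1) = 1 by an implicit central-difference
# scheme: all interior nodes are coupled in one linear system, the same
# simultaneous-solve structure the Keller box method uses.
N = 100
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]

# (f[i-1] - 2 f[i] + f[i+1]) / h^2 = f[i] at each interior node
A = np.zeros((N - 1, N - 1))
b = np.zeros(N - 1)
for i in range(N - 1):
    A[i, i] = -2.0 / h**2 - 1.0
    if i > 0:
        A[i, i - 1] = 1.0 / h**2
    if i < N - 2:
        A[i, i + 1] = 1.0 / h**2
b[-1] = -1.0 / h**2          # known boundary value f(1) = 1 moved to RHS
f = np.concatenate(([0.0], np.linalg.solve(A, b), [1.0]))

# Exact solution of this toy problem is sinh(x) / sinh(1)
exact = np.sinh(x) / np.sinh(1.0)
print(np.max(np.abs(f - exact)))   # small O(h^2) discretization error
```

For the actual similarity-transformed system, the same structure is applied to a first-order reduction of the nonlinear ODEs, with Newton linearization at each iteration.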

  17. Effects of a polar stratosphere cloud parameterization on ozone depletion due to stratospheric aircraft in a two-dimensional model

    NASA Technical Reports Server (NTRS)

    Considine, David B.; Douglass, Anne R.; Jackman, Charles H.

    1994-01-01

    A parameterization of Type 1 and Type 2 polar stratospheric cloud (PSC) formation is presented which is appropriate for use in two-dimensional (2-D) photochemical models of the stratosphere. The calculation of PSC frequency of occurrence and surface area density uses climatological temperature probability distributions obtained from National Meteorological Center data to avoid using zonal mean temperatures, which are not good predictors of PSC behavior. The parameterization does not attempt to model the microphysics of PSCs. The parameterization predicts changes in PSC formation and heterogeneous processing due to perturbations of stratospheric trace constituents; it is therefore useful in assessing the potential effects of a fleet of stratospheric aircraft (high speed civil transports, or HSCTs) on stratospheric composition. The model-calculated frequency of PSC occurrence agrees well with a climatology based on Stratospheric Aerosol Measurement (SAM) II observations. PSCs are predicted to occur in the tropics; their vertical range is narrow, however, and their impact on model O3 fields is small. When PSC and sulfate aerosol heterogeneous processes are included in the model calculations, the O3 change for 1980-1990 is in substantially better agreement with the total ozone mapping spectrometer (TOMS)-derived O3 trend than otherwise. The overall changes in model O3 response to standard HSCT perturbation scenarios produced by the parameterization are small and tend to decrease the model sensitivity to the HSCT perturbation. However, in the Southern Hemisphere spring a significant increase in O3 sensitivity to HSCT perturbations is found. At this location and time, increased PSC formation leads to increased levels of active chlorine, which produce the O3 decreases.

  18. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-05-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2-D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results from the two versions show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small, of the order of 10%. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in spring time. 
It is concluded that for the study of climate scenarios or the assimilation of ozone data, the present parameterization gives a valuable alternative to the introduction of detailed and computationally costly chemical schemes into general circulation models.
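A linear ozone photochemistry scheme of this family can be sketched as a first-order Taylor expansion of the chemical tendency about a climatological state (the coefficient names and illustrative values below are assumptions for demonstration; the actual coefficient tables come from the underlying 2-D model and vary with latitude, level and month):

```python
# Sketch of a Cariolle-type linearized ozone tendency: the photochemical
# source/sink is expanded about climatological ozone (A3), temperature
# (A5) and overhead ozone column (A7). Coefficients here are illustrative.
def linear_ozone_tendency(r, T, col, A):
    """r: ozone mixing ratio, T: temperature [K], col: overhead O3 column."""
    return (A["A1"]
            + A["A2"] * (r - A["A3"])       # relaxation toward equilibrium
            + A["A4"] * (T - A["A5"])       # temperature sensitivity
            + A["A6"] * (col - A["A7"]))    # overhead-column sensitivity

# Forward-Euler integration as a usage illustration (hourly steps):
# with only the relaxation term active, r decays toward A3 on the
# photochemical timescale -1/A2 (30 days here).
A = {"A1": 0.0, "A2": -1.0 / (30 * 86400), "A3": 5e-6,
     "A4": 0.0, "A5": 220.0, "A6": 0.0, "A7": 8e18}
r = 4e-6
for _ in range(100):
    r += 3600.0 * linear_ozone_tendency(r, 220.0, 8e18, A)
print(r)
```

The cold-tracer extension adds one more prognostic equation whose tendency activates additional (heterogeneous) ozone destruction in air processed by PSC temperatures.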

  19. On the Relationship between Observed NLDN Lightning ...

    EPA Pesticide Factsheets

    Lightning-produced nitrogen oxides (NOX=NO+NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past decade, considerable uncertainties still exist with the quantification of lightning NOX production and distribution in the troposphere. It is even more challenging for regional chemistry and transport models to accurately parameterize lightning NOX production and distribution in time and space. The Community Multiscale Air Quality Model (CMAQ) parameterizes the lightning NO emissions using local scaling factors adjusted by the convective precipitation rate that is predicted by the upstream meteorological model; the adjustment is based on the observed lightning strikes from the National Lightning Detection Network (NLDN). For this parameterization to be valid, the existence of an a priori reasonable relationship between the observed lightning strikes and the modeled convective precipitation rates is needed. In this study, we will present an analysis leveraging the observed NLDN lightning strikes and CMAQ model simulations over the continental United States for a time period spanning over a decade. Based on the analysis, a new parameterization scheme for lightning NOX will be proposed and the results will be evaluated. The proposed scheme will be beneficial to modeling exercises where the obs
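The scaling idea described above can be sketched as follows. All variable names and the per-flash NO yield are illustrative assumptions, not CMAQ's actual implementation: a per-cell factor is calibrated so that long-term modeled convective precipitation reproduces observed flash counts, then applied to instantaneous precipitation to emit NO.

```python
import numpy as np

# Hedged sketch of precipitation-based lightning-NO scaling: local factors
# map modeled convective precipitation to flash rates, calibrated against
# observed (NLDN-like) flash counts accumulated over a long period.
def calibrate_scaling(obs_flashes, conv_precip, eps=1e-12):
    """Per-grid-cell scale factor: observed flashes per unit precipitation."""
    return obs_flashes.sum(axis=0) / (conv_precip.sum(axis=0) + eps)

def lightning_no(conv_precip_now, scale, moles_no_per_flash=250.0):
    """Instantaneous NO source: scaled flash rate times assumed per-flash yield."""
    return scale * conv_precip_now * moles_no_per_flash

# Synthetic hourly data on a 10x10 grid, for illustration only
rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 1.0, size=(24, 10, 10))
flashes = 0.3 * precip + rng.poisson(0.1, precip.shape)
scale = calibrate_scaling(flashes, precip)
emis = lightning_no(precip[0], scale)
print(emis.shape)
```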

  20. Regional impacts of iron-light colimitation in a global biogeochemical model

    NASA Astrophysics Data System (ADS)

    Galbraith, E. D.; Gnanadesikan, A.; Dunne, J. P.; Hiscock, M. R.

    2009-07-01

    Laboratory and field studies have revealed that iron has multiple roles in phytoplankton physiology, with particular importance for light-harvesting cellular machinery. However, although iron-limitation is explicitly included in numerous biogeochemical/ecosystem models, its implementation varies, and its effect on the efficiency of light harvesting is often ignored. Given the complexity of the ocean environment, it is difficult to predict the consequences of applying different iron limitation schemes. Here we explore the interaction of iron and nutrient cycles using a new, streamlined model of ocean biogeochemistry. Building on previously published parameterizations of photoadaptation and export production, the Biogeochemistry with Light Iron Nutrients and Gasses (BLING) model is constructed with only three explicit tracers but including macronutrient and micronutrient limitation, light limitation, and an implicit treatment of community structure. The structural simplicity of this computationally inexpensive model allows us to clearly isolate the global effects of iron availability on maximum light-saturated photosynthesis rates from those of photosynthetic efficiency. We find that the effect on light-saturated photosynthesis rates is dominant, negating the importance of photosynthetic efficiency in most regions, especially the cold waters of the Southern Ocean. The primary exceptions to this occur in iron-rich regions of the Northern Hemisphere, where high light-saturated photosynthesis rates cause photosynthetic efficiency to play a more important role. Additionally, we speculate that the small phytoplankton dominating iron-limited regions tend to have relatively high photosynthetic efficiency, such that iron-limitation has less of a deleterious effect on growth rates than would be expected from short-term iron addition experiments.
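The two roles of iron contrasted above can be made concrete in a minimal growth-rate sketch (the functional forms and constants are my illustrative assumptions, not the published BLING code): iron limitation can scale both the light-saturated maximum photosynthesis rate and the photosynthetic efficiency (the initial slope of the photosynthesis-irradiance curve).

```python
import numpy as np

# Minimal sketch: iron limits (1) the light-saturated rate P_max and
# (2) the light-harvesting efficiency alpha; macronutrient limitation is
# Michaelis-Menten; light limitation is a saturating exponential.
def growth_rate(irradiance, fe, no3, pmax0=1.0, alpha0=0.03,
                k_fe=0.5, k_no3=0.5):
    fe_lim = fe / (k_fe + fe)              # Michaelis-Menten iron limitation
    pmax = pmax0 * fe_lim                  # role 1: caps saturated rate
    alpha = alpha0 * fe_lim                # role 2: degrades light harvesting
    light_lim = 1.0 - np.exp(-alpha * irradiance / max(pmax, 1e-12))
    nut_lim = no3 / (k_no3 + no3)
    return pmax * light_lim * nut_lim      # [1/day]

# Under low light the efficiency (alpha) pathway dominates; under
# saturating light only P_max matters -- the contrast the study isolates.
low = growth_rate(10.0, fe=0.1, no3=5.0)
high = growth_rate(500.0, fe=0.1, no3=5.0)
print(low, high)
```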

  1. Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.

    NASA Astrophysics Data System (ADS)

    Gubler, S.; Gruber, S.; Purves, R. S.

    2012-06-01

    As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best-performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as fixed values, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between -2 and 5%, and between 7 and 13%, across the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct but overestimating the diffuse SDR. 
Calibration of the LDR parameterizations to local conditions strongly reduces MBD and RMSD compared to using the published parameter values, resulting in relative MBD and RMSD of less than 5% and 10%, respectively, for the best parameterizations. The best results for estimating cloud transmissivity during nighttime were obtained by linearly interpolating the average of the cloud transmissivity of the four hours of the preceding afternoon and the following morning. Model uncertainty can be caused by different error sources, such as the code implementation and errors in input data and in estimated parameters. The influence of the latter two (errors in input data and model parameter uncertainty) on the model outputs is determined using Monte Carlo simulation. Model uncertainty is provided as the relative standard deviation σrel of the simulated frequency distributions of the model outputs. An optimistic estimate of the relative uncertainty σrel resulted in 10% for the clear-sky direct, 30% for the diffuse, 3% for the global SDR, and 3% for the fitted all-sky LDR.
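The Monte Carlo uncertainty propagation described here can be sketched in a few lines. The "model" below is a deliberately crude stand-in for the SDR parameterization, and the input uncertainties are assumed values, but the procedure (perturb inputs, run the model repeatedly, report the relative standard deviation of the output distribution) is the one described:

```python
import numpy as np

# Stand-in clear-sky transmittance model (NOT the actual SDR scheme):
# optical depth is a linear function of precipitable water and aerosol.
def toy_sdr(precip_water, aerosol, s0=1361.0, cos_zenith=0.7):
    tau = 0.05 * precip_water + 0.8 * aerosol
    return s0 * cos_zenith * np.exp(-tau / cos_zenith)

rng = np.random.default_rng(42)
n = 10_000
pw = rng.normal(1.5, 0.3, n)       # precipitable water [cm], assumed 20% sd
aod = rng.normal(0.1, 0.03, n)     # aerosol optical depth, assumed 30% sd
out = toy_sdr(pw, aod)

# Relative standard deviation of the simulated output distribution
sigma_rel = out.std() / out.mean()
print(f"relative uncertainty: {sigma_rel:.1%}")
```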

  2. Comparative study of transient hydraulic tomography with varying parameterizations and zonations: Laboratory sandbox investigation

    NASA Astrophysics Data System (ADS)

    Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.

    2017-11-01

    Transient hydraulic tomography (THT) is a robust method of aquifer characterization to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly-parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information are calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.

  3. An implicit adaptation algorithm for a linear model reference control system

    NASA Technical Reports Server (NTRS)

    Mabius, L.; Kaufman, H.

    1975-01-01

    This paper presents a stable implicit adaptation algorithm for model reference control. The constraints for stability are found using Lyapunov's second method and do not depend on perfect model following between the system and the reference model. Methods are proposed for satisfying these constraints without estimating the parameters on which the constraints depend.
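A model-reference adaptation loop of the kind discussed can be sketched for a scalar plant. The update below is a simple gradient (MIT-rule style) adaptation, purely illustrative; the paper's implicit algorithm derives its update and stability constraints from Lyapunov's second method instead, and does not require perfect model following:

```python
# Scalar model-reference adaptive control sketch: adjust a feedforward
# gain theta so the plant output tracks a reference model's output.
dt, gamma = 0.01, 2.0
a_plant, b_plant = -1.0, 3.0     # plant dynamics, unknown to the controller
a_ref, b_ref = -4.0, 4.0         # desired reference-model dynamics
x, xm, theta = 0.0, 0.0, 0.0     # plant state, model state, adjustable gain

for _ in range(20_000):
    r = 1.0                                   # step reference input
    u = theta * r                             # adjustable control law
    x += dt * (a_plant * x + b_plant * u)     # plant integration (Euler)
    xm += dt * (a_ref * xm + b_ref * r)       # reference model integration
    e = x - xm                                # model-following error
    theta += dt * (-gamma * e * xm)           # gradient adaptation update

print(abs(x - xm))   # error driven small by the adaptation
```

For this example the gain settles near b_ref / (-a_ref) / (b_plant / (-a_plant)) = 1/3, which matches the steady-state responses even though the transient dynamics differ.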

  4. Gravity Waves Generated by Convection: A New Idealized Model Tool and Direct Validation with Satellite Observations

    NASA Astrophysics Data System (ADS)

    Alexander, M. Joan; Stephan, Claudia

    2015-04-01

    In climate models, gravity waves remain too poorly resolved to be directly modelled. Instead, simplified parameterizations are used to include gravity wave effects on model winds. A few climate models link some of the parameterized waves to convective sources, providing a mechanism for feedback between changes in convection and gravity wave-driven changes in circulation in the tropics and above high-latitude storms. These convective wave parameterizations are based on limited case studies with cloud-resolving models, but they are poorly constrained by observational validation, and their tuning parameters have large uncertainties. Our new work distills results from complex, full-physics cloud-resolving model studies to the essential variables for gravity wave generation. We use the Weather Research and Forecasting (WRF) model to study the relationships of precipitation, latent heating/cooling and other cloud properties to the spectrum of gravity wave momentum flux above midlatitude storm systems. Results show the gravity wave spectrum is surprisingly insensitive to the representation of microphysics in WRF. This is good news for the use of these models in gravity wave parameterization development, since microphysical properties are a key uncertainty. We further use the full-physics cloud-resolving model as a tool to directly link observed precipitation variability to gravity wave generation. We show that waves in an idealized model forced with radar-observed precipitation can quantitatively reproduce instantaneous satellite-observed features of the gravity wave field above storms, which is a powerful validation of our understanding of waves generated by convection. The idealized model directly links observations of surface precipitation to observed waves in the stratosphere, and the simplicity of the model permits deep/large-area domains for studies of wave-mean flow interactions. 
This unique validated model tool permits quantitative studies of gravity wave driving of regional circulation and provides a new method for future development of realistic convective gravity wave parameterizations.

  5. Brain Biology Machine Initiative: Developing Innovative Novel Methods to Improve Neuro-Rehabilitation for Amputees and Treatment for Patients at Remote Sites with Acute Brain Injury

    DTIC Science & Technology

    2010-10-01

    bode well for the future. The paper we submitted to the Journal of Neuroscience detailing the TVAG rabies tracer system was accepted with revisions...of brain electrical activity. Stas Kounitsky successfully completed the port of the new vector-additive implicit (VAI) method for the anisotropic ...Alternating Direction Implicit (ADI) for isotropic head models, and the Vector Additive Implicit (VAI) for anisotropic head models. The ADI method

  6. Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions

    NASA Astrophysics Data System (ADS)

    Nelson, K.; Mechem, D. B.

    2014-12-01

    Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes that include stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement and test this parameterization in a regional forecast model (NRL COAMPS). Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus on both the relative performance of the three parameterizations and also on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
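To make the kind of bulk warm-rain process rate being compared here concrete, the KK2000 autoconversion rate (drizzle production from cloud water) as published by Khairoutdinov and Kogan (2000) can be written in a few lines. Units follow that paper (qc in kg/kg, Nc in cm^-3, rate in kg/kg/s); the example values are illustrative:

```python
# KK2000 autoconversion: cloud-water-to-drizzle conversion rate fitted
# to large-eddy simulations of marine stratocumulus.
def kk2000_autoconversion(qc, nc):
    """qc: cloud water mixing ratio [kg/kg], nc: droplet number [cm^-3]."""
    return 1350.0 * qc**2.47 * nc**(-1.79)

# Marine-stratocumulus-like values: qc = 0.5 g/kg, Nc = 100 cm^-3
rate = kk2000_autoconversion(0.5e-3, 100.0)
print(rate)   # a small but nonzero drizzle production rate [kg/kg/s]
```

The strong negative exponent on Nc is what makes such schemes sensitive to aerosol-mediated droplet number, a key difference from the Kessler-type threshold formulation.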

  7. Impact of a Stochastic Parameterization Scheme on El Nino-Southern Oscillation in the Community Climate System Model

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Berner, J.; Sardeshmukh, P. D.

    2017-12-01

    Stochastic parameterizations have been used for more than a decade in atmospheric models. They provide a way to represent model uncertainty through representing the variability of unresolved sub-grid processes, and have been shown to have a beneficial effect on the spread and mean state for medium- and extended-range forecasts. There is increasing evidence that stochastic parameterization of unresolved processes can improve the bias in mean and variability, e.g. by introducing a noise-induced drift (nonlinear rectification), and by changing the residence time and structure of flow regimes. We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. SPPT results in a significant improvement in the representation of the El Nino-Southern Oscillation in CAM4, improving the power spectrum as well as both the inter- and intra-annual variability of tropical Pacific sea surface temperatures. We use a Linear Inverse Modelling framework to gain insight into the mechanisms by which SPPT improves ENSO variability.
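The core of the SPPT scheme can be sketched simply: the net parameterized tendency is multiplied by (1 + r), where r is a random pattern evolving as a first-order autoregressive process in time. The sketch below omits the spectral spatial correlation of operational SPPT and uses illustrative timescale and amplitude values:

```python
import numpy as np

# Minimal SPPT sketch: AR(1) multiplicative perturbation of tendencies.
rng = np.random.default_rng(1)
dt, tau, sigma = 900.0, 6 * 3600.0, 0.5   # time step, decorrelation time, std
phi = np.exp(-dt / tau)                   # AR(1) autocorrelation per step
r = np.zeros((64, 64))                    # perturbation pattern on the grid

def sppt_step(r, tendency):
    noise = rng.standard_normal(r.shape)
    r_new = phi * r + sigma * np.sqrt(1.0 - phi**2) * noise
    r_new = np.clip(r_new, -1.0, 1.0)     # keep (1 + r) non-negative
    return r_new, (1.0 + r_new) * tendency

tend = np.full((64, 64), 1e-5)            # some net parameterized tendency
for _ in range(100):
    r, perturbed = sppt_step(r, tend)
print(r.std())   # pattern spins up toward the target standard deviation
```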

  8. Approaches for Subgrid Parameterization: Does Scaling Help?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-04-01

    Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, scaling laws have been slow to enter "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows in both the atmosphere and the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of subgrid-scale variables in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner via a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis, but the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally-constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach to numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. 
The mode decomposition approach would also be the best framework for linking the traditional parameterizations with the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing scaling perspectives into parameterizations. Any diagnosis under a mode decomposition immediately reveals the power-law nature of the spectrum; exploiting this knowledge in an operational parameterization, however, is a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation, and this problem is already hard enough. Looked at differently, the scaling law is a very concise way of characterizing many subgrid-scale variabilities. We may even argue that the scaling law can provide almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. That condition is called "closure" in the parameterization problem, and is known to be a tough one. We should also realize that studies of scaling behavior tend to be statistical, in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes by a scaling law when only the first few leading modes are specified? Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say under an appropriate mode-decomposition procedure. However, RNG is an analytical tool, and it is extremely hard to apply to real, complex geophysical systems. It appears that we still have a long way to go before we can exploit the scaling law to construct operational subgrid parameterizations in an effective manner.
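The POD/EOF construction mentioned above, in its standard snapshot form, is simply the SVD of the mean-removed data matrix: singular values rank the modes by captured variance, and a low truncation keeps only the leading coherent structures. A self-contained sketch on synthetic data:

```python
import numpy as np

# Snapshot POD: two coherent spatial modes plus noise, 200 snapshots.
rng = np.random.default_rng(7)
x = np.linspace(0, 2 * np.pi, 128)
t = rng.uniform(0, 10, (200, 1))
data = np.sin(x) * np.cos(t) + 0.3 * np.sin(2 * x) * np.sin(3 * t)
data += 0.01 * rng.standard_normal(data.shape)

# SVD of the mean-removed snapshot matrix gives the energy-ranked basis.
mean = data.mean(axis=0)
u, s, vt = np.linalg.svd(data - mean, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(energy[:3])   # leading modes dominate the variance

# Truncated reconstruction with k modes (the "high truncation" above)
k = 2
recon = mean + (u[:, :k] * s[:k]) @ vt[:k]
print(np.abs(recon - data).max())   # residual is at the noise level
```

The open question raised in the text maps onto this picture directly: a scaling law constrains how the spectrum `s**2` decays, but not the coefficients (`u` columns) of each realization, which is where the closure problem enters.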

  9. Global Modeling and Data Assimilation. Volume 11; Documentation of the Tangent Linear and Adjoint Models of the Relaxed Arakawa-Schubert Moisture Parameterization of the NASA GEOS-1 GCM; 5.2

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Yang, Wei-Yu; Todling, Ricardo; Navon, I. Michael

    1997-01-01

    A detailed description of the development of the tangent linear model (TLM) and its adjoint model of the Relaxed Arakawa-Schubert moisture parameterization package used in the NASA GEOS-1 C-Grid GCM (Version 5.2) is presented. The notational conventions used in the TLM and its adjoint codes are described in detail.

  10. Limitations of one-dimensional mesoscale PBL parameterizations in reproducing mountain-wave flows

    DOE PAGES

    Munoz-Esparza, Domingo; Sauer, Jeremy A.; Linn, Rodman R.; ...

    2015-12-08

    In this study, mesoscale models are considered to be the state of the art in modeling mountain-wave flows. Herein, we investigate the role and accuracy of planetary boundary layer (PBL) parameterizations in handling the interaction between large-scale mountain waves and the atmospheric boundary layer. To that end, we use recent large-eddy simulation (LES) results of mountain waves over a symmetric two-dimensional bell-shaped hill [Sauer et al., J. Atmos. Sci. (2015)], and compare them to four commonly used PBL schemes. We find that one-dimensional PBL parameterizations produce reasonable agreement with the LES results in terms of vertical wavelength, amplitude of velocity and turbulent kinetic energy distribution in the downhill shooting flow region. However, the assumption of horizontal homogeneity in PBL parameterizations does not hold in the context of these complex flow configurations. This inappropriate modeling assumption results in a vertical wavelength shift producing errors of ≈ 10 m s–1 at downstream locations due to the presence of a coherent trapped lee wave that does not mix with the atmospheric boundary layer. In contrast, horizontally-integrated momentum flux derived from these PBL schemes displays a realistic pattern. Therefore, results from mesoscale models using ensembles of one-dimensional PBL schemes can still potentially be used to parameterize drag effects in general circulation models. Nonetheless, three-dimensional PBL schemes must be developed in order for mesoscale models to accurately represent complex-terrain and other types of flows where one-dimensional PBL assumptions are violated.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud-aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities, and parameterizations which do provide vertical velocities have been subject to limited evaluation against what have until recently been scant observations. Atmospheric observations imply that the distribution of vertical velocities depends on the areas over which the vertical velocities are averaged. Distributions of vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of scale-dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  12. Exact expressions and accurate approximations for the dependences of radius and index of refraction of solutions of inorganic solutes on relative humidity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, E.R.; Schwartz, S.

    2010-03-15

    Light scattering by aerosols plays an important role in Earth’s radiative balance, and quantification of this phenomenon is important in understanding and accounting for anthropogenic influences on Earth’s climate. Light scattering by an aerosol particle is determined by its radius and index of refraction, and for aerosol particles that are hygroscopic, both of these quantities vary with relative humidity RH. Here exact expressions are derived for the dependences of the radius ratio (relative to the volume-equivalent dry radius) and index of refraction on RH for aqueous solutions of single solutes. Both of these quantities depend on the apparent molal volume of the solute in solution and on the practical osmotic coefficient of the solution, which in turn depend on concentration and thus implicitly on RH. Simple but accurate approximations are also presented for the RH dependences of both radius ratio and index of refraction for several atmospherically important inorganic solutes over the entire range of RH values for which these substances can exist as solution drops. For all substances considered, the radius ratio is accurate to within a few percent, and the index of refraction to within ~0.02, over this range of RH. Such parameterizations will be useful in radiation transfer models and climate models.
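The shape of these RH dependences can be illustrated with a simplified stand-in for the exact expressions: the single-parameter kappa growth relation in place of the osmotic-coefficient treatment, and simple volume-weighted mixing for the refractive index. All constants below are illustrative assumptions, not values from the paper:

```python
# Hedged illustration of radius ratio and refractive index vs RH.
def radius_ratio(rh, kappa):
    """Wet-to-dry (volume-equivalent) radius ratio at water activity ~ RH,
    neglecting the Kelvin effect (kappa-type growth relation)."""
    vol_water_per_dry = kappa * rh / (1.0 - rh)
    return (1.0 + vol_water_per_dry) ** (1.0 / 3.0)

def wet_index(rh, kappa, n_dry=1.53, n_water=1.333):
    """Volume-weighted refractive index mixing between dry solute and water."""
    gf3 = radius_ratio(rh, kappa) ** 3      # volume growth factor
    f_dry = 1.0 / gf3                        # dry-solute volume fraction
    return f_dry * n_dry + (1.0 - f_dry) * n_water

# Ammonium-sulfate-like kappa ~ 0.53 (illustrative value)
for rh in (0.5, 0.8, 0.95):
    print(rh, radius_ratio(rh, 0.53), wet_index(rh, 0.53))
```

As in the paper's exact treatment, the radius ratio grows and the refractive index falls toward that of water as RH approaches 1.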

  13. Incorporation of New Convective Ice Microphysics into the NASA GISS GCM and Impacts on Cloud Ice Water Path (IWP) Simulation

    NASA Technical Reports Server (NTRS)

    Elsaesser, Greg; Del Genio, Anthony

    2015-01-01

    The CMIP5 configurations of the GISS Model-E2 GCM simulated a mid- and high-latitude ice IWP that decreased by ~50% relative to that simulated for CMIP3 (Jiang et al. 2012; JGR). Tropical IWP increased by ~15% in CMIP5. While the tropical IWP was still within the published upper-bounds of IWP uncertainty derived using NASA A-Train satellite observations, it was found that the upper troposphere (~200 mb) ice water content (IWC) exceeded the published upper-bound by a factor of ~2. This was largely driven by IWC in deep-convecting regions of the tropics. Recent advances in the model-E2 convective parameterization have been found to have a substantial impact on tropical IWC. These advances include the development of both a cold pool parameterization (Del Genio et al. 2015) and new convective ice parameterization. In this presentation, we focus on the new parameterization of convective cloud ice that was developed using data from the NASA TC4 Mission. Ice particle terminal velocity formulations now include information from a number of NASA field campaigns. The new parameterization predicts both an ice water mass weighted-average particle diameter and a particle cross sectional area weighted-average size diameter as a function of temperature and ice water content. By assuming a gamma-distribution functional form for the particle size distribution, these two diameter estimates are all that are needed to explicitly predict the distribution of ice particles as a function of particle diameter. GCM simulations with the improved convective parameterization yield a ~50% decrease in upper tropospheric IWC, bringing the tropical and global mean IWP climatologies into even closer agreement with the A-Train satellite observation best estimates.

  14. Incorporation of New Convective Ice Microphysics into the NASA GISS GCM and Impacts on Cloud Ice Water Path (IWP) Simulation

    NASA Astrophysics Data System (ADS)

    Elsaesser, G.; Del Genio, A. D.

    2015-12-01

    The CMIP5 configurations of the GISS Model-E2 GCM simulated a mid- and high-latitude ice IWP that decreased by ~50% relative to that simulated for CMIP3 (Jiang et al. 2012; JGR). Tropical IWP increased by ~15% in CMIP5. While the tropical IWP was still within the published upper-bounds of IWP uncertainty derived using NASA A-Train satellite observations, it was found that the upper troposphere (~200 mb) ice water content (IWC) exceeded the published upper-bound by a factor of ~2. This was largely driven by IWC in deep-convecting regions of the tropics. Recent advances in the Model-E2 convective parameterization have been found to have a substantial impact on tropical IWC. These advances include the development of both a cold pool parameterization (Del Genio et al. 2015) and a new convective ice parameterization. In this presentation, we focus on the new parameterization of convective cloud ice that was developed using data from the NASA TC4 Mission. Ice particle terminal velocity formulations now include information from a number of NASA field campaigns. The new parameterization predicts both an ice water mass weighted-average particle diameter and a particle cross-sectional area weighted-average diameter as a function of temperature and ice water content. By assuming a gamma-distribution functional form for the particle size distribution, these two diameter estimates are all that are needed to explicitly predict the distribution of ice particles as a function of particle diameter. GCM simulations with the improved convective parameterization yield a ~50% decrease in upper tropospheric IWC, bringing the tropical and global mean IWP climatologies into even closer agreement with the A-Train satellite observation best estimates.
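
    The two predicted diameters are enough to fix a gamma size distribution because each moment ratio of n(D) = N0 · D^μ · exp(−λD) takes the form M(k+1)/M(k) = (μ+k+1)/λ. A small sketch of that inversion, assuming the mass-weighted and area-weighted mean diameters are the k = 3 and k = 2 moment-ratio means (the worked numbers are illustrative, not values from the abstract):

```python
def gamma_psd_params(d_mass, d_area):
    """Recover the shape mu and slope lam of a gamma size distribution
    n(D) = N0 * D**mu * exp(-lam * D) from a mass-weighted mean diameter
    d_mass = (mu + 4) / lam and an area-weighted mean diameter
    d_area = (mu + 3) / lam."""
    r = d_mass / d_area            # = (mu + 4) / (mu + 3) > 1
    mu = (4.0 - 3.0 * r) / (r - 1.0)
    lam = (mu + 3.0) / d_area
    return mu, lam
```

As a round-trip check: a distribution with μ = 2 and λ = 10 mm⁻¹ has d_area = 0.5 mm and d_mass = 0.6 mm, and the inversion recovers (2, 10).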

  15. Chinese Students' Implicit Theories of Intelligence and Other Personal Attributes: Cross-Domain Generality and Age-Related Differences.

    ERIC Educational Resources Information Center

    Cheng, Zi-Juan; Hau, Kit-Tai; Wen, Jian-Bing; Kong, Chit-Kwong

    Using structural equation modeling (SEM), researchers examined whether there was a general dominating factor that governed students' implicit theories of intelligence, morality, personality, creativity, and social intelligence. The possible age-related changes of students' implicit theories were also studied. In all, 1,650 elementary and junior…

  16. Pre-Service Teachers' Implicit and Explicit Attitudes toward Obesity Influence Their Judgments of Students

    ERIC Educational Resources Information Center

    Glock, Sabine; Beverborg, Arnoud Oude Groote; Müller, Barbara C. N.

    2016-01-01

    Obese children experience disadvantages in school and discrimination from their teachers. Teachers' implicit and explicit attitudes have been identified as contributing to these disadvantages. Drawing on dual process models, we investigated the nature of pre-service teachers' implicit and explicit attitudes, their motivation to respond without…

  17. Modeling particle nucleation and growth over northern California during the 2010 CARES campaign

    NASA Astrophysics Data System (ADS)

    Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.

    2015-07-01

    Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the regional Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4, while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated the total particle number concentration for particle diameters greater than 10 nm by up to a factor of 2.5. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapor parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based on H2SO4 alone.
At the CARES urban ground site, peak nucleation rates were predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times as well as lower boundary layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. Differences among the three simulations for the 40-100 nm particle diameter range are mostly associated with the timing of the peak total tendencies that shift the morning increase and afternoon decrease in particle number concentration by up to two hours. We found that newly formed particles could explain up to 20-30 % of predicted cloud condensation nuclei at 0.5 % supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20 size bin simulation, while reducing the associated computational cost by ∼ 36 %.
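
    Empirical nucleation parameterizations of the kinds compared here typically take one of three algebraic shapes: activation (J ∝ [H2SO4]), kinetic (J ∝ [H2SO4]²), or a combined H2SO4-organic form (J ∝ [H2SO4]·[org]). A hedged sketch of those shapes; the rate prefactors below are hypothetical placeholders, since the abstract does not give the coefficients actually used:

```python
def j_activation(h2so4, a=2e-6):
    """Activation-type rate J = A * [H2SO4]; A is a hypothetical
    prefactor, concentrations in molecules cm^-3, J in cm^-3 s^-1."""
    return a * h2so4

def j_kinetic(h2so4, k=1e-12):
    """Kinetic-type rate J = K * [H2SO4]**2; K hypothetical."""
    return k * h2so4 ** 2

def j_org(h2so4, org, k=5e-13):
    """Combined H2SO4-organic rate J = k * [H2SO4] * [org]; k hypothetical.
    This is the shape that couples nucleation to organic vapor emissions,
    shifting the diurnal cycle as described in the abstract."""
    return k * h2so4 * org
```

The organic-coupled form is the one whose diurnal cycle tracks organic vapor emissions rather than H2SO4 alone.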

  18. Implementation of a generalized actuator line model for wind turbine parameterization in the Weather Research and Forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marjanovic, Nikola; Mirocha, Jeffrey D.; Kosović, Branko

    A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near wake physics, including vorticity shedding and wake expansion.

  19. Non-perturbational surface-wave inversion: A Dix-type relation for surface waves

    USGS Publications Warehouse

    Haney, Matt; Tsai, Victor C.

    2015-01-01

    We extend the approach underlying the well-known Dix equation in reflection seismology to surface waves. Within the context of surface wave inversion, the Dix-type relation we derive for surface waves allows accurate depth profiles of shear-wave velocity to be constructed directly from phase velocity data, in contrast to perturbational methods. The depth profiles can subsequently be used as an initial model for nonlinear inversion. We provide examples of the Dix-type relation for under-parameterized and over-parameterized cases. In the under-parameterized case, we use the theory to estimate crustal thickness, crustal shear-wave velocity, and mantle shear-wave velocity across the Western U.S. from phase velocity maps measured at 8-, 20-, and 40-s periods. By adopting a thin-layer formalism and an over-parameterized model, we show how a regularized inversion based on the Dix-type relation yields smooth depth profiles of shear-wave velocity. In the process, we quantitatively demonstrate the depth sensitivity of surface-wave phase velocity as a function of frequency and the accuracy of the Dix-type relation. We apply the over-parameterized approach to a near-surface data set within the frequency band from 5 to 40 Hz and find overall agreement between the inverted model and the result of full nonlinear inversion.
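
    For context, the classical Dix relation in reflection seismology, which the surface-wave result above generalizes, converts RMS velocities measured at two two-way travel times into the interval velocity of the layer between the corresponding reflectors. A minimal sketch (the input values are illustrative):

```python
import math

def dix_interval_velocity(t1, v1, t2, v2):
    """Classical Dix equation: interval velocity between two reflectors
    from RMS velocities v1, v2 (m/s) at two-way times t1 < t2 (s):
    V_int = sqrt((t2*v2^2 - t1*v1^2) / (t2 - t1))."""
    return math.sqrt((t2 * v2 ** 2 - t1 * v1 ** 2) / (t2 - t1))
```

For example, RMS velocities of 2000 m/s at 1 s and 2500 m/s at 2 s imply an interval velocity of about 2915 m/s; the paper's contribution is a relation of this type mapping phase velocity as a function of frequency to shear-wave velocity as a function of depth.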

  20. Prototype MCS Parameterization for Global Climate Models

    NASA Astrophysics Data System (ADS)

    Moncrieff, M. W.

    2017-12-01

    Excellent progress has been made with observational, numerical and theoretical studies of MCS processes, but the parameterization of those processes remains in a dire state and is missing from GCMs. The perceived complexity of the distribution, type, and intensity of organized precipitation systems has arguably daunted attention and stifled the development of adequate parameterizations. TRMM observations imply links between convective organization and large-scale meteorological features in the tropics and subtropics that are inadequately treated by GCMs. This calls for improved physical-dynamical treatment of organized convection to enable the next generation of GCMs to reliably address a slew of challenges. The multiscale coherent structure parameterization (MCSP) paradigm is based on the fluid-dynamical concept of coherent structures in turbulent environments. The effect of vertical shear on MCS dynamics, implemented as 2nd-baroclinic convective heating and convective momentum transport, is based on Lagrangian conservation principles, nonlinear dynamical models, and self-similarity. The prototype MCS parameterization, a minimalist proof-of-concept, is applied in the NCAR Community Atmosphere Model, Version 5.5 (CAM 5.5). The MCSP generates convectively coupled tropical waves and large-scale precipitation features, notably in the Indo-Pacific warm-pool and Maritime Continent region, a center-of-action for weather and climate variability around the globe.

  1. Connecting Free Energy Surfaces in Implicit and Explicit Solvent: an Efficient Method to Compute Conformational and Solvation Free Energies

    PubMed Central

    Deng, Nanjie; Zhang, Bin W.; Levy, Ronald M.

    2015-01-01

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions and protein-ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ~3 kcal/mol at only ~8 % of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the explicit/implicit thermodynamic cycle. PMID:26236174

  2. Connecting free energy surfaces in implicit and explicit solvent: an efficient method to compute conformational and solvation free energies.

    PubMed

    Deng, Nanjie; Zhang, Bin W; Levy, Ronald M

    2015-06-09

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
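
    The bookkeeping of the implicit/explicit thermodynamic cycle reduces to closing a loop of free energy legs: go from basin A in explicit solvent to A in implicit solvent, cross to basin B in implicit solvent, then couple back to explicit solvent at B. A minimal sketch of that arithmetic (the numbers in the usage note are invented, not results from the paper):

```python
def explicit_dg_ab(dg_implicit_ab, ddg_couple_a, ddg_couple_b):
    """Close the thermodynamic cycle A(expl) -> A(impl) -> B(impl) -> B(expl).

    dg_implicit_ab : free energy difference A -> B in implicit solvent
    ddg_couple_x   : implicit -> explicit coupling free energy at basin x,
                     computed with a localized decoupling scheme

    Returns the explicit-solvent A -> B free energy difference (kcal/mol).
    """
    return dg_implicit_ab + ddg_couple_b - ddg_couple_a
```

For instance, an implicit-solvent leg of 2.0 kcal/mol with coupling corrections of 0.4 (at A) and 0.7 (at B) kcal/mol gives an explicit-solvent difference of 2.3 kcal/mol; the efficiency gain comes from never having to cross the A-B barrier in explicit solvent.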

  3. The mixed impact of medical school on medical students' implicit and explicit weight bias.

    PubMed

    Phelan, Sean M; Puhl, Rebecca M; Burke, Sara E; Hardeman, Rachel; Dovidio, John F; Nelson, David B; Przedworski, Julia; Burgess, Diana J; Perry, Sylvia; Yeazel, Mark W; van Ryn, Michelle

    2015-10-01

    Health care trainees demonstrate implicit (automatic, unconscious) and explicit (conscious) bias against people from stigmatised and marginalised social groups, which can negatively influence communication and decision making. Medical schools are well positioned to intervene and reduce bias in new physicians. This study was designed to assess medical school factors that influence change in implicit and explicit bias against individuals from one stigmatised group: people with obesity. This was a prospective cohort study of medical students enrolled at 49 US medical schools randomly selected from all US medical schools within the strata of public and private schools and region. Participants were 1795 medical students surveyed at the beginning of their first year and end of their fourth year. Web-based surveys included measures of weight bias, and medical school experiences and climate. Bias change was compared with changes in bias in the general public over the same period. Linear mixed models were used to assess the impact of curriculum, contact with people with obesity, and faculty role modelling on weight bias change. Increased implicit and explicit biases were associated with less positive contact with patients with obesity and more exposure to faculty role modelling of discriminatory behaviour or negative comments about patients with obesity. Increased implicit bias was associated with training in how to deal with difficult patients. On average, implicit weight bias decreased and explicit bias increased during medical school, over a period of time in which implicit weight bias in the general public increased and explicit bias remained stable. Medical schools may reduce students' weight biases by increasing positive contact between students and patients with obesity, eliminating unprofessional role modelling by faculty members and residents, and altering curricula focused on treating difficult patients. © 2015 John Wiley & Sons Ltd.

  4. Perceived and Implicit Ranking of Academic Journals: An Optimization Choice Model

    ERIC Educational Resources Information Center

    Xie, Frank Tian; Cai, Jane Z.; Pan, Yue

    2012-01-01

    A new system of ranking academic journals is proposed in this study and optimization choice model used to analyze data collected from 346 faculty members in a business discipline. The ranking model uses the aggregation of perceived, implicit sequencing of academic journals by academicians, therefore eliminating several key shortcomings of previous…

  5. Effect of the Implicit Combinatorial Model on Combinatorial Reasoning in Secondary School Pupils.

    ERIC Educational Resources Information Center

    Batanero, Carmen; And Others

    1997-01-01

    Elementary combinatorial problems may be classified into three different combinatorial models: (1) selection; (2) partition; and (3) distribution. The main goal of this research was to determine the effect of the implicit combinatorial model on pupils' combinatorial reasoning before and after instruction. Gives an analysis of variance of the…

  6. Parameterization and Observability Analysis of Scalable Battery Clusters for Onboard Thermal Management

    DTIC Science & Technology

    2011-12-01

    the designed parameterization scheme and adaptive observer. A cylindrical battery thermal model in Eq. (1) with parameters of an A123 32157 LiFePO4 ...Morcrette, M. and Delacourt, C. (2010) Thermal modeling of a cylindrical LiFePO4/graphite lithium-ion battery. Journal of Power Sources, 195, 2961

  7. Implicit Motives and Men’s Perceived Constraint in Fatherhood

    PubMed Central

    Ruppen, Jessica; Waldvogel, Patricia; Ehlert, Ulrike

    2016-01-01

    Research shows that implicit motives influence social relationships. However, little is known about their role in fatherhood and, particularly, how men experience their paternal role. Therefore, this study examined the association of implicit motives and fathers’ perceived constraint due to fatherhood. Furthermore, we explored their relation to fathers’ life satisfaction. Participants were fathers with biological children (N = 276). They were asked to write picture stories, which were then coded for implicit affiliation and power motives. Perceived constraint and life satisfaction were assessed on a visual analog scale. A higher implicit need for affiliation was significantly associated with lower perceived constraint, whereas the implicit need for power had the opposite effect. Perceived constraint had a negative influence on life satisfaction. Structural equation modeling revealed significant indirect effects of implicit affiliation and power motives on life satisfaction mediated by perceived constraint. Our findings indicate that men with a higher implicit need for affiliation experience less constraint due to fatherhood, resulting in higher life satisfaction. The implicit need for power, however, results in more perceived constraint and is related to decreased life satisfaction. PMID:27933023

  8. Modeling the formation and aging of secondary organic aerosols in the Los Angeles metropolitan region during the CalNex 2010 field campaign

    NASA Astrophysics Data System (ADS)

    Hayes, P. L.; Ma, P. K.; Jimenez, J. L.; Zhao, Y.; Robinson, A. L.; Carlton, A. M. G.; Baker, K. R.; Ahmadov, R.; Washenfelder, R. A.; Alvarez, S. L.; Rappenglück, B.; Gilman, J.; Kuster, W.; De Gouw, J. A.; Prevot, A. S.; Zotter, P.; Szidat, S.; Kleindienst, T. E.; Offenberg, J. H.

    2015-12-01

    Several different literature parameterizations for the formation and evolution of urban secondary organic aerosol (SOA) are evaluated using a box model representing the Los Angeles Region during CalNex. The model SOA formed only from the oxidation of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper limit estimates accounting for recently reported losses of vapors to chamber walls. Including SOA from primary semi-volatile and intermediate volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model/measurement agreement for mass concentration at shorter photochemical ages (0.5 days). Our results strongly suggest that other precursors besides VOCs are needed to explain the observed SOA concentrations. In contrast, all of the literature P-S/IVOC parameterizations over-predict urban SOA formation at long photochemical ages (3 days) compared to observations from multiple sites, which can lead to problems in regional and global modeling. Sensitivity studies that reduce the IVOC emissions by one-half in the model improve SOA predictions at these long ages. In addition, when IVOC emissions in the Robinson et al. parameterization are constrained using recently reported measurements of these species, model/measurement agreement is achieved. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16-27%, 35-61%, and 19-35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71(±3)%.
The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in SOA concentrations (Bahreini et al., 2012; Hayes et al., 2013). However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in urban SOA, possibly cooking emissions, that was not accounted for in those previous studies, and which is higher on weekends.
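
    The P-S/IVOC parameterizations cited above all rest on absorptive partitioning: a volatility bin with saturation concentration C* resides in the particle phase with fraction 1/(1 + C*/C_OA), where the organic aerosol mass C_OA must itself be found self-consistently. A hedged sketch of that fixed-point calculation; the bin totals and C* values in the test are invented, not taken from any of the cited parameterizations:

```python
def total_oa(bin_totals, c_stars, seed=0.0, iters=200):
    """Self-consistent organic aerosol mass C_OA (ug/m3).

    bin_totals : total (gas + particle) concentration per volatility bin
    c_stars    : saturation concentration C* per bin (ug/m3)
    seed       : non-volatile absorbing seed mass, if any

    Each bin partitions to the particle phase with fraction
    1 / (1 + C*/C_OA); iterate until the implied C_OA converges.
    """
    c_oa = max(seed + 0.5 * sum(bin_totals), 1e-9)  # initial guess
    for _ in range(iters):
        c_oa = seed + sum(t / (1.0 + cs / c_oa)
                          for t, cs in zip(bin_totals, c_stars))
        c_oa = max(c_oa, 1e-9)  # keep the iterate positive
    return c_oa
```

A single bin with 10 μg/m³ total and C* = 1 μg/m³ converges to C_OA = 9 μg/m³, while a very volatile bin (large C*) contributes essentially nothing.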

  9. Method of sound synthesis

    DOEpatents

    Miner, Nadine E.; Caudell, Thomas P.

    2004-06-08

    A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.
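
    A toy version of such a parameterized pitched-sound model: pitch, duration, and harmonic weights are the model parameters, so nearby parameter settings produce related but perceptually different sounds. This sketch is only an illustration of the idea of a parameterized sound model, not the patented synthesis method itself:

```python
import math

def synth_tone(freq, duration, harmonics=(1.0, 0.5, 0.25), sr=8000):
    """Render a pitched tone as a list of samples: a few weighted
    harmonics under an exponential decay envelope.  freq in Hz,
    duration in seconds, harmonics = amplitude per overtone."""
    n = int(sr * duration)
    return [
        math.exp(-3.0 * i / n) * sum(
            a * math.sin(2.0 * math.pi * freq * (k + 1) * i / sr)
            for k, a in enumerate(harmonics))
        for i in range(n)
    ]
```

Varying `harmonics` or the decay constant yields the "subtle changes in sounds" the abstract describes, all from one small model.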

  10. Development of the PCAD Model to Assess Biological Significance of Acoustic Disturbance

    DTIC Science & Technology

    2015-09-30

    We identified northern elephant seals and Atlantic bottlenose dolphins as the best species to parameterize the PCAD model. These species represent...transfer functions described above for southern elephant seals, our goals are to parameterize these models to make them applicable to other species and...northern elephant seal demographic data to estimate adult female survival, reproduction, and pup survival as a function of maternal condition. As a major

  11. Building integral projection models: a user's guide

    PubMed Central

    Rees, Mark; Childs, Dylan Z; Ellner, Stephen P; Coulson, Tim

    2014-01-01

    In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses. PMID:24219157
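
    The kernel construction described above can be sketched numerically: discretize the kernel on a size mesh with the midpoint rule, then take the dominant eigenvalue of the resulting matrix as the asymptotic growth rate. The vital-rate functions below are invented for illustration (survival and growth only, with no reproduction kernel), not parameters from the Soay sheep example:

```python
import numpy as np

n, lo, hi = 100, 0.0, 10.0
z = np.linspace(lo, hi, n)   # size mesh
h = z[1] - z[0]              # midpoint-rule cell width

def survival(size):
    # logistic survival in size (illustrative coefficients)
    return 1.0 / (1.0 + np.exp(-(size - 3.0)))

def growth(z1, size):
    # Gaussian growth kernel: new size z1 given current size
    mu = 0.9 * size + 1.0
    return np.exp(-0.5 * ((z1 - mu) / 0.8) ** 2) / (0.8 * np.sqrt(2 * np.pi))

# K[i, j] ~ h * G(z_i, z_j) * s(z_j): survival-growth kernel matrix
K = h * growth(z[:, None], z[None, :]) * survival(z)[None, :]

# dominant eigenvalue = asymptotic growth rate of this (mortality-only) IPM
lam = np.max(np.abs(np.linalg.eigvals(K)))
```

With no reproduction the dominant eigenvalue is below one (the population can only shrink); adding a fecundity kernel to K is the step that can push it above one.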

  12. Conceptual and Developmental Analysis of Mental Models: An Example with Complex Change Problems.

    ERIC Educational Resources Information Center

    Poirier, Louise

    Defining better implicit models of children's actions in a series of situations is of paramount importance to understanding how knowledge is constructed. The objective of this study was to analyze the implicit mental models used by children in complex change problems to understand the stability of the models and their evolution with the child's…

  13. Objective calibration of regional climate models

    NASA Astrophysics Data System (ADS)

    Bellprat, O.; Kotlarski, S.; Lüthi, D.; Schär, C.

    2012-12-01

    Climate models are subject to high parametric uncertainty induced by poorly confined model parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often hamper model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, owing to the computational constraints imposed by the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on the model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations, but leads to an additional reduction of the model error.
The performance range captured is much wider than that sampled with the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool and could become standard procedure after the introduction of new model implementations, or after the spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving parameterization packages of global climate models.
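
    The metamodel idea can be illustrated in one dimension: fit a cheap quadratic surrogate to a handful of (parameter value, model error) pairs from expensive simulations, then take the surrogate's minimizer as the calibrated value. The data points below are invented for illustration; the actual metamodel is multivariate with interaction terms:

```python
import numpy as np

# Hypothetical tuning data: model error scores at five trial parameter values.
params = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
errors = np.array([1.8, 1.1, 0.9, 1.2, 2.0])

# Quadratic metamodel: error ~ c0 + c1*p + c2*p**2, fit by least squares.
c2, c1, c0 = np.polyfit(params, errors, 2)

# Calibrated parameter value: vertex of the fitted parabola.
p_opt = -c1 / (2.0 * c2)
```

Only a few expensive simulations are needed to pin down the surrogate, which is exactly why the paper can get by with 20-50 runs instead of a full search.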

  14. GHI calculation sensitivity on microphysics, land- and cumulus parameterization in WRF over the Reunion Island

    NASA Astrophysics Data System (ADS)

    De Meij, A.; Vinuesa, J.-F.; Maupas, V.

    2018-05-01

    The sensitivity of calculated global horizontal irradiation (GHI) values in the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes and land surface models were changed. First, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for the Reunion Island for 2014. In general, the model calculates the largest bias during the austral summer. This indicates that the model is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Second, the sensitivity of calculated GHI values to changes in the microphysics, cumulus parameterization and land surface models is evaluated. The sensitivity simulations showed that changing the microphysics from the Thompson scheme (or Single-Moment 6-class scheme) to the Morrison double-moment scheme improves the relative bias from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts the mass and number concentrations of five hydrometeors, which helps improve the calculation of the densities, sizes and lifetimes of the cloud droplets, whereas the single-moment schemes predict only the mass of fewer hydrometeor classes. Changing the cumulus parameterization schemes and land surface models does not have a large impact on GHI calculations.
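
    Bias figures like the 45% and 10% quoted above correspond to a simple evaluation metric. A sketch of one common definition, the mean bias normalized by the observed mean (the sample values in the test are invented):

```python
def relative_bias(model, obs):
    """Mean model-minus-observation bias, expressed as a percentage of
    the observed mean.  Positive values mean the model over-predicts."""
    n = len(obs)
    mean_bias = sum(m - o for m, o in zip(model, obs)) / n
    return 100.0 * mean_bias / (sum(obs) / n)
```

For example, a model that predicts 110 W/m² whenever 100 W/m² is observed has a relative bias of 10%.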

  15. Improved parametrization of the growth index for dark energy and DGP models

    NASA Astrophysics Data System (ADS)

    Jing, Jiliang; Chen, Songbai

    2010-03-01

    We propose two improved parameterized forms for the growth index of the linear matter perturbations: (I) γ(z) = γ0 + (γ∞ − γ0) z/(1+z) and (II) γ(z) = γ0 + γ1 z/(1+z) + (γ∞ − γ1 − γ0) [z/(1+z)]^α. With these forms of γ(z), we analyze the accuracy of the approximation of the growth factor f by Ωm^γ(z) for both the wCDM model and the DGP model. For the first improved parameterized form, we find that the approximation accuracy is enhanced at high redshifts for both kinds of models, but not at low redshifts. For the second improved parameterized form, it is found that Ωm^γ(z) approximates the growth factor f very well at all redshifts. For chosen α, the relative error is below 0.003% for the ΛCDM model and 0.028% for the DGP model when Ωm = 0.27. Thus, the second improved parameterized form of γ(z) should be useful for high-precision constraints on the growth index of different models with observational data. Moreover, we also show that α depends on the equation of state w and the fractional energy density of matter Ωm0, which may help us learn more information about dark energy and DGP models.
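
    The approximation being tested has a simple computational form: evaluate Ωm(z) from the background expansion, evaluate the parameterized γ(z), and raise one to the other. A sketch for a flat ΛCDM background using form (I); the anchor values γ0 and γ∞ below are illustrative assumptions, not the paper's fitted values:

```python
def omega_m(z, om0=0.27):
    """Matter fraction for flat LCDM: Om(z) = Om0 (1+z)^3 / E(z)^2."""
    e2 = om0 * (1.0 + z) ** 3 + (1.0 - om0)
    return om0 * (1.0 + z) ** 3 / e2

def gamma_form1(z, g0=0.555, ginf=0.65):
    """Parameterized form (I): gamma(z) = g0 + (ginf - g0) * z/(1+z).
    g0 and ginf here are illustrative anchors, not fitted values."""
    return g0 + (ginf - g0) * z / (1.0 + z)

def growth_factor_approx(z, om0=0.27):
    """Approximate growth factor f(z) ~ Om(z)**gamma(z)."""
    return omega_m(z, om0) ** gamma_form1(z)
```

At z = 0 this gives f ≈ 0.27^0.555 ≈ 0.48, and f tends to 1 at high redshift where matter dominates, the regime in which form (I) improves the fit.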

  16. An implicit numerical model for multicomponent compressible two-phase flow in porous media

    NASA Astrophysics Data System (ADS)

    Zidane, Ali; Firoozabadi, Abbas

    2015-11-01

    We introduce a new implicit approach to model multicomponent compressible two-phase flow in porous media with species transfer between the phases. In the implicit discretization of the species transport equation in our formulation, we calculate for the first time the derivative of the molar concentration of component i in phase α (cα,i) with respect to the total molar concentration (ci) at constant volume V and temperature T. The species transport equation is discretized by the finite volume (FV) method. The fluxes are calculated based on powerful features of the mixed finite element (MFE) method, which provides the pressure at grid-cell interfaces in addition to the pressure at the grid-cell center. The efficiency of the proposed model is demonstrated by comparing our results with three existing implicit compositional models. Our algorithm has low numerical dispersion despite being based on a first-order space discretization. The proposed algorithm is very robust.

  17. Local identifiability and sensitivity analysis of neuromuscular blockade and depth of hypnosis models.

    PubMed

    Silva, M M; Lemos, J M; Coito, A; Costa, B A; Wigren, T; Mendonça, T

    2014-01-01

    This paper addresses the local identifiability and sensitivity properties of two classes of Wiener models for the neuromuscular blockade and depth of hypnosis, when drug dose profiles like the ones commonly administered in clinical practice are used as model inputs. The local parameter identifiability was assessed based on the singular value decomposition of the normalized sensitivity matrix. For the given input signal excitation, the results show an over-parameterization of the standard pharmacokinetic/pharmacodynamic models. The same identifiability assessment was performed on recently proposed minimally parameterized parsimonious models for both the neuromuscular blockade and the depth of hypnosis. The results show that the majority of the model parameters are identifiable from the available input-output data. This indicates that any identification strategy based on the minimally parameterized parsimonious Wiener models for the neuromuscular blockade and for the depth of hypnosis is likely to be more successful than if standard models are used.
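The identifiability test described above can be sketched on a toy model. The model below, in which two parameters enter only as a product, is a hypothetical stand-in for the pharmacokinetic/pharmacodynamic models, chosen to show how a near-zero singular value of the normalized sensitivity matrix flags over-parameterization:

```python
import numpy as np

t = np.linspace(0.1, 5.0, 40)
a1, a2, b = 2.0, 1.5, 0.8

# Toy model in which a1 and a2 appear only as a product:
# structurally over-parameterized for this input.
y = a1 * a2 * np.exp(-b * t)

# Normalized sensitivity matrix S_ij = (theta_j / y_i) * d y_i / d theta_j,
# using the analytic partial derivatives of the toy model.
S = np.column_stack([
    (a1 / y) * (a2 * np.exp(-b * t)),          # d y / d a1  -> column of ones
    (a2 / y) * (a1 * np.exp(-b * t)),          # d y / d a2  -> column of ones
    (b  / y) * (-t * a1 * a2 * np.exp(-b * t)) # d y / d b   -> -b*t
])

sv = np.linalg.svd(S, compute_uv=False)
# One singular value is numerically zero: only the product a1*a2 and b
# are locally identifiable from this input-output data.
```

The number of singular values that are well above numerical noise gives the number of locally identifiable parameter combinations, which is the criterion the abstract applies to the full models.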

  18. A dual-loop model of the human controller in single-axis tracking tasks

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1977-01-01

    A dual-loop model of the human controller in single-axis compensatory tracking tasks is introduced. This model possesses an inner-loop closure which involves feeding back the portion of the controlled-element output rate that is due to control activity. The sensory inputs to the human controller are assumed to be system error and control force. The former is assumed to be sensed via visual, aural, or tactile displays, while the latter is assumed to be sensed kinesthetically. A nonlinear form of the model is briefly discussed. This model is then linearized and parameterized. A set of general adaptive characteristics for the parameterized model is hypothesized. These characteristics describe the manner in which the parameters in the linearized model vary with factors such as display quality. It is demonstrated that the parameterized model can produce controller describing functions which closely approximate those measured in laboratory tracking tasks for a wide variety of controlled elements.

  19. An Implicit Model Development Process for Bounding External, Seemingly Intangible/Non-Quantifiable Factors

    DTIC Science & Technology

    2017-06-01

    This research expands the modeling and simulation (M&S) body of knowledge through the development of an Implicit Model Development Process (IMDP... When augmented to traditional Model Development Processes (MDP), the IMDP enables the development of models that can address a broader array of... where a broader, more holistic approach to defining a model's referent is achieved. Next, the IMDP codifies the process for implementing the improved model

  20. Recent developments and assessment of a three-dimensional PBL parameterization for improved wind forecasting over complex terrain

    NASA Astrophysics Data System (ADS)

    Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.

    2017-12-01

    At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity. This homogeneity assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated. Applying a one-dimensional PBL parameterization to high-resolution mesoscale simulations in complex terrain could therefore result in significant errors. For high-resolution mesoscale simulations of flows in complex terrain, we have developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982). Our WRF implementation uses a purely algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015 to 2017. We focus on selected cases when physical phenomena of significance for wind energy applications, such as mountain waves, topographic wakes, and gap flows, were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 x 3000 x 200 grid cells in the zonal, meridional, and vertical directions, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients. 
The presentation will highlight the advantages of the 3D PBL scheme in regions of complex terrain.

  1. Toward Improved Parameterization of a Meso-Scale Hydrologic Model in a Discontinuous Permafrost, Boreal Forest Ecosystem

    NASA Astrophysics Data System (ADS)

    Endalamaw, A. M.; Bolton, W. R.; Young, J. M.; Morton, D.; Hinzman, L. D.

    2013-12-01

    The sub-arctic environment can be characterized as being located in the zone of discontinuous permafrost. Although the distribution of permafrost is site specific, it dominates many of the hydrologic and ecologic responses and functions, including vegetation distribution, stream flow, soil moisture, and storage processes. In this region, the boundaries that separate the major ecosystem types (deciduous-dominated and coniferous-dominated ecosystems) as well as permafrost (permafrost versus non-permafrost) occur over very short spatial scales. One of the goals of this research project is to improve parameterizations of meso-scale hydrologic models in this environment. Using the Caribou-Poker Creeks Research Watershed (CPCRW) as the test area, simulations of the headwater catchments of varying permafrost and vegetation distributions were performed. CPCRW, located approximately 50 km northeast of Fairbanks, Alaska, lies within the zone of discontinuous permafrost and the boreal forest ecosystem. The Variable Infiltration Capacity (VIC) model was selected as the hydrologic model. In CPCRW, permafrost and coniferous vegetation are generally found on north-facing slopes and valley bottoms, while permafrost-free soils and deciduous vegetation are generally found on south-facing slopes. In this study, hydrologic simulations using fine-scale vegetation and soil parameterizations - based upon slope and aspect analysis at a 50 meter resolution - were conducted. Simulations were also conducted using downscaled vegetation from the Scenarios Network for Alaska and Arctic Planning (SNAP) (1 km resolution) and soil data sets from the Food and Agriculture Organization (FAO) (approximately 9 km resolution). 
Preliminary simulation results show that soil and vegetation parameterizations based upon fine-scale slope/aspect analysis increase the R2 values (from 0.5 to 0.65 in the high-permafrost (53%) basin; from 0.43 to 0.56 in the low-permafrost (2%) basin) relative to parameterizations based on coarse-scale data. These results suggest that fine-resolution parameterizations can improve meso-scale hydrological modeling in this region.
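A slope/aspect analysis of the kind used for the fine-scale parameterization can be sketched from a gridded DEM. The 50 m cell size matches the study, but the aspect convention below is one common choice, not necessarily the one the authors used:

```python
import numpy as np

def slope_aspect(dem, cellsize=50.0):
    """Slope (degrees) and aspect (degrees, in [0, 360)) from a DEM array.

    Rows are assumed to run south-to-north and columns west-to-east;
    the aspect convention (0 = increasing-row direction) is one common
    choice and may differ from the study's GIS tooling.
    """
    dz_dy, dz_dx = np.gradient(dem, cellsize)  # gradients along rows, columns
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect
```

Classifying each 50 m cell by aspect (e.g., north-facing versus south-facing) then drives the choice of permafrost/coniferous versus permafrost-free/deciduous parameters described in the abstract.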

  2. Evaluation of Planetary Boundary Layer Scheme Sensitivities for the Purpose of Parameter Estimation

    EPA Science Inventory

    Meteorological model errors caused by imperfect parameterizations generally cannot be overcome simply by optimizing initial and boundary conditions. However, advanced data assimilation methods are capable of extracting significant information about parameterization behavior from ...

  3. Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-07-01

    Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing, as it deals with a distribution of variables in the subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al. (1988, J. Fluid Mech. 192, 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis; among these, the wavelet basis is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes as the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
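As a minimal illustration of the mode-decomposition approach, the proper orthogonal decomposition (empirical orthogonal functions) of a data set can be computed with an SVD. The synthetic "flow" field and mode amplitudes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshots: two coherent spatial modes plus weak noise.
x = np.linspace(0.0, 2.0 * np.pi, 64)
t = np.linspace(0.0, 10.0, 200)
field = (np.outer(np.sin(t), np.sin(x))
         + 0.3 * np.outer(np.cos(3.0 * t), np.sin(2.0 * x))
         + 0.01 * rng.standard_normal((200, 64)))

# POD = SVD of the mean-removed snapshot matrix.
mean = field.mean(axis=0)
U, s, Vt = np.linalg.svd(field - mean, full_matrices=False)

# Fraction of variance captured by each mode; truncating to the leading
# modes gives the low-dimensional (Galerkin-style) representation.
energy = s**2 / np.sum(s**2)
recon2 = mean + U[:, :2] * s[:2] @ Vt[:2]   # rank-2 reconstruction
```

The rows of `Vt` are the spatial modes; projecting the governing equations onto a few of them is the low-dimensional dynamical-system construction of Aubry et al. that the abstract refers to.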

  4. Kinematic Structural Modelling in Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.

    2017-04-01

    We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modeling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. On the other hand, kinematic structural modelling intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified physical laws into the model, at the cost of a substantial increase in the number of uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both of them. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework. 
In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we show that we derive ensemble realizations of implicit models that now incorporate the knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.

  5. Inclusion of Solar Elevation Angle in Land Surface Albedo Parameterization Over Bare Soil Surface.

    PubMed

    Zheng, Zhiyuan; Wei, Zhigang; Wen, Zhiping; Dong, Wenjie; Li, Zhenchao; Wen, Xiaohang; Zhu, Xian; Ji, Dong; Chen, Chen; Yan, Dongdong

    2017-12-01

    Land surface albedo is a significant parameter for maintaining the surface energy balance. Parameterizing bare soil surface albedo is also important for developing land surface process models that accurately reflect the diurnal variation characteristics and mechanism of solar spectral radiation albedo on bare soil surfaces, and for understanding the relationships between climate factors and spectral radiation albedo. Using a data set of field observations, we conducted experiments to analyze the variation characteristics of land surface solar spectral radiation and the corresponding albedo over a typical Gobi bare soil underlying surface and to investigate the relationships between the land surface solar spectral radiation albedo, solar elevation angle, and soil moisture. Based on simultaneous measurements of solar elevation angle and soil moisture, we propose a new two-factor parameterization scheme for spectral radiation albedo over bare soil underlying surfaces. The results of numerical simulation experiments show that the new parameterization scheme depicts the diurnal variation characteristics of bare soil surface albedo more accurately than previous schemes. Solar elevation angle is one of the most important factors for parameterizing bare soil surface albedo and must be considered in the parameterization scheme, especially in arid and semiarid areas with low soil moisture content. This study reveals the characteristics and mechanism of the diurnal variation of bare soil surface solar spectral radiation albedo and is helpful for developing land surface process models, weather models, and climate models.
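A two-factor scheme of this general shape (albedo increasing at low solar elevation and decreasing with soil moisture) might be sketched as follows. The functional form and the coefficients are hypothetical illustrations, not the fitted scheme from the paper:

```python
import numpy as np

def bare_soil_albedo(sun_elev_deg, soil_moisture, a=0.30, b=0.10, c=8.0):
    """Hypothetical two-factor bare-soil albedo.

    Albedo rises at low solar elevation (strong forward scattering near
    the horizon) and falls with volumetric soil moisture (wet soil is
    darker). Coefficients a, b, c are illustrative placeholders.
    """
    h = np.radians(sun_elev_deg)
    elev_term = 1.0 + b * np.exp(-c * np.sin(h))  # grows as the sun nears horizon
    moist_term = np.exp(-soil_moisture)           # drier soil -> brighter surface
    return a * elev_term * moist_term
```

Evaluating such a function over a day's solar elevation trajectory reproduces the U-shaped diurnal albedo cycle the abstract says single-factor (moisture-only) schemes miss.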

  6. Estimating the Contrail Impact on Climate Using the UK Met Office Model

    NASA Astrophysics Data System (ADS)

    Rap, A.; Forster, P. M.

    2008-12-01

    With air travel predicted to increase over the coming century, the emissions associated with air traffic are expected to have a significant warming effect on climate. According to current best estimates, an important contribution comes from contrails. However, as reported by the IPCC fourth assessment report, these best estimates still carry high uncertainty. The development and validation of contrail parameterizations in global climate models is therefore very important. The present study develops a contrail parameterization within the UK Met Office Climate Model. Using this new parameterization, we estimate that for 2002 air traffic the global mean annual contrail coverage is approximately 0.11%, a value in good agreement with several other estimates. The corresponding contrail radiative forcing (RF) is calculated to be approximately 4 and 6 mW m-2 in all-sky and clear-sky conditions, respectively. These values lie within the lower end of the RF range reported by the latest IPCC assessment. The relatively strong cloud masking effect on contrails in our parameterization compared with other studies is investigated, and a possible cause for this difference is suggested. The effect of the diurnal variation of air traffic on both contrail coverage and contrail RF is also investigated. The new parameterization is also employed in thirty-year slab-ocean model runs in order to give one of the first insights into the effects of contrails on the daily temperature range and their overall climate impact.

  7. Technical report series on global modeling and data assimilation. Volume 2: Direct solution of the implicit formulation of fourth order horizontal diffusion for gridpoint models on the sphere

    NASA Technical Reports Server (NTRS)

    Li, Yong; Moorthi, S.; Bates, J. Ray; Suarez, Max J.

    1994-01-01

    High-order horizontal diffusion of the form K∇^(2m) is widely used in spectral models as a means of preventing energy accumulation at the shortest resolved scales. In the spectral context, an implicit formulation of such diffusion is trivial to implement. The present note describes an efficient method of implementing implicit high-order diffusion in global finite difference models. The method expresses the high-order diffusion equation as a sequence of equations involving ∇^2. The solution is obtained by combining fast Fourier transforms in longitude with a finite difference solver for the second-order ordinary differential equation in latitude. The implicit diffusion routine is suitable for use in any finite difference global model that uses a regular latitude/longitude grid. The absence of a restriction on the timestep makes it particularly suitable for use in semi-Lagrangian models. The scale selectivity of the high-order diffusion gives it an advantage over the uncentering method that has been used to control computational noise in two-time-level semi-Lagrangian models.
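The key properties (scale-selective damping with no timestep restriction) can be illustrated on a periodic 1-D analogue, where the implicit step is diagonal in Fourier space. This is a sketch of the idea, not the paper's latitude/longitude solver:

```python
import numpy as np

def implicit_hyperdiffusion(u, K, dt, m=2, L=2.0 * np.pi):
    """One implicit step of du/dt = -K (-∇²)^m u on a periodic 1-D domain.

    In Fourier space (-∇²)^m becomes k^(2m), so the backward-Euler step
    is an exact per-mode division: unconditionally stable for any dt.
    """
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # integer wavenumbers for L=2π
    u_hat = np.fft.fft(u)
    u_hat /= 1.0 + K * dt * k**(2 * m)            # implicit (backward Euler) step
    return np.real(np.fft.ifft(u_hat))
```

The damping factor 1/(1 + K dt k^(2m)) leaves the mean (k = 0) untouched, barely affects large scales, and strongly damps the shortest scales; this is the scale selectivity that the note contrasts with uncentering.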

  8. Implicit Theories about Intelligence and Growth (Personal Best) Goals: Exploring Reciprocal Relationships

    ERIC Educational Resources Information Center

    Martin, Andrew J.

    2015-01-01

    Background: There has been increasing interest in growth approaches to students' academic development, including value-added models, modelling of academic trajectories, growth motivation orientations, growth mindsets, and growth goals. Aims: This study sought to investigate the relationships between implicit theories about intelligence…

  9. Examples of grid generation with implicitly specified surfaces using GridPro (TM)/az3000. 1: Filleted multi-tube configurations

    NASA Technical Reports Server (NTRS)

    Cheng, Zheming; Eiseman, Peter R.

    1995-01-01

    With examples, we illustrate how implicitly specified surfaces can be used for grid generation with GridPro/az3000. The particular examples address two questions: (1) How do you model intersecting tubes with fillets? and (2) How do you generate grids inside the intersected tubes? The implication is much more general. With the results in a forthcoming paper which develops an easy-to-follow procedure for implicit surface modeling, we provide a powerful means for rapid prototyping in grid generation.

  10. Development of Turbulent Biological Closure Parameterizations

    DTIC Science & Technology

    2011-09-30

    LONG-TERM GOAL: The long-term goals of this project are: (1) to develop a theoretical framework to quantify turbulence-induced NPZ interactions, and (2) to apply the theory to develop parameterizations to be used in realistic coupled physical-biological numerical models of the environment. OBJECTIVES: Connect the Goodman and Robinson (2008) statistically based PDF theory to Advection Diffusion Reaction (ADR) modeling of NPZ interaction.

  11. Stochastic Parameterization: Toward a New View of Weather and Climate Models

    DOE PAGES

    Berner, Judith; Achatz, Ulrich; Batté, Lauriane; ...

    2017-03-31

    The last decade has seen the success of stochastic parameterizations in short-term, medium-range, and seasonal forecasts: operational weather centers now routinely use stochastic parameterization schemes to represent model inadequacy better and to improve the quantification of forecast uncertainty. Developed initially for numerical weather prediction, the inclusion of stochastic parameterizations not only provides better estimates of uncertainty, but it is also extremely promising for reducing long-standing climate biases and is relevant for determining the climate response to external forcing. This article highlights recent developments from different research groups that show that the stochastic representation of unresolved processes in the atmosphere, oceans, land surface, and cryosphere of comprehensive weather and climate models 1) gives rise to more reliable probabilistic forecasts of weather and climate and 2) reduces systematic model bias. We make a case that the use of mathematically stringent methods for the derivation of stochastic dynamic equations will lead to substantial improvements in our ability to accurately simulate weather and climate at all scales. Recent work in mathematics, statistical mechanics, and turbulence is reviewed; its relevance for the climate problem is demonstrated; and future research directions are outlined.

  13. Correction of Excessive Precipitation over Steep and High Mountains in a GCM: A Simple Method of Parameterizing the Thermal Effects of Subgrid Topographic Variation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.

    2015-01-01

    The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.

  14. Studies in the parameterization of cloudiness in climate models and the analysis of radiation fields in general circulation models

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN

    1990-01-01

    Broad-band parameterizations for atmospheric radiative transfer were developed for clear and cloudy skies. These were in the shortwave and longwave regions of the spectrum. These models were compared with other models in an international effort called ICRCCM (Intercomparison of Radiation Codes for Climate Models). The radiation package developed was used for simulations of a General Circulation Model (GCM). A synopsis is provided of the research accomplishments in the two areas separately. Details are available in the published literature.

  15. Sensitivity of Rainfall-runoff Model Parametrization and Performance to Potential Evaporation Inputs

    NASA Astrophysics Data System (ADS)

    Jayathilake, D. I.; Smith, T. J.

    2017-12-01

    Many watersheds of interest are confronted with insufficient data and poor process understanding. Therefore, understanding the relative importance of input data types and the impact of different data qualities on model performance, parameterization, and fidelity is critically important to improving hydrologic models. In this paper, the changes in model parameterization and performance are explored with respect to four different potential evapotranspiration (PET) products of varying quality. For each PET product, two widely used conceptual rainfall-runoff models are calibrated with multiple objective functions to a sample of 20 basins included in the MOPEX data set and analyzed to understand how model behavior varied. Model results are further analyzed by classifying catchments as energy- or water-limited using the Budyko framework. The results demonstrated that model fit was largely unaffected by the quality of the PET inputs. However, model parameterizations were clearly sensitive to PET inputs, as their production parameters adjusted to counterbalance input errors. Despite this, changes in model robustness were not observed for either model across the four PET products, although robustness was affected by model structure.
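The Budyko classification used to separate energy- and water-limited catchments can be sketched with the aridity index PET/P, together with Budyko's (1974) curve for the long-term evaporative fraction:

```python
import math

def budyko_class(pet, precip):
    """Classify a catchment by the aridity index phi = PET/P:
    phi > 1 -> water-limited, phi < 1 -> energy-limited."""
    return "water-limited" if pet / precip > 1.0 else "energy-limited"

def budyko_evap_ratio(phi):
    """Budyko (1974) curve for long-term E/P as a function of phi = PET/P:
    E/P = sqrt(phi * tanh(1/phi) * (1 - exp(-phi)))."""
    return math.sqrt(phi * math.tanh(1.0 / phi) * (1.0 - math.exp(-phi)))
```

Catchments whose observed E/P sits far from the curve for their aridity index are the ones where PET input errors are most likely to be absorbed into calibrated production parameters, which is the compensation behavior the abstract reports.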

  16. A theory-based parameterization for heterogeneous ice nucleation and implications for the simulation of ice processes in atmospheric models

    NASA Astrophysics Data System (ADS)

    Savre, J.; Ekman, A. M. L.

    2015-05-01

    A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi-Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
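The quasi-Monte Carlo integration over a contact-angle PDF can be sketched as follows. The geometric compatibility factor f(θ) is standard classical nucleation theory, but the Gaussian θ-PDF, the importance weighting on a uniform low-discrepancy grid, and the barrier height are illustrative simplifications of the paper's scheme:

```python
import numpy as np

def van_der_corput(n, base=2):
    """Low-discrepancy sequence in (0, 1) for quasi-Monte Carlo sampling."""
    seq = np.zeros(n)
    for i in range(n):
        f, r, x = 1.0, i + 1, 0.0
        while r > 0:
            f /= base
            x += f * (r % base)
            r //= base
        seq[i] = x
    return seq

def compatibility(theta):
    """CNT geometric factor f(θ) = (2+cosθ)(1−cosθ)²/4 reducing the barrier."""
    c = np.cos(theta)
    return (2.0 + c) * (1.0 - c)**2 / 4.0

def mean_rate(mu_deg, sigma_deg, barrier=20.0, n=512):
    """QMC average of a CNT-like rate exp(-barrier*f(θ)) over a Gaussian
    contact-angle PDF, via importance weights on a uniform QMC grid.
    The barrier height (in kT units) is an illustrative value."""
    theta = van_der_corput(n) * np.pi                 # sample θ over (0, π)
    mu, sig = np.radians(mu_deg), np.radians(sigma_deg)
    w = np.exp(-0.5 * ((theta - mu) / sig)**2)       # unnormalized Gaussian PDF
    rate = np.exp(-barrier * compatibility(theta))
    return np.sum(w * rate) / np.sum(w)
```

Because f(θ) grows with θ, shifting the PDF toward low contact angles (the "good" ice nuclei) raises the mean rate, which is why removing the low-θ tail as particles nucleate, as the abstract describes, matters so much.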

  17. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2016-04-01

    In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference: Williams, P. D., N. J. Howe, J. M. Gregory, R. S. Smith, and M. M. Joshi (2016): Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
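Zero-mean noise with a prescribed amplitude and decorrelation time is commonly generated as a first-order autoregressive (AR(1)) process. The generator below is a sketch of that standard construction (an assumption about the implementation, not the paper's exact scheme):

```python
import numpy as np

def red_noise(n_steps, dt, tau, sigma, rng=None):
    """Zero-mean AR(1) ('red') noise with decorrelation time tau and
    stationary standard deviation sigma, suitable for perturbing a
    model tendency once per timestep dt."""
    if rng is None:
        rng = np.random.default_rng()
    phi = np.exp(-dt / tau)                       # lag-1 autocorrelation
    eps = sigma * np.sqrt(1.0 - phi**2)           # keeps stationary std = sigma
    shocks = eps * rng.standard_normal(n_steps)
    x = np.zeros(n_steps)
    for i in range(1, n_steps):
        x[i] = phi * x[i - 1] + shocks[i]
    return x
```

Varying `sigma` and `tau` corresponds directly to the noise-amplitude and decorrelation-time sensitivity experiments described in the abstract.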

  18. 10 Ways to Improve the Representation of MCSs in Climate Models

    NASA Astrophysics Data System (ADS)

    Schumacher, C.

    2017-12-01

    1. The first way to improve the representation of mesoscale convective systems (MCSs) in global climate models (GCMs) is to recognize that MCSs are important to climate. That may be obvious to most of the people attending this session, but it cannot be taken for granted in the wider community. The fact that MCSs produce a large fraction of global rainfall and dramatically impact the atmosphere via transports of heat, moisture, and momentum must be continuously stressed. 2-4. There have traditionally been three approaches to representing MCSs and/or their impacts in GCMs. The first is to improve cumulus parameterizations by implementing features, such as cold pools, that are assumed to better organize convection. The second is to include mesoscale processes, such as mesoscale vertical motions, in the cumulus parameterization. The third is to buy your way out with higher resolution, using techniques like super-parameterization or global cloud-resolving model runs. All of these approaches have their pros and cons, but none of them satisfactorily solves the MCS climate modeling problem. 5-10. Looking forward, there are active discussion and new ideas in the modeling community on how to better represent convective organization in models. A number of these ideas are a dramatic shift from the traditional plume-based cumulus parameterizations of most GCMs, such as mesoscale parameterizations based on physical impacts (e.g., via heating), on empirical relationships derived from big data/machine learning, or on stochastic approaches. Regardless of the technique employed, smart evaluation against observations is paramount for refining and constraining the inevitable tunable parameters in any parameterization.

  19. Are Atmospheric Updrafts a Key to Unlocking Climate Forcing and Sensitivity?

    DOE PAGES

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...

    2016-06-08

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud-aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities, and parameterizations which do provide vertical velocities have been subject to limited evaluation against what have until recently been scant observations. Atmospheric observations imply that the distribution of vertical velocities depends on the areas over which the vertical velocities are averaged. Distributions of vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of scale-dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  20. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
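    Tikhonov regularization, one of the PEST schemes mentioned above, stabilizes an otherwise ill-posed estimation problem by penalizing departure from preferred parameter values. The sketch below is a generic linear-algebra illustration of that idea, not PEST itself; the matrices and the regularization weight are invented for the example.

```python
import numpy as np

# Ill-posed inverse problem: more parameters than informative observations.
rng = np.random.default_rng(1)
n_obs, n_par = 20, 50
J = rng.standard_normal((n_obs, n_par))      # Jacobian (sensitivity matrix)
p_true = np.zeros(n_par)
p_true[:5] = 1.0                             # only a few parameters matter
d = J @ p_true + 0.01 * rng.standard_normal(n_obs)  # noisy observations

# Tikhonov: minimize ||J p - d||^2 + lam * ||p||^2  (preferred value p = 0),
# which leads to the regularized normal equations (J^T J + lam I) p = J^T d.
lam = 0.1  # weight balancing data fit against the regularization constraint
p_hat = np.linalg.solve(J.T @ J + lam * np.eye(n_par), J.T @ d)

misfit = np.linalg.norm(J @ p_hat - d)
```

    In PEST terms, `lam` plays the role of the weight on the regularization objective relative to the measurement objective; the normal-equations form here is only the simplest expression of the approach.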

  1. Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach

    NASA Astrophysics Data System (ADS)

    Berloff, Pavel

    2018-07-01

    This work continues the development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models, and focuses on the classical double-gyre problem, in which the main dynamic eddy effect is to maintain the eastward jet extension of the western boundary currents and its adjacent recirculation zones via the eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood; in this paper we first study it and then propose and test a novel parameterization of it. We start by decomposing the reference eddy-resolving flow solution into large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies and in the transient rectified eddy component, which consists of highly anisotropic ribbons of opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to formulate the key eddy parameterization hypothesis: in an eddy-permitting model, the at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution.
    Such amplification is a simple and novel eddy parameterization framework, implemented here in terms of local, deterministic flow roughening controlled by a single parameter. We test the parameterization skill in a hierarchy of non-eddy-resolving and eddy-permitting modifications of the original model and demonstrate that it can indeed be highly efficient at restoring the eastward jet extension and its adjacent recirculation zones. The new deterministic parameterization framework not only combines remarkable simplicity with good performance but is also dynamically transparent; it therefore provides a powerful alternative to the common eddy diffusion and emerging stochastic parameterizations.
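    The single-parameter "flow roughening" idea described above can be sketched schematically: split the flow with a spatial filter, then amplify the sub-filter (eddy) residual. The 1-D example below is an illustration only; the moving-average filter and the amplitude value are assumptions, not the paper's exact operator.

```python
import numpy as np

def roughen(q, amplitude, width=5):
    """Amplify the sub-filter-scale component of a field q.

    Schematic of the single-parameter roughening idea: decompose the flow
    with a spatial filter (here a simple moving average) and scale up the
    residual eddy part. amplitude > 1 boosts the resolved backscatter;
    amplitude = 1 leaves the field unchanged.
    """
    kernel = np.ones(width) / width
    large_scale = np.convolve(q, kernel, mode="same")  # filtered (large-scale) flow
    eddy = q - large_scale                             # sub-filter residual
    return large_scale + amplitude * eddy

x = np.linspace(0.0, 2.0 * np.pi, 200)
q = np.sin(x) + 0.1 * np.sin(20.0 * x)   # large-scale jet plus small-scale eddies
q_rough = roughen(q, amplitude=2.0)
```

    In an eddy-permitting model the same operation would be applied locally to the prognostic field at each time step, with the amplification factor as the one tunable parameter.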

  2. Modeling of the Wegener Bergeron Findeisen process—implications for aerosol indirect effects

    NASA Astrophysics Data System (ADS)

    Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.

    2008-10-01

    A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed, and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the process of phase transition in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation, in which the onset of the WBF effect depended on a threshold value of the mixing ratio of cloud ice. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds compared to the previous approach. The new parameterization yields a model state which gives reasonable agreement with observed quantities, allowing for calculations of aerosol effects on mixed-phase clouds involving a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.

  3. A satellite observation test bed for cloud parameterization development

    NASA Astrophysics Data System (ADS)

    Lebsock, M. D.; Suselj, K.

    2015-12-01

    We present an observational test-bed of cloud and precipitation properties derived from CloudSat, CALIPSO, and other A-Train instruments. The focus of the test-bed is on marine boundary layer clouds, including stratocumulus and cumulus and the transition between these cloud regimes. Test-bed properties include cloud cover and three-dimensional cloud fraction, along with cloud water path, precipitation water content, and associated radiative fluxes. We also include the subgrid-scale distributions of cloud, precipitation, and radiative quantities, which must be diagnosed by a model parameterization. The test-bed further includes meteorological variables from the Modern-Era Retrospective analysis for Research and Applications (MERRA). MERRA variables provide the initialization and forcing datasets needed to run a parameterization in Single Column Model (SCM) mode. We show comparisons of an Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, coupled to microphysics and macrophysics packages and run in SCM mode, with observed clouds. Comparisons are performed regionally in areas of climatological subsidence, as well as stratified by dynamical and thermodynamical variables. The comparisons demonstrate the ability of the EDMF model to capture the observed transitions between subtropical stratocumulus and cumulus cloud regimes.

  4. Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM

    NASA Technical Reports Server (NTRS)

    Yao, Mao-Sung; Cheng, Ye

    2013-01-01

    The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report (AR5). Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or with only the local turbulence parameterization, these two quantities improve their vertical structures, but the vertical transport of water vapor is still weak in the planetary boundary layers (PBLs); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures at all latitudes due to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of the turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in computing nonlocal transports, turbulent length scale, and PBL height, and it shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over oceans. This suggests that a cloud-producing scheme needs to be constructed in a framework that also takes the turbulence into consideration.

  5. A general science-based framework for dynamical spatio-temporal models

    USGS Publications Warehouse

    Wikle, C.K.; Hooten, M.B.

    2010-01-01

    Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially-explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal to some extent with this issue by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been on the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case. We then develop a general nonlinear spatio-temporal framework that we call general quadratic nonlinearity and demonstrate that it accommodates many different classes of science-based parameterizations as special cases. The model is presented in a hierarchical Bayesian framework and is illustrated with examples from ecology and oceanography. © 2010 Sociedad de Estadística e Investigación Operativa.
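    The general-quadratic-nonlinearity idea can be sketched as a first-order Markovian state evolution in which each location's update includes a quadratic interaction of the full state with itself. The toy simulation below is purely illustrative: the matrices are small random placeholders, not a fitted or hierarchical Bayesian model.

```python
import numpy as np

# Minimal sketch of a first-order dynamical spatio-temporal process with a
# general quadratic nonlinearity (GQN) term:
#   z_{t+1}[i] = (A z_t)[i] + z_t^T B[i] z_t + w_t[i],   w_t ~ N(0, q I)
# All matrices below are illustrative assumptions, not estimated quantities.
rng = np.random.default_rng(2)
n = 4                                        # number of spatial locations
A = 0.5 * np.eye(n)                          # linear (Markovian) propagator
B = 0.01 * rng.standard_normal((n, n, n))    # one quadratic interaction matrix per location

def step(z, noise_std=0.05):
    linear = A @ z
    quadratic = np.array([z @ B[i] @ z for i in range(n)])  # GQN term
    return linear + quadratic + noise_std * rng.standard_normal(n)

z = np.ones(n)
traj = [z]
for _ in range(50):
    z = step(z)
    traj.append(z)
traj = np.array(traj)
```

    In the hierarchical Bayesian setting described in the abstract, priors on `A` and the `B[i]` would encode the scientific (e.g., PDE- or IDE-motivated) structure rather than being drawn at random.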

  6. Impact of Parameterized Lee Wave Drag on the Energy Budget of an Eddying Global Ocean Model

    DTIC Science & Technology

    2013-08-26

    Teixeira, J., Peng, M., Hogan, T.F., Pauley, R., 2002. Navy Operational Global Atmospheric Prediction System (NOGAPS): Forcing for ocean models... Impact of parameterized lee wave drag on the energy budget of an eddying global ocean model. David S. Trossman, Brian K. Arbic, Stephen T... input and output terms in the total mechanical energy budget of a hybrid coordinate high-resolution global ocean general circulation model forced by winds

  7. Self-Love or Other-Love? Explicit Other-Preference but Implicit Self-Preference

    PubMed Central

    Gebauer, Jochen E.; Göritz, Anja S.; Hofmann, Wilhelm; Sedikides, Constantine

    2012-01-01

    Do humans prefer the self even over their favorite other person? This question has pervaded philosophy and the social-behavioral sciences. Psychology's distinction between explicit and implicit preferences calls for a two-tiered solution. Our evolutionarily-based Dissociative Self-Preference Model offers two hypotheses. Other-preferences prevail at an explicit level, because they convey caring for others, which strengthens interpersonal bonds, a major evolutionary advantage. Self-preferences, however, prevail at an implicit level, because they facilitate self-serving automatic behavior, which favors the self in life-or-death situations, also a major evolutionary advantage. We examined the data of 1,519 participants, who completed an explicit measure and one of five implicit measures of preferences for self versus favorite other. The results were consistent with the Dissociative Self-Preference Model. Explicitly, participants preferred their favorite other over the self. Implicitly, however, they preferred the self over their favorite other (be it their child, romantic partner, or best friend). Results are discussed in relation to evolutionary theorizing on self-deception. PMID:22848605

  8. Evaluating Cloud Initialization in a Convection-Permitting NWP Model

    NASA Astrophysics Data System (ADS)

    Li, Jia; Chen, Baode

    2015-04-01

    In general, to avoid the "double counting of precipitation" problem, it is common practice in convection-permitting NWP models to turn off the convective parameterization. However, if there is no cloud information in the initial conditions, the onset of precipitation can be delayed by the spin-up of the cloud field and microphysical variables. In this study, we utilized the complex cloud analysis package from the Advanced Regional Prediction System (ARPS) to adjust the initial model state for water substance, such as cloud water, cloud ice, and rain water; that is, to initialize the microphysical variables (i.e., hydrometeors), mainly based on radar reflectivity observations. Using the Advanced Research WRF (ARW) model, numerical experiments with and without cloud initialization and convective parameterization were carried out at grey-zone resolutions (i.e., 1, 3, and 9 km). The results from the experiments without convective parameterization indicate that initializing the model with radar reflectivity can significantly reduce the spin-up time and accurately simulate precipitation from the initial time. In addition, it helps to improve the location and intensity of the predicted precipitation. At these grey-zone resolutions, using the cumulus convective parameterization scheme (without radar data) cannot produce realistic precipitation at early times. Issues related to the microphysical parameterization associated with cloud initialization are also discussed.

  9. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Taylor; Guo, Yi; Veers, Paul

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.

  10. On the Leaky Math Pipeline: Comparing Implicit Math-Gender Stereotypes and Math Withdrawal in Female and Male Children and Adolescents

    ERIC Educational Resources Information Center

    Steffens, Melanie C.; Jelenec, Petra; Noack, Peter

    2010-01-01

    Many models assume that habitual human behavior is guided by spontaneous, automatic, or implicit processes rather than by deliberate, rule-based, or explicit processes. Thus, math-ability self-concepts and math performance could be related to implicit math-gender stereotypes in addition to explicit stereotypes. Two studies assessed at what age…

  11. Accounting for successful control of implicit racial bias: the roles of association activation, response monitoring, and overcoming bias.

    PubMed

    Gonsalkorale, Karen; Sherman, Jeffrey W; Allen, Thomas J; Klauer, Karl Christoph; Amodio, David M

    2011-11-01

    Individuals who are primarily internally motivated to respond without prejudice show less bias on implicit measures than individuals who are externally motivated or unmotivated to respond without prejudice. However, it is not clear why these individuals exhibit less implicit bias than others. We used the Quad model to examine motivation-based individual differences in three processes that have been proposed to account for this effect: activation of associations, overcoming associations, and response monitoring. Participants completed an implicit measure of stereotyping (Study 1) or racial attitudes (Study 2). Modeling of the data revealed that individuals who were internally (but not externally) motivated to respond without prejudice showed enhanced detection and reduced activation of biased associations, suggesting that these processes may be key to achieving unbiased responding.

  12. Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution

    NASA Astrophysics Data System (ADS)

    Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike

    2011-04-01

    Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: firstly, the reduction of the dose distribution to a histogram results in the loss of spatial information, and secondly, the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We use a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assessed its predictive power using data from the MRC RT01 trial (ISRCTN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a particularly low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63, and 0.67 for predicting rectal bleeding, loose stools, and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable, and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs, and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.
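    The AUC figure of merit quoted above can be computed directly from predicted complication probabilities and binary outcomes via the rank-sum (Mann-Whitney) identity. The example below uses toy scores and labels, not the trial data.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity.

    scores: predicted complication probabilities; labels: 1 = toxicity.
    Toy illustration of the evaluation metric, not the trial's data.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Fraction of (toxic, non-toxic) pairs where the toxic case scores higher,
    # counting ties as half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([0, 0, 1, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.3, 0.4, 0.2, 0.8, 0.7, 0.5, 0.6])
print(round(auc(scores, labels), 3))  # → 0.938
```

    An AUC of 0.5 corresponds to a non-informative model, which is why the histogram-based models' 0.59 in the abstract is only marginally better than chance.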

  13. Towards an explicit account of implicit learning.

    PubMed

    Forkstam, Christian; Petersson, Karl Magnus

    2005-08-01

    The human brain supports acquisition mechanisms that can extract structural regularities implicitly from experience without the induction of an explicit model. Reber defined the process by which an individual comes to respond appropriately to the statistical structure of the input ensemble as implicit learning. He argued that the capacity to generalize to new input is based on the acquisition of abstract representations that reflect underlying structural regularities in the acquisition input. We focus this review of the implicit learning literature on studies published during 2004 and 2005. We will not review studies of repetition priming ('implicit memory'). Instead we focus on two commonly used experimental paradigms: the serial reaction time task and artificial grammar learning. Previous comprehensive reviews can be found in Seger's 1994 article and the Handbook of Implicit Learning. Emerging themes include the interaction between implicit and explicit processes, the role of the medial temporal lobe, developmental aspects of implicit learning, age dependence, and the roles of sleep and consolidation. The attempts to characterize the interaction between implicit and explicit learning are promising, although this interaction is not yet well understood. The same can be said about the role of sleep and consolidation. Despite the fact that lesion studies have relatively consistently suggested that the medial temporal lobe memory system is not necessary for implicit learning, a number of functional magnetic resonance imaging studies have reported medial temporal lobe activation in implicit learning. This issue merits further research. Finally, the clinical relevance of implicit learning remains to be determined.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.

  15. Surface circulation in the Gulf of Cadiz: Model and mean flow structure

    NASA Astrophysics Data System (ADS)

    Peliz, Alvaro; Dubert, Jesus; Marchesiello, Patrick; Teles-Machado, Ana

    2007-11-01

    The mean flow structure of the Gulf of Cadiz is studied using a numerical model. The model consists of a set of one-way nested configurations attaining resolutions on the order of 2.6 km in the region of the Gulf of Cadiz. In the large-scale configuration, the entrainment of the Mediterranean Water is parameterized implicitly through a nudging term. In the medium- and small-scale nested configurations, the Mediterranean outflow is introduced explicitly. The model reproduces all the known features of the Azores Current and of the circulation inside the Gulf of Cadiz. A realistic Mediterranean Undercurrent is generated, and Meddies develop at the proper depths on the southwest tip of the Iberian slope. The hypothesis that the Azores Current may be generated in association with the Mediterranean outflow (β-plume theories) is confirmed by the model results. The time-mean flow is dominated by a cyclonic cell generated in the gulf which expands westward and has transports ranging from 4 to 5 Sv. The connection between the cell and the Azores Current is analyzed. At the scale of the gulf, the time-mean flow cell is composed of the westward Mediterranean Undercurrent and a counterflow running eastward over the outer edge of the Mediterranean Undercurrent's deeper vein, as the latter is forced downslope. This counterflow feeds the entrainment at the depths of the Mediterranean Undercurrent and the Atlantic inflow at shallower levels. Coastward and upslope of this recirculation cell, a second current running equatorward all the way along the northern part of the gulf is revealed. This current is a very robust model result that promotes continuity between the southwestern Iberian coast and the Strait of Gibraltar, and it helps explain many observations and recurrent SST features of the Gulf of Cadiz.

  16. Parameterization of planetary wave breaking in the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Garcia, Rolando R.

    1991-01-01

    A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate. The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.
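    The chain of reasoning in the abstract, from the steady-state wave-activity balance and the saturation (no-further-growth) assumption to the dissipation rate, can be written out schematically. The notation below is illustrative and may differ from the paper's symbols.

```latex
% Wave-activity balance with linear damping at rate \alpha
% (A: wave activity, c_g: vertical group velocity):
\[
  \frac{\partial A}{\partial t} + \frac{\partial}{\partial z}\bigl(c_g A\bigr)
  = -\,\alpha A
  \quad\xrightarrow{\text{steady state}}\quad
  \frac{d}{dz}\bigl(c_g A\bigr) = -\,\alpha A .
\]
% If breaking holds the wave at its saturation profile A_s(z)
% (no further amplitude growth), the required dissipation rate follows:
\[
  \alpha(z) = -\,\frac{1}{A_s(z)}\,\frac{d}{dz}\bigl(c_g A_s\bigr),
\]
% which makes explicit the central role of the group velocity noted in the
% abstract; diffusivity coefficients are then diagnosed from \alpha.
```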

  17. Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part I: numerical scheme

    NASA Astrophysics Data System (ADS)

    Rõõm, Rein; Männik, Aarne; Luhamaa, Andres

    2007-10-01

A two-time-level, semi-implicit, semi-Lagrangian (SISL) scheme is applied to the non-hydrostatic pressure-coordinate equations, constituting a modified Miller-Pearce-White model, in a hybrid-coordinate framework. A neutral background is subtracted in the initial continuous dynamics, yielding modified equations for geopotential, temperature and logarithmic surface pressure fluctuation. Implicit Lagrangian marching formulae for a single time step are derived. A disclosure scheme is presented, which results in an uncoupled diagnostic system consisting of a 3-D Poisson equation for omega velocity and a 2-D Helmholtz equation for the logarithmic pressure fluctuation. The model is discretized to create a non-hydrostatic extension of the numerical weather prediction model HIRLAM. The discretization schemes, trajectory computation algorithms and interpolation routines, as well as the physical parametrization package, are maintained from the parent hydrostatic HIRLAM. For the stability investigation, the derived SISL model is linearized with respect to an initial, thermally non-equilibrium resting state. Explicit residuals of the linear model prove to be sensitive to the relative departures of temperature and static stability from the reference state. Guided by the stability study, the semi-implicit term in the vertical momentum equation is replaced by an implicit term, which increases the stability of the model.

  18. Comparison of Ice Cloud Particle Sizes Retrieved from Satellite Data Derived from In Situ Measurements

    NASA Technical Reports Server (NTRS)

    Han, Qingyuan; Rossow, William B.; Chou, Joyce; Welch, Ronald M.

    1997-01-01

Cloud microphysical parameterizations have attracted a great deal of attention in recent years due to their effect on cloud radiative properties and cloud-related hydrological processes in large-scale models. The parameterization of cirrus particle size has been demonstrated to be an indispensable component of climate feedback analysis. Therefore, global-scale, long-term observations of cirrus particle sizes are required both as a basis for and as a validation of parameterizations for climate models. While there is a global-scale, long-term survey of water cloud droplet sizes (Han et al.), there is no comparable study for cirrus ice crystals. This study is an effort to supply such a data set.

19. Parameterization of a coarse-grained model for linear alkylbenzene sulfonate surfactants and molecular dynamics studies of their self-assembly in aqueous solution

    NASA Astrophysics Data System (ADS)

    He, Xibing; Shinoda, Wataru; DeVane, Russell; Anderson, Kelly L.; Klein, Michael L.

    2010-02-01

    A coarse-grained (CG) forcefield for linear alkylbenzene sulfonates (LAS) was systematically parameterized. Thermodynamic data from experiments and structural data obtained from all-atom molecular dynamics were used as targets to parameterize CG potentials for the bonded and non-bonded interactions. The added computational efficiency permits one to employ computer simulation to probe the self-assembly of LAS aqueous solutions into different morphologies starting from a random configuration. The present CG model is shown to accurately reproduce the phase behavior of solutions of pure isomers of sodium dodecylbenzene sulfonate, despite the fact that phase behavior was not directly taken into account in the forcefield parameterization.

  20. A one-dimensional interactive soil-atmosphere model for testing formulations of surface hydrology

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Eagleson, Peter S.

    1990-01-01

    A model representing a soil-atmosphere column in a GCM is developed for off-line testing of GCM soil hydrology parameterizations. Repeating three representative GCM sensitivity experiments with this one-dimensional model demonstrates that, to first order, the model reproduces a GCM's sensitivity to imposed changes in parameterization and therefore captures the essential physics of the GCM. The experiments also show that by allowing feedback between the soil and atmosphere, the model improves on off-line tests that rely on prescribed precipitation, radiation, and other surface forcing.

  1. Numerical Modeling of the Global Atmosphere

    NASA Technical Reports Server (NTRS)

    Arakawa, Akio; Mechoso, Carlos R.

    1996-01-01

Under this grant, we continued development and evaluation of the updraft-downdraft model for cumulus parameterization. The model includes the mass, rainwater and vertical momentum budget equations for both updrafts and downdrafts. The rainwater generated in an updraft falls partly inside and partly outside the updraft. Two types of stationary solutions are identified for the coupled rainwater budget and vertical momentum equations: (1) solutions for small tilting angles, which are unstable; (2) solutions for large tilting angles, which are stable. In practical applications, we select the smallest stable tilting angle as an optimum value. The model has been incorporated into the Arakawa-Schubert (A-S) cumulus parameterization. The results of semi-prognostic and single-column prognostic tests of the revised A-S parameterization show drastic improvement in predicting the humidity field. One paper by Cheng and Arakawa presents the rationale and basic design of the updraft-downdraft model, together with these test results; a companion paper by the same authors gives the technical details of the model as implemented in the current version of the UCLA GCM.

  2. Computational investigation of the conformational profile of the four stereomers of Ac-L-Pro-c3Phe-NHMe (c3Phe= 2,3-methanophenylalanine).

    PubMed

    Rodriguez, Alejandro; Canto, Josep; Corcho, Francesc J; Perez, Juan J

    2009-01-01

The present report describes a computational study assessing the conformational profile of the four stereoisomers of the peptide Ace-Pro-c3Phe-NMe, previously reported to exhibit beta-turn structures in dichloromethane with different type I/type II beta-turn profiles. Molecular systems were represented at the molecular mechanics level using the parm96 parameterization of the AMBER force field. Calculations were carried out in dichloromethane using an implicit solvent approach. Characterization of the conformational features of the peptide analogs was carried out using simulated annealing (SA), molecular dynamics (MD) and replica exchange molecular dynamics (REMD). The present results show that MD calculations do not provide reasonable sampling after 300 ns. In contrast, both SA and REMD provide similar results and agree well with experimental observations. Copyright 2009 Wiley Periodicals, Inc.

3. 3D models mapping optimization through an integrated parameterization approach: case studies from Ravenna

    NASA Astrophysics Data System (ADS)

    Cipriani, L.; Fantini, F.; Bertacchi, S.

    2014-06-01

Image-based modelling tools based on SfM algorithms have gained great popularity, since several software houses provide applications able to generate 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve a better quality of textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for the achievement of a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without native parameterization have to be "flattened" or "unwrapped" in the (u,v) parameter space, with the main objective of being mapped with a single image. This result can be obtained using two different strategies: the former automatic and faster, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, producing a sort of "atlas" of the original model in the parameter space that in many instances is not adequate and negatively affects the overall quality of representation. Using different solutions in synergy, ranging from semantic-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.

  4. Modeling the energy balance in Marseille: Sensitivity to roughness length parameterizations and thermal admittance

    NASA Astrophysics Data System (ADS)

    Demuzere, M.; De Ridder, K.; van Lipzig, N. P. M.

    2008-08-01

During the ESCOMPTE campaign (Experience sur Site pour COntraindre les Modeles de Pollution atmospherique et de Transport d'Emissions), a 4-day intensive observation period was selected to evaluate the Advanced Regional Prediction System (ARPS), a nonhydrostatic meteorological mesoscale model that was optimized with a parameterization for thermal roughness length to better represent urban surfaces. The evaluation shows that the ARPS model is able to correctly reproduce temperature, wind speed, and direction for one urban and two rural measurement stations. Furthermore, simulated heat fluxes show good agreement with the observations, although simulated sensible heat fluxes were initially too low for the urban stations. In order to improve the latter, different roughness length parameterization schemes were tested, combined with various thermal admittance values. This sensitivity study showed that the Zilitinkevich scheme combined with an intermediate value of thermal admittance performs best.

  5. A new scheme for the parameterization of the turbulent planetary boundary layer in the GLAS fourth order GCM

    NASA Technical Reports Server (NTRS)

    Helfand, H. M.

    1985-01-01

Methods being used to increase the horizontal and vertical resolution and to implement more sophisticated parameterization schemes for general circulation models (GCMs) run on newer, more powerful computers are described. Attention is focused on the NASA Goddard Laboratory for Atmospheric Sciences (GLAS) fourth-order GCM. A new planetary boundary layer (PBL) model has been developed which features explicit resolution of two or more layers. Numerical models are presented for parameterizing the turbulent vertical heat, momentum and moisture fluxes at the earth's surface and between the layers in the PBL model. An extended Monin-Obukhov similarity scheme is applied to express the relationships between the lowest levels of the GCM and the surface fluxes. On-line weather prediction experiments are to be run to test the effects of the higher resolution thereby obtained for dynamic atmospheric processes.

  6. Evapotranspiration and runoff from large land areas: Land surface hydrology for atmospheric general circulation models

    NASA Technical Reports Server (NTRS)

    Famiglietti, J. S.; Wood, Eric F.

    1993-01-01

A land surface hydrology parameterization for use in atmospheric GCMs is presented. The parameterization incorporates subgrid-scale variability in topography, soils, soil moisture and precipitation. The framework of the model is the statistical distribution of a topography-soils index, which controls the local water balance fluxes and is therefore taken to represent the large land area. Spatially variable water balance fluxes are integrated with respect to the topography-soils index distribution, with each interval response weighted by the probability of occurrence of that interval; grid-square-averaged land surface fluxes result. The model functions independently as a macroscale water balance model. Runoff ratio and evapotranspiration efficiency parameterizations are derived and are shown to depend on the spatial variability of the above-mentioned properties and processes, as well as on the dynamics of land surface-atmosphere interactions.

  7. The Stochastic Parcel Model: A deterministic parameterization of stochastically entraining convection

    DOE PAGES

    Romps, David M.

    2016-03-01

Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.
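The purity-binning idea, by which the deterministic limit reduces convective fluxes to matrix multiplications, can be illustrated with a coarse sketch. Assuming at most one entrainment event per layer, a Poisson process moves parcel mass down the purity bins; the bin count, rate, layer thickness and absorbing last bin below are illustrative simplifications, not the SPM's actual discretization:

```python
import numpy as np

nbins = 5                      # purity bins: bin 0 = undiluted, bin k = k events
lam, dz = 1.0e-3, 100.0        # entrainment rate (1/m) and layer thickness (m)
p_event = 1.0 - np.exp(-lam * dz)   # probability of >=1 Poisson event per layer

# Transition matrix M: with probability p_event a parcel drops one purity bin.
M = np.zeros((nbins, nbins))
for k in range(nbins):
    M[k, k] = 1.0 - p_event
    if k + 1 < nbins:
        M[k + 1, k] = p_event
M[nbins - 1, nbins - 1] = 1.0   # last bin absorbs any further entrainment

n = np.zeros(nbins); n[0] = 1.0   # all mass starts undiluted at cloud base
for _ in range(20):               # march 20 layers upward
    n = M @ n                     # the flux update is a matrix multiplication

assert abs(n.sum() - 1.0) < 1e-12  # mass is conserved by construction
```

Each column of M sums to one, so marching upward redistributes mass across purity bins without creating or destroying it.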

  8. Testing a common ice-ocean parameterization with laboratory experiments

    NASA Astrophysics Data System (ADS)

    McConnochie, C. D.; Kerr, R. C.

    2017-07-01

    Numerical models of ice-ocean interactions typically rely upon a parameterization for the transport of heat and salt to the ice face that has not been satisfactorily validated by observational or experimental data. We compare laboratory experiments of ice-saltwater interactions to a common numerical parameterization and find a significant disagreement in the dependence of the melt rate on the fluid velocity. We suggest a resolution to this disagreement based on a theoretical analysis of the boundary layer next to a vertical heated plate, which results in a threshold fluid velocity of approximately 4 cm/s at driving temperatures between 0.5 and 4°C, above which the form of the parameterization should be valid.

9. A second-order Budyko-type parameterization of land surface hydrology

    NASA Technical Reports Server (NTRS)

    Andreou, S. A.; Eagleson, P. S.

    1982-01-01

A simple, second-order parameterization of the water fluxes at a land surface was developed for use as the appropriate boundary condition in general circulation models of the global atmosphere. The derived parameterization incorporates the strong nonlinearities in the relationship between the near-surface soil moisture and the evaporation, runoff and percolation fluxes. Based on the one-dimensional statistical-dynamical derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. The suggested parameterization is compared with other existing techniques and with available measurements. A thermodynamic coupling is applied in order to obtain estimates of the surface ground temperature.
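The second-order Taylor expansion mentioned above can be written out explicitly; the flux symbol E and the notation below are illustrative, not the paper's own:

```latex
% Second-order expansion of a moisture flux E(s) (e.g. evaporation) about the
% average annual soil moisture \bar{s}:
E(s) \approx E(\bar{s})
  + \left.\frac{dE}{ds}\right|_{\bar{s}} (s-\bar{s})
  + \frac{1}{2}\left.\frac{d^{2}E}{ds^{2}}\right|_{\bar{s}} (s-\bar{s})^{2}.
% Averaging over fluctuations (with \langle s-\bar{s}\rangle = 0) retains a
% variance correction that a first-order scheme would miss:
\langle E \rangle \approx E(\bar{s})
  + \frac{1}{2}\left.\frac{d^{2}E}{ds^{2}}\right|_{\bar{s}}\,\mathrm{Var}(s).
```

The second-order term is what lets the scheme capture the nonlinearity of the flux-soil moisture relationship that a linearization about the mean would discard.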

  10. A Generalized Simple Formulation of Convective Adjustment Timescale for Cumulus Convection Parameterizations

    EPA Science Inventory

    Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a pres...

  11. IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION IN MM5

    EPA Science Inventory

    The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (~1-km horizontal grid spacing). The UCP accounts for drag ...

  12. A Comparative Study of Nucleation Parameterizations: 2. Three-Dimensional Model Application and Evaluation

    EPA Science Inventory

    Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...

  13. Quantifying the measurement errors in a LI-6400 gas exchange system and their effects on the parameterization of Farquhar et al. model for C3 leaves

    USDA-ARS?s Scientific Manuscript database

    The LI-6400 gas exchange system (Li-Cor, Inc, Lincoln, NE, USA) has been widely used for the measurement of net gas exchanges and calibration/parameterization of leaf models. Measurement errors due to diffusive leakages of water vapor and carbon dioxide between inside and outside of the leaf chamber...

  14. Semantic concept-enriched dependence model for medical information retrieval.

    PubMed

    Choi, Sungbin; Choi, Jinwook; Yoo, Sooyoung; Kim, Heechun; Lee, Youngho

    2014-02-01

    In medical information retrieval research, semantic resources have been mostly used by expanding the original query terms or estimating the concept importance weight. However, implicit term-dependency information contained in semantic concept terms has been overlooked or at least underused in most previous studies. In this study, we incorporate a semantic concept-based term-dependence feature into a formal retrieval model to improve its ranking performance. Standardized medical concept terms used by medical professionals were assumed to have implicit dependency within the same concept. We hypothesized that, by elaborately revising the ranking algorithms to favor documents that preserve those implicit dependencies, the ranking performance could be improved. The implicit dependence features are harvested from the original query using MetaMap. These semantic concept-based dependence features were incorporated into a semantic concept-enriched dependence model (SCDM). We designed four different variants of the model, with each variant having distinct characteristics in the feature formulation method. We performed leave-one-out cross validations on both a clinical document corpus (TREC Medical records track) and a medical literature corpus (OHSUMED), which are representative test collections in medical information retrieval research. Our semantic concept-enriched dependence model consistently outperformed other state-of-the-art retrieval methods. Analysis shows that the performance gain has occurred independently of the concept's explicit importance in the query. By capturing implicit knowledge with regard to the query term relationships and incorporating them into a ranking model, we could build a more robust and effective retrieval model, independent of the concept importance. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Regularized wave equation migration for imaging and data reconstruction

    NASA Astrophysics Data System (ADS)

    Kaplan, Sam T.

The reflection seismic experiment results in a measurement (reflection seismic data) of the seismic wavefield. The linear Born approximation to the seismic wavefield leads to a forward modelling operator that we use to approximate reflection seismic data in terms of a scattering potential. We consider approximations to the scattering potential using two methods: the adjoint of the forward modelling operator (migration), and regularized numerical inversion using the forward and adjoint operators. We implement two parameterizations of the forward modelling and migration operators: source-receiver and shot-profile. For both parameterizations, we find the requisite Green's functions using the split-step approximation. We first develop the forward modelling operator, and then find the adjoint (migration) operator by recognizing a Fredholm integral equation of the first kind. The resulting numerical system is generally under-determined, requiring prior information to find a solution. In source-receiver migration, the parameterization of the scattering potential is understood using the migration imaging condition, and this encourages us to apply sparse prior models to the scattering potential. To that end, we use both a Cauchy prior and a mixed Cauchy-Gaussian prior, finding better resolved estimates of the scattering potential than are given by the adjoint. In shot-profile migration, the parameterization of the scattering potential has its redundancy in multiple active energy sources (i.e. shots). We find that a smallest-model regularized inverse representation of the scattering potential gives a more resolved picture of the earth, as compared to the simpler adjoint representation. The shot-profile parameterization allows us to introduce a joint inversion to further improve the estimate of the scattering potential. Moreover, it allows us to introduce a novel data reconstruction algorithm so that limited data can be interpolated/extrapolated.
The linearized operators are expensive, encouraging their parallel implementation. For the source-receiver parameterization of the scattering potential this parallelization is non-trivial. Seismic data is typically corrupted by various types of noise. Sparse coding can be used to suppress noise prior to migration. It is a method that stems from information theory and that we apply to noise suppression in seismic data.

  16. Investigating the Sensitivity of Nucleation Parameterization on Ice Growth

    NASA Astrophysics Data System (ADS)

    Gaudet, L.; Sulia, K. J.

    2017-12-01

The accurate prediction of precipitation from lake-effect snow events associated with the Great Lakes region depends on the parameterization of thermodynamic and microphysical processes, including the formation and subsequent growth of frozen hydrometeors. More specifically, the formation of ice hydrometeors has been represented through varying forms of ice nucleation parameterizations considering the different nucleation modes (e.g., deposition, condensation-freezing, homogeneous). These parameterizations have been developed from in-situ measurements and laboratory observations. A suite of nucleation parameterizations consisting of those published in Meyers et al. (1992) and DeMott et al. (2010) as well as varying ice nuclei data sources are coupled with the Adaptive Habit Model (AHM, Harrington et al. 2013), a microphysics module in which ice crystal aspect ratio and density are predicted and evolve in time. Simulations are run with the AHM, which is implemented in the Weather Research and Forecasting (WRF) model, to investigate the effect of ice nucleation parameterization on the non-spherical growth and evolution of ice crystals and the subsequent effects on liquid-ice cloud-phase partitioning. Specific lake-effect storms that were observed during the Ontario Winter Lake-Effect Systems (OWLeS) field campaign (Kristovich et al. 2017) are examined to elucidate this potential microphysical effect. Analysis of these modeled events is aided by dual-polarization radar data from the WSR-88D in Montague, New York (KTYX). This enables a comparison of the modeled and observed polarimetric and microphysical profiles of the lake-effect clouds, which involves investigating signatures of reflectivity, specific differential phase, correlation coefficient, and differential reflectivity.
Microphysical features of lake-effect bands, such as ice, snow, and liquid mixing ratios, ice crystal aspect ratio, and ice density are analyzed to understand signatures in the aforementioned modeled dual-polarization radar variables. Hence, this research helps to determine an ice nucleation scheme that will best model observations of lake-effect clouds producing snow off of Lake Ontario and Lake Erie, and analyses will highlight the sensitivity of the evolution of the cases to a given nucleation scheme.

  17. Modeling the formation and aging of secondary organic aerosols in Los Angeles during CalNex 2010

    NASA Astrophysics Data System (ADS)

    Hayes, P. L.; Carlton, A. G.; Baker, K. R.; Ahmadov, R.; Washenfelder, R. A.; Alvarez, S.; Rappenglück, B.; Gilman, J. B.; Kuster, W. C.; de Gouw, J. A.; Zotter, P.; Prévôt, A. S. H.; Szidat, S.; Kleindienst, T. E.; Offenberg, J. H.; Jimenez, J. L.

    2014-12-01

    Four different parameterizations for the formation and evolution of secondary organic aerosol (SOA) are evaluated using a 0-D box model representing the Los Angeles Metropolitan Region during the CalNex 2010 field campaign. We constrain the model predictions with measurements from several platforms and compare predictions with particle and gas-phase observations from the CalNex Pasadena ground site. That site provides a unique opportunity to study aerosol formation close to anthropogenic emission sources with limited recirculation. The model SOA formed only from the oxidation of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper limit estimates accounting for recently reported losses of vapors to chamber walls. The Community Multiscale Air Quality (WRF-CMAQ) model (version 5.0.1) provides excellent predictions of secondary inorganic particle species but underestimates the observed SOA mass by a factor of 25 when an older VOC-only parameterization is used, which is consistent with many previous model-measurement comparisons for pre-2007 anthropogenic SOA modules in urban areas. Including SOA from primary semi-volatile and intermediate volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model/measurement agreement for mass concentration. When comparing the three parameterizations, the Grieshop et al. (2009) parameterization more accurately reproduces both the SOA mass concentration and oxygen-to-carbon ratio inside the urban area. Our results strongly suggest that other precursors besides VOCs, such as P-S/IVOCs, are needed to explain the observed SOA concentrations in Pasadena. 
All the parameterizations over-predict urban SOA formation at long photochemical ages (≈ 3 days) compared to observations from multiple sites, which can lead to problems in regional and global modeling. Among the explicitly modeled VOCs, the precursor compounds that contribute the greatest SOA mass are methylbenzenes. Polycyclic aromatic hydrocarbons (PAHs) are less important precursors and contribute less than 4% of the SOA mass. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16-27, 35-61, and 19-35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71 (±3) %. In-basin biogenic VOCs are predicted to contribute only a few percent to SOA. A regional SOA background of approximately 2.1 μg m-3 is also present due to the long distance transport of highly aged OA. The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in OA concentrations (Bahreini et al., 2012; Hayes et al., 2013). However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in SOA, due to cooking emissions, which was not accounted for in those previous studies. Lastly, this work adapts a simple two-parameter model to predict SOA concentration and O/C from urban emissions. This model successfully predicts SOA concentration, and the optimal parameter combination is very similar to that found for Mexico City. This approach provides a computationally inexpensive method for predicting urban SOA in global and climate models. We estimate pollution SOA to account for 26 Tg yr-1 of SOA globally, or 17% of global SOA, 1/3 of which is likely to be non-fossil.

  18. Finite-difference model for 3-D flow in bays and estuaries

    USGS Publications Warehouse

    Smith, Peter E.; Larock, Bruce E.; ,

    1993-01-01

    This paper describes a semi-implicit finite-difference model for the numerical solution of three-dimensional flow in bays and estuaries. The model treats the gravity wave and vertical diffusion terms in the governing equations implicitly, and other terms explicitly. The model achieves essentially second-order accurate and stable solutions in strongly nonlinear problems by using a three-time-level leapfrog-trapezoidal scheme for the time integration.
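The three-time-level leapfrog-trapezoidal time integration named above can be sketched for a scalar ODE. This is one common form of the scheme (explicit leapfrog predictor, trapezoidal corrector); the model's implicit treatment of the gravity-wave and vertical-diffusion terms is omitted here:

```python
import numpy as np

def leapfrog_trapezoidal_step(f, u_prev, u_curr, dt):
    """One time step: leapfrog predictor, then trapezoidal corrector."""
    u_star = u_prev + 2.0 * dt * f(u_curr)                # leapfrog (explicit)
    return u_curr + 0.5 * dt * (f(u_curr) + f(u_star))    # trapezoidal

# Demo on the linear decay equation du/dt = -u, exact solution exp(-t):
f = lambda u: -u
dt, nsteps = 0.01, 100
u_prev, u_curr = np.exp(dt), 1.0      # states at t = -dt and t = 0
for _ in range(nsteps):
    u_prev, u_curr = u_curr, leapfrog_trapezoidal_step(f, u_prev, u_curr, dt)

error = abs(u_curr - np.exp(-dt * nsteps))   # small second-order error
```

Unlike plain leapfrog, the trapezoidal corrector damps the computational mode, which is one reason such combined schemes stay stable in strongly nonlinear problems.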

  19. FY 2010 Fourth Quarter Report: Evaluation of the Dependency of Drizzle Formation on Aerosol Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, W; McGraw, R; Liu, Y

Metric for Quarter 4: Report results of the implementation of the composite parameterization in a single-column model (SCM) to explore the dependency of drizzle formation on aerosol properties. To better represent VOCALS conditions during a test flight, the Liu-Daum-McGraw (LDM) drizzle parameterization is implemented in the high-resolution Weather Research and Forecasting (WRF) model, as well as in the single-column Community Atmosphere Model (CAM), to explore this dependency.

  20. Creating and parameterizing patient-specific deep brain stimulation pathway-activation models using the hyperdirect pathway as an example.

    PubMed

    Gunalan, Kabilar; Chaturvedi, Ashutosh; Howell, Bryan; Duchin, Yuval; Lempka, Scott F; Patriat, Remi; Sapiro, Guillermo; Harel, Noam; McIntyre, Cameron C

    2017-01-01

    Deep brain stimulation (DBS) is an established clinical therapy and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially and important technical details are often missing from published reports. Provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation. Integration of multiple software tools (e.g. COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.

  1. Building integral projection models: a user's guide.

    PubMed

    Rees, Mark; Childs, Dylan Z; Ellner, Stephen P

    2014-05-01

    In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses. © 2014 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
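The guide's Supporting Information provides commented R code; as an illustration of the basic construction step it describes, the midpoint-rule discretization of an IPM kernel can be sketched in NumPy. All vital-rate functions, mesh bounds, and parameter values below are invented for illustration, not fitted to data:

```python
import numpy as np

# Kernel: K(z', z) = s(z) * G(z'|z) + s(z) * b(z) * C(z'), i.e. survival-growth
# plus reproduction, discretized on a midpoint mesh over sizes [L, U].

def norm_pdf(x, mu, sd):
    """Gaussian density, used here for the growth and offspring-size kernels."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

n, L, U = 100, 0.0, 10.0
h = (U - L) / n
z = L + (np.arange(n) + 0.5) * h              # midpoints of the size mesh

s = 1.0 / (1.0 + np.exp(-(z - 3.0)))          # survival: logistic in size
b = 0.5 * z                                   # fecundity: linear in size
G = norm_pdf(z[:, None], 1.0 + 0.9 * z[None, :], 0.5)   # growth kernel G(z'|z)
C = norm_pdf(z[:, None], 1.0, 0.3)                      # offspring sizes C(z')

K = h * (G * s[None, :] + C * b[None, :] * s[None, :])  # midpoint rule
lam = np.max(np.abs(np.linalg.eigvals(K)))    # asymptotic population growth rate
```

The dominant eigenvalue of the discretized kernel plays the same role as in matrix projection models: it is the asymptotic population growth rate, and its eigenvectors give the stable size distribution and reproductive values.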

  2. Optimisation of an idealised primitive equation ocean model using stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Cooper, Fenwick C.

    2017-05-01

    Using a simple parameterization, an idealised low resolution (biharmonic viscosity coefficient of 5 × 10¹² m⁴ s⁻¹, 128 × 128 grid) primitive equation baroclinic ocean gyre model is optimised to have a much more accurate climatological mean, variance and response to forcing, in all model variables, with respect to a high resolution (biharmonic viscosity coefficient of 8 × 10¹⁰ m⁴ s⁻¹, 512 × 512 grid) equivalent. For example, the change in the climatological mean due to a small change in the boundary conditions is more accurate in the model with parameterization. Both the low resolution and high resolution models are strongly chaotic. We also find that long timescales in the model temperature auto-correlation at depth are controlled by the vertical temperature diffusion parameter and time mean vertical advection and are caused by short timescale random forcing near the surface. This paper extends earlier work that considered a shallow water barotropic gyre. Here the analysis is extended to a more turbulent multi-layer primitive equation model that includes temperature as a prognostic variable. The parameterization consists of a constant forcing, applied to the velocity and temperature equations at each grid point, which is optimised to obtain a model with an accurate climatological mean, and a linear stochastic forcing, that is optimised to also obtain an accurate climatological variance and 5 day lag auto-covariance. A linear relaxation (nudging) is not used. Conservation of energy and momentum is discussed in an appendix.
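    The structure of the parameterization (a constant forcing optimised against the mean, plus a linear stochastic forcing optimised against the variance and lag auto-covariance) can be caricatured with a single-variable analogue. All names and values below are illustrative, not the optimised ocean-model coefficients.

```python
import random

random.seed(0)

# Single-variable analogue of a forced grid-point variable: a constant
# forcing f0 (tunes the climatological mean) plus an AR(1) stochastic
# forcing (tunes the variance and lag auto-covariance).
f0, damping = 0.4, 0.1        # constant forcing and linear damping
phi, sigma = 0.9, 0.2         # AR(1) coefficient and noise amplitude

x, eta, xs = 0.0, 0.0, []
for _ in range(50000):
    eta = phi * eta + sigma * random.gauss(0.0, 1.0)   # stochastic forcing
    x = x + (-damping * x + f0 + eta)                  # explicit model step
    xs.append(x)

mean = sum(xs) / len(xs)                               # stationary mean ~ f0/damping = 4
var = sum((v - mean) ** 2 for v in xs) / len(xs)
```

Here the stationary mean is set by f0/damping and the AR(1) coefficient phi controls the lag auto-covariance, mirroring the two quantities the study optimises.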

  3. Free energy landscape of protein folding in water: explicit vs. implicit solvent.

    PubMed

    Zhou, Ruhong

    2003-11-01

    The Generalized Born (GB) continuum solvent model is arguably the most widely used implicit solvent model in protein folding and protein structure prediction simulations; however, it remains an open question how well the model behaves in these large-scale simulations. The current study uses the beta-hairpin from the C-terminus of protein G as an example to explore the folding free energy landscape with various GB models, and the results are compared to explicit solvent simulations and experiments. All free energy landscapes are obtained from extensive conformation space sampling with a highly parallel replica exchange method. Because solvation model parameters are strongly coupled with force fields, five different force field/solvation model combinations are examined and compared in this study, namely the explicit solvent model OPLSAA/SPC and the implicit solvent models OPLSAA/SGB (Surface GB), AMBER94/GBSA (GB with Solvent Accessible Surface Area), AMBER96/GBSA, and AMBER99/GBSA. Surprisingly, we find that the free energy landscapes from implicit solvent models are quite different from that of the explicit solvent model. Except for AMBER96/GBSA, all other implicit solvent models find that the lowest free energy state is not the native state. All implicit solvent models show erroneous salt-bridge effects between charged residues, particularly the OPLSAA/SGB model, where the overly strong salt-bridge effect results in an overweighting of a non-native structure with one hydrophobic residue F52 expelled from the hydrophobic core in order to make better salt bridges. On the other hand, both the AMBER94/GBSA and AMBER99/GBSA models turn the beta-hairpin into an alpha-helix, and the alpha-helical content is much higher than the previously reported alpha-helices in an explicit solvent simulation with AMBER94 (AMBER94/TIP3P). Only AMBER96/GBSA shows a reasonable free energy landscape, with the native state as the lowest free energy structure, despite an erroneous salt-bridge between D47 and K50. Detailed results on free energy contour maps, lowest free energy structures, distribution of native contacts, alpha-helical content during the folding process, NOE comparison with NMR, and temperature dependences are reported and discussed for all five models. Copyright 2003 Wiley-Liss, Inc.
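    The replica exchange sampling used to obtain these landscapes rests on a simple Metropolis swap criterion between temperature rungs. A minimal sketch on a 1-D harmonic "energy landscape" (a toy stand-in for the protein force fields, purely for illustration):

```python
import math
import random

random.seed(1)

def swap_accept(e_cold, e_hot, t_cold, t_hot):
    """Replica exchange (parallel tempering) Metropolis criterion, k_B = 1:
    accept with probability min(1, exp[(1/t_cold - 1/t_hot)(e_cold - e_hot)])."""
    delta = (1.0 / t_cold - 1.0 / t_hot) * (e_cold - e_hot)
    return delta >= 0.0 or random.random() < math.exp(delta)

# Two replicas sampling E(x) = x^2 at different temperatures
temps = [0.1, 1.0]
xs = [0.0, 0.0]
acc = [0.0, 0.0]                                  # running energy sums per rung
count = 0
for step in range(20000):
    for k in (0, 1):                              # in-replica Metropolis moves
        x_new = xs[k] + random.uniform(-0.5, 0.5)
        d_e = x_new ** 2 - xs[k] ** 2
        if d_e <= 0.0 or random.random() < math.exp(-d_e / temps[k]):
            xs[k] = x_new
    if step % 10 == 0 and swap_accept(xs[0] ** 2, xs[1] ** 2, temps[0], temps[1]):
        xs[0], xs[1] = xs[1], xs[0]               # exchange configurations
    if step >= 2000:                              # accumulate after burn-in
        acc[0] += xs[0] ** 2
        acc[1] += xs[1] ** 2
        count += 1

mean_e_cold, mean_e_hot = acc[0] / count, acc[1] / count
```

In equilibrium each rung still samples its own Boltzmann distribution, so the cold rung's mean energy stays below the hot rung's (for E = x² the expected values are T/2); the swaps only accelerate barrier crossing.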

  4. On the context dependency of implicit self-esteem in social anxiety disorder.

    PubMed

    Hiller, Thomas S; Steffens, Melanie C; Ritter, Viktoria; Stangier, Ulrich

    2017-12-01

    Cognitive models assume that negative self-evaluations are automatically activated in individuals with Social Anxiety Disorder (SAD) during social situations, increasing their individual level of anxiety. This study examined automatic self-evaluations (i.e., implicit self-esteem) and state anxiety in a group of individuals with SAD (n = 45) and a non-clinical comparison group (NC; n = 46). Participants were randomly assigned to either a speech condition with social threat induction (giving an impromptu speech) or to a no-speech condition without social threat induction. We measured implicit self-esteem with an Implicit Association Test (IAT). Implicit self-esteem differed significantly between SAD and NC groups under the speech condition but not under the no-speech condition. The SAD group showed lower implicit self-esteem than the NC group under the speech condition. State anxiety was significantly higher under the speech condition than under the no-speech condition in the SAD group but not in the NC group. Mediation analyses supported the idea that for the SAD group, the effect of experimental condition on state anxiety was mediated by implicit self-esteem. The causal relation between implicit self-esteem and state anxiety could not be determined. The findings corroborate hypotheses derived from cognitive models of SAD: Automatic self-evaluations were negatively biased in individuals with SAD facing social threat and showed an inverse relationship to levels of state anxiety. However, automatic self-evaluations in individuals with SAD can be unbiased (similar to NC) in situations without social threat. Copyright © 2017 Elsevier Ltd. All rights reserved.
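    The IAT scores implicit self-esteem from response latencies. A pared-down version of the usual D-score (the full Greenwald et al. scoring algorithm adds trial filtering and error penalties, omitted here) might look like:

```python
import statistics

def iat_d_score(compatible_rts_ms, incompatible_rts_ms):
    """Simplified IAT D-score: the latency difference between incompatible
    (e.g. self + negative) and compatible (self + positive) blocks, divided
    by the pooled standard deviation. Positive values indicate faster
    self-positive associations, i.e. higher implicit self-esteem."""
    pooled_sd = statistics.stdev(compatible_rts_ms + incompatible_rts_ms)
    return (statistics.mean(incompatible_rts_ms)
            - statistics.mean(compatible_rts_ms)) / pooled_sd
```

For example, block latencies of (600, 620, 640) ms versus (700, 720, 740) ms give a positive D, i.e. faster self-positive pairings.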

  5. Explicit and implicit learning: The case of computer programming

    NASA Astrophysics Data System (ADS)

    Mancy, Rebecca

    The central question of this thesis concerns the role of explicit and implicit learning in the acquisition of a complex skill, namely computer programming. This issue is explored with reference to information processing models of memory drawn from cognitive science. These models indicate that conscious information processing occurs in working memory where information is stored and manipulated online, but that this mode of processing shows serious limitations in terms of capacity or resources. Some information processing models also indicate information processing in the absence of conscious awareness through automation and implicit learning. It was hypothesised that students would demonstrate implicit and explicit knowledge and that both would contribute to their performance in programming. This hypothesis was investigated via two empirical studies. The first concentrated on temporary storage and online processing in working memory and the second on implicit and explicit knowledge. Storage and processing were tested using two tools: temporary storage capacity was measured using a digit span test; processing was investigated with a disembedding test. The results were used to calculate correlation coefficients with performance on programming examinations. Individual differences in temporary storage had only a small role in predicting programming performance and this factor was not a major determinant of success. Individual differences in disembedding were more strongly related to programming achievement. The second study used interviews to investigate the use of implicit and explicit knowledge. Data were analysed according to a grounded theory paradigm. The results indicated that students possessed implicit and explicit knowledge, but that the balance between the two varied between students and that the most successful students did not necessarily possess greater explicit knowledge. 
    The ways in which students described their knowledge led to the development of a framework which extends beyond the implicit-explicit dichotomy to four descriptive categories of knowledge along this dimension. Overall, the results demonstrated that explicit and implicit knowledge both contribute to the acquisition of programming skills. Suggestions are made for further research, and the results are discussed in the context of their implications for education.

  6. Evaluation of Multiple Mechanistic Hypotheses of Leaf Photosynthesis and Stomatal Conductance against Diurnal and Seasonal Data from Two Contrasting Panamanian Tropical Forests

    NASA Astrophysics Data System (ADS)

    Serbin, S.; Walker, A. P.; Wu, J.; Ely, K.; Rogers, A.; Wolfe, B.

    2017-12-01

    Tropical forests play a key role in regulating the global carbon (C), water, and energy cycles and stores, as well as influence climate through the exchanges of mass and energy with the atmosphere. However, projected changes in temperature and precipitation patterns are expected to impact the tropics and the strength of the tropical C sink, likely resulting in significant climate feedbacks. Moreover, the impact of stronger, longer, and more extensive droughts is not well understood. Critical for the accurate modeling of the tropical C and water cycle in Earth System Models (ESMs) is the representation of the coupled photosynthetic and stomatal conductance processes and how these processes are impacted by environmental and other drivers. In addition, the parameterization and representation of these processes are an important consideration for ESM projections. We use a novel model framework, the Multi-Assumption Architecture and Testbed (MAAT), together with the open-source bioinformatics toolbox, the Predictive Ecosystem Analyzer (PEcAn), to explore the impact of the multiple mechanistic hypotheses of coupled photosynthesis and stomatal conductance as well as the additional uncertainty related to model parameterization. Our goal was to better understand how model choice and parameterization influence diurnal and seasonal modeling of leaf-level photosynthesis and stomatal conductance. We focused on the 2016 ENSO period and, starting in February, monthly measurements of diurnal photosynthesis and conductance were made on 7-9 dominant species at the two Smithsonian canopy crane sites. This benchmark dataset was used to test different representations of stomatal conductance and photosynthetic parameterizations with the MAAT model, running within PEcAn. The MAAT model allows for the easy selection of competing hypotheses to test different photosynthetic modeling approaches while PEcAn provides the ability to explore the uncertainties introduced through parameterization.
    We found that the choice of stomatal conductance scheme can play a large role in model-data mismatch, and that observational constraints can be used to reduce simulated model spread but can also result in large model disagreements with measurements. These results will be used to help inform the modeling of photosynthesis in tropical systems for the larger ESM community.
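    One widely used representation of the coupled processes discussed here is a Farquhar-type assimilation model joined to a Ball-Berry conductance model and solved by fixed-point iteration. The sketch below uses invented parameter values and a deliberately simplified assimilation function; it is not the MAAT or PEcAn configuration.

```python
def ball_berry(a_net, rh, cs, g0=0.01, g1=9.0):
    """Ball-Berry stomatal conductance to water vapour (mol m-2 s-1):
    gs = g0 + g1 * A * rh / cs. Parameter values here are illustrative."""
    return g0 + g1 * a_net * rh / cs

def net_photosynthesis(ci, vcmax=50.0, gamma_star=40.0, km=700.0, rd=1.0):
    """Simplified Rubisco-limited, Farquhar-type net assimilation (umol m-2 s-1)."""
    return vcmax * (ci - gamma_star) / (ci + km) - rd

# The coupling: gs sets Ci through diffusion, Ci sets A, and A feeds back
# on gs. Solve the coupled pair by fixed-point iteration.
ca, rh = 400.0, 0.7        # ambient CO2 (ppm) and relative humidity
ci = 0.7 * ca              # initial guess for intercellular CO2
for _ in range(50):
    a_net = net_photosynthesis(ci)
    gs = ball_berry(a_net, rh, ca)
    ci = ca - 1.6 * a_net / gs   # 1.6 = ratio of H2O to CO2 diffusivities
```

The loop settles within a few iterations; in an ESM a similar inner solve runs for every leaf layer and timestep, which is why the choice of conductance scheme propagates so strongly into the simulated fluxes.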

  7. Trends in parameterization, economics and host behaviour in influenza pandemic modelling: a review and reporting protocol

    PubMed Central

    2013-01-01

    Background The volume of influenza pandemic modelling studies has increased dramatically in the last decade. Many models now incorporate sophisticated parameterization and validation techniques, economic analyses and the behaviour of individuals. Methods We reviewed trends in these aspects in models for influenza pandemic preparedness that aimed to generate policy insights for epidemic management and were published from 2000 to September 2011, i.e. before and after the 2009 pandemic. Results We find that many influenza pandemic models rely on parameters from previous modelling studies, models are rarely validated using observed data and are seldom applied to low-income countries. Mechanisms for international data sharing would be necessary to facilitate a wider adoption of model validation. The variety of modelling decisions makes it difficult to compare and evaluate models systematically. Conclusions We propose a model Characteristics, Construction, Parameterization and Validation aspects protocol (CCPV protocol) to contribute to the systematisation of the reporting of models with an emphasis on the incorporation of economic aspects and host behaviour. Model reporting, as already exists in many other fields of modelling, would increase confidence in model results, and transparency in their assessment and comparison. PMID:23651557

  8. Final Report for “Simulating the Arctic Winter Longwave Indirect Effects. A New Parameterization for Frost Flower Aerosol Salt Emissions” (DESC0006679) for 9/15/2011 through 9/14/2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, Lynn M.; Somerville, Richard C.J.; Burrows, Susannah

    Description of the Project: This project has improved the aerosol formulation in a global climate model by using innovative new field and laboratory observations to develop and implement a novel wind-driven sea ice aerosol flux parameterization. This work fills a critical gap in the understanding of clouds, aerosol, and radiation in polar regions by addressing one of the largest missing particle sources in aerosol-climate modeling. Recent measurements of Arctic organic and inorganic aerosol indicate that the largest source of natural aerosol during the Arctic winter is emitted from crystal structures, known as frost flowers, formed on a newly frozen sea ice surface [Shaw et al., 2010]. We have implemented the new parameterization in an updated climate model, making it the first capable of investigating how polar natural aerosol-cloud indirect effects relate to this important and previously unrecognized sea ice source. The parameterization is constrained by Arctic ARM in situ cloud and radiation data. The modified climate model has been used to quantify the potential pan-Arctic radiative forcing and aerosol indirect effects due to this missing source. This research supported the work of one postdoc (Li Xu) for two years and contributed to the training and research of an undergraduate student. This research allowed us to establish a collaboration between SIO and PNNL in order to contribute the frost flower parameterization to the new ACME model. One peer-reviewed publication has already resulted from this work, and a manuscript for a second publication has been completed. Additional publications from the PNNL collaboration are expected to follow.

  9. High-Order/Low-Order methods for ocean modeling

    DOE PAGES

    Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; ...

    2015-06-01

    In this study, we examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We show how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
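    The essence of the implicit-explicit formulation — integrating the fast, stiff part implicitly so the timestep is not limited by the barotropic time scale — can be shown on a toy two-timescale ODE. The equation and coefficients below are illustrative only, not the z-level ocean model equations.

```python
# Toy IMEX step for dx/dt = -k_fast * x + f_slow(x): the stiff, fast term
# (the "barotropic-like" part) is advanced with backward Euler, the slow
# "baroclinic-like" term explicitly. All values are illustrative.
k_fast = 100.0

def f_slow(x):
    return 1.0 - 0.1 * x

def imex_step(x, dt):
    # (x_new - x) / dt = -k_fast * x_new + f_slow(x)  =>  solve for x_new
    return (x + dt * f_slow(x)) / (1.0 + dt * k_fast)

x, dt = 0.0, 0.1          # dt is 10x larger than the explicit limit ~1/k_fast
for _ in range(200):
    x = imex_step(x, dt)

steady = 1.0 / 100.1      # exact steady state of the toy problem
```

With dt = 0.1 a fully explicit Euler step would be unstable here (the amplification factor |1 - dt*k_fast| = 9), while the IMEX step converges to the correct steady state, which is the practical payoff the abstract describes.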

  10. Efficiency and flexibility using implicit methods within atmosphere dycores

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.

    2016-12-01

    A suite of explicit and implicit methods are evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about as efficient as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; there is a speed-up for the residual relative to a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional based on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability.
We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion about the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.
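    A minimal scalar sketch of an implicit BDF2 step with a Newton inner solve. The stiff test problem and step size are invented for illustration; in CAM-SE the residual is vector-valued and the Newton-Krylov solve is what the block-factorization preconditioners accelerate.

```python
def f(x):
    return -50.0 * (x - 1.0)      # stiff linear test problem; solution tends to 1

def fprime(x):
    return -50.0

def bdf2_step(x_nm1, x_n, dt):
    """One BDF2 step, solving the implicit equation with Newton iteration:
       y - (4/3) x_n + (1/3) x_{n-1} - (2/3) dt f(y) = 0."""
    y = x_n                        # initial Newton guess
    for _ in range(20):
        g = y - (4.0 / 3.0) * x_n + (1.0 / 3.0) * x_nm1 - (2.0 / 3.0) * dt * f(y)
        dg = 1.0 - (2.0 / 3.0) * dt * fprime(y)
        y -= g / dg
    return y

dt = 0.1                           # well beyond the explicit stability limit ~2/50
x_prev, x_curr = 0.0, 0.0          # crude bootstrap: first history point repeated
for _ in range(100):
    x_prev, x_curr = x_curr, bdf2_step(x_prev, x_curr, dt)
```

Taking dt far beyond the explicit stability limit is exactly the freedom implicit methods buy, at the cost of the inner solves whose iteration counts the abstract identifies as the efficiency bottleneck.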

  11. A masked negative self-esteem? Implicit and explicit self-esteem in patients with Narcissistic Personality Disorder.

    PubMed

    Marissen, Marlies A E; Brouwer, Marlies E; Hiemstra, Annemarie M F; Deen, Mathijs L; Franken, Ingmar H A

    2016-08-30

    The mask model of narcissism states that the narcissistic traits of patients with Narcissistic Personality Disorder (NPD) are the result of a compensatory reaction to underlying ego fragility. This model assumes that high explicit self-esteem masks low implicit self-esteem. However, research on narcissism has predominantly focused on non-clinical participants, and data derived from patients diagnosed with NPD remain scarce. Therefore, the goal of the present study was to test the mask model hypothesis of narcissism among patients with NPD. Male patients with NPD were compared to patients with other PDs and healthy participants on implicit and explicit self-esteem. NPD patients did not differ in levels of explicit and implicit self-esteem compared to both the psychiatric and the healthy control group. Overall, the current study found no evidence in support of the mask model of narcissism in a clinical group. This implies that it may not be relevant for clinicians to focus treatment of NPD on an underlying negative self-esteem. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Improving Convection and Cloud Parameterization Using ARM Observations and NCAR Community Atmosphere Model CAM5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guang J.

    2016-11-07

    The fundamental scientific objectives of our research are to use ARM observations and the NCAR CAM5 to understand the large-scale control on convection, and to develop improved convection and cloud parameterizations for use in GCMs.

  13. Use of Rare Earth Elements in investigations of aeolian processes

    USDA-ARS?s Scientific Manuscript database

    The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...

  14. IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION FOR FINE-SCALE SIMULATIONS

    EPA Science Inventory

    The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (1-km horizontal grid spacing). The UCP accounts for dr...

  15. Implicit theories of a desire for fame.

    PubMed

    Maltby, John; Day, Liz; Giles, David; Gillett, Raphael; Quick, Marianne; Langcaster-James, Honey; Linley, P Alex

    2008-05-01

    The aim of the present studies was to generate implicit theories of a desire for fame among the general population. In Study 1, we developed a factor analytic model of conceptions of the desire to be famous that initially comprised nine separate factors: ambition, meaning derived through comparison with others, psychological vulnerability, attention seeking, conceitedness, social access, altruism, positive affect, and glamour. An analysis that examined the replicability of these factors suggested that three factors (altruism, positive affect, and glamour) displayed neither factor congruence nor adequate internal reliability. A second study examined the validity of these factors in predicting profiles of individuals who may desire fame. The findings from this study suggested that two of the nine factors (positive affect and altruism) could not be considered strong factors within the model. Overall, the findings suggest that implicit theories of a desire for fame comprise six factors. The discussion focuses on how an implicit model of a desire for fame might progress into formal theories of a desire for fame.

  16. Parameterized hardware description as object oriented hardware model implementation

    NASA Astrophysics Data System (ADS)

    Drabik, Pawel K.

    2010-09-01

    The paper introduces a novel model for the design, visualization and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and software applications, and is developed from research on parameterized hardware description. The establishment of a stable link between hardware and software, the purpose of the designed and realized work, is presented. A novel programming framework model for the environment, named Graphic-Functional-Components, is presented. The purpose of the paper is to present object-oriented hardware modeling with the mentioned features. A possible model implementation in FPGA chips and its management by object-oriented software in Java is described.

  17. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    EPA Pesticide Factsheets

    The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size distribution, and the model's parameterization of the sea salt emission factor as a function of sea surface temperature. This dataset is associated with the following publication: Gantt, B., J. Kelly, and J. Bash. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2. Geoscientific Model Development. Copernicus Publications, Katlenburg-Lindau, GERMANY, 8: 3733-3746, (2015).

  18. Ecosystem biogeochemistry model parameterization: Do more flux data result in a better model in predicting carbon flux?

    DOE PAGES

    Zhu, Qing; Zhuang, Qianlai

    2015-12-21

    Reliability of terrestrial ecosystem models highly depends on the quantity and quality of the data that have been used to calibrate the models. Nowadays, in situ observations of carbon fluxes are abundant. However, the knowledge of how much data (data length) and which subset of the time series data (data period) should be used to effectively calibrate the model is still lacking. This study uses the AmeriFlux carbon flux data to parameterize the Terrestrial Ecosystem Model (TEM) with an adjoint-based data assimilation technique for various ecosystem types. Parameterization experiments are thus conducted to explore the impact of both data length and data period on the uncertainty reduction of the posterior model parameters and the quantification of site and regional carbon dynamics. We find that: (i) the model is better constrained when it uses two-year data compared to using one-year data, and two-year data is sufficient for calibrating TEM's carbon dynamics, since using three-year data could only marginally improve the model performance at our study sites; (ii) the model is better constrained with data that have a higher "climate variability" than with data having a lower one, where climate variability measures the overall possibility of the ecosystem experiencing all climatic conditions, including drought and extreme air temperatures and radiation; (iii) the U.S. regional simulations indicate that the effect of calibration data length on carbon dynamics is amplified at regional and temporal scales, leading to large discrepancies among different parameterization experiments, especially in July and August. Our findings are conditioned on the specific model we used and the calibration sites we selected. The optimal calibration data length may not be suitable for other models. However, this study demonstrates that there may exist a threshold for calibration data length and simply using more data would not guarantee a better model parameterization and prediction. More importantly, climate variability might be an effective indicator of information within the data, which could help data selection for model parameterization. As a result, we believe our findings will benefit the ecosystem modeling community in using multiple-year data to improve model predictability.

  19. Ecosystem biogeochemistry model parameterization: Do more flux data result in a better model in predicting carbon flux?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Qing; Zhuang, Qianlai

    Reliability of terrestrial ecosystem models highly depends on the quantity and quality of the data that have been used to calibrate the models. Nowadays, in situ observations of carbon fluxes are abundant. However, the knowledge of how much data (data length) and which subset of the time series data (data period) should be used to effectively calibrate the model is still lacking. This study uses the AmeriFlux carbon flux data to parameterize the Terrestrial Ecosystem Model (TEM) with an adjoint-based data assimilation technique for various ecosystem types. Parameterization experiments are thus conducted to explore the impact of both data length and data period on the uncertainty reduction of the posterior model parameters and the quantification of site and regional carbon dynamics. We find that: (i) the model is better constrained when it uses two-year data compared to using one-year data, and two-year data is sufficient for calibrating TEM's carbon dynamics, since using three-year data could only marginally improve the model performance at our study sites; (ii) the model is better constrained with data that have a higher "climate variability" than with data having a lower one, where climate variability measures the overall possibility of the ecosystem experiencing all climatic conditions, including drought and extreme air temperatures and radiation; (iii) the U.S. regional simulations indicate that the effect of calibration data length on carbon dynamics is amplified at regional and temporal scales, leading to large discrepancies among different parameterization experiments, especially in July and August. Our findings are conditioned on the specific model we used and the calibration sites we selected. The optimal calibration data length may not be suitable for other models. However, this study demonstrates that there may exist a threshold for calibration data length and simply using more data would not guarantee a better model parameterization and prediction. More importantly, climate variability might be an effective indicator of information within the data, which could help data selection for model parameterization. As a result, we believe our findings will benefit the ecosystem modeling community in using multiple-year data to improve model predictability.

  20. Evaluation of different parameterizations of the spatial heterogeneity of subsurface storage capacity for hourly runoff simulation in boreal mountainous watershed

    NASA Astrophysics Data System (ADS)

    Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur

    2015-03-01

    Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies with a specific aim at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments hereafter called elements) and distributed (1 × 1 km2 grid) setup. We evaluated representation of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods of up to 0.84 and 0.86 respectively, and similarly for the log-transformed streamflow up to 0.85 and 0.90. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or no subgrid heterogeneities provided equivalent simulation performance compared to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from denser precipitation stations than what is required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in identifying parameterizations based only on calibration to catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data.
In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.
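    The Nash-Sutcliffe efficiency used to evaluate these parameterizations has a compact definition; a minimal implementation (apply it to log-transformed flows, as above, to emphasise low-flow performance):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    A value of 1 is a perfect fit; 0 means the simulation is no better than
    predicting the observed mean; negative values are worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den
```

For example, `nash_sutcliffe(observed_flows, simulated_flows)` on hourly series gives the criterion calibrated against above; the sample values in the test are invented.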

  1. A diapycnal diffusivity model for stratified environmental flows

    NASA Astrophysics Data System (ADS)

    Bouffard, Damien; Boegman, Leon

    2013-06-01

    The vertical diffusivity of density, Kρ, regulates ocean circulation, climate and coastal water quality. Kρ is difficult to measure and model in these stratified turbulent flows, resulting in the need for the development of Kρ parameterizations from more readily measurable flow quantities. Typically, Kρ is parameterized from turbulent temperature fluctuations using the Osborn-Cox model or from the buoyancy frequency, N, kinematic viscosity, ν, and the rate of dissipation of turbulent kinetic energy, ɛ, using the Osborn model. More recently, Shih et al. (2005, J. Fluid Mech. 525: 193-214; hereafter SKIF) proposed a laboratory scale parameterization for Kρ, at Prandtl number (ratio of the kinematic viscosity to the molecular diffusivity) Pr = 0.7, in terms of the turbulence intensity parameter, Reb = ɛ/(νN²), which is the ratio between the destabilizing effect of turbulence and the stabilizing effects of stratification and viscosity. In the present study, we extend the SKIF parameterization, against extensive sets of published data, over 0.7 < Pr < 700 and validate it at field scale. Our results show that the SKIF model must be modified to include a new Buoyancy-controlled mixing regime, between the Molecular and Transitional regimes, where Kρ is captured using the molecular diffusivity and the Osborn model, respectively. The Buoyancy-controlled regime occurs over 10Pr
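    The two quantities at the centre of this parameterization can be written down directly. The thermocline-like values in the comments are illustrative examples only, not the paper's data:

```python
def buoyancy_reynolds(eps, nu, n_freq):
    """Turbulence intensity parameter Re_b = eps / (nu * N^2): the strength of
    turbulence (eps) against the stabilising effects of stratification (N)
    and viscosity (nu)."""
    return eps / (nu * n_freq ** 2)

def k_rho_osborn(eps, n_freq, gamma=0.2):
    """Osborn model K_rho = gamma * eps / N^2, with a constant mixing
    efficiency gamma ~ 0.2 (the assumption the regime diagram refines)."""
    return gamma * eps / n_freq ** 2

# Illustrative thermocline-like values: eps = 1e-9 W/kg, nu = 1e-6 m2/s,
# N = 5e-3 s-1, giving Re_b = 40 and K_rho = 8e-6 m2/s.
re_b = buoyancy_reynolds(1e-9, 1e-6, 5e-3)
k_rho = k_rho_osborn(1e-9, 5e-3)
```

A regime-dependent parameterization of the kind described replaces the constant gamma with a function of Re_b (and Pr), which is the modification the study proposes.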

  2. Intercomparison of Martian Lower Atmosphere Simulated Using Different Planetary Boundary Layer Parameterization Schemes

    NASA Technical Reports Server (NTRS)

    Natarajan, Murali; Fairlie, T. Duncan; Dwyer Cianciolo, Alicia; Smith, Michael D.

    2015-01-01

    We use the mesoscale modeling capability of the Mars Weather Research and Forecasting (MarsWRF) model to study the sensitivity of the simulated Martian lower atmosphere to differences in the parameterization of the planetary boundary layer (PBL). Characterization of the Martian atmosphere and realistic representation of processes such as mixing of tracers like dust depend on how well the model reproduces the evolution of the PBL structure. MarsWRF is based on the NCAR WRF model and retains some of the PBL schemes available in the Earth version. Published studies have examined the performance of different PBL schemes in NCAR WRF with the help of observations. Currently such assessments are not feasible for Martian atmospheric models due to lack of observations. It is of interest though to study the sensitivity of the model to PBL parameterization. Typically, for standard Martian atmospheric simulations, we have used the Medium Range Forecast (MRF) PBL scheme, which considers a correction term to the vertical gradients to incorporate nonlocal effects. For this study, we have also used two other parameterizations, a non-local closure scheme called the Yonsei University (YSU) PBL scheme and a turbulent kinetic energy closure scheme called the Mellor-Yamada-Janjic (MYJ) PBL scheme. We will present intercomparisons of the near-surface temperature profiles, boundary layer heights, and winds obtained from the different simulations. We plan to use available temperature observations from the Mini-TES instrument onboard the rovers Spirit and Opportunity in evaluating the model results.

  3. Parameterization-based tracking for the P2 experiment

    NASA Astrophysics Data System (ADS)

    Sorokin, Iurii

    2017-08-01

    The P2 experiment in Mainz aims to determine the weak mixing angle θW at low momentum transfer by measuring the parity-violating asymmetry of elastic electron-proton scattering. In order to achieve the intended precision of Δ(sin²θW)/sin²θW = 0.13% within the planned 10 000 hours of running, the experiment has to operate at a rate of 10¹¹ detected electrons per second. Although it is not required to measure the kinematic parameters of each individual electron, every attempt is made to achieve the highest possible throughput in the track reconstruction chain. In the present work a parameterization-based track reconstruction method is described. It is a variation of track following, where the results of the computation-heavy steps, namely the propagation of a track to the next detector plane and the fitting, are pre-calculated and expressed in terms of parametric analytic functions. This makes the algorithm extremely fast and well suited for an implementation on an FPGA. The method also implicitly takes into account the actual phase space distribution of the tracks already at the stage of candidate construction. Compared to a simple algorithm that does not use such information, this allows reducing the combinatorial background by many orders of magnitude, down to O(1) background candidates per signal track. The method is developed specifically for the P2 experiment in Mainz, and the presented implementation is tightly coupled to the experimental conditions.
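    The core idea, replacing runtime propagation with pre-fitted parametric functions and rejecting hit combinations outside a predicted window, can be illustrated with a toy sketch; the coefficients and the polynomial form are hypothetical placeholders, not the P2 parameterization:

```python
# Hypothetical pre-fitted coefficients: offline, the expensive propagation
# through the spectrometer field is fitted once; online, only this cheap
# polynomial is evaluated (which is what makes an FPGA implementation easy).
C0, C1, C2, C3 = 0.0, 1.0, 50.0, 0.001

def propagate(x, tx):
    """Parametric stand-in for propagating a track state (position x,
    slope tx) to the next detector plane."""
    return C0 + C1 * x + C2 * tx + C3 * x * tx

def is_candidate(x_pred, x_hit, window=0.5):
    """Keep a hit only if it lies inside the predicted search window; this
    encodes phase-space information that suppresses combinatorial
    background already at candidate construction."""
    return abs(x_hit - x_pred) <= window

x_next = propagate(1.0, 0.01)   # 1.50001
```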

  4. Empirical parameterization of setup, swash, and runup

    USGS Publications Warehouse

    Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.

    2006-01-01

    Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a −17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
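    The final formula is not reproduced in the abstract; the sketch below implements the commonly cited form of this parameterization, combining a setup term and an incident/infragravity swash term into the 2% runup. The coefficients are written from memory and should be checked against the paper before use:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def deepwater_wavelength(T):
    """Linear-theory deep-water wavelength L0 = g*T^2 / (2*pi) for period T (s)."""
    return G * T ** 2 / (2.0 * math.pi)

def runup_2pct(H0, T, beta_f):
    """2% exceedence runup from offshore wave height H0 (m), period T (s)
    and foreshore slope beta_f, as the sum of a setup term and a combined
    swash term. Coefficients follow the commonly cited form and are an
    assumption here, not quoted from the abstract."""
    L0 = deepwater_wavelength(T)
    setup = 0.35 * beta_f * math.sqrt(H0 * L0)
    swash = math.sqrt(H0 * L0 * (0.563 * beta_f ** 2 + 0.004)) / 2.0
    return 1.1 * (setup + swash)
```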

  5. The impact of structural uncertainty on cost-effectiveness models for adjuvant endocrine breast cancer treatments: the need for disease-specific model standardization and improved guidance.

    PubMed

    Frederix, Gerardus W J; van Hasselt, Johan G C; Schellens, Jan H M; Hövels, Anke M; Raaijmakers, Jan A M; Huitema, Alwin D R; Severens, Johan L

    2014-01-01

    Structural uncertainty relates to differences in model structure and parameterization. For many published health economic analyses in oncology, substantial differences in model structure exist, leading to differences in analysis outcomes and potentially impacting decision-making processes. The objectives of this analysis were (1) to identify differences in model structure and parameterization for cost-effectiveness analyses (CEAs) comparing tamoxifen and anastrozole for adjuvant breast cancer (ABC) treatment; and (2) to quantify the impact of these differences on analysis outcome metrics. The analysis consisted of four steps: (1) review of the literature for identification of eligible CEAs; (2) definition and implementation of a base model structure, which included the core structural components for all identified CEAs; (3) definition and implementation of changes or additions in the base model structure or parameterization; and (4) quantification of the impact of changes in model structure or parameterization on the analysis outcome metrics life-years gained (LYG), incremental costs (IC) and the incremental cost-effectiveness ratio (ICER). Eleven CEA analyses comparing anastrozole and tamoxifen as ABC treatment were identified. The base model consisted of the following health states: (1) on treatment; (2) off treatment; (3) local recurrence; (4) metastatic disease; (5) death due to breast cancer; and (6) death due to other causes. The base model estimates of anastrozole versus tamoxifen for the LYG, IC and ICER were 0.263 years, €3,647 and €13,868/LYG, respectively. In the published models that were evaluated, differences in model structure included the addition of different recurrence health states and associated transition rates. Differences in parameterization were related to the incidences of recurrence, local recurrence to metastatic disease, and metastatic disease to death.
The separate impact of these model components on the LYG ranged from 0.207 to 0.356 years, while incremental costs ranged from €3,490 to €3,714 and ICERs ranged from €9,804/LYG to €17,966/LYG. When we re-analyzed the published CEAs in our framework by including their respective model properties, the LYG ranged from 0.207 to 0.383 years, IC ranged from €3,556 to €3,731 and ICERs ranged from €9,683/LYG to €17,570/LYG. Differences in model structure and parameterization lead to substantial differences in analysis outcome metrics. This analysis supports the need for more guidance regarding structural uncertainty and the use of standardized disease-specific models for health economic analyses of adjuvant endocrine breast cancer therapies. The developed approach in the current analysis could potentially serve as a template for further evaluations of structural uncertainty and development of disease-specific models.
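    The outcome metrics combine as ICER = incremental cost / incremental effect, so the base-model figures above can be checked directly (a trivial sketch):

```python
def icer(incremental_cost, incremental_effect):
    """Incremental cost-effectiveness ratio: extra cost per unit of extra
    effect (here, euros per life-year gained)."""
    return incremental_cost / incremental_effect

# Base-model figures from the analysis above: EUR 3,647 incremental cost
# and 0.263 life-years gained give ~EUR 13,867/LYG, matching the reported
# EUR 13,868/LYG to within rounding of the inputs.
base_icer = icer(3647.0, 0.263)
```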

  6. On the implicit density based OpenFOAM solver for turbulent compressible flows

    NASA Astrophysics Data System (ADS)

    Fürst, Jiří

    The contribution deals with the development of a coupled implicit density-based solver for compressible flows in the framework of the open source package OpenFOAM. Although the standard distribution of OpenFOAM contains several ready-made segregated solvers for compressible flows, the performance of those solvers is rather weak in the case of transonic flows. Therefore we extend the work of Shen [15] and develop an implicit semi-coupled solver. The main flow field variables are updated using the lower-upper symmetric Gauss-Seidel method (LU-SGS), whereas the turbulence model variables are updated using the implicit Euler method.
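    The symmetric Gauss-Seidel update at the heart of LU-SGS-type solvers alternates a forward (lower-triangular) and a backward (upper-triangular) sweep. A minimal dense-matrix sketch of one such iteration; an actual LU-SGS implementation works matrix-free on the flux Jacobians:

```python
import numpy as np

def sgs_sweep(A, b, x):
    """One symmetric Gauss-Seidel iteration on A x = b: a forward sweep
    followed by a backward sweep, the building block of LU-SGS-type
    implicit solvers."""
    n = len(b)
    for i in range(n):            # forward (lower-triangular) sweep
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    for i in reversed(range(n)):  # backward (upper-triangular) sweep
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Small diagonally dominant system as a toy problem
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(25):
    x = sgs_sweep(A, b, x)   # converges to the solution of A x = b
```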

  7. Land surface hydrology parameterization for atmospheric general circulation models including subgrid scale spatial variability

    NASA Technical Reports Server (NTRS)

    Entekhabi, D.; Eagleson, P. S.

    1989-01-01

    Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
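    The construction described above, deterministic point physics averaged over an a priori pdf of subgrid variability, can be sketched as a simple expectation; the runoff function and pdf below are illustrative placeholders, not the paper's forms:

```python
import numpy as np

def grid_mean(point_process, pdf, s):
    """Grid-cell mean of a deterministic pointwise process (e.g. runoff
    generation) taken over an assumed subgrid pdf of soil saturation s,
    evaluated on a uniform grid by a simple Riemann sum."""
    ds = s[1] - s[0]
    w = pdf(s)
    w = w / (w.sum() * ds)                 # normalize the density
    return float((point_process(s) * w).sum() * ds)

# Illustrative choices: saturation-excess runoff above s = 0.7,
# and a beta(2,2)-shaped pdf of subgrid soil saturation.
s = np.linspace(0.0, 1.0, 100001)
mean_runoff = grid_mean(lambda s: np.maximum(s - 0.7, 0.0),
                        lambda s: s * (1.0 - s), s)
# analytic value for these particular choices is ~0.02295
```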

  8. LES Modeling of Lateral Dispersion in the Ocean on Scales of 10 m to 10 km

    DTIC Science & Technology

    2015-10-20

    ocean on scales of 0.1-10 km that can be implemented in larger-scale ocean models. These parameterizations will incorporate the effects of local...www.fields.utoronto.ca/video-archive/static/2013/06/166-1766/mergedvideo.ogv) and at the Nonlinear Effects in Internal Waves Conference held at Cornell University

  9. The effects of ground hydrology on climate sensitivity to solar constant variations

    NASA Technical Reports Server (NTRS)

    Chou, S. H.; Curran, R. J.; Ohring, G.

    1979-01-01

    The effects of two different evaporation parameterizations on the climate sensitivity to solar constant variations are investigated by using a zonally averaged climate model. The model is based on a two-level quasi-geostrophic zonally averaged annual mean model. One of the evaporation parameterizations tested is a nonlinear formulation with the Bowen ratio determined by the predicted vertical temperature and humidity gradients near the earth's surface. The other is the linear formulation with the Bowen ratio essentially determined by the prescribed linear coefficient.
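    The distinction above can be made concrete with the standard bulk form of the Bowen ratio, in which B responds to the predicted near-surface gradients rather than being prescribed. This is a generic sketch; the paper's exact formulation may differ:

```python
CP = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)
LV = 2.5e6    # latent heat of vaporization of water, J/kg

def bowen_ratio(dT, dq):
    """Bulk Bowen ratio B = sensible/latent heat flux = (c_p*dT) / (L_v*dq),
    from the near-surface air-surface temperature difference dT (K) and
    specific-humidity difference dq (kg/kg)."""
    return (CP * dT) / (LV * dq)

b = bowen_ratio(2.0, 0.002)   # ~0.40 for dT = 2 K, dq = 2 g/kg
```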

  10. Parameterization and Validation of an Integrated Electro-Thermal LFP Battery Model

    DTIC Science & Technology

    2012-01-01

    the parameterization of an integrated electro-thermal model for an A123 26650 LiFePO4 battery is presented. The electrical dynamics of the cell are described by an equivalent...the average of the charge and discharge curves taken at very low current (C/20), since the LiFePO4 cell chemistry is known to yield a hysteresis effect

  11. Developing Parametric Models for the Assembly of Machine Fixtures for Virtual Multiaxial CNC Machining Centers

    NASA Astrophysics Data System (ADS)

    Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.

    2018-01-01

    This paper dwells upon a variance parameterization method. Variance or dimensional parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated into a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the model tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when a part's geometric parameters change. The method can also reduce the time needed for design and engineering preproduction, in particular for the development of control programs for CNC equipment and coordinate measuring machines, and can automate the release of design and engineering documentation. Variance parameterization also helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.

  12. Seeing the Wood for the Trees: Applying the dual-memory system model to investigate expert teachers' observational skills in natural ecological learning environments

    NASA Astrophysics Data System (ADS)

    Stolpe, Karin; Björklund, Lars

    2012-01-01

    This study aims to investigate two expert ecology teachers' ability to attend to essential details in a complex environment during a field excursion, as well as how they teach this ability to their students. In applying a cognitive dual-memory system model for learning, we also suggest a rationale for their behaviour. The model implies two separate memory systems: the implicit, non-conscious, non-declarative system and the explicit, conscious, declarative system. This model provided the starting point for the research design. However, it was revised from the empirical findings supported by new theoretical insights. The teachers were video and audio recorded during their excursion and interviewed in a stimulated recall setting afterwards. The data were qualitatively analysed using the dual-memory system model. The results show that the teachers used holistic pattern recognition in their own identification of natural objects. However, teachers' main strategy to teach this ability is to give the students explicit rules or specific characteristics. According to the dual-memory system model the holistic pattern recognition is processed in the implicit memory system as a non-conscious match with earlier experienced situations. We suggest that this implicit pattern matching serves as an explanation for teachers' ecological and teaching observational skills. Another function of the implicit memory system is its ability to control automatic behaviour and non-conscious decision-making. The teachers offer the students firsthand sensory experiences which provide a prerequisite for the formation of implicit memories that provides a foundation for expertise.

  13. Implicit and explicit weight bias in a national sample of 4,732 medical students: the medical student CHANGES study.

    PubMed

    Phelan, Sean M; Dovidio, John F; Puhl, Rebecca M; Burgess, Diana J; Nelson, David B; Yeazel, Mark W; Hardeman, Rachel; Perry, Sylvia; van Ryn, Michelle

    2014-04-01

    To examine the magnitude of explicit and implicit weight biases compared to biases against other groups; and identify student factors predicting bias in a large national sample of medical students. A web-based survey was completed by 4,732 1st year medical students from 49 medical schools as part of a longitudinal study of medical education. The survey included a validated measure of implicit weight bias, the implicit association test, and 2 measures of explicit bias: a feeling thermometer and the anti-fat attitudes test. A majority of students exhibited implicit (74%) and explicit (67%) weight bias. Implicit weight bias scores were comparable to reported bias against racial minorities. Explicit attitudes were more negative toward obese people than toward racial minorities, gays, lesbians, and poor people. In multivariate regression models, implicit and explicit weight bias was predicted by lower BMI, male sex, and non-Black race. Either implicit or explicit bias was also predicted by age, SES, country of birth, and specialty choice. Implicit and explicit weight bias is common among 1st year medical students, and varies across student factors. Future research should assess implications of biases and test interventions to reduce their impact. Copyright © 2013 The Obesity Society.

  14. Responses of Mixed-Phase Cloud Condensates and Cloud Radiative Effects to Ice Nucleating Particle Concentrations in NCAR CAM5 and DOE ACME Climate Models

    NASA Astrophysics Data System (ADS)

    Liu, X.; Shi, Y.; Wu, M.; Zhang, K.

    2017-12-01

    Mixed-phase clouds frequently observed in the Arctic and mid-latitude storm tracks have substantial impacts on the surface energy budget, precipitation and climate. In this study, we first implement two empirical parameterizations (Niemand et al. 2012 and DeMott et al. 2015) of heterogeneous ice nucleation for mixed-phase clouds in the NCAR Community Atmosphere Model Version 5 (CAM5) and the DOE Accelerated Climate Model for Energy Version 1 (ACME1). Model-simulated ice nucleating particle (INP) concentrations based on Niemand et al. and DeMott et al. are compared with those from the default ice nucleation parameterization based on classical nucleation theory (CNT) in CAM5 and ACME, and with in situ observations. Significantly higher INP concentrations (by up to a factor of 5) are simulated from Niemand et al. than from DeMott et al. and CNT, especially over the dust source regions, in both CAM5 and ACME. Interestingly, the ACME model simulates higher INP concentrations than CAM5, especially in the polar regions. This is also the case when we nudge the two models' winds and temperature towards the same reanalysis, indicating more efficient transport of aerosols (dust) to the polar regions in ACME. Next, we examine the responses of model-simulated cloud liquid water and ice water contents to the different INP concentrations from the three ice nucleation parameterizations (Niemand et al., DeMott et al., and CNT) in CAM5 and ACME. Changes in liquid water path (LWP) reach as much as 20% in the Arctic in ACME between the three parameterizations, while the LWP changes are smaller and limited to the Northern Hemispheric mid-latitudes in CAM5. Finally, the impacts on cloud radiative forcing and dust indirect effects on mixed-phase clouds are quantified with the three ice nucleation parameterizations in CAM5 and ACME.

  15. Evaluation of urban surface parameterizations in the WRF model using measurements during the Texas Air Quality Study 2006 field campaign

    NASA Astrophysics Data System (ADS)

    Lee, S.-H.; Kim, S.-W.; Angevine, W. M.; Bianco, L.; McKeen, S. A.; Senff, C. J.; Trainer, M.; Tucker, S. C.; Zamora, R. J.

    2011-03-01

    The performance of different urban surface parameterizations in the WRF (Weather Research and Forecasting) model in simulating the urban boundary layer (UBL) was investigated using extensive measurements during the Texas Air Quality Study 2006 field campaign. The extensive field measurements collected at surface (meteorological, wind profiler, energy balance flux) sites, on a research aircraft, and on a research vessel characterized 3-dimensional atmospheric boundary layer structures over the Houston-Galveston Bay area, providing a unique opportunity for the evaluation of the physical parameterizations. The model simulations were performed over the Houston metropolitan area for a summertime period (12-17 August) using a bulk urban parameterization in the Noah land surface model (original LSM), a modified LSM, and a single-layer urban canopy model (UCM). The UCM simulation compared quite well with the observations over the Houston urban areas, reducing the systematic model biases in the original LSM simulation by 1-2 °C in near-surface air temperature and by 200-400 m in UBL height, on average. A more realistic turbulent (sensible and latent heat) energy partitioning contributed to the improvements in the UCM simulation. The original LSM significantly overestimated the sensible heat flux (~200 W m-2) over the urban areas, resulting in a warmer and higher UBL. The modified LSM slightly reduced the warm and high biases in near-surface air temperature (0.5-1 °C) and UBL height (~100 m) as a result of the effects of urban vegetation. The relatively strong thermal contrast between the Houston area and the water bodies (Galveston Bay and the Gulf of Mexico) in the LSM simulations enhanced the sea/bay breezes, but the model performance in predicting local wind fields was similar among the simulations in terms of statistical evaluations. These results suggest that a proper surface representation (e.g. urban vegetation, surface morphology) and explicit parameterizations of urban physical processes are required for accurate urban atmospheric numerical modeling.

  16. Statistical properties of the normalized ice particle size distribution

    NASA Astrophysics Data System (ADS)

    Delanoë, Julien; Protat, Alain; Testud, Jacques; Bouniol, Dominique; Heymsfield, A. J.; Bansemer, A.; Brown, P. R. A.; Forbes, R. M.

    2005-05-01

    Testud et al. (2001) have recently developed a formalism, known as the "normalized particle size distribution (PSD)", which consists of scaling the diameter and concentration axes in such a way that the normalized PSDs are independent of water content and mean volume-weighted diameter. In this paper we investigate the statistical properties of the normalized PSD for the particular case of ice clouds, which are known to play a crucial role in the Earth's radiation balance. To do so, an extensive database of airborne in situ microphysical measurements has been constructed. A remarkable stability in the shape of the normalized PSD is obtained. The impact of using a single analytical shape to represent all PSDs in the database is estimated through an error analysis on the instrumental (radar reflectivity and attenuation) and cloud (ice water content, effective radius, terminal fall velocity of ice crystals, visible extinction) properties. This resulted in a roughly unbiased estimate of the instrumental and cloud parameters, with small standard deviations ranging from 5 to 12%. This error is found to be roughly independent of the temperature range. This stability in shape and its single analytical approximation imply that two parameters are now sufficient to describe any normalized PSD in ice clouds: the intercept parameter N*0 and the mean volume-weighted diameter Dm. Statistical relationships (parameterizations) between N*0 and Dm have then been evaluated in order to further reduce the number of unknowns. It has been shown that a parameterization of N*0 and Dm by temperature could not be envisaged to retrieve the cloud parameters. Nevertheless, Dm-T and mean-maximum-dimension-T parameterizations have been derived and compared to the parameterization of Kristjánsson et al. (2000) currently used to characterize particle size in climate models. The new parameterization generally produces larger particle sizes at any temperature than the Kristjánsson et al. (2000) parameterization. These new parameterizations are believed to better represent particle size at global scale, owing to the better representativity of the in situ microphysical database used to derive them. We then evaluated the potential of a direct N*0-Dm relationship. While the model parameterized by temperature produces strong errors on the cloud parameters, the N*0-Dm model parameterized by radar reflectivity produces accurate cloud parameters (less than 3% bias and 16% standard deviation). This result implies that the cloud parameters can be estimated from the estimate of only one parameter of the normalized PSD (N*0 or Dm) and a radar reflectivity measurement.
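    The mean volume-weighted diameter Dm that anchors the normalization is the ratio of the fourth to the third moment of the PSD. A minimal sketch on a toy exponential distribution, for which the analytic value is Dm = 4/λ:

```python
import numpy as np

def mean_volume_weighted_diameter(D, N):
    """Dm: ratio of the 4th to the 3rd moment of the particle size
    distribution N(D), used to scale the diameter axis of the
    normalized PSD (uniform grid spacing cancels in the ratio)."""
    return float(np.sum(N * D ** 4) / np.sum(N * D ** 3))

# Toy exponential PSD N(D) = N0 * exp(-lam * D) on a uniform grid
lam = 2000.0                               # 1/m
D = np.linspace(1e-6, 5e-3, 5001)          # m
N = 1e6 * np.exp(-lam * D)
dm = mean_volume_weighted_diameter(D, N)   # close to 4/lam = 2e-3 m
```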

  17. Nonadiabatic dynamics of electron transfer in solution: Explicit and implicit solvent treatments that include multiple relaxation time scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu

    2014-01-21

    The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models.
This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.
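    The single-coordinate Langevin picture described above can be sketched with a one-time-scale overdamped Euler-Maruyama integrator; the models in the paper add an inertial term and multiple relaxation time scales, and all parameter values here are illustrative:

```python
import numpy as np

def overdamped_langevin(force, z0, gamma, kT, dt, nsteps, rng):
    """Euler-Maruyama integration of gamma * dz/dt = F(z) + R(t) for a
    single collective solvent coordinate z, with white-noise random force
    satisfying <R(t)R(t')> = 2*gamma*kT*delta(t-t')."""
    z = np.empty(nsteps + 1)
    z[0] = z0
    sigma = np.sqrt(2.0 * kT * dt / gamma)   # noise amplitude per step
    for i in range(nsteps):
        z[i + 1] = z[i] + force(z[i]) * dt / gamma + sigma * rng.standard_normal()
    return z

# Harmonic solvent free energy, F(z) = -k*z with k = 1: the equilibrium
# variance of z should approach kT/k.
rng = np.random.default_rng(0)
traj = overdamped_langevin(lambda z: -z, z0=1.0, gamma=1.0, kT=0.1,
                           dt=0.01, nsteps=20000, rng=rng)
```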

  18. Intercomparison of methods of coupling between convection and large-scale circulation: 2. Comparison over nonuniform surface conditions

    DOE PAGES

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...

    2016-03-18

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.

  19. Intercomparison of methods of coupling between convection and large‐scale circulation: 2. Comparison over nonuniform surface conditions

    PubMed Central

    Plant, R. S.; Woolnough, S. J.; Sessions, S.; Herman, M. J.; Sobel, A.; Wang, S.; Kim, D.; Cheng, A.; Bellon, G.; Peyrille, P.; Ferry, F.; Siebesma, P.; van Ulft, L.

    2016-01-01

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large‐scale dynamics in a set of cloud‐resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative‐convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large‐scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column‐relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large‐scale velocity profiles which are smoother and less top‐heavy compared to those produced by the WTG simulations. These large‐scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two‐way feedback between convection and the large‐scale circulation. PMID:27642501

  20. WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'

    NASA Astrophysics Data System (ADS)

    Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne

    2015-10-01

    Numerical weather modelling has gained considerable attention in the field of hydrology, especially in un-gauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD) and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3) and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be a better cumulus parameterization choice for the study region, giving a consistently better performance than the other three CPSs, though there are suggestions of underestimation. WSM3 was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. This study analysed spatial variability in WRF performance through categorical indices, including POD, FBI, FAR and CSI, during the 1999 York Flood under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.
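    The categorical indices used in the spatial evaluation are standard contingency-table scores; a minimal sketch with illustrative counts:

```python
def categorical_indices(hits, misses, false_alarms):
    """Contingency-table skill scores: POD (probability of detection),
    FAR (false alarm ratio), FBI (frequency bias index) and
    CSI (critical success index)."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    fbi = (hits + false_alarms) / (hits + misses)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, fbi, csi

# Illustrative counts for a rain/no-rain threshold on one grid of cells
pod, far, fbi, csi = categorical_indices(hits=40, misses=10, false_alarms=10)
# POD = 0.8, FAR = 0.2, FBI = 1.0 (unbiased frequency), CSI ~ 0.667
```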

  1. Towards a more efficient and robust representation of subsurface hydrological processes in Earth System Models

    NASA Astrophysics Data System (ADS)

    Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.

    2017-12-01

    Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales. Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs generally lack representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers). The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models. However, such parameterizations are often computationally demanding, limiting their application in large/global-scale studies. Here, we take a different approach to developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a robust and established three-dimensional hydrological model to develop a simpler parameterization that represents aquifer-land surface interactions. The main goal of the parameterization is to simultaneously maximize computational gain (i.e., "efficiency") while minimizing simulation errors relative to the full 3D model (i.e., "robustness"), allowing easy implementation in ESMs globally. Our study focuses primarily on understanding the dynamics of both groundwater recharge and discharge. Preliminary results show that the proposed approach significantly reduces computational demand while deviations from the full 3D model remain small for these processes.

  2. Parameterized and resolved Southern Ocean eddy compensation

    NASA Astrophysics Data System (ADS)

    Poulsen, Mads B.; Jochum, Markus; Nuterman, Roman

    2018-04-01

    The ability to parameterize Southern Ocean eddy effects in a forced coarse resolution ocean general circulation model is assessed. The transient model response to a suite of different Southern Ocean wind stress forcing perturbations is presented and compared to identical experiments performed with the same model in 0.1° eddy-resolving resolution. With forcing of present-day wind stress magnitude and a thickness diffusivity formulated in terms of the local stratification, it is shown that the Southern Ocean residual meridional overturning circulation in the two models is different in structure and magnitude. It is found that the difference in the upper overturning cell is primarily explained by an overly strong subsurface flow in the parameterized eddy-induced circulation while the difference in the lower cell is mainly ascribed to the mean-flow overturning. With a zonally constant decrease of the zonal wind stress by 50% we show that the absolute decrease in the overturning circulation is insensitive to model resolution, and that the meridional isopycnal slope is relaxed in both models. The agreement between the models is not reproduced by a 50% wind stress increase, where the high resolution overturning decreases by 20%, but increases by 100% in the coarse resolution model. It is demonstrated that this difference is explained by changes in surface buoyancy forcing due to a reduced Antarctic sea ice cover, which strongly modulate the overturning response and ocean stratification. We conclude that the parameterized eddies are able to mimic the transient response to altered wind stress in the high resolution model, but partly misrepresent the unperturbed Southern Ocean meridional overturning circulation and associated heat transports.

  3. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is obtained which satisfies both robustness and decoupling performance requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  4. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    NASA Astrophysics Data System (ADS)

    Huang, Dong; Liu, Yangang

    2014-12-01

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have begun to address subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored, so subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function is used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach faithfully reproduces Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for more realistic representation of cloud-radiation interactions in large-scale models.
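
    As an illustration of the key ingredient only (the paper's parameterization applies the idea to subgrid cloud fields in a more elaborate way), a spatial autocorrelation function for a 1-D field with periodic wrap-around can be computed as:

    ```python
    def autocorrelation(field, lag):
        """Spatial autocorrelation of a 1-D field at a given lag,
        using periodic (wrap-around) boundaries for simplicity."""
        n = len(field)
        mean = sum(field) / n
        var = sum((x - mean) ** 2 for x in field) / n
        if var == 0:
            return 1.0  # constant field: perfectly correlated by convention
        cov = sum((field[i] - mean) * (field[(i + lag) % n] - mean)
                  for i in range(n)) / n
        return cov / var
    ```

    For a checkerboard-like cloud mask such as [1, 0, 1, 0], the autocorrelation is -1 at lag 1 and +1 at lag 2, reflecting the alternating spatial structure.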

  5. Implicit Learning of Recursive Context-Free Grammars

    PubMed Central

    Rohrmeier, Martin; Fu, Qiufang; Dienes, Zoltan

    2012-01-01

    Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning experiments have explored learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important as performance was greater for tail-embedding than centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning and challenge existing theories and computational models of implicit learning. PMID:23094021
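
    The structural distinction at issue can be made concrete: a centre-embedded string nests its dependencies (a1 a2 b2 b1), while a tail-embedded string chains them (a1 b1 a2 b2). A minimal generator, illustrative only and not the materials used in the study:

    ```python
    def centre_embedded(pairs):
        """Nest dependencies: a1 a2 ... an bn ... b2 b1
        (the A^n B^n pattern, context-free but not finite-state)."""
        return [a for a, _ in pairs] + [b for _, b in reversed(pairs)]

    def tail_embedded(pairs):
        """Chain dependencies sequentially: a1 b1 a2 b2 ... an bn."""
        return [tok for a, b in pairs for tok in (a, b)]
    ```

    Centre-embedding requires tracking an unbounded stack of open dependencies, which is why it exceeds finite-state grammars; tail-embedding closes each dependency immediately, consistent with the better performance observed for tail-embedded structures.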

  6. Evaluation of Simulated Marine Aerosol Production Using the WaveWatchIII Prognostic Wave Model Coupled to the Community Atmosphere Model within the Community Earth System Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, M. S.; Keene, William C.; Zhang, J.

    2016-11-08

    Primary marine aerosol (PMA) is emitted into the atmosphere via breaking wind waves on the ocean surface. Most parameterizations of PMA emissions use 10-meter wind speed as a proxy for wave action. This investigation coupled the 3rd-generation prognostic WAVEWATCH-III wind-wave model within a coupled Earth system model (ESM) to drive PMA production using wave energy dissipation rate – analogous to whitecapping – in place of 10-meter wind speed. The wind speed parameterization did not capture basin-scale variability in relations between wind and wave fields. Overall, the wave parameterization did not improve comparison between simulated and measured AOD or Na+, highlighting large remaining uncertainties in model physics. Results confirm the efficacy of prognostic wind-wave models for air-sea exchange studies coupled with laboratory- and field-based characterizations of the primary physical drivers of PMA production. No discernible correlations were evident between simulated PMA fields and observed chlorophyll or sea surface temperature.

  7. A Comparison of Perturbed Initial Conditions and Multiphysics Ensembles in a Severe Weather Episode in Spain

    NASA Technical Reports Server (NTRS)

    Tapiador, Francisco; Tao, Wei-Kuo; Angelis, Carlos F.; Martinez, Miguel A.; Marcos, Cecilia; Rodriguez, Antonio; Hou, Arthur; Shi, Jainn Jong

    2012-01-01

    Ensembles of numerical model forecasts are of interest to operational early-warning forecasters because the spread of the ensemble indicates the uncertainty of the alerts, and the mean value is deemed to outperform the forecasts of the individual models. This paper explores two ensembles for a severe weather episode in Spain, aiming to ascertain the relative usefulness of each. One ensemble uses sensible choices of physical parameterizations (precipitation microphysics, land surface physics, and cumulus physics) while the other follows a perturbed initial conditions approach. The results show that, depending on the parameterizations, large differences can be expected in terms of storm location, spatial structure of the precipitation field, and rain intensity. It is also found that the spread of the perturbed initial conditions ensemble is smaller than the dispersion due to physical parameterizations. This confirms that in severe weather situations operational forecasts should address moist physics deficiencies to realize the full benefits of the ensemble approach, in addition to optimizing initial conditions. The results also provide insights into differences in simulations arising from ensembles of weather models using several combinations of different physical parameterizations.
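
    The two diagnostics the comparison rests on, ensemble mean and spread, are simple to state. A minimal sketch (using the population standard deviation as the spread measure, an arbitrary but common choice):

    ```python
    def ensemble_stats(forecasts):
        """Ensemble mean and spread (standard deviation across members)
        for a single forecast variable at one point and time."""
        n = len(forecasts)
        mean = sum(forecasts) / n
        spread = (sum((f - mean) ** 2 for f in forecasts) / n) ** 0.5
        return mean, spread

    # Example: three members forecasting rainfall (mm) at one grid point
    mean, spread = ensemble_stats([10.0, 12.0, 14.0])
    ```

    A well-calibrated ensemble's spread should match its typical error; the paper's finding that the perturbed-initial-conditions spread is smaller than the multiphysics dispersion suggests that ensemble underestimates the forecast uncertainty.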

  8. Implicit and Explicit Associations with Erotic Stimuli in Women with and Without Sexual Problems.

    PubMed

    van Lankveld, Jacques J D M; Bandell, Myrthe; Bastin-Hurek, Eva; van Beurden, Myra; Araz, Suzan

    2018-02-20

    Conceptual models of sexual functioning have suggested a major role for implicit cognitive processing in sexual functioning. The present study aimed to investigate implicit and explicit cognition in sexual functioning in women. Gynecological patients with (N = 38) and without self-reported sexual problems (N = 41) were compared. Participants performed two Single-Target Implicit Association Tests (ST-IAT), measuring the implicit association of visual erotic stimuli with attributes representing, respectively, valence and motivation. Participants also rated the erotic pictures that were shown in the ST-IATs on the dimensions of valence, attractiveness, and sexual excitement, to assess their explicit associations with these erotic stimuli. Participants completed the Female Sexual Functioning Index and the Female Sexual Distress Scale for continuous measures of sexual functioning, and the Hospital Anxiety and Depression Scale to assess depressive symptoms. Compared to nonsymptomatic women, women with sexual problems were found to show more negative implicit associations of erotic stimuli with wanting (implicit sexual motivation). Across both groups, stronger implicit associations of erotic stimuli with wanting predicted higher level of sexual functioning. More positive explicit ratings of erotic stimuli predicted lower level of sexual distress across both groups.

  9. Testing the cognitive catalyst model of rumination with explicit and implicit cognitive content.

    PubMed

    Sova, Christopher C; Roberts, John E

    2018-06-01

    The cognitive catalyst model posits that rumination and negative cognitive content, such as negative schema, interact to predict depressive affect. Past research has found support for this model using explicit measures of negative cognitive content such as self-report measures of trait self-esteem and dysfunctional attitudes. The present study tested whether these findings would extend to implicit measures of negative cognitive content such as implicit self-esteem, and whether effects would depend on initial mood state and history of depression. Sixty-one undergraduate students selected on the basis of depression history (27 previously depressed; 34 never depressed) completed explicit and implicit measures of negative cognitive content prior to random assignment to a rumination induction followed by a distraction induction or vice versa. Dysphoric affect was measured both before and after these inductions. Analyses revealed that explicit measures, but not implicit measures, interacted with rumination to predict change in dysphoric affect, and these interactions were further moderated by baseline levels of dysphoria. Limitations include the small nonclinical sample and use of a self-report measure of depression history. These findings suggest that rumination amplifies the association between explicit negative cognitive content and depressive affect primarily among people who are already experiencing sad mood. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Implicit moral evaluations: A multinomial modeling approach.

    PubMed

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
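
    A multinomial processing tree of the general kind described can be sketched as follows; the branch ordering and parameter names (I for Intentional Judgment, U for Unintentional Judgment, b for Response Bias) are illustrative assumptions for this sketch, not the paper's exact model equations:

    ```python
    def p_wrong_response(I, U, b, target_wrong, prime_wrong):
        """Hypothetical processing-tree sketch: with probability I the
        target action drives the judgment; failing that, with probability
        U the transgression prime drives it; failing both, the respondent
        guesses 'wrong' with bias b."""
        p_target = 1.0 if target_wrong else 0.0
        p_prime = 1.0 if prime_wrong else 0.0
        return I * p_target + (1 - I) * (U * p_prime + (1 - U) * b)
    ```

    Fitting such a tree to response frequencies across prime/target combinations is what lets the task decompose observed judgments into the separate Intentional, Unintentional, and bias components.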

  11. An Overview of Numerical Weather Prediction on Various Scales

    NASA Astrophysics Data System (ADS)

    Bao, J.-W.

    2009-04-01

    The increasing public need for detailed weather forecasts, along with advances in computer technology, has motivated many research institutes and national weather forecasting centers to develop and run global as well as regional numerical weather prediction (NWP) models at high resolutions (i.e., horizontal resolutions of ~10 km or finer for global models and ~1 km or finer for regional models, with ~60 or more vertical levels). Running NWP models at high horizontal and vertical resolutions requires a non-hydrostatic dynamic core with horizontal grid configurations and vertical coordinates appropriate for those resolutions. Development of advanced numerics will also be needed for high-resolution global and regional models, in particular when the models are applied to transport problems and air quality applications. In addition to the challenges in numerics, the NWP community faces the challenge of developing physics parameterizations well suited to high-resolution NWP models. For example, when NWP models are run at resolutions of ~5 km or finer, much more detailed microphysics parameterizations than those currently used in NWP models will become important. Another example is that regional NWP models at ~1 km resolution or finer only partially resolve the convective energy-containing eddies in the lower troposphere; parameterizations to account for the subgrid diffusion associated with unresolved turbulence still need to be developed. Further, physically sound parameterizations of air-sea interaction will be a critical component of tropical NWP models, particularly hurricane prediction models. In this review presentation, the above issues are elaborated on and approaches to address them are discussed.

  12. The Relations between Implicit Intelligence Beliefs, Autonomous Academic Motivation, and School Persistence Intentions: A Mediation Model

    ERIC Educational Resources Information Center

    Renaud-Dubé, Andréanne; Guay, Frédéric; Talbot, Denis; Taylor, Geneviève; Koestner, Richard

    2015-01-01

    This study attempts to test a model in which the relation between implicit theories of intelligence and students' school persistence intentions are mediated by intrinsic, identified, introjected, and external regulations. Six hundred and fifty students from a high school were surveyed. Contrary to expectations, results from ESEM analyses indicated…

  13. Pion and Kaon Lab Frame Differential Cross Sections for Intermediate Energy Nucleus-Nucleus Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.

    2008-01-01

    Space radiation transport codes require accurate models for hadron production in intermediate energy nucleus-nucleus collisions. Codes require cross sections to be written in terms of lab frame variables and it is important to be able to verify models against experimental data in the lab frame. Several models are compared to lab frame data. It is found that models based on algebraic parameterizations are unable to describe intermediate energy differential cross section data. However, simple thermal model parameterizations, when appropriately transformed from the center of momentum to the lab frame, are able to account for the data.
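
    The centre-of-momentum to lab-frame transformation mentioned is a standard Lorentz boost along the beam axis. A sketch in units with c = 1:

    ```python
    import math

    def cm_to_lab(E_cm, pz_cm, beta):
        """Boost a particle's energy and longitudinal momentum from the
        centre-of-momentum frame to the lab frame (units with c = 1).
        beta is the CM frame velocity relative to the lab."""
        gamma = 1.0 / math.sqrt(1.0 - beta * beta)
        E_lab = gamma * (E_cm + beta * pz_cm)
        pz_lab = gamma * (pz_cm + beta * E_cm)
        return E_lab, pz_lab
    ```

    Applying this boost pointwise to a thermal-model spectrum defined in the CM frame yields the lab-frame differential cross section; the invariant mass E² − p² is preserved, which is a useful consistency check.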

  14. Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2016-06-01

    This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.
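
    The efficiency gain from implicit treatment can be illustrated on a toy relaxation equation du/dt = −λ(u − u_eq): backward Euler is unconditionally stable, so the pseudo-time step can far exceed the explicit stability limit of 2/λ. This sketches the principle only, not the UGKS discretization itself:

    ```python
    def implicit_relax(u0, u_eq, lam, dt, nsteps):
        """Backward-Euler iteration toward the steady state of
        du/dt = -lam * (u - u_eq). Each step solves the implicit
        update u_new = u_old - dt*lam*(u_new - u_eq) in closed form."""
        u = u0
        for _ in range(nsteps):
            u = (u + dt * lam * u_eq) / (1.0 + dt * lam)
        return u
    ```

    With dt = 100/λ (fifty times the explicit limit) the error shrinks by a factor 1/(1 + λ dt) per step, so convergence to steady state takes only a handful of iterations, which is the same mechanism behind the one-to-two-order speedup reported for the implicit UGKS.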

  15. Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2013-12-01

    In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic - even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.
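
    The ED/MF combination has a compact standard form for the total turbulent flux of a scalar φ: w'φ' = −K ∂φ/∂z + M(φ_u − φ̄), a local diffusive term plus a non-local plume term. A direct transcription (the numerical values below are made up for illustration):

    ```python
    def edmf_flux(K, dphi_dz, M, phi_updraft, phi_mean):
        """EDMF turbulent flux of a scalar: eddy-diffusivity (local
        mixing) term plus mass-flux (non-local plume) term.

        K           -- eddy diffusivity [m^2/s]
        dphi_dz     -- mean vertical gradient of the scalar
        M           -- updraft mass flux [m/s, kinematic]
        phi_updraft -- scalar value inside the plume
        phi_mean    -- grid-mean scalar value
        """
        return -K * dphi_dz + M * (phi_updraft - phi_mean)
    ```

    Stochastic variants of EDMF typically sample plume properties (entrainment, updraft area) from distributions rather than using single deterministic values, which is the extension argued for in the abstract.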

  16. Topside Electron Density Representations for Middle and High Latitudes: A Topside Parameterization for E-CHAIM Based On the NeQuick

    NASA Astrophysics Data System (ADS)

    Themens, David R.; Jayachandran, P. T.; Bilitza, Dieter; Erickson, Philip J.; Häggström, Ingemar; Lyashenko, Mykhaylo V.; Reid, Benjamin; Varney, Roger H.; Pustovalova, Ljubov

    2018-02-01

    In this study, we present a topside model representation to be used by the Empirical Canadian High Arctic Ionospheric Model (E-CHAIM). In the process, we also present a comprehensive evaluation of the NeQuick's, and by extension the International Reference Ionosphere's, topside electron density model for middle and high latitudes in the Northern Hemisphere. Using data gathered from all available incoherent scatter radars, topside sounders, and Global Navigation Satellite System radio occultation satellites, we show that the current NeQuick parameterization suboptimally represents the shape of the topside electron density profile at these latitudes and performs poorly in the representation of seasonal and solar cycle variations of the topside scale thickness. Despite this, the simple, one-variable NeQuick model is a powerful tool for modeling the topside ionosphere. By refitting the parameters that define the maximum topside scale thickness and the rate of increase of the scale height within the NeQuick topside model function, r and g, respectively, and refitting the model's parameterization of the scale height at the F region peak, H0, we find considerable improvement in the NeQuick's ability to represent the topside shape and behavior. Building on these results, we present a new topside model extension of E-CHAIM based on the revised NeQuick function. Overall, root-mean-square errors in topside electron density are improved over the traditional International Reference Ionosphere/NeQuick topside by 31% for the new NeQuick parameterization and by 36% for the newly proposed E-CHAIM topside.
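
    The published NeQuick topside is a semi-Epstein layer whose scale height grows with altitude through the parameters H0, g, and r discussed above. A sketch using the nominal NeQuick constants (g = 0.125, r = 100), the values the study refits:

    ```python
    import math

    def nequick_topside(h, NmF2, hmF2, H0, g=0.125, r=100.0):
        """Topside electron density from a semi-Epstein layer with a
        height-varying scale height, following the published NeQuick
        functional form.

        h    -- altitude [km] (h >= hmF2)
        NmF2 -- peak electron density [el/m^3]
        hmF2 -- F2 peak height [km]
        H0   -- scale height at the F2 peak [km]
        """
        dh = h - hmF2
        H = H0 * (1.0 + r * g * dh / (r * H0 + g * dh))  # grows toward r*H0... asymptote
        z = dh / H
        return 4.0 * NmF2 * math.exp(z) / (1.0 + math.exp(z)) ** 2
    ```

    At the peak (h = hmF2) the expression reduces exactly to NmF2, and the density decays monotonically above it, with the decay rate controlled by how quickly H inflates through g and r.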

  17. Sensitivity of U.S. summer precipitation to model resolution and convective parameterizations across gray zone resolutions

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson

    2017-03-01

    Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising venues for improving climate simulations of water cycle processes.

  18. Free Energy, Enthalpy and Entropy from Implicit Solvent End-Point Simulations.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2018-01-01

    Free energy is the key quantity for describing the thermodynamics of biological systems. In this perspective we consider the calculation of free energy, enthalpy and entropy from end-point molecular dynamics simulations. Since the enthalpy may be calculated as the ensemble average over equilibrated simulation snapshots, the difficulties of free energy calculation ultimately reduce to the calculation of the entropy of the system, and in particular of the solvent entropy. In the last two decades implicit solvent models have been used to circumvent the problem and to take solvent entropy into account implicitly in the solvation terms. More recently, outstanding advances in both implicit solvent models and entropy calculations have made the goal of free energy estimation from end-point simulations more feasible than ever before. We briefly review the basic theory and discuss these advances in light of practical applications.
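
    The end-point recipe described reduces to F ≈ ⟨U⟩ − TS once the solvent is treated implicitly: the enthalpy is estimated as the ensemble average of snapshot energies (solute energy plus implicit solvation terms) and the entropy comes from a separate estimator. A minimal sketch; the entropy value is an assumed external input, and units are whatever the inputs use:

    ```python
    def end_point_free_energy(snapshot_energies, temperature, entropy):
        """End-point estimate: enthalpy as the ensemble average of
        per-snapshot energies, minus T times an externally estimated
        entropy (e.g. from quasi-harmonic or nearest-neighbor methods)."""
        enthalpy = sum(snapshot_energies) / len(snapshot_energies)
        free_energy = enthalpy - temperature * entropy
        return free_energy, enthalpy
    ```

    A binding free energy would then be the difference of such estimates between complex and separated species, which is where the cancellation of errors in the implicit solvation terms becomes critical.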

  19. Algorithmically scalable block preconditioner for fully implicit shallow-water equations in CAM-SE

    DOE PAGES

    Lott, P. Aaron; Woodward, Carol S.; Evans, Katherine J.

    2014-10-19

    Performing accurate and efficient numerical simulation of global atmospheric climate models is challenging due to the disparate length and time scales over which physical processes interact. Implicit solvers enable the physical system to be integrated with a time step commensurate with the processes being studied. The dominant cost of an implicit time step is the ancillary linear system solves, so we have developed a preconditioner aimed at improving the efficiency of these solves. Our preconditioner is based on an approximate block factorization of the linearized shallow-water equations and has been implemented within the spectral element dynamical core of the Community Atmosphere Model (CAM-SE). In this paper we discuss the development and scalability of the preconditioner for a suite of test cases with the implicit shallow-water solver within CAM-SE.
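
    The block factorization at the heart of such preconditioners can be illustrated with scalar "blocks": eliminate the (1,1) block, solve the Schur-complement equation, then back-substitute. The real preconditioner applies this structure to the coupled velocity-height blocks of the linearized shallow-water operator with approximate inverses; this sketch uses exact scalars:

    ```python
    def schur_block_solve(a, b, c, d, r1, r2):
        """Solve the 2x2 block system [[a, b], [c, d]] @ [x1, x2] = [r1, r2]
        via the Schur-complement factorization that block preconditioners
        approximate (scalar 'blocks' stand in for matrix blocks)."""
        s = d - c * b / a           # Schur complement of the (1,1) block
        x2 = (r2 - c * r1 / a) / s  # reduced solve for the second unknown
        x1 = (r1 - b * x2) / a      # back-substitution for the first
        return x1, x2
    ```

    In practice a ≈ A⁻¹ and s ≈ S⁻¹ are replaced by cheap approximate solves (e.g. a few multigrid or Jacobi sweeps), trading exactness of the preconditioner for per-iteration cost.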

  20. Training Implicit Social Anxiety Associations: An Experimental Intervention

    PubMed Central

    Clerkin, Elise M.; Teachman, Bethany A.

    2010-01-01

    The current study investigates an experimental anxiety reduction intervention among a highly socially anxious sample (N=108; n=36 per Condition; 80 women). Using a conditioning paradigm, our goal was to modify implicit social anxiety associations to directly test the premise from cognitive models that biased cognitive processing may be causally related to anxious responding. Participants were trained to preferentially process non-threatening information through repeated pairings of self-relevant stimuli and faces indicating positive social feedback. As expected, participants in this positive training condition (relative to our two control conditions) displayed less negative implicit associations following training, and were more likely to complete an impromptu speech (though they did not report less anxiety during the speech). These findings offer partial support for cognitive models and indicate that implicit associations are not only correlated with social anxiety, they may be causally related to anxiety reduction as well. PMID:20102788

  1. A multi-dimensional nonlinearly implicit, electromagnetic Vlasov-Darwin particle-in-cell (PIC) algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacón, Luis; CoCoMans Team

    2014-10-01

    For decades, the Vlasov-Darwin model has been recognized as attractive for PIC simulations in non-radiative electromagnetic regimes (avoiding radiative noise issues). However, the Darwin model results in elliptic field equations that render explicit time integration unconditionally unstable. Improving on linearly implicit schemes, fully implicit PIC algorithms for both electrostatic and electromagnetic regimes, with exact discrete energy and charge conservation properties, have recently been developed in 1D. This study builds on these algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the particle-field equations in multiple dimensions. The algorithm conserves energy, charge, and canonical momentum exactly, even with grid packing. A simple fluid preconditioner allows efficient use of large timesteps, O(√(mi/me) c/vTe) larger than the explicit CFL limit. We demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D3V.

  2. Training implicit social anxiety associations: an experimental intervention.

    PubMed

    Clerkin, Elise M; Teachman, Bethany A

    2010-04-01

    The current study investigates an experimental anxiety reduction intervention among a highly socially anxious sample (N=108; n=36 per Condition; 80 women). Using a conditioning paradigm, our goal was to modify implicit social anxiety associations to directly test the premise from cognitive models that biased cognitive processing may be causally related to anxious responding. Participants were trained to preferentially process non-threatening information through repeated pairings of self-relevant stimuli and faces indicating positive social feedback. As expected, participants in this positive training condition (relative to our two control conditions) displayed less negative implicit associations following training, and were more likely to complete an impromptu speech (though they did not report less anxiety during the speech). These findings offer partial support for cognitive models and indicate that implicit associations are not only correlated with social anxiety, they may be causally related to anxiety reduction as well. (c) 2010 Elsevier Ltd. All rights reserved.

  3. Creating and parameterizing patient-specific deep brain stimulation pathway-activation models using the hyperdirect pathway as an example

    PubMed Central

    Gunalan, Kabilar; Chaturvedi, Ashutosh; Howell, Bryan; Duchin, Yuval; Lempka, Scott F.; Patriat, Remi; Sapiro, Guillermo; Harel, Noam; McIntyre, Cameron C.

    2017-01-01

    Background: Deep brain stimulation (DBS) is an established clinical therapy, and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as in clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially, and important technical details are often missing from published reports. Objective: Provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation. Methods: Integration of multiple software tools (e.g. COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson’s disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Results: Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Conclusion: Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time-demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation. PMID:28441410

  4. Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone

    NASA Astrophysics Data System (ADS)

    Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo

    2017-12-01

    The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
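    As a rough illustration of the correlated-step structure described above (not the authors' inverse method), a two-class SMM can be simulated with a transition matrix whose diagonal entries exceed 0.5, making a particle likely to remain in its current velocity class; all numbers below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dt_class = np.array([2.0, 0.5])   # hypothetical mean travel times: slow, fast class
    T = np.array([[0.7, 0.3],         # transition probabilities between classes;
                  [0.3, 0.7]])        # diagonal > 0.5 encodes velocity correlation
    n_particles, n_steps = 5000, 10

    arrival = np.zeros(n_particles)   # arrival times at a downstream plane (the BTC)
    for p in range(n_particles):
        state = rng.integers(2)
        for _ in range(n_steps):
            arrival[p] += rng.exponential(dt_class[state])
            state = rng.choice(2, p=T[state])  # correlated successive steps
    ```

    Setting the off-diagonal entries to 0.5 would recover an uncorrelated random walk; the inverse problem treated in the paper is to estimate T from two measured BTCs alone.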

  5. A gradient enhanced plasticity-damage microplane model for concrete

    NASA Astrophysics Data System (ADS)

    Zreid, Imadeddin; Kaliske, Michael

    2018-03-01

    Computational modeling of concrete poses two main types of challenges. The first is the mathematical description of local response for such a heterogeneous material under all stress states, and the second is the stability and efficiency of the numerical implementation in finite element codes. The paper at hand presents a comprehensive approach addressing both issues. Adopting the microplane theory, a combined plasticity-damage model is formulated and regularized by an implicit gradient enhancement. The plasticity part introduces a new microplane smooth 3-surface cap yield function, which provides a stable numerical solution within an implicit finite element algorithm. The damage part utilizes a split, which can describe the transition of loading between tension and compression. Regularization of the model by the implicit gradient approach eliminates the mesh sensitivity and numerical instabilities. Identification methods for model parameters are proposed and several numerical examples of plain and reinforced concrete are carried out for illustration.

  6. Numerical solution of 3D Navier-Stokes equations with upwind implicit schemes

    NASA Technical Reports Server (NTRS)

    Marx, Yves P.

    1990-01-01

    An upwind MUSCL-type implicit scheme for the three-dimensional Navier-Stokes equations is presented. Comparisons between different approximate Riemann solvers (Roe and Osher) are performed, and the influence of the reconstruction schemes on the accuracy of the solution, as well as on the convergence of the method, is studied. A new limiter is introduced in order to remove the problems usually associated with non-linear upwind schemes. The implementation of a diagonal upwind implicit operator for the three-dimensional Navier-Stokes equations is also discussed. Finally, the turbulence modeling is assessed. Good predictions of separated flows are demonstrated when a non-equilibrium turbulence model is used.

  7. Fourier-Legendre spectral methods for incompressible channel flow

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Hussaini, M. Y.

    1984-01-01

    An iterative collocation technique is described for modeling implicit viscosity in three-dimensional incompressible wall-bounded shear flow. The viscosity can vary temporally and in the vertical direction. Channel flow is modeled with a Fourier-Legendre approximation, and the mean streamwise advection is treated implicitly. Explicit terms are handled with an Adams-Bashforth method to increase the allowable time step for the calculation of the implicit terms. The algorithm is applied to low-amplitude unstable waves in a plane Poiseuille flow at a Reynolds number of 7500. Results obtained with the Legendre method are compared with those obtained using Chebyshev polynomials; comparable accuracy is obtained for the predicted perturbation kinetic energy with both discretizations.
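    The mixed treatment described above (one term implicit, explicit terms advanced with Adams-Bashforth) can be sketched for a scalar model problem du/dt = L·u + N(u); the coefficients below are illustrative, not taken from the paper.

    ```python
    # Minimal IMEX sketch (assumed setup): stiff linear term L*u treated with
    # backward Euler, mild nonlinearity N(u) with second-order Adams-Bashforth.
    L = -2.0                         # illustrative stiff linear coefficient

    def N(u):                        # illustrative mild explicit nonlinearity
        return 0.1 * u**2

    dt, u = 0.1, 1.0
    N_old = N(u)                     # bootstrap AB2: first step degenerates to Euler
    for _ in range(50):
        N_new = N(u)
        rhs = u + dt * (1.5 * N_new - 0.5 * N_old)   # AB2 for the explicit part
        u = rhs / (1.0 - dt * L)                     # implicit solve for the linear part
        N_old = N_new
    ```

    Treating only the stiff linear term implicitly keeps each step cheap (a scalar division here, a banded solve in a PDE) while the AB2 extrapolation preserves second-order accuracy for the explicit terms.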

  8. Sea breeze: Induced mesoscale systems and severe weather

    NASA Technical Reports Server (NTRS)

    Nicholls, M. E.; Pielke, R. A.; Cotton, W. R.

    1990-01-01

    Sea-breeze-deep convective interactions over the Florida peninsula were investigated using a cloud/mesoscale numerical model. The objective was to gain a better understanding of sea-breeze and deep convective interactions over the Florida peninsula using a high resolution convectively explicit model and to use these results to evaluate convective parameterization schemes. A 3-D numerical investigation of Florida convection was completed. The Kuo and Fritsch-Chappell parameterization schemes are summarized and evaluated.

  9. Using Field and Satellite Measurements to Improve Snow and Riming Processes in Cloud Resolving Models

    NASA Technical Reports Server (NTRS)

    Colle, Brian A.; Molthan, Andrew L.

    2013-01-01

    The representation of clouds in climate and weather models is a driver in forecast uncertainty. Cloud microphysics parameterizations are challenged by having to represent a diverse range of ice species. Key characteristics of predicted ice species include habit and fall speed, and complex interactions that result from mixed-phased processes like riming. Our proposed activity leverages Global Precipitation Measurement (GPM) Mission ground validation studies to improve parameterizations

  10. A comparison of physician implicit racial bias towards adults versus children

    PubMed Central

    Johnson, Tiffani J.; Winger, Daniel G.; Hickey, Robert W.; Switzer, Galen E.; Miller, Elizabeth; Nguyen, Margaret B.; Saladino, Richard A.; Hausmann, Leslie R. M.

    2016-01-01

    Background and Objectives: The general population and most physicians have implicit racial bias against black adults. Pediatricians also have implicit bias against black adults, albeit less than other specialties. There is no published research on the implicit racial attitudes of pediatricians or other physicians towards children. Our objectives were to compare implicit racial bias towards adults versus children among resident physicians working in a pediatric emergency department (ED), and to assess whether bias varied by specialty (pediatrics, emergency medicine, or other), gender, race, age, and year of training. Methods: We measured the implicit racial bias of residents before a pediatric ED shift using the Adult and Child Race Implicit Association Tests (IATs). Generalized linear models compared Adult and Child IAT scores and determined the association of participant demographics with Adult and Child IAT scores. Results: Among 91 residents, we found moderate pro-white/anti-black bias on both the Adult (M=0.49, SD=0.34) and Child Race IAT (M=0.55, SD=0.37). There was no significant difference between Adult and Child Race IAT scores (difference=0.06, p=0.15). Implicit bias was not associated with resident demographic characteristics, including specialty. Conclusions: This is the first study demonstrating that resident physicians have implicit racial bias against black children, similar to their levels of bias against black adults. Bias in our study did not vary by resident demographic characteristics, including specialty, suggesting that pediatric residents are as susceptible as other physicians to implicit bias. Future studies are needed to explore how physicians’ implicit attitudes towards parents and children may impact inequities in pediatric healthcare. PMID:27620844

  11. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  12. Analysis of Implicit Uncertain Systems. Part 1: Theoretical Framework

    DTIC Science & Technology

    1994-12-07

    Analysis of Implicit Uncertain Systems, Part I: Theoretical Framework. Fernando Paganini, John Doyle. December 7, 1994. Abstract (fragment): This paper ... model and a number of constraints relevant to the analysis problem under consideration. In Part I of this paper we propose a theoretical framework which ...

  13. Enhanced Representation of Soil NO Emissions in the Community Multiscale Air Quality (CMAQ) Model Version 5.0.2

    NASA Technical Reports Server (NTRS)

    Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.

    2016-01-01

    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.

  14. Multimodel Uncertainty Changes in Simulated River Flows Induced by Human Impact Parameterizations

    NASA Technical Reports Server (NTRS)

    Liu, Xingcai; Tang, Qiuhong; Cui, Huijuan; Mu, Mengfei; Gerten, Dieter; Gosling, Simon; Masaki, Yoshimitsu; Satoh, Yusuke; Wada, Yoshihide

    2017-01-01

    Human impacts increasingly affect the global hydrological cycle and indeed dominate hydrological changes in some regions. Hydrologists have sought to identify the human-impact-induced hydrological variations via parameterizing anthropogenic water uses in global hydrological models (GHMs). The consequently increased model complexity is likely to introduce additional uncertainty among GHMs. Here, using four GHMs, between-model uncertainties are quantified in terms of the ratio of signal to noise (SNR) for average river flow during 1971-2000, simulated in two experiments, with representation of human impacts (VARSOC) and without (NOSOC). This is the first quantitative investigation of between-model uncertainty resulting from the inclusion of human impact parameterizations. Results show that the between-model uncertainties in terms of SNRs in the VARSOC annual flow are larger (about 2 for the globe, with varied magnitude for different basins) than those in the NOSOC, which is particularly significant in most areas of Asia and in areas north of the Mediterranean Sea. The SNR differences are mostly negative (-20 to 5, indicating higher uncertainty) for basin-averaged annual flow. The VARSOC high flow shows slightly lower uncertainties than the NOSOC simulations, with SNR differences mostly ranging from -20 to 20. The uncertainty differences between the two experiments are significantly related to the fraction of irrigated area in a basin. The large additional uncertainties introduced into VARSOC simulations by the inclusion of parameterizations of human impacts raise the urgent need for GHM development based on a better understanding of human impacts. Differences in the parameterizations of irrigation, reservoir regulation, and water withdrawals are discussed as potential directions of improvement for future GHM development. We also discuss the advantages of statistical approaches for reducing the between-model uncertainties, and the importance of calibrating GHMs not only for better performance in historical simulations but also for more robust and credible future projections of hydrological changes under a changing environment.

  15. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2017-04-01

    In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference: Williams PD, Howe NJ, Gregory JM, Smith RS, and Joshi MM (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781. http://dx.doi.org/10.1175/JCLI-D-15-0746.1
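    The noise definition varied in the experiments above (amplitude and decorrelation time) corresponds to a red-noise process. A minimal sketch, with illustrative values, of generating such a zero-mean AR(1) perturbation as might be added to a temperature tendency:

    ```python
    import numpy as np

    # Assumed form: AR(1) noise with stationary std sigma and decorrelation time tau.
    rng = np.random.default_rng(1)
    dt, tau, sigma, n = 1.0, 10.0, 0.5, 100_000
    phi = np.exp(-dt / tau)                               # lag-1 autocorrelation
    w = rng.normal(0.0, sigma * np.sqrt(1 - phi**2), n)   # keeps variance at sigma**2

    eta = np.empty(n)
    eta[0] = 0.0
    for i in range(1, n):
        eta[i] = phi * eta[i - 1] + w[i]
    # eta: zero-mean, variance ~sigma**2, e-folding decorrelation time ~tau
    ```

    Scaling the innovation by sqrt(1 - phi**2) is what lets amplitude and decorrelation time be varied independently, as in the sensitivity suite described above.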

  16. Aviation NOx-induced CH4 effect: Fixed mixing ratio boundary conditions versus flux boundary conditions

    NASA Astrophysics Data System (ADS)

    Khodayari, Arezoo; Olsen, Seth C.; Wuebbles, Donald J.; Phoenix, Daniel B.

    2015-07-01

    Atmospheric chemistry-climate models are often used to calculate the effect of aviation NOx emissions on atmospheric ozone (O3) and methane (CH4). Due to the long (∼10 yr) atmospheric lifetime of methane, model simulations must be run for long time periods, typically more than 40 simulation years, to reach steady state when using CH4 emission fluxes. Because of the computational expense of such long runs, studies have traditionally used specified CH4 mixing ratio lower boundary conditions (BCs) and then applied a simple parameterization based on the change in CH4 lifetime between the control and NOx-perturbed simulations to estimate the change in CH4 concentration induced by NOx emissions. In this parameterization a feedback factor (typically a value of 1.4) is used to account for the feedback of CH4 concentrations on its lifetime. Modeling studies comparing simulations using CH4 surface fluxes and fixed mixing ratio BCs are used to examine the validity of this parameterization. The latest version of the Community Earth System Model (CESM), with the CAM5 atmospheric model, was used for this study. Aviation NOx emissions for 2006 were obtained from the AEDT (Aviation Environmental Design Tool) global commercial aircraft emissions inventory. Results show a 31.4 ppb change in CH4 concentration when estimated using the parameterization and a 1.4 feedback factor, and a 28.9 ppb change when the concentration was directly calculated in the CH4 flux simulations. The model-calculated value for the CH4 feedback on its own lifetime agrees well with the 1.4 feedback factor. Systematic comparisons between the separate runs indicated that the parameterization technique overestimates the CH4 concentration change by 8.6%. Therefore, it is concluded that the estimation technique is good to within ∼10% and decreases the computational requirements of our simulations by nearly a factor of 8.
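    The comparison reported above reduces to simple arithmetic on the two quoted CH4 changes (the factor-1.4 split of the parameterized estimate is an illustrative decomposition, not a number given in the abstract):

    ```python
    param_est = 31.4   # ppb: CH4 change from the lifetime parameterization (feedback factor 1.4)
    direct    = 28.9   # ppb: CH4 change computed directly in the flux-BC simulations

    # Hypothetical decomposition: estimate without the CH4-lifetime feedback applied
    no_feedback = param_est / 1.4

    # Relative overestimate of the parameterization vs. the direct calculation (~8.6%)
    overestimate_pct = 100.0 * (param_est - direct) / direct
    ```

    The ~8.6% discrepancy is what justifies the abstract's conclusion that the technique is good to within ∼10%.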

  17. Assessment of two physical parameterization schemes for desert dust emissions in an atmospheric chemistry general circulation model

    NASA Astrophysics Data System (ADS)

    Astitha, M.; Abdel Kader, M.; Pozzer, A.; Lelieveld, J.

    2012-04-01

    Atmospheric particulate matter, and more specifically desert dust, has been the topic of numerous research studies owing to its wide range of impacts on the environment and climate and the uncertainty in characterizing and quantifying these impacts on a global scale. In this work we present two physical parameterizations of desert dust production that have been incorporated in the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). The scope of this work is to assess the impact of the two physical parameterizations on the global distribution of desert dust and to highlight the advantages and disadvantages of each technique. The dust concentration and deposition were evaluated using the AEROCOM dust dataset for the year 2000, and data from the MODIS and MISR satellites as well as sun-photometer data from the AERONET network were used to compare the modelled aerosol optical depth with observations. The implementation of the two parameterizations and the simulations at relatively high spatial resolution (T106, ~1.1 deg) have highlighted the large spatial heterogeneity of the dust emission sources as well as the importance of the input parameters (soil size and texture, vegetation, surface wind speed). Sensitivity simulations with and without nudging toward ECMWF reanalysis data have also shown remarkable differences for some areas. Both parameterizations have revealed the difficulty of simulating all arid regions with the same assumptions and mechanisms. Depending on the arid region, each emission scheme performs more or less satisfactorily, which leads to the necessity of treating each desert differently. Even though this is a difficult task to accomplish in a global model, some recommendations are given and ideas for future improvements are outlined.

  18. The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.

    The role of moist processes and the possibility of error cascades from cloud-scale processes affecting the intrinsic predictable time scale of a high-resolution convection-permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth, which may ultimately limit the intrinsic predictability of the TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller to larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in convection-permitting simulations at 3.3 km resolution than in simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales, typically longer than the model time step, allow the effects of microphysical tendencies to average out, so convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and the stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection-permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of the TCs, which further supports the need for probabilistic forecasts of these events, even at the mesoscales.

  19. The Sensitivity of WRF Daily Summertime Simulations over West Africa to Alternative Parameterizations. Part 1: African Wave Circulation

    NASA Technical Reports Server (NTRS)

    Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew

    2014-01-01

    The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional-atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of simulations in September. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. The 64 combinations combine WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and the NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in the second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year, while a parallel benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations but less realistic spatiotemporal variability. The largest favorable impact on WRF vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, which results in higher correlations against reanalysis than simulations using the Kain-Fritsch convection scheme. Other parameterizations have a less obvious impact, although WRF configurations incorporating one particular surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.

  20. Alcohol-Approach Inclinations and Drinking Identity as Predictors of Behavioral Economic Demand for Alcohol

    PubMed Central

    Ramirez, Jason J.; Dennhardt, Ashley A.; Baldwin, Scott A.; Murphy, James G.; Lindgren, Kristen P.

    2016-01-01

    Behavioral economic demand curve indices of alcohol consumption reflect decisions to consume alcohol at varying costs. Although these indices predict alcohol-related problems beyond established predictors, little is known about the determinants of elevated demand. Two cognitive constructs that may underlie alcohol demand are alcohol-approach inclinations and drinking identity. The aim of this study was to evaluate implicit and explicit measures of these constructs as predictors of alcohol demand curve indices. College student drinkers (N = 223, 59% female) completed implicit and explicit measures of drinking identity and alcohol-approach inclinations at three timepoints separated by three-month intervals, and completed the Alcohol Purchase Task to assess demand at Time 3. Given no change in our alcohol-approach inclinations and drinking identity measures over time, random intercept-only models were used to predict two demand indices: Amplitude, which represents maximum hypothetical alcohol consumption and expenditures, and Persistence, which represents sensitivity to increasing prices. When modeled separately, implicit and explicit measures of drinking identity and alcohol-approach inclinations positively predicted demand indices. When implicit and explicit measures were included in the same model, both measures of drinking identity predicted Amplitude, but only explicit drinking identity predicted Persistence. In contrast, explicit measures of alcohol-approach inclinations, but not implicit measures, predicted both demand indices. Therefore, there was more support for explicit, versus implicit, measures as unique predictors of alcohol demand. Overall, drinking identity and alcohol-approach inclinations both exhibit positive associations with alcohol demand and represent potentially modifiable cognitive constructs that may underlie elevated demand in college student drinkers. PMID:27379444

  1. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years, with practical tools such as PEST being instrumental in making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, the additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into the underlying sources of potential misapplication can be gained and some guidelines for overcoming them can be developed. © 2009 National Ground Water Association.
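    As a generic sketch of the role Tikhonov regularization plays here (not PEST's implementation): with more pilot-point parameters than observations, the unregularized normal equations are singular, while a preferred-value penalty makes the estimate unique. All matrices below are synthetic.

    ```python
    import numpy as np

    # Minimize ||A x - b||^2 + lam * ||x - x_pref||^2  (Tikhonov / preferred-value form)
    rng = np.random.default_rng(0)
    n_obs, n_par = 20, 50                    # more parameters than observations
    A = rng.normal(size=(n_obs, n_par))      # synthetic sensitivity matrix
    x_true = rng.normal(size=n_par)
    b = A @ x_true + 0.01 * rng.normal(size=n_obs)   # noisy synthetic observations

    x_pref = np.zeros(n_par)                 # preferred (prior) parameter values
    lam = 1.0                                # regularization weight
    # Normal equations of the regularized problem; A^T A alone is rank-deficient
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_par), A.T @ b + lam * x_pref)
    ```

    The regularization weight plays the role PEST's control variables manage adaptively: too small and the flexible pilot points overfit noise, too large and the estimate collapses toward the preferred values.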

  2. The implementation and validation of improved landsurface hydrology in an atmospheric general circulation model

    NASA Technical Reports Server (NTRS)

    Johnson, Kevin D.; Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    Landsurface hydrological parameterizations are implemented in the NASA Goddard Institute for Space Studies (GISS) General Circulation Model (GCM). These parameterizations are: (1) runoff and evapotranspiration functions that include the effects of subgrid-scale spatial variability and use physically based equations of hydrologic flux at the soil surface, and (2) a realistic soil moisture diffusion scheme for the movement of water in the soil column. A one-dimensional climate model with a complete hydrologic cycle is used to screen the basic sensitivities of the hydrological parameterizations before implementation into the full three-dimensional GCM. Results of the final simulation with the GISS GCM and the new landsurface hydrology indicate that the runoff rate, especially in the tropics, is significantly improved. As a result, the remaining components of the heat and moisture balance show comparable improvements when compared to observations. The validation of model results is carried out from the large global (ocean and landsurface) scale to the zonal, continental, and finally the finer river-basin scales.

  3. Advancing parabolic operators in thermodynamic MHD models: Explicit super time-stepping versus implicit schemes with Krylov solvers

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.

    2017-05-01

    We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable to or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods, are briefly discussed.
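    The implicit half of such a comparison reduces, per time step, to solving a symmetric positive-definite linear system. Below is a minimal sketch for 1-D heat conduction (a stand-in for the anisotropic Spitzer operator) using a hand-rolled conjugate gradient iteration with a point-Jacobi preconditioner; it illustrates the PCG approach, not the MAS code:

```python
def backward_euler_heat_step(u, r, tol=1e-10, maxit=200):
    """One backward Euler diffusion step: solve (I + r*L) u_new = u, where
    L is the 1-D Laplacian stencil [-1, 2, -1] with zero Dirichlet ends and
    r = dt/dx^2.  The SPD system is solved by conjugate gradients with a
    point-Jacobi (diagonal) preconditioner.  A sketch, not the MAS solver."""
    n = len(u)

    def matvec(v):
        out = []
        for i in range(n):
            left = v[i - 1] if i > 0 else 0.0
            right = v[i + 1] if i < n - 1 else 0.0
            out.append(v[i] + r * (2.0 * v[i] - left - right))
        return out

    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    minv = 1.0 / (1.0 + 2.0 * r)        # Jacobi: inverse of the constant diagonal
    x = [0.0] * n
    res = list(u)                        # initial residual b - A*0 = b
    z = [minv * ri for ri in res]
    p = list(z)
    rz = dot(res, z)
    for _ in range(maxit):
        ap = matvec(p)
        alpha = rz / dot(p, ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        res = [ri - alpha * ai for ri, ai in zip(res, ap)]
        if dot(res, res) ** 0.5 < tol:
            break
        z = [minv * ri for ri in res]
        rz_new = dot(res, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# A step far beyond the explicit stability limit (r <= 0.5) stays bounded.
spike = [0.0] * 21
spike[10] = 1.0
u1 = backward_euler_heat_step(spike, r=50.0)
```

    The STS alternative replaces this per-step linear solve with a longer sequence of cheap explicit stages, which is exactly the performance trade-off the abstract evaluates at scale.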

  4. ASIS v1.0: an adaptive solver for the simulation of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Cariolle, Daniel; Moinat, Philippe; Teyssèdre, Hubert; Giraud, Luc; Josse, Béatrice; Lefèvre, Franck

    2017-04-01

    This article reports on the development and tests of the adaptive semi-implicit scheme (ASIS) solver for the simulation of atmospheric chemistry. To solve the ordinary differential equation systems associated with the time evolution of the species concentrations, ASIS adopts a one-step linearized implicit scheme with specific treatments of the Jacobian of the chemical fluxes. It conserves mass and has a time-stepping module to control the accuracy of the numerical solution. In idealized box-model simulations, ASIS gives results similar to the higher-order implicit schemes derived from the Rosenbrock and Gear methods and requires less computation and run time at the moderate precision required for atmospheric applications. When implemented in the MOCAGE chemical transport model and the Laboratoire de Météorologie Dynamique Mars general circulation model, the ASIS solver performs well and reveals weaknesses and limitations of the original semi-implicit solvers used by these two models. ASIS can be easily adapted to various chemical schemes, and further developments are foreseen to increase its computational efficiency and to include the computation of the concentrations of the species in the aqueous phase in addition to gas-phase chemistry.
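    For a single stiff species, the one-step linearized implicit idea reduces to dividing the explicit tendency by (1 - dt*J), where J is the Jacobian of the chemical tendency. The sketch below shows that form on a scalar production/loss equation; it is a generic illustration of the scheme family, not the ASIS algorithm itself:

```python
def linearized_implicit_step(y, dt, f, jac):
    """One step of a first-order linearized implicit scheme:
    solve (1 - dt*J) * dy = dt * f(y), then y_new = y + dy.
    A generic sketch of the semi-implicit idea, not the ASIS code."""
    return y + dt * f(y) / (1.0 - dt * jac(y))

# Stiff production/loss equation y' = P - L*y with equilibrium y_eq = P/L.
P, L = 1.0, 1000.0
f = lambda y: P - L * y
jac = lambda y: -L            # df/dy: the (scalar) Jacobian of the tendency
y = 0.0
for _ in range(10):
    y = linearized_implicit_step(y, dt=0.1, f=f, jac=jac)   # dt*L = 100 >> 1
# Despite dt being 100x the chemical time scale, the iteration is stable
# and relaxes monotonically toward y_eq = 0.001.
```

    For this linear problem the error contracts by a factor 1/(1 + dt*L) each step, which is why the scheme tolerates time steps far beyond the explicit stability limit.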

  5. Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.

    2016-12-01

    The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
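    The IMEX idea can be shown on a scalar model problem: a stiff linear term (standing in for acoustic modes) is taken implicitly while a slow forcing is taken explicitly. This first-order sketch is illustrative only; the ARK schemes studied above are higher-order, multi-stage versions of the same splitting, and none of the names below come from the Tempest or ARKode code:

```python
import math

def imex_euler(y, t, dt, a, g):
    """First-order IMEX (implicit-explicit) Euler step for y' = -a*y + g(t):
    the stiff linear decay is treated implicitly, the slow forcing
    explicitly.  A minimal sketch of the IMEX splitting idea."""
    return (y + dt * g(t)) / (1.0 + dt * a)

a = 1000.0                      # stiff decay rate (acoustic-mode stand-in)
g = lambda t: math.cos(t)       # slow, nonstiff forcing
dt = 0.05                       # far above the explicit limit dt < 2/a = 0.002
y, t = 0.0, 0.0
for _ in range(200):
    y = imex_euler(y, t, dt, a, g)
    t += dt
# y stays bounded and tracks the small quasi-steady response ~cos(t)/a,
# without the stiff term forcing a tiny explicit time step.
```

    The design question the abstract studies is which terms go on the implicit side: HEVI puts only the vertical dynamics there, while broader implicit splittings need effective preconditioning to stay competitive.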

  6. Parameterization of Shortwave Cloud Optical Properties for a Mixture of Ice Particle Habits for use in Atmospheric Models

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the effective particle size of a mixture of ice habits, the ice water amount, and spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of each individual ice habit.

  7. Multiresponse modeling of variably saturated flow and isotope tracer transport for a hillslope experiment at the Landscape Evolution Observatory

    NASA Astrophysics Data System (ADS)

    Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter

    2016-10-01

    This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.

  8. The cloud-phase feedback in the Super-parameterized Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Burt, M. A.; Randall, D. A.

    2016-12-01

    Recent comparisons of observations and climate model simulations by I. Tan and colleagues have suggested that the Wegener-Bergeron-Findeisen (WBF) process tends to be too active in climate models, making too much cloud ice, and resulting in an exaggerated negative cloud-phase feedback on climate change. We explore the WBF process and its effect on shortwave cloud forcing in present-day and future climate simulations with the Community Earth System Model, and its super-parameterized counterpart. Results show that SP-CESM has much less cloud ice and a weaker cloud-phase feedback than CESM.

  9. Short-term Wind Forecasting at Wind Farms using WRF-LES and Actuator Disk Model

    NASA Astrophysics Data System (ADS)

    Kirkil, Gokhan

    2017-04-01

    Short-term wind forecasts are obtained for a wind farm on mountainous terrain using WRF-LES. Multi-scale simulations are also performed using different PBL parameterizations. Turbines are parameterized using an actuator disc model. The LES models improved the forecasts. Statistical error analysis is performed and ramp events are analyzed. The complex topography of the study area affects model performance; in particular, the accuracy of wind forecasts was poor for cross valley-mountain flows. By means of LES, we gain new knowledge about the sources of spatial and temporal variability of wind fluctuations, such as the configuration of wind turbines.
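    Actuator-disc parameterizations rest on classical 1-D momentum theory, which links the turbine thrust coefficient C_T to the axial induction factor a through C_T = 4a(1 - a). A minimal sketch of those textbook relations (not the WRF-LES implementation):

```python
def induction_factor(ct):
    """Axial induction factor a from the thrust coefficient via classical
    1-D momentum theory, C_T = 4a(1-a); valid for a below roughly 0.4.
    A sketch of the actuator-disc relations, not the WRF-LES code."""
    # Smaller root of 4a^2 - 4a + C_T = 0.
    return 0.5 * (1.0 - (1.0 - ct) ** 0.5)

ct = 0.75
a = induction_factor(ct)           # a = 0.25
u_inf = 10.0                       # free-stream wind speed [m/s]
u_disk = u_inf * (1.0 - a)         # velocity at the rotor disc: 7.5 m/s
u_wake = u_inf * (1.0 - 2.0 * a)   # far-wake velocity: 5.0 m/s
```

    In an LES the disc appears to the flow as a momentum sink sized by this thrust; the wake deficit 2a is then resolved and transported by the simulated turbulence.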

  10. Inference of cirrus cloud properties using satellite-observed visible and infrared radiances. I - Parameterization of radiance fields

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Liou, Kuo-Nan; Takano, Yoshihide

    1993-01-01

    The impact of using phase functions for spherical droplets and hexagonal ice crystals to analyze radiances from cirrus is examined. Adding-doubling radiative transfer calculations are employed to compute radiances for different cloud thicknesses and heights over various backgrounds. These radiances are used to develop parameterizations of top-of-the-atmosphere visible reflectance and IR emittance using tables of reflectances as a function of cloud optical depth, viewing and illumination angles, and microphysics. This parameterization, which includes Rayleigh scattering, ozone absorption, variable cloud height, and an anisotropic surface reflectance, reproduces the computed top-of-the-atmosphere reflectances with an accuracy of +/- 6 percent for four microphysical models: 10-micron water droplet, small symmetric crystal, cirrostratus, and cirrus uncinus. The accuracy is twice that of previous models.
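    Applying such a parameterization amounts to interpolating in precomputed reflectance tables. A generic bilinear-interpolation sketch over optical depth and solar zenith angle, with hypothetical table values rather than the published coefficients:

```python
from bisect import bisect_right

def interp2(xs, ys, table, x, y):
    """Bilinear interpolation in a small lookup table, the basic operation
    behind table-based reflectance parameterizations."""
    # Clamp to the bracketing cell so queries at the edges stay in range.
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

taus = [0.0, 1.0, 4.0, 16.0]        # cloud optical depth grid
mus = [0.2, 0.6, 1.0]               # cosine of solar zenith angle
refl = [[0.05, 0.06, 0.07],         # hypothetical TOA reflectances,
        [0.20, 0.18, 0.16],         # not the Minnis et al. values
        [0.45, 0.42, 0.40],
        [0.70, 0.68, 0.66]]
r = interp2(taus, mus, refl, 2.0, 0.8)   # 0.25 for this hypothetical table
```

    The retrieval then runs in the other direction: given an observed reflectance and the angles, invert the table to infer cloud optical depth.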

  11. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Liu, Yangang

    2014-12-18

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have begun to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude less computational cost, allowing for more realistic representation of cloud-radiation interactions in large-scale models.
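    The key statistic here is the spatial autocorrelation function of the subgrid cloud field. A generic sketch of its computation for a 1-D periodic field (illustrative only, not the parameterization itself):

```python
def autocorrelation(field, lag):
    """Spatial autocorrelation of a 1-D field at a given lag (periodic
    wraparound): the statistic used above to encode subgrid cloud
    structure.  A generic sketch, not the published parameterization."""
    n = len(field)
    mean = sum(field) / n
    var = sum((v - mean) ** 2 for v in field) / n
    if var == 0.0:
        return 1.0          # a constant field is perfectly correlated
    cov = sum((field[i] - mean) * (field[(i + lag) % n] - mean)
              for i in range(n)) / n
    return cov / var

# A striped (period-8) cloud mask decorrelates and anti-correlates with lag.
mask = [1, 1, 1, 1, 0, 0, 0, 0] * 4
rho0 = autocorrelation(mask, 0)   # 1.0 by construction
rho4 = autocorrelation(mask, 4)   # -1.0: a half-period shift flips the stripes
```

    Two cloud fields with the same mean cover but different autocorrelation lengths interact with radiation differently, which is the information a PDF-only scheme discards.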

  12. On the performance of explicit and implicit algorithms for transient thermal analysis

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.

    1980-09-01

    The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, have been selected, and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped parameter program, and a special purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail, such as avoiding thin or short high-conducting elements, can sometimes reduce the stiffness to the extent that explicit methods become advantageous.
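    The stiffness trade-off described here can be reproduced with the simplest pair of schemes. For the scalar cooling law y' = -lam*y, forward Euler is stable only for dt < 2/lam, while backward Euler is unconditionally stable (a textbook illustration, not the GEAR package):

```python
def explicit_euler(y, dt, lam, steps):
    """Forward Euler for y' = -lam*y; stable only if dt < 2/lam."""
    for _ in range(steps):
        y = y + dt * (-lam * y)
    return y

def implicit_euler(y, dt, lam, steps):
    """Backward Euler for the same equation; stable for any dt > 0."""
    for _ in range(steps):
        y = y / (1.0 + dt * lam)
    return y

# A stiff cooling law (time constant 1 ms) stepped at dt = 10 ms:
lam, dt = 1000.0, 0.01
y_exp = explicit_euler(1.0, dt, lam, 20)   # |1 - dt*lam| = 9: diverges
y_imp = implicit_euler(1.0, dt, lam, 20)   # decays smoothly toward 0
```

    A thin, highly conducting finite element plays the role of the large lam here, which is why removing such elements can make explicit methods viable again.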

  13. Explicit and implicit cognition: a preliminary test of a dual-process theory of cognitive vulnerability to depression.

    PubMed

    Haeffel, Gerald J; Abramson, Lyn Y; Brazy, Paige C; Shah, James Y; Teachman, Bethany A; Nosek, Brian A

    2007-06-01

    Two studies were conducted to test a dual-process theory of cognitive vulnerability to depression. According to this theory, implicit and explicit cognitive processes have differential effects on depressive reactions to stressful life events. Implicit processes are hypothesized to be critical in determining an individual's immediate affective reaction to stress, whereas explicit cognitions are thought to be more involved in long-term depressive reactions. Consistent with hypotheses, the results of study 1 (cross-sectional; N=237) showed that implicit, but not explicit, cognitions predicted immediate affective reactions to a lab stressor. Study 2 (longitudinal; N=251) also supported the dual-process model of cognitive vulnerability to depression. Results showed that both the implicit and explicit measures interacted with life stress to predict prospective changes in depressive symptoms. However, when both implicit and explicit predictors were entered into a regression equation simultaneously, only the explicit measure interacted with stress to remain a unique predictor of depressive symptoms over the five-week prospective interval.

  14. Predictive Validity of Explicit and Implicit Threat Overestimation in Contamination Fear

    PubMed Central

    Green, Jennifer S.; Teachman, Bethany A.

    2012-01-01

    We examined the predictive validity of explicit and implicit measures of threat overestimation in relation to contamination-fear outcomes using structural equation modeling. Undergraduate students high in contamination fear (N = 56) completed explicit measures of contamination threat likelihood and severity, as well as looming vulnerability cognitions, in addition to an implicit measure of danger associations with potential contaminants. Participants also completed measures of contamination-fear symptoms, as well as subjective distress and avoidance during a behavioral avoidance task, and state looming vulnerability cognitions during an exposure task. The latent explicit (but not implicit) threat overestimation variable was a significant and unique predictor of contamination fear symptoms and self-reported affective and cognitive facets of contamination fear. In contrast, the implicit (but not explicit) latent measure predicted behavioral avoidance (at the level of a trend). Results are discussed in terms of differential predictive validity of implicit versus explicit markers of threat processing and multiple fear response systems. PMID:24073390

  15. Land-Atmosphere Coupling in the Multi-Scale Modelling Framework

    NASA Astrophysics Data System (ADS)

    Kraus, P. M.; Denning, S.

    2015-12-01

    The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land-surface model instances, rather than passing averaged atmospheric variables to a single instance of a land-surface model, the logical next step in model development, has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface-temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, an artifact associated with its 1-D radiation scheme. The small spatial scale of the CRM (~4 km), as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity, which permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces the temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced conceptual gap between model resolution and parameterized processes.

  16. An empirical test of a diffusion model: predicting clouded apollo movements in a novel environment.

    PubMed

    Ovaskainen, Otso; Luoto, Miska; Ikonen, Iiro; Rekola, Hanna; Meyke, Evgeniy; Kuussaari, Mikko

    2008-05-01

    Functional connectivity is a fundamental concept in conservation biology because it sets the level of migration and gene flow among local populations. However, functional connectivity is difficult to measure, largely because it is hard to acquire and analyze movement data from heterogeneous landscapes. Here we apply a Bayesian state-space framework to parameterize a diffusion-based movement model using capture-recapture data on the endangered clouded apollo butterfly. We test whether the model is able to disentangle the inherent movement behavior of the species from landscape structure and sampling artifacts, which is a necessity if the model is to be used to examine how movements depend on landscape structure. We show that this is the case by demonstrating that the model, parameterized with data from a reference landscape, correctly predicts movements in a structurally different landscape. In particular, the model helps to explain why a movement corridor that was constructed as a management measure failed to increase movement among local populations. We illustrate how the parameterized model can be used to derive biologically relevant measures of functional connectivity, thus linking movement data with models of spatial population dynamics.

  17. Biogenetic models of psychopathology, implicit guilt, and mental illness stigma.

    PubMed

    Rüsch, Nicolas; Todd, Andrew R; Bodenhausen, Galen V; Corrigan, Patrick W

    2010-10-30

    Whereas some research suggests that acknowledgment of the role of biogenetic factors in mental illness could reduce mental illness stigma by diminishing perceived responsibility, other research has cautioned that emphasizing biogenetic aspects of mental illness could produce the impression that mental illness is a stable, intrinsic aspect of a person ("genetic essentialism"), increasing the desire for social distance. We assessed genetic and neurobiological causal attributions about mental illness among 85 people with serious mental illness and 50 members of the public. The perceived responsibility of persons with mental illness for their condition, as well as fear and social distance, was assessed by self-report. Automatic associations between Mental Illness and Guilt and between Self and Guilt were measured by the Brief Implicit Association Test. Among the general public, endorsement of biogenetic models was associated with not only less perceived responsibility, but also greater social distance. Among people with mental illness, endorsement of genetic models had only negative correlates: greater explicit fear and stronger implicit self-guilt associations. Genetic models may have unexpected negative consequences for implicit self-concept and explicit attitudes of people with serious mental illness. An exclusive focus on genetic models may therefore be problematic for clinical practice and anti-stigma initiatives. Copyright © 2009 Elsevier Ltd. All rights reserved.

  18. Parameterization of N2O5 Reaction Probabilities on the Surface of Particles Containing Ammonium, Sulfate, and Nitrate

    EPA Science Inventory

    A comprehensive parameterization was developed for the heterogeneous reaction probability (γ) of N2O5 as a function of temperature, relative humidity, particle composition, and phase state, for use in advanced air quality models. The reaction probabilities o...

  19. Merged data models for multi-parameterized querying: Spectral data base meets GIS-based map archive

    NASA Astrophysics Data System (ADS)

    Naß, A.; D'Amore, M.; Helbert, J.

    2017-09-01

    Current and upcoming planetary missions deliver a huge amount of different data (remote sensing data, in-situ data, and derived products). In this contribution we present how different databases can be managed and merged to enable multi-parameterized querying based on a common spatial context.

  20. Cloud Microphysics Parameterization in a Shallow Cumulus Cloud Simulated by a Lagrangian Cloud Model

    NASA Astrophysics Data System (ADS)

    Oh, D.; Noh, Y.; Hoffmann, F.; Raasch, S.

    2017-12-01

    The Lagrangian cloud model (LCM) is a fundamentally new approach to cloud simulation, in which the flow field is simulated by large eddy simulation and droplets are treated as Lagrangian particles undergoing cloud microphysics. The LCM enables us to investigate raindrop formation and examine the parameterization of cloud microphysics directly by tracking the history of individual Lagrangian droplets. Analysis of the magnitude of raindrop formation and of the background physical conditions at the moment each Lagrangian droplet grows from a cloud droplet into a raindrop in a shallow cumulus cloud reveals how, and under which conditions, raindrops are formed. It also provides information on how autoconversion and accretion appear and evolve within a cloud, and how they are affected by various factors such as cloud water mixing ratio, rain water mixing ratio, aerosol concentration, drop size distribution, and dissipation rate. Based on these results, the parameterizations of autoconversion and accretion of Kessler (1969), Tripoli and Cotton (1980), Beheng (1994), and Khairoutdinov and Kogan (2000) are examined, and modifications to improve them are proposed.
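    Of the bulk schemes examined, Kessler's (1969) autoconversion is the simplest: a threshold-linear conversion of cloud water to rain water. A sketch with commonly quoted constants (the rate coefficient and threshold below are typical textbook values, not any one model's tuning):

```python
def kessler_autoconversion(qc, k=1e-3, qc_crit=0.5e-3):
    """Kessler-type autoconversion rate dqr/dt = k * (qc - qc_crit) once
    cloud water qc (kg/kg) exceeds a threshold.  k = 1e-3 s^-1 and
    qc_crit = 0.5 g/kg are commonly quoted values, shown here to
    illustrate the threshold-linear form, not a specific model's tuning."""
    return k * max(qc - qc_crit, 0.0)

rate_thin = kessler_autoconversion(0.3e-3)    # below threshold: no rain forms
rate_thick = kessler_autoconversion(1.5e-3)   # 1e-3 * 1e-3 = 1e-6 kg/kg/s
```

    Droplet-by-droplet LCM output makes it possible to test exactly this kind of bulk rate: sum the actual cloud-to-rain conversions in a grid volume and compare against the rate the formula predicts from the local mixing ratios.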

  1. Editor’s message: Groundwater modeling fantasies - Part 1, adrift in the details

    USGS Publications Warehouse

    Voss, Clifford I.

    2011-01-01

    Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it. …Simplicity does not precede complexity, but follows it. (Epigrams in Programming by Alan Perlis, a computer scientist; Perlis 1982).A doctoral student creating a groundwater model of a regional aquifer put individual circular regions around data points where he had hydraulic head measurements, so that each region’s parameter values could be adjusted to get perfect fit with the measurement at that point. Nearly every measurement point had its own parameter-value region. After calibration, the student was satisfied because his model correctly reproduced all of his data. Did he really get the true field values of parameters in this manner? Did this approach result in a realistic, meaningful and useful groundwater model?—truly doubtful. Is this story a sign of a common style of educating hydrogeology students these days? Where this is the case, major changes are needed to add back ‘common-sense hydrogeology’ to the curriculum. Worse, this type of modeling approach has become an industry trend in application of groundwater models to real systems, encouraged by the advent of automatic model calibration software that has no problem providing numbers for as many parameter value estimates as desired. Just because a computer program can easily create such values does not mean that they are in any sense useful—but unquestioning practitioners are happy to follow such software developments, perhaps because of an implied promise that highly parameterized models, here referred to as ‘complex’, are somehow superior. This and other fallacies are implicit in groundwater modeling studies, most usually not acknowledged when presenting results. This two-part Editor’s Message deals with the state of groundwater modeling: part 1 (here) focuses on problems and part 2 (Voss 2011) on prospects.

  2. How Mentor Identity Evolves: Findings From a 10-Year Follow-Up Study of a National Professional Development Program.

    PubMed

    Balmer, Dorene F; Darden, Alix; Chandran, Latha; D'Alessandro, Donna; Gusic, Maryellen E

    2018-02-20

    Despite academic medicine's endorsement of professional development and mentoring, little is known about what junior faculty learn about mentoring in the implicit curriculum of professional development programs, and how their mentor identity evolves in this context. The authors explored what faculty-participants in the Educational Scholars Program implicitly learned about mentoring and how the implicit curriculum affected mentor identity transformation. Semi-structured interviews with 19 of 36 former faculty-participants were conducted in 2016. Consistent with constructivist grounded theory, data collection and analysis overlapped. The authors created initial codes informed by Ibarra's model for identity transformation, iteratively revised codes based on patterns in incoming data, and created visual representations of relationships amongst codes in order to gain a holistic and shared understanding of the data. In the implicit curriculum, faculty-participants learned the importance of having multiple mentors, the value of peer mentors, and the incremental process of becoming a mentor. The authors used Ibarra's model to understand how the implicit curriculum worked to transform mentor identity: faculty-participants reported observing mentors, experimenting with different ways to mentor and to be a mentor, and evaluating themselves as mentors. The Educational Scholars Program's implicit curriculum facilitated faculty-participants taking on a mentor identity via opportunities it afforded to watch mentors, experiment with mentoring, and evaluate self as mentor, key ingredients for professional identity construction. Leaders of professional development programs can develop faculty as mentors by capitalizing on what faculty-participants learn in the implicit curriculum and deliberately structuring post-graduation mentoring opportunities.

  3. Flexible explicit but rigid implicit learning in a visuomotor adaptation task

    PubMed Central

    Bond, Krista M.

    2015-01-01

    There is mounting evidence for the idea that performance in a visuomotor rotation task can be supported by both implicit and explicit forms of learning. The implicit component of learning has been well characterized in previous experiments and is thought to arise from the adaptation of an internal model driven by sensorimotor prediction errors. However, the role of explicit learning is less clear, and previous investigations aimed at characterizing the explicit component have relied on indirect measures such as dual-task manipulations, posttests, and descriptive computational models. To address this problem, we developed a new method for directly assaying explicit learning by having participants verbally report their intended aiming direction on each trial. While our previous research employing this method has demonstrated the possibility of measuring explicit learning over the course of training, it was only tested over a limited scope of manipulations common to visuomotor rotation tasks. In the present study, we sought to better characterize explicit and implicit learning over a wider range of task conditions. We tested how explicit and implicit learning change as a function of the specific visual landmarks used to probe explicit learning, the number of training targets, and the size of the rotation. We found that explicit learning was remarkably flexible, responding appropriately to task demands. In contrast, implicit learning was strikingly rigid, with each task condition producing a similar degree of implicit learning. These results suggest that explicit learning is a fundamental component of motor learning and has been overlooked or conflated in previous visuomotor tasks. PMID:25855690
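    In aim-report designs like this one, the implicit component is typically recovered per trial by subtracting the verbally reported aiming direction from the total compensation. A sketch of that standard decomposition, with hypothetical numbers rather than data from the study:

```python
def implicit_component(hand_angles, aim_reports):
    """Per-trial implicit learning estimated as total compensation minus
    the verbally reported (explicit) aiming direction, the standard
    decomposition in aim-report visuomotor rotation studies.
    Angles in degrees; the values below are illustrative only."""
    return [h - a for h, a in zip(hand_angles, aim_reports)]

# Late in training against a 45-degree rotation (hypothetical trials):
hand = [42.0, 44.0, 45.0]    # total compensation from measured hand direction
aim = [30.0, 30.0, 31.0]     # explicit re-aiming reported by the participant
implicit = implicit_component(hand, aim)   # [12.0, 14.0, 14.0]
```

    The paper's finding maps directly onto this decomposition: across conditions the explicit term flexes with task demands while the residual implicit term stays near a fixed magnitude.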

  4. Reactivity to negative affect in smokers: the role of implicit associations and distress tolerance in smoking cessation.

    PubMed

    Cameron, Amy; Reed, Kathleen Palm; Ninnemann, Andrew

    2013-12-01

    Avoidance of negative affect is one motivational factor that explains smoking cessation relapse during cessation attempts. This negative reinforcement model of smoking cessation and relapse has demonstrated the importance of one's ability to tolerate nicotine withdrawal symptoms, particularly negative affect states, in remaining abstinent from smoking. Distress tolerance and implicit associations are two individual constructs that may influence the strength of this relationship. In this pilot study the authors examined implicit associations related to avoidance and negative affect using a modified Implicit Association Test (IAT), and the relationship of these associations to distress tolerance and smoking relapse. In total, 40 participants were recruited through community flyers as part of a larger smoking cessation study. Participants completed a brief smoking history, behavioral distress tolerance assessments, and the modified IAT. Smoking status was assessed via phone 3 days and 6 days post-quit date. Results from a Cox proportional hazard model revealed that implicit associations between avoidance and negative affect were significantly negatively correlated with time to relapse after a smoking cessation attempt, whereas the behavioral distress tolerance assessments did not predict time to relapse. This study provides novel information about the cognitive associations that may underlie avoidant behavior in smokers, and may be important for understanding smoking relapse when negative affect states are particularly difficult to tolerate. The authors discuss the importance of implicit associations in understanding smoking relapse and how they can be targeted in treatment. © 2013.

  5. Higher-order hybrid implicit/explicit FDTD time-stepping

    NASA Astrophysics Data System (ADS)

    Tierens, W.

    2016-12-01

    Both partially implicit FDTD methods, and symplectic FDTD methods of high temporal accuracy (3rd or 4th order), are well documented in the literature. In this paper we combine them: we construct a conservative FDTD method which is fourth order accurate in time and is partially implicit. We show that the stability condition for this method depends exclusively on the explicit part, which makes it suitable for use in e.g. modelling wave propagation in plasmas.

  6. Structure and Dynamics of End-to-End Loop Formation of the Penta-Peptide Cys-Ala-Gly-Gln-Trp in Implicit Solvents

    DTIC Science & Technology

    2009-01-01

    implicit solvents on peptide structure and dynamics, we performed extensive molecular dynamics simulations on the penta-peptide Cys-Ala-Gly-Gln-Trp. Two...end-to-end distances and dihedral angles obtained from molecular dynamics simulations with implicit solvent models were in good agreement with those...to maintain the temperature of the systems. Introduction Molecular dynamics (MD) simulation techniques are widely used to study structure and

  7. Implicit and Explicit Associations with Erotic Stimuli in Sexually Functional and Dysfunctional Men.

    PubMed

    van Lankveld, Jacques; Odekerken, Ingrid; Kok-Verhoeven, Lydia; van Hooren, Susan; de Vries, Peter; van den Hout, Anja; Verboon, Peter

    2015-08-01

    Although conceptual models of sexual functioning have suggested a major role for implicit cognitive processing in sexual functioning, this has thus far only been investigated in women. The aim of this study was to investigate the role of implicit cognition in sexual functioning in men. Men with (N = 29) and without sexual dysfunction (N = 31) were compared. Participants performed two single-target implicit association tests (ST-IAT), measuring the implicit association of visual erotic stimuli with attributes representing, respectively, valence ('liking') and motivation ('wanting'). Participants also rated the erotic pictures shown in the ST-IAT on the dimensions of valence, attractiveness, and sexual excitement to assess their explicit associations with these erotic stimuli. Participants completed the International Index of Erectile Functioning as a continuous measure of sexual functioning. Unexpectedly, compared with sexually functional men, sexually dysfunctional men were found to show stronger implicit associations of erotic stimuli with positive valence than with negative valence. Level of sexual functioning, however, was not predicted by either explicit or implicit associations. Level of sexual distress was predicted by explicit valence ratings, with positive ratings predicting higher levels of sexual distress. Men with and without sexual dysfunction differed significantly with regard to implicit liking. Research recommendations and implications are discussed. © 2015 International Society for Sexual Medicine.

  8. Remote Sensing Protocols for Parameterizing an Individual, Tree-Based, Forest Growth and Yield Model

    DTIC Science & Technology

    2014-09-01

    Leaf-Off Tree Crowns in Small Footprint, High Sampling Density LIDAR Data from Eastern Deciduous Forests in North America." Remote Sensing of...William A. 2003. "Crown-Diameter Prediction Models for 87 Species of Stand-Grown Trees in the Eastern United States." Southern Journal of Applied...ERDC/CERL TR-14-18, Base Facilities Environmental Quality: Remote Sensing Protocols for Parameterizing an Individual, Tree-Based

  9. Influence of the vertical mixing parameterization on the modeling results of the Arctic Ocean hydrology

    NASA Astrophysics Data System (ADS)

    Iakshina, D. F.; Golubeva, E. N.

    2017-11-01

    The vertical distribution of the hydrological characteristics in the upper ocean layer is mostly formed under the influence of turbulent and convective mixing, which are not resolved in the system of equations for the large-scale ocean. It is therefore necessary to include additional parameterizations of these processes in the numerical models. In this paper we carry out a comparative analysis of different vertical mixing parameterizations in simulations of the climatic variability of Arctic water and sea ice circulation. The 3D regional numerical model for the Arctic and North Atlantic developed at the ICMMG SB RAS (Institute of Computational Mathematics and Mathematical Geophysics of the Siberian Branch of the Russian Academy of Sciences) and the GOTM package (General Ocean Turbulence Model, http://www.gotm.net/) were used as the numerical instruments. NCEP/NCAR reanalysis data were used to determine the surface fluxes related to ice and ocean. The following turbulence closure schemes were used for the vertical mixing parameterizations: 1) an integration scheme based on the Richardson criterion (RI); 2) a second-order TKE scheme with Canuto-A coefficients (CANUTO); 3) a first-order TKE scheme with the coefficients of Schumann and Gerz (TKE-1); 4) the KPP scheme (KPP). In addition we investigated some important characteristics of the Arctic Ocean state, including the intensity of Atlantic water inflow, the ice cover state and the freshwater content of the Beaufort Sea.
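    Richardson-number-based schemes of the kind listed first above typically make the eddy viscosity a decreasing function of the gradient Richardson number Ri. A hedged sketch of one classic member of this family, the Pacanowski-Philander form, with its commonly quoted default constants (used here purely for illustration, not as the coefficients of the paper's RI scheme):

```python
# Illustrative Richardson-number-dependent vertical eddy viscosity of the
# Pacanowski-Philander type: strong mixing for small Ri (shear-dominated),
# collapsing toward a small background value for large Ri (stably stratified).
# Constants are commonly quoted defaults, used here only for illustration.
def pp_viscosity(Ri, nu0=0.01, nu_b=1e-4, alpha=5.0, n=2):
    """Vertical eddy viscosity [m^2/s] as a function of the Richardson number."""
    Ri = max(Ri, 0.0)  # the scheme is usually capped for unstable (Ri < 0) profiles
    return nu0 / (1.0 + alpha * Ri) ** n + nu_b
```

    The monotonic decrease with Ri is the essential behavior: well-mixed, strongly sheared layers get large diffusivities, while stratified interior water keeps only the background value.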

  10. Assessing model uncertainty using hexavalent chromium and ...

    EPA Pesticide Factsheets

    Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations, with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective of this analysis is to characterize model uncertainty by evaluating the variance in estimates across several epidemiologic analyses. Methods: This analysis compared 7 publications analyzing two different chromate production sites in Ohio and Maryland. The Ohio cohort consisted of 482 workers employed from 1940-72, while the Maryland site employed 2,357 workers from 1950-74. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates such as age and race were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric for comparing variability in estimates across and within model forms. A total of 7 similarly parameterized analyses were considered across model forms, and 23 analyses with alternative parameterizations were considered within model form (14 Cox; 9 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients for 7 similar analyses ranged from 2.47

  11. Modeling and parameterization of horizontally inhomogeneous cloud radiative properties

    NASA Technical Reports Server (NTRS)

    Welch, R. M.

    1995-01-01

    One of the fundamental difficulties in modeling cloud fields is the large variability of cloud optical properties (liquid water content, reflectance, emissivity). The stratocumulus and cirrus clouds, under special consideration for FIRE, exhibit spatial variability on scales of 1 km or less. While it is impractical to model individual cloud elements, the research direction is to model statistical ensembles of cloud elements with mean cloud properties specified. The major areas of this investigation are: (1) analysis of cloud field properties; (2) intercomparison of cloud radiative model results with satellite observations; (3) radiative parameterization of cloud fields; and (4) development of improved cloud classification algorithms.

  12. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    NASA Astrophysics Data System (ADS)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail

    2011-01-01

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
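    The spherical parameterization of Pinheiro and Bates cited above guarantees a valid correlation matrix by construction: a set of angles defines a unit lower-triangular Cholesky factor, and the product L·Lᵀ is then automatically symmetric, positive semidefinite, and unit-diagonal. A minimal sketch (generic angles, not the paper's "cSigma" values, which are not given in the abstract):

```python
# Sketch of the Pinheiro-Bates (1996) spherical parameterization:
# angles -> unit-row Cholesky factor L -> correlation matrix C = L L^T,
# which is symmetric with unit diagonal and positive semidefinite by
# construction. Angle values here are arbitrary illustrations.
import math

def corr_from_angles(angles):
    """angles: list where row i (starting at the 2nd matrix row) holds i angles in (0, pi)."""
    n = len(angles) + 1
    L = [[0.0] * n for _ in range(n)]
    L[0][0] = 1.0
    for i, row in enumerate(angles, start=1):
        sin_prod = 1.0
        for j, theta in enumerate(row):
            L[i][j] = sin_prod * math.cos(theta)
            sin_prod *= math.sin(theta)
        L[i][i] = sin_prod  # remaining product of sines; each row has unit norm
    # C = L L^T
    return [[sum(L[a][k] * L[b][k] for k in range(n)) for b in range(n)]
            for a in range(n)]

C = corr_from_angles([[1.0], [0.5, 1.2]])  # a 3x3 example
```

    Because each row of L has unit Euclidean norm, every diagonal entry of C is exactly 1 and every off-diagonal entry lies in [-1, 1] regardless of the angle values, which is the mutual-consistency property the abstract emphasizes.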

  13. Mars global reference atmosphere model (Mars-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, Bonnie F.

    1992-01-01

    Mars-GRAM is an empirical model that parameterizes the temperature, pressure, density, and wind structure of the Martian atmosphere from the surface through thermospheric altitudes. In the lower atmosphere of Mars, the model is built around parameterizations of height, latitudinal, longitudinal, and seasonal variations of temperature determined from a survey of published measurements from the Mariner and Viking programs. Pressure and density are inferred from the temperature by making use of the hydrostatic and perfect gas law relationships. For the upper atmosphere, the thermospheric model of Stewart is used. A hydrostatic interpolation routine is used to ensure a smooth transition from the lower portion of the model to the Stewart thermospheric model. Other aspects of the model are discussed.
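    Inferring pressure and density from a temperature profile via the hydrostatic and ideal-gas relations, as described above, amounts to integrating dp/dz = -ρg with ρ = p/(RT). A hedged sketch under layer-constant temperatures; the constants below are nominal Mars values (surface pressure, gravity, CO2 gas constant) chosen for illustration, not Mars-GRAM's actual internals:

```python
# Hedged sketch of the hydrostatic + perfect-gas inference described above:
# integrate dp/dz = -rho * g with rho = p / (R * T) layer by layer.
# Constants are nominal Mars values for illustration only.
import math

G_MARS = 3.71    # m/s^2, Mars surface gravity (approximate)
R_CO2 = 191.0    # J/(kg K), specific gas constant for CO2 (approximate)
P_SURF = 610.0   # Pa, nominal mean surface pressure

def hydrostatic_profile(temps, dz):
    """temps: layer-mean temperatures [K] from the surface upward; dz: layer thickness [m].
    Returns (pressure at layer interfaces, density within each layer)."""
    p = [P_SURF]
    for T in temps:
        # within an isothermal layer: p_{k+1} = p_k * exp(-g * dz / (R * T))
        p.append(p[-1] * math.exp(-G_MARS * dz / (R_CO2 * T)))
    rho = [pk / (R_CO2 * T) for pk, T in zip(p, temps)]
    return p, rho

p, rho = hydrostatic_profile([210.0, 200.0, 190.0], dz=5000.0)
```

    The same relations run in reverse give the "hydrostatic interpolation" idea: requiring hydrostatic balance across the seam constrains how the lower model can be blended into the thermospheric model.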

  14. Pedotransfer functions in Earth system science: challenges and perspectives

    NASA Astrophysics Data System (ADS)

    Van Looy, K.; Minasny, B.; Nemes, A.; Verhoef, A.; Weihermueller, L.; Vereecken, H.

    2017-12-01

    We make a strong case for a new generation of pedotransfer functions (PTFs) currently being developed across the disciplines of Earth system science, offering strong prospects for the improvement of integrated process-based models, from local- to global-scale applications. PTFs are simple to complex knowledge rules that relate available soil information to the soil properties and variables needed to parameterize soil processes. To meet the methodological challenges for successful application in Earth system modeling, we highlight how PTF development needs to go hand in hand with suitable extrapolation and upscaling techniques such that the PTFs correctly capture the spatial heterogeneity of soils. The most actively pursued recent developments are related to parameterizations of solute transport, heat exchange, soil respiration and organic carbon content, root density and vegetation water uptake. We present an outlook and a stepwise approach to the development of a comprehensive set of PTFs that can be applied throughout a wide range of disciplines of Earth system science, with emphasis on land surface models. Novel sensing techniques and growing soil information availability provide a true breakthrough for this, yet further improvements are necessary in three domains: 1) determining unknown relationships and dealing with uncertainty in Earth system modeling; 2) spatially deploying this knowledge, with PTF validation at regional to global scales; and 3) integrating and linking the complex model parameterizations (coupled parameterization). We will show that integration is an achievable goal.

  15. A specific implicit sequence learning deficit as an underlying cause of dyslexia? Investigating the role of attention in implicit learning tasks.

    PubMed

    Staels, Eva; Van den Broeck, Wim

    2017-05-01

    Recently, a general implicit sequence learning deficit was proposed as an underlying cause of dyslexia. This new hypothesis was investigated in the present study by including a number of methodological improvements, for example, the inclusion of appropriate control conditions. The second goal of the study was to explore the role of attentional functioning in implicit and explicit learning tasks. In a 2 × 2 within-subjects design 4 tasks were administered in 30 dyslexic and 38 control children: an implicit and explicit serial reaction time (RT) task and an implicit and explicit contextual cueing task. A measure of attentional functioning was also administered. The entire learning curves of all tasks were analyzed using latent growth curve modeling in order to compare performances between groups and to examine the role of attentional functioning on the learning curves. The amount of implicit learning was similar for both groups. However, the dyslexic group showed slower RTs throughout the entire task. This group difference reduced and became nonsignificant after controlling for attentional functioning. Both implicit learning tasks, but none of the explicit learning tasks, were significantly affected by attentional functioning. Dyslexic children do not suffer from a specific implicit sequence learning deficit. The slower RTs of the dyslexic children throughout the entire implicit sequence learning process are caused by their comorbid attention problems and overall slowness. A key finding of the present study is that, in contrast to what was assumed for a long time, implicit learning relies on attentional resources, perhaps even more than explicit learning does. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Comparison of Physician Implicit Racial Bias Toward Adults Versus Children.

    PubMed

    Johnson, Tiffani J; Winger, Daniel G; Hickey, Robert W; Switzer, Galen E; Miller, Elizabeth; Nguyen, Margaret B; Saladino, Richard A; Hausmann, Leslie R M

    2017-03-01

    The general population and most physicians have implicit racial bias against black adults. Pediatricians also have implicit bias against black adults, albeit less than other specialties. There is no published research on the implicit racial attitudes of pediatricians or other physicians toward children. Our objectives were to compare implicit racial bias toward adults versus children among resident physicians working in a pediatric emergency department, and to assess whether bias varied by specialty (pediatrics, emergency medicine, or other), gender, race, age, and year of training. We measured implicit racial bias of residents before a pediatric emergency department shift using the Adult and Child Race Implicit Association Tests (IATs). Generalized linear models compared Adult and Child IAT scores and determined the association of participant demographics with Adult and Child IAT scores. Among 91 residents, we found moderate pro-white/anti-black bias on both the Adult (mean = 0.49, standard deviation = 0.34) and Child Race IAT (mean = 0.55, standard deviation = 0.37). There was no significant difference between Adult and Child Race IAT scores (difference = 0.06, P = .15). Implicit bias was not associated with resident demographic characteristics, including specialty. This is the first study demonstrating that resident physicians have implicit racial bias against black children, similar to levels of bias against black adults. Bias in our study did not vary by resident demographic characteristics, including specialty, suggesting that pediatric residents are as susceptible as other physicians to implicit bias. Future studies are needed to explore how physicians' implicit attitudes toward parents and children may impact inequities in pediatric health care. Copyright © 2016 Academic Pediatric Association. All rights reserved.
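    The IAT scores reported above (e.g. mean = 0.49) are conventionally expressed as D scores: the difference in mean response latency between incompatible and compatible pairing blocks, divided by the pooled standard deviation. A minimal sketch of that computation, using fabricated reaction times rather than any study data:

```python
# Sketch of the conventional IAT D-score: (mean incompatible latency minus
# mean compatible latency) divided by the pooled standard deviation of all
# latencies. The reaction times below are fabricated for illustration.
import statistics

def iat_d(compatible_ms, incompatible_ms):
    """D score from two lists of response latencies (milliseconds)."""
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return (statistics.mean(incompatible_ms)
            - statistics.mean(compatible_ms)) / pooled_sd

# Fabricated example: slower responses in the incompatible block
d = iat_d([600, 620, 640], [700, 720, 740])
```

    A positive D indicates faster responding when the stereotype-congruent pairings share a response key, which is how the "moderate pro-white/anti-black bias" magnitudes above are read.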

  17. Co-occurrence of social anxiety and depression symptoms in adolescence: differential links with implicit and explicit self-esteem?

    PubMed

    de Jong, P J; Sportel, B E; de Hullu, E; Nauta, M H

    2012-03-01

    Social anxiety and depression often co-occur. As low self-esteem has been identified as a risk factor for both types of symptoms, it may help to explain their co-morbidity. Current dual process models of psychopathology differentiate between explicit and implicit self-esteem. Explicit self-esteem would reflect deliberate self-evaluative processes whereas implicit self-esteem would reflect simple associations in memory. Previous research suggests that low explicit self-esteem is involved in both social anxiety and depression whereas low implicit self-esteem is only involved in social anxiety. We tested whether the association between symptoms of social phobia and depression can indeed be explained by low explicit self-esteem, whereas low implicit self-esteem is only involved in social anxiety. Adolescents during the first stage of secondary education (n=1806) completed the Revised Child Anxiety and Depression Scale (RCADS) to measure symptoms of social anxiety and depression, the Rosenberg Self-Esteem Scale (RSES) to index explicit self-esteem and the Implicit Association Test (IAT) to measure implicit self-esteem. There was a strong association between symptoms of depression and social anxiety that could be largely explained by participants' explicit self-esteem. Only for girls did implicit self-esteem and the interaction between implicit and explicit self-esteem show small cumulative predictive validity for social anxiety, indicating that the association between low implicit self-esteem and social anxiety was most evident for girls with relatively low explicit self-esteem. Implicit self-esteem showed no significant predictive validity for depressive symptoms. The findings support the view that both shared and differential self-evaluative processes are involved in depression and social anxiety.

  18. Transient modeling/analysis of hyperbolic heat conduction problems employing mixed implicit-explicit alpha method

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; D'Costa, Joseph F.

    1991-01-01

    This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involves time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.

  19. Assessing implicit models for nonpolar mean solvation forces: The importance of dispersion and volume terms

    PubMed Central

    Wagoner, Jason A.; Baker, Nathan A.

    2006-01-01

    Continuum solvation models provide appealing alternatives to explicit solvent methods because of their ability to reproduce solvation effects while alleviating the need for expensive sampling. Our previous work has demonstrated that Poisson-Boltzmann methods are capable of faithfully reproducing polar explicit solvent forces for dilute protein systems; however, the popular solvent-accessible surface area model was shown to be incapable of accurately describing nonpolar solvation forces at atomic-length scales. Therefore, alternate continuum methods are needed to reproduce nonpolar interactions at the atomic scale. In the present work, we address this issue by supplementing the solvent-accessible surface area model with additional volume and dispersion integral terms suggested by scaled particle models and Weeks–Chandler–Andersen theory, respectively. This more complete nonpolar implicit solvent model shows very good agreement with explicit solvent results and suggests that, although often overlooked, the inclusion of appropriate dispersion and volume terms are essential for an accurate implicit solvent description of atomic-scale nonpolar forces. PMID:16709675
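    The structure of the "more complete" nonpolar model described above is additive: a surface-area (cavity) term supplemented by a volume term and an attractive dispersion integral. A hedged sketch of that decomposition; the coefficient values and inputs are hypothetical placeholders, not the fitted parameters of the paper:

```python
# Illustrative sketch of the supplemented nonpolar solvation model structure:
#   dG_np = gamma * SASA + p * V + dG_disp
# i.e. a surface-area (cavity) term plus the volume and attractive-dispersion
# terms the abstract argues are essential. All coefficient values and inputs
# are hypothetical, chosen only to show the bookkeeping.
def nonpolar_energy(sasa, volume, disp_integral, gamma=0.005, pressure=0.035):
    """Nonpolar solvation free energy from additive terms (consistent units assumed)."""
    cavity_area_term = gamma * sasa          # classic SASA-only contribution
    cavity_volume_term = pressure * volume   # scaled-particle-motivated term
    return cavity_area_term + cavity_volume_term + disp_integral  # WCA-style attraction

dg = nonpolar_energy(sasa=1000.0, volume=500.0, disp_integral=-20.0)
```

    The point of the decomposition is visible even in this toy form: with the SASA term alone the attractive dispersion contribution (negative here) is simply missing, which is exactly the failure mode the authors report at atomic-length scales.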

  20. Molecular dynamics simulations of biological membranes and membrane proteins using enhanced conformational sampling algorithms☆

    PubMed Central

    Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji

    2016-01-01

    This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins. Guest Editors: J.C. Gumbart and Sergei Noskov. PMID:26766517
