Sample records for full space parameterization

  1. A General Framework for Thermodynamically Consistent Parameterization and Efficient Sampling of Enzymatic Reactions

    PubMed Central

    Saa, Pedro; Nielsen, Lars K.

    2015-01-01

    Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. Particularly, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The platform integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > −2 kJ/mol), a transition region (−2 > ΔGr > −20 kJ/mol) and a constant elasticity region (ΔGr < −20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach appropriately described not only the kinetic behaviour of these enzymes but also provided insights about the particular features underpinning the observed kinetics. Overall, this framework will enable systematic parameterization and sampling of enzymatic reactions. PMID:25874556
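
    The three elasticity regions above can be illustrated with the generic flux-force relation for a reversible reaction. A minimal sketch, assuming v ∝ (1 − exp(ΔGr/RT)) rather than the full GRASP parameterization, with RT ≈ 2.479 kJ/mol at 298 K:

      # Sketch: thermodynamic driving force and its sensitivity to the reaction
      # Gibbs energy. The flux-force relation v ∝ (1 - exp(dG/RT)) is a generic
      # assumption used for illustration, not the GRASP model itself.
      import numpy as np

      RT = 2.479                                   # kJ/mol at 298 K

      dG = np.linspace(-30.0, -0.1, 300)           # reaction Gibbs energy, kJ/mol
      drive = 1.0 - np.exp(dG / RT)                # thermodynamic driving force
      sens = -(np.exp(dG / RT) / RT) / drive       # d ln(v) / d(dG)

      # Steep near equilibrium, nearly constant once dG < -20 kJ/mol.
      for g in (-1.0, -10.0, -25.0):
          i = np.argmin(np.abs(dG - g))
          print(f"dG = {dG[i]:6.1f} kJ/mol  drive = {drive[i]:.3f}  "
                f"d ln v/d dG = {sens[i]: .3f}")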

  2. Optimal lattice-structured materials

    DOE PAGES

    Messner, Mark C.

    2016-07-09

    This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.

  3. Kinetic energy spectra, vertical resolution and dissipation in high-resolution atmospheric simulations.

    NASA Astrophysics Data System (ADS)

    Skamarock, W. C.

    2017-12-01

    We have performed week-long full-physics simulations with the MPAS global model at 15 km cell spacing using vertical mesh spacings of 800, 400, 200 and 100 meters in the mid-troposphere through the mid-stratosphere. We find that the horizontal kinetic energy spectra in the upper troposphere and stratosphere do not converge with increasing vertical resolution until we reach 200 meter level spacing. Examination of the solutions indicates that significant inertia-gravity waves are not vertically resolved at the lower vertical resolutions. Diagnostics from the simulations indicate that the primary kinetic energy dissipation results from the vertical mixing within the PBL parameterization and from the gravity-wave drag parameterization, with smaller but significant contributions from damping in the vertical transport scheme and from the horizontal filters in the dynamical core. Most of the kinetic energy dissipation in the free atmosphere occurs within breaking mid-latitude baroclinic waves. We will briefly review these results and their implications for atmospheric model configuration and for atmospheric dynamics, specifically the dynamics associated with the mesoscale kinetic energy spectrum.

  4. The implementation and validation of improved landsurface hydrology in an atmospheric general circulation model

    NASA Technical Reports Server (NTRS)

    Johnson, Kevin D.; Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    Landsurface hydrological parameterizations are implemented in the NASA Goddard Institute for Space Studies (GISS) General Circulation Model (GCM). These parameterizations are: (1) runoff and evapotranspiration functions that include the effects of subgrid scale spatial variability and use physically based equations of hydrologic flux at the soil surface, and (2) a realistic soil moisture diffusion scheme for the movement of water in the soil column. A one dimensional climate model with a complete hydrologic cycle is used to screen the basic sensitivities of the hydrological parameterizations before implementation into the full three dimensional GCM. Results of the final simulation with the GISS GCM and the new landsurface hydrology indicate that the runoff rate, especially in the tropics, is significantly improved. As a result, the remaining components of the heat and moisture balance show comparable improvements when compared to observations. The validation of model results is carried out from the large global (ocean and landsurface) scale to the zonal, continental, and finally the finer river basin scales.

  5. The implementation and validation of improved land-surface hydrology in an atmospheric general circulation model

    NASA Technical Reports Server (NTRS)

    Johnson, Kevin D.; Entekhabi, Dara; Eagleson, Peter S.

    1993-01-01

    New land-surface hydrologic parameterizations are implemented into the NASA Goddard Institute for Space Studies (GISS) General Circulation Model (GCM). These parameterizations are: 1) runoff and evapotranspiration functions that include the effects of subgrid-scale spatial variability and use physically based equations of hydrologic flux at the soil surface and 2) a realistic soil moisture diffusion scheme for the movement of water and root sink in the soil column. A one-dimensional climate model with a complete hydrologic cycle is used to screen the basic sensitivities of the hydrological parameterizations before implementation into the full three-dimensional GCM. Results of the final simulation with the GISS GCM and the new land-surface hydrology indicate that the runoff rate, especially in the tropics, is significantly improved. As a result, the remaining components of the heat and moisture balance show similar improvements when compared to observations. The validation of model results is carried out from the large global (ocean and land-surface) scale to the zonal, continental, and finally the regional river basin scales.

  6. The Separate Physics and Dynamics Experiment (SPADE) framework for determining resolution awareness: A case study of microphysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, William I.; Ma, Po-Lun; Xiao, Heng

    2013-08-29

    The ability to use multi-resolution dynamical cores for weather and climate modeling is pushing the atmospheric community towards developing scale-aware or, more specifically, resolution-aware parameterizations that will function properly across a range of grid spacings. Determining the resolution dependence of specific model parameterizations is difficult due to strong resolution dependencies in many pieces of the model. This study presents the Separate Physics and Dynamics Experiment (SPADE) framework that can be used to isolate the resolution-dependent behavior of specific parameterizations without conflating resolution dependencies from other portions of the model. To demonstrate the SPADE framework, the resolution dependence of the Morrison microphysics from the Weather Research and Forecasting model and the Morrison-Gettelman microphysics from the Community Atmosphere Model are compared for grid spacings spanning the cloud modeling gray zone. It is shown that the Morrison scheme has stronger resolution dependence than Morrison-Gettelman, and that the ability of Morrison-Gettelman to use partial cloud fractions is not the primary reason for this difference. This study also discusses how to frame the issue of resolution dependence, the meaning of which has often been assumed, but not clearly expressed, in the atmospheric modeling community. It is proposed that parameterization resolution dependence can be expressed in terms of "resolution awareness of the first type" (RA1), which implies that the parameterization behavior converges towards observations with increasing resolution, or "resolution awareness of the second type" (RA2), which requires that the parameterization reproduce the same behavior across a range of grid spacings when compared at a given coarser resolution. RA2 behavior is considered the ideal, but brings with it serious implications due to limitations of parameterizations to accurately estimate reality with coarse grid spacing. The type of resolution awareness developers should target depends upon the particular modeler's application.

  7. Cross section parameterizations for cosmic ray nuclei. 1: Single nucleon removal

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Townsend, Lawrence W.

    1992-01-01

    Parameterizations of single nucleon removal from electromagnetic and strong interactions of cosmic rays with nuclei are presented. These parameterizations are based upon the most accurate theoretical calculations available to date. They should be very suitable for use in cosmic ray propagation through interstellar space, the Earth's atmosphere, lunar samples, meteorites, spacecraft walls, and lunar and Martian habitats.

  8. A Review on Regional Convection-Permitting Climate Modeling: Demonstrations, Prospects, and Challenges

    NASA Astrophysics Data System (ADS)

    Prein, A. F.; Langhans, W.; Fosser, G.; Ferrone, A.; Ban, N.; Goergen, K.; Keller, M.; Tölle, M.; Gutjahr, O.; Feser, F.; Brisson, E.; Kollet, S. J.; Schmidli, J.; Van Lipzig, N. P. M.; Leung, L. R.

    2015-12-01

    Regional climate modeling using convection-permitting models (CPMs; horizontal grid spacing <4 km) emerges as a promising framework to provide more reliable climate information on regional to local scales compared to traditionally used large-scale models (LSMs; horizontal grid spacing >10 km). CPMs no longer rely on convection parameterization schemes, which had been identified as a major source of errors and uncertainties in LSMs. Moreover, CPMs allow for a more accurate representation of surface and orography fields. The drawback of CPMs is the high demand on computational resources. For this reason, the first CPM climate simulations appeared only a decade ago. We aim to provide a common basis for CPM climate simulations by giving a holistic review of the topic. The most important components in CPMs such as physical parameterizations and dynamical formulations are discussed critically. An overview of weaknesses and an outlook on required future developments is provided. Most importantly, this review presents the consolidated outcome of studies that addressed the added value of CPM climate simulations compared to LSMs. Improvements are evident mostly for climate statistics related to deep convection, mountainous regions, or extreme events. The climate change signals of CPM simulations suggest an increase in flash floods, changes in hail storm characteristics, and reductions in the snowpack over mountains. In conclusion, CPMs are a very promising tool for future climate research. However, coordinated modeling programs are crucially needed to advance parameterizations of unresolved physics and to assess the full potential of CPMs.

  9. A review on regional convection-permitting climate modeling: Demonstrations, prospects, and challenges.

    PubMed

    Prein, Andreas F; Langhans, Wolfgang; Fosser, Giorgia; Ferrone, Andrew; Ban, Nikolina; Goergen, Klaus; Keller, Michael; Tölle, Merja; Gutjahr, Oliver; Feser, Frauke; Brisson, Erwan; Kollet, Stefan; Schmidli, Juerg; van Lipzig, Nicole P M; Leung, Ruby

    2015-06-01

    Regional climate modeling using convection-permitting models (CPMs; horizontal grid spacing <4 km) emerges as a promising framework to provide more reliable climate information on regional to local scales compared to traditionally used large-scale models (LSMs; horizontal grid spacing >10 km). CPMs no longer rely on convection parameterization schemes, which had been identified as a major source of errors and uncertainties in LSMs. Moreover, CPMs allow for a more accurate representation of surface and orography fields. The drawback of CPMs is the high demand on computational resources. For this reason, the first CPM climate simulations appeared only a decade ago. In this study, we aim to provide a common basis for CPM climate simulations by giving a holistic review of the topic. The most important components in CPMs such as physical parameterizations and dynamical formulations are discussed critically. An overview of weaknesses and an outlook on required future developments is provided. Most importantly, this review presents the consolidated outcome of studies that addressed the added value of CPM climate simulations compared to LSMs. Improvements are evident mostly for climate statistics related to deep convection, mountainous regions, or extreme events. The climate change signals of CPM simulations suggest an increase in flash floods, changes in hail storm characteristics, and reductions in the snowpack over mountains. In conclusion, CPMs are a very promising tool for future climate research. However, coordinated modeling programs are crucially needed to advance parameterizations of unresolved physics and to assess the full potential of CPMs.

  10. A review on regional convection-permitting climate modeling: Demonstrations, prospects, and challenges

    NASA Astrophysics Data System (ADS)

    Prein, Andreas F.; Langhans, Wolfgang; Fosser, Giorgia; Ferrone, Andrew; Ban, Nikolina; Goergen, Klaus; Keller, Michael; Tölle, Merja; Gutjahr, Oliver; Feser, Frauke; Brisson, Erwan; Kollet, Stefan; Schmidli, Juerg; van Lipzig, Nicole P. M.; Leung, Ruby

    2015-06-01

    Regional climate modeling using convection-permitting models (CPMs; horizontal grid spacing <4 km) emerges as a promising framework to provide more reliable climate information on regional to local scales compared to traditionally used large-scale models (LSMs; horizontal grid spacing >10 km). CPMs no longer rely on convection parameterization schemes, which had been identified as a major source of errors and uncertainties in LSMs. Moreover, CPMs allow for a more accurate representation of surface and orography fields. The drawback of CPMs is the high demand on computational resources. For this reason, the first CPM climate simulations appeared only a decade ago. In this study, we aim to provide a common basis for CPM climate simulations by giving a holistic review of the topic. The most important components in CPMs such as physical parameterizations and dynamical formulations are discussed critically. An overview of weaknesses and an outlook on required future developments is provided. Most importantly, this review presents the consolidated outcome of studies that addressed the added value of CPM climate simulations compared to LSMs. Improvements are evident mostly for climate statistics related to deep convection, mountainous regions, or extreme events. The climate change signals of CPM simulations suggest an increase in flash floods, changes in hail storm characteristics, and reductions in the snowpack over mountains. In conclusion, CPMs are a very promising tool for future climate research. However, coordinated modeling programs are crucially needed to advance parameterizations of unresolved physics and to assess the full potential of CPMs.

  11. H² regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgansen, K.A.; Pin, F.G.

    A new method for mitigating the unexpected impact of a redundant manipulator with an object in its environment is presented. Kinematic constraints are utilized with the recently developed method known as Full Space Parameterization (FSP). The system performance criterion and constraints are changed at impact to return the end effector to the point of impact and halt the arm. Since large joint accelerations could occur as the manipulator is halted, joint acceleration bounds are imposed to simulate physical actuator limitations. Simulation results are presented for the case of a simple redundant planar manipulator.
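
    A rough sketch of the impact-mitigation loop described above, for a planar 3R arm. It substitutes the ordinary Moore-Penrose pseudoinverse for the paper's Full Space Parameterization, and the gain, link lengths, initial state and acceleration bound are illustrative assumptions:

      # Return a redundant planar 3R arm to the impact point while bounding the
      # per-step change in joint velocity (a stand-in for actuator acceleration
      # limits). Redundancy is resolved with a pseudoinverse, not with FSP.
      import numpy as np

      L = np.array([1.0, 0.8, 0.6])                  # link lengths (m)

      def fk(q):
          s = np.cumsum(q)
          return np.array([np.sum(L * np.cos(s)), np.sum(L * np.sin(s))])

      def jacobian(q):
          s = np.cumsum(q)
          J = np.zeros((2, 3))
          for i in range(3):
              J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))
              J[1, i] =  np.sum(L[i:] * np.cos(s[i:]))
          return J

      dt, dqd_max = 0.01, 0.05            # time step; max |change in qd| per step
      q  = np.array([0.3, 0.4, 0.2])      # configuration at the moment of impact
      qd = np.array([0.5, -0.3, 0.4])     # joint velocity at the moment of impact
      x_impact = fk(q)                    # stored impact point

      for _ in range(400):
          xd_des = 1.5 * (x_impact - fk(q))              # Cartesian velocity command
          qd_des = np.linalg.pinv(jacobian(q)) @ xd_des
          qd += np.clip(qd_des - qd, -dqd_max, dqd_max)  # acceleration bound
          q  += qd * dt

      print("final end-effector error (m):", np.linalg.norm(fk(q) - x_impact))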

  13. Parameterized spectral distributions for meson production in proton-proton collisions

    NASA Technical Reports Server (NTRS)

    Schneider, John P.; Norbury, John W.; Cucinotta, Francis A.

    1995-01-01

    Accurate semiempirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations, which depend on both the outgoing meson parallel momentum and the incident proton kinetic energy, can be reduced to very simple analytical formulas suitable for cosmic ray transport through spacecraft walls, interstellar space, the atmosphere, and meteorites.

  14. Relativistic three-dimensional Lippmann-Schwinger cross sections for space radiation applications

    NASA Astrophysics Data System (ADS)

    Werneth, C. M.; Xu, X.; Norman, R. B.; Maung, K. M.

    2017-12-01

    Radiation transport codes require accurate nuclear cross sections to compute particle fluences inside shielding materials. The Tripathi semi-empirical reaction cross section, which includes over 60 parameters tuned to nucleon-nucleus (NA) and nucleus-nucleus (AA) data, has been used in many of the world's best-known transport codes. Although this parameterization fits well to reaction cross section data, the predictive capability of any parameterization is questionable when it is used beyond the range of the data to which it was tuned. Using uncertainty analysis, it is shown that a relativistic three-dimensional Lippmann-Schwinger (LS3D) equation model based on Multiple Scattering Theory (MST) that uses 5 parameterizations (3 fundamental parameterizations tuned to nucleon-nucleon (NN) data and 2 nuclear charge density parameterizations) predicts NA and AA reaction cross sections as well as the Tripathi cross section parameterization for reactions in which the kinetic energy of the projectile in the laboratory frame (T_Lab) is greater than 220 MeV/n. The relativistic LS3D model has the additional advantage of being able to predict highly accurate total and elastic cross sections. Consequently, it is recommended that the relativistic LS3D model be used for space radiation applications in which T_Lab > 220 MeV/n.

  15. IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION IN MM5

    EPA Science Inventory

    The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (~1-km horizontal grid spacing). The UCP accounts for drag ...

  16. Parameterized Cross Sections for Pion Production in Proton-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.

    2000-01-01

    An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and knowledge of the output particle spectrum is required when given the input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of different cross section parameterizations. Therefore, parameterizations of neutral and charged pion cross sections are provided that give a very accurate description of the experimental data. Lorentz invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.

  17. IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION FOR FINE-SCALE SIMULATIONS

    EPA Science Inventory

    The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (~1-km horizontal grid spacing). The UCP accounts for dr...

  18. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; ...

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
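
    The Monte Carlo interface between an assumed subgrid PDF and a nonlinear microphysics rate can be sketched as follows; the single-Gaussian PDF of cloud water and the Kessler-type threshold rate are illustrative stand-ins for the scheme's multivariate PDF and prognostic microphysics:

      # Draw subgrid samples from an assumed PDF and average a nonlinear process
      # rate over them; evaluating the rate at the grid mean instead misses all
      # subgrid variability. PDF shape and rate constants are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      qc_mean, qc_std = 0.4e-3, 0.3e-3      # kg/kg: grid-mean cloud water, spread
      k_auto, qc_crit = 1.0e-3, 0.5e-3      # 1/s rate constant; threshold

      def autoconversion(qc):
          """Kessler-type rate: active only above a threshold (nonlinear in qc)."""
          return k_auto * np.maximum(qc - qc_crit, 0.0)

      samples = rng.normal(qc_mean, qc_std, 10_000)
      rate_pdf = autoconversion(samples).mean()      # Monte Carlo / PDF estimate
      rate_mean = autoconversion(qc_mean)            # rate at the grid mean

      print(f"rate from sampled PDF : {rate_pdf:.3e} kg/kg/s")
      print(f"rate at the grid mean : {rate_mean:.3e} kg/kg/s")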

  19. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Höft, J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  20. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Hoft, Jan

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  21. Optimization and uncertainty assessment of strongly nonlinear groundwater models with high parameter dimensionality

    NASA Astrophysics Data System (ADS)

    Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun

    2010-10-01

    Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU-intensive groundwater model and to explore the uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
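
    The core of null-space Monte Carlo reduces to simple linear algebra, shown here on a toy linear model: parameter perturbations confined to the null space of the Jacobian leave the calibrated fit unchanged to first order. Dimensions and the rank cutoff are illustrative:

      # NSMC sketch on y = J p: perturb the calibrated parameters only along
      # null-space directions of J, so the misfit is preserved to first order.
      import numpy as np

      rng = np.random.default_rng(1)
      n_obs, n_par, rank = 20, 50, 8       # few informative directions, many parameters

      J = rng.normal(size=(n_obs, rank)) @ rng.normal(size=(rank, n_par))
      p_cal = rng.normal(size=n_par)       # "calibrated" parameter set
      y_cal = J @ p_cal

      U, s, Vt = np.linalg.svd(J)
      n_sol = int(np.sum(s > 1e-8 * s[0])) # numerical rank = solution-space dimension
      V_null = Vt[n_sol:].T                # null-space basis

      for i in range(3):
          dp = V_null @ rng.normal(size=V_null.shape[1])
          misfit = np.linalg.norm(J @ (p_cal + dp) - y_cal)
          print(f"realization {i}: |dp| = {np.linalg.norm(dp):5.2f}, misfit change = {misfit:.2e}")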

  22. Simulations of the HDO and H2O-18 atmospheric cycles using the NASA GISS general circulation model - Sensitivity experiments for present-day conditions

    NASA Technical Reports Server (NTRS)

    Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.

    1991-01-01

    Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.

  23. A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone

    NASA Astrophysics Data System (ADS)

    Filipot, J.

    2010-12-01

    A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in the physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. This transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. The parameterization is further tested in the WAVEWATCH III code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep or shallow water parameterizations.

  24. Develop and Test Coupled Physical Parameterizations and Tripolar Wave Model Grid: NAVGEM / WaveWatch III / HYCOM

    DTIC Science & Technology

    2013-09-30

    W. Erick Rogers, Naval Research Laboratory, Code 7322, Stennis Space Center, MS 39529

  25. Subgrid-scale parameterization and low-frequency variability: a response theory approach

    NASA Astrophysics Data System (ADS)

    Demaeyer, Jonathan; Vannitsem, Stéphane

    2016-04-01

    Weather and climate models are limited in the range of spatial and temporal scales they can resolve. Due to the huge space- and time-scale ranges involved in Earth system dynamics, the effects of many subgrid processes must be parameterized. These parameterizations have an impact on forecasts or projections, and can also affect the low-frequency variability present in the system (such as that associated with ENSO or NAO). An important question is therefore what impact stochastic parameterizations have on the low-frequency variability generated by the system and its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on Ruelle's response theory, proposed in Wouters and Lucarini (2012). We test this approach in the context of a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), for which a part of the atmospheric modes is considered as unresolved. A natural separation of the phase space into a slow invariant set and its fast complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, the fluctuation and the long-memory terms. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach of scale separation opens new avenues for subgrid-scale parameterization in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03003.
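
    The average and fluctuation terms of such a parameterization can be illustrated on a toy slow-fast system in which the unresolved fast variable is replaced by a fitted mean coupling plus AR(1) noise. This sketches the decomposition only, not Ruelle response theory; the model and all coefficients are illustrative:

      # Toy two-scale system: slow x forced by fast y. The reduced model for x
      # replaces y with its fitted average coupling plus an AR(1) fluctuation
      # term estimated from a training run of the coupled "truth".
      import numpy as np

      rng = np.random.default_rng(2)
      dt, n, eps = 0.01, 50_000, 0.1

      x = np.zeros(n); y = np.zeros(n)
      for t in range(n - 1):                       # coupled truth
          x[t+1] = x[t] + dt * (-x[t] + y[t])
          y[t+1] = y[t] + dt/eps * (-y[t] + np.sin(x[t])) \
                   + np.sqrt(dt/eps) * 0.5 * rng.normal()

      # Average term: y ~ a*sin(x) + b. Fluctuation term: AR(1) fit to residuals.
      A = np.column_stack([np.sin(x), np.ones(n)])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      resid = y - A @ coef
      phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]
      sig = resid.std() * np.sqrt(1.0 - phi**2)

      xr = np.zeros(n); r = 0.0                    # reduced stochastic model
      for t in range(n - 1):
          xr[t+1] = xr[t] + dt * (-xr[t] + coef[0] * np.sin(xr[t]) + coef[1] + r)
          r = phi * r + sig * rng.normal()

      print(f"truth   : mean = {x.mean():.3f}, var = {x.var():.3f}")
      print(f"reduced : mean = {xr.mean():.3f}, var = {xr.var():.3f}")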

  26. Comparison of different objective functions for parameterization of simple respiration models

    Treesearch

    M.T. van Wijk; B. van Putten; D.Y. Hollinger; A.D. Richardson

    2008-01-01

    The eddy covariance measurements of carbon dioxide fluxes collected around the world offer a rich source for detailed data analysis. Simple, aggregated models are attractive tools for gap filling, budget calculation, and upscaling in space and time. Key in the application of these models is their parameterization and a robust estimate of the uncertainty and reliability...

  27. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    NASA Astrophysics Data System (ADS)

    White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John

    2017-08-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the Soil and Water Assessment Tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to the Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.

  28. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    USGS Publications Warehouse

    White, Jeremy; Stengel, Victoria G.; Rendon, Samuel H.; Banta, John

    2017-01-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the Soil and Water Assessment Tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to the Nash–Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
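
    The Monte Carlo / GLUE conditioning used in the two records above can be sketched with a toy one-parameter rainfall-runoff model standing in for SWAT; the behavioral NSE threshold of 0.5 and the synthetic data are illustrative assumptions:

      # GLUE sketch: draw parameters, keep "behavioral" runs whose Nash-Sutcliffe
      # efficiency (NSE) against observed flow exceeds a threshold.
      import numpy as np

      rng = np.random.default_rng(3)
      n_days, k_true = 365, 0.15
      rain = rng.gamma(0.3, 8.0, n_days)           # synthetic daily rainfall

      def simulate(k):
          """Linear reservoir: storage S, daily streamflow q = k*S."""
          S, q = 0.0, np.zeros(n_days)
          for t in range(n_days):
              S += rain[t]
              q[t] = k * S
              S -= q[t]
          return q

      q_obs = simulate(k_true) + rng.normal(0.0, 0.2, n_days)   # "observations"

      def nse(sim, obs):
          return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

      k_samples = rng.uniform(0.01, 0.5, 2000)
      scores = np.array([nse(simulate(k), q_obs) for k in k_samples])
      behavioral = k_samples[scores > 0.5]         # GLUE behavioral threshold

      print(f"{behavioral.size} behavioral runs; k in "
            f"[{behavioral.min():.3f}, {behavioral.max():.3f}] (true {k_true})")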

  29. A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone

    NASA Astrophysics Data System (ADS)

    Filipot, J.-F.; Ardhuin, F.

    2012-11-01

    A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale. Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep or shallow water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.
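
    A sketch of a breaking-wave dissipation rate that grows with wave height and shrinks with depth, using the classical hydraulic-bore analogy (here per unit crest length; the papers work per unit area of breaking waves, and their exact form and coefficient differ, so B below is an illustrative placeholder):

      # Bore-analogy dissipation of a breaking wave of height H, wavelength L in
      # depth d: eps ~ B * rho * g * c * H^3 / (4*d), with c from linear theory.
      # In deep water the relevant depth scale is set by the wave itself, which
      # the published parameterization handles; d is used directly here.
      import numpy as np

      rho, g = 1025.0, 9.81

      def phase_speed(L, d):
          """Linear-theory phase speed c = sqrt(g/k * tanh(k*d)), k = 2*pi/L."""
          k = 2.0 * np.pi / L
          return np.sqrt(g / k * np.tanh(k * d))

      def bore_dissipation(H, L, d, B=1.0):
          """Energy dissipation per unit crest length (W/m), bore analogy."""
          return B * rho * g * phase_speed(L, d) * H**3 / (4.0 * d)

      for H, L, d in [(1.0, 60.0, 10.0),    # shoaling wave
                      (1.5, 60.0, 5.0),     # near breaking
                      (1.0, 40.0, 2.0)]:    # surf zone
          print(f"H={H} m, L={L} m, d={d} m -> eps = {bore_dissipation(H, L, d):7.1f} W/m")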

  30. Design-Optimization Of Cylindrical, Layered Composite Structures Using Efficient Laminate Parameterization

    NASA Astrophysics Data System (ADS)

    Monicke, A.; Katajisto, H.; Leroy, M.; Petermann, N.; Kere, P.; Perillo, M.

    2012-07-01

    For many years, layered composites have proven essential for the successful design of high-performance space structures, such as launchers or satellites. A generic cylindrical composite structure for a launcher application was optimized with respect to objectives and constraints typical for space applications. The studies included the structural stability, laminate load response and failure analyses. Several types of cylinders (with and without stiffeners) were considered and optimized using different lay-up parameterizations. Results for the best designs are presented and discussed. The simulation tools, ESAComp [1] and modeFRONTIER [2], employed in the optimization loop are elucidated and their value for the optimization process is explained.

  31. Summary of Cumulus Parameterization Workshop

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Starr, David OC.; Hou, Arthur; Newman, Paul; Sud, Yogesh

    2002-01-01

    A workshop on cumulus parameterization took place at the NASA Goddard Space Flight Center from December 3-5, 2001. The major objectives of this workshop were (1) to review the problem of representation of moist processes in large-scale models (mesoscale models, Numerical Weather Prediction models and Atmospheric General Circulation Models), (2) to review the state-of-the-art in cumulus parameterization schemes, and (3) to discuss the need for future research and applications. There were a total of 31 presentations and about 100 participants from the United States, Japan, the United Kingdom, France and South Korea. The specific presentations and discussions during the workshop are summarized in this paper.

  32. Sensitivity of U.S. summer precipitation to model resolution and convective parameterizations across gray zone resolutions

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson

    2017-03-01

    Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising venues for improving climate simulations of water cycle processes.

  33. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loève transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
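
    One parameterization device evaluated above, the truncated Karhunen-Loève transformation, can be sketched directly: a spatially correlated parameter field is represented by a handful of leading covariance eigenmodes. Grid size, covariance model and truncation level are illustrative:

      # Truncated KL expansion of a 1-D correlated field: nx cell values are
      # parameterized by n_keep mode coefficients.
      import numpy as np

      rng = np.random.default_rng(4)
      nx, corr_len, n_keep = 100, 10.0, 8

      xs = np.arange(nx)
      C = np.exp(-np.abs(xs[:, None] - xs[None, :]) / corr_len)  # exp. covariance
      eigval, eigvec = np.linalg.eigh(C)
      idx = np.argsort(eigval)[::-1][:n_keep]                    # leading modes

      coeffs = rng.normal(size=n_keep)                           # the parameters
      field = eigvec[:, idx] @ (np.sqrt(eigval[idx]) * coeffs)   # realized field

      var_kept = eigval[idx].sum() / eigval.sum()
      print(f"{n_keep} of {nx} modes retain {100 * var_kept:.1f}% of the variance")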

  34. Parameterizing the Transport Pathways for Cell Invasion in Complex Scaffold Architectures

    PubMed Central

    Ashworth, Jennifer C.; Mehr, Marco; Buxton, Paul G.; Best, Serena M.

    2016-01-01

    Interconnecting pathways through porous tissue engineering scaffolds play a vital role in determining nutrient supply, cell invasion, and tissue ingrowth. However, the global use of the term “interconnectivity” often fails to describe the transport characteristics of these pathways, giving no clear indication of their potential to support tissue synthesis. This article uses new experimental data to provide a critical analysis of reported methods for the description of scaffold transport pathways, ranging from qualitative image analysis to thorough structural parameterization using X-ray Micro-Computed Tomography. In the collagen scaffolds tested in this study, it was found that the proportion of pore space perceived to be accessible dramatically changed depending on the chosen method of analysis. Measurements of % interconnectivity as defined in this manner varied as a function of direction and connection size, and also showed a dependence on measurement length scale. As an alternative, a method for transport pathway parameterization was investigated, using percolation theory to calculate the diameter of the largest sphere that can travel to infinite distance through a scaffold in a specified direction. As proof of principle, this approach was used to investigate the invasion behavior of primary fibroblasts in response to independent changes in pore wall alignment and pore space accessibility, parameterized using the percolation diameter. The result was that both properties played a distinct role in determining fibroblast invasion efficiency. This example therefore demonstrates the potential of the percolation diameter as a method of transport pathway parameterization, to provide key structural criteria for application-based scaffold design. PMID:26888449
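
    The percolation diameter described above can be measured on a binary pore volume with a Euclidean distance transform, connectivity labeling, and bisection on the sphere radius; the random test volume below is an illustrative stand-in for scaffold tomography data:

      # Largest sphere that can traverse the pore space along axis 0: threshold
      # the distance transform at radius r (where a sphere center can sit) and
      # test whether one connected component touches both end faces.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(5)
      pores = ndimage.gaussian_filter(rng.normal(size=(40, 40, 40)), 2.0) > 0.0

      dist = ndimage.distance_transform_edt(pores)

      def percolates(radius):
          labels, _ = ndimage.label(dist >= radius)
          shared = set(np.unique(labels[0])) & set(np.unique(labels[-1]))
          return len(shared - {0}) > 0

      lo, hi = 0.0, float(dist.max())
      for _ in range(30):                       # bisection on the radius
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if percolates(mid) else (lo, mid)

      print(f"percolation diameter ≈ {2 * lo:.2f} voxels along axis 0")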

  35. Querying databases of trajectories of differential equations: Data structures for trajectories

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1989-01-01

    One approach to qualitative reasoning about dynamical systems is to extract qualitative information by searching or making queries on databases containing very large numbers of trajectories. The efficiency of such queries depends crucially upon finding an appropriate data structure for trajectories of dynamical systems. Suppose that a large number of parameterized trajectories γ of a dynamical system evolving in R^N are stored in a database. Let η ⊂ R^N denote a parameterized path in Euclidean space, and let ‖·‖ denote a norm on the space of paths. A data structure is defined to represent trajectories of dynamical systems, and an algorithm is sketched which answers queries.

  36. Clustering Tree-structured Data on Manifold

    PubMed Central

    Lu, Na; Miao, Hongyu

    2016-01-01

    Tree-structured data usually contain both topological and geometrical information, and are necessarily considered on a manifold instead of in Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called Topology-Attribute matrix (T-A matrix), so the data clustering task can be conducted on a matrix manifold. We incorporate the structure constraints embedded in data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts like the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy. PMID:26660696
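
    The meta-tree step can be sketched with plain non-negative matrix factorization applied to a stand-in T-A matrix; the structure constraints of the paper are omitted, and the matrix and rank are illustrative:

      # Factorize a non-negative Topology-Attribute matrix: rows of H act as
      # "meta-trees", rows of W as per-tree signature vectors.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(6)
      n_trees, n_features, n_meta = 60, 200, 5

      # Stand-in T-A matrix: trees built as mixtures of hidden prototypes.
      TA = rng.gamma(2.0, 1.0, (n_trees, n_meta)) @ \
           rng.gamma(2.0, 1.0, (n_meta, n_features)) \
           + 0.01 * rng.random((n_trees, n_features))

      model = NMF(n_components=n_meta, init="nndsvda", max_iter=500, random_state=0)
      signatures = model.fit_transform(TA)     # W: per-tree signatures
      meta_trees = model.components_           # H: meta-tree basis

      print("signatures:", signatures.shape,
            "reconstruction error:", round(model.reconstruction_err_, 3))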

  37. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    NASA Astrophysics Data System (ADS)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-01

    Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I′_P), and velocity-impedance-II (α″, β″ and I′_S). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.

  38. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    DOE PAGES

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-06

    We report that seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I′_P), and velocity-impedance-II (α″, β″ and I′_S). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. Finally, the heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.

  20. Querying databases of trajectories of differential equations 2: Index functions

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1991-01-01

    Suppose that a large number of parameterized trajectories γ of a dynamical system evolving in R^N are stored in a database. Let η ⊂ R^N denote a parameterized path in Euclidean space, and let ‖·‖ denote a norm on the space of paths. Data structures and indices for trajectories are defined, and algorithms are given to answer queries of the following forms: Query 1. Given a path η, determine whether η occurs as a subtrajectory of any trajectory γ from the database. If so, return the trajectory; otherwise, return null. Query 2. Given a path η, return the trajectory γ from the database which minimizes the norm ‖η − γ‖.
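
    A minimal sketch of Query 2 under simplifying assumptions (trajectories stored as uniformly sampled arrays, the norm taken as the discrete L2 norm); the function name is illustrative, not from the paper:

        import numpy as np

        def query_nearest_trajectory(eta, database):
            """Query 2: return the stored trajectory gamma minimizing ||eta - gamma||.

            eta      : (T, N) array, the query path sampled at T points in R^N
            database : iterable of (T, N) arrays, the stored trajectories
            Assumes all paths share the same sampling; || . || is discrete L2.
            """
            best_gamma, best_dist = None, np.inf
            for gamma in database:
                dist = np.linalg.norm(eta - gamma)  # discrete L2 norm on paths
                if dist < best_dist:
                    best_gamma, best_dist = gamma, dist
            return best_gamma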

  1. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    NASA Astrophysics Data System (ADS)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

    A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out during the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation there. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to carry out a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to identify the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space was performed. Consequently, the combinations of parameterizations including the Tiedtke cumulus schemes were again the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.
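
    This kind of experiment design amounts to sweeping WRF physics options. A hypothetical sketch that enumerates scheme combinations for namelist.input (cu_physics, mp_physics and bl_pbl_physics are real WRF namelist variables, but the candidate id lists below are placeholders rather than the exact sets tested in the paper):

        from itertools import product

        # Placeholder candidate schemes; WRF selects physics via integer namelist
        # options (cu_physics = 6 and 16 are Tiedtke-type cumulus schemes in
        # recent WRF versions -- treat these specific ids as assumptions).
        cumulus = [1, 2, 6, 16]
        microphysics = [3, 6]
        pbl = [1, 2]

        for cu, mp, bl in product(cumulus, microphysics, pbl):
            print("&physics\n"
                  f" cu_physics     = {cu},\n"
                  f" mp_physics     = {mp},\n"
                  f" bl_pbl_physics = {bl},\n"
                  "/\n")  # one WRF simulation per combination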

  2. Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations

    DOE PAGES

    Liu, Gang; Liu, Yangang; Endo, Satoshi

    2013-02-01

    Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
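
    The schemes under evaluation are variants of the bulk aerodynamic relations. A minimal sketch of those baseline formulas; the exchange coefficients are precisely what the individual schemes parameterize as functions of stability, and the constant values below are illustrative only:

        RHO = 1.2     # air density [kg m^-3]
        CP = 1004.0   # specific heat of dry air [J kg^-1 K^-1]
        LV = 2.5e6    # latent heat of vaporization [J kg^-1]

        def bulk_fluxes(u, theta_s, theta_a, q_s, q_a,
                        cd=1.2e-3, ch=1.1e-3, ce=1.1e-3):
            """Bulk aerodynamic surface fluxes (illustrative neutral coefficients).

            u: wind speed [m s^-1]; theta_s, theta_a: surface and air potential
            temperature [K]; q_s, q_a: surface and air specific humidity [kg kg^-1].
            """
            tau = RHO * cd * u * u                       # momentum flux [N m^-2]
            h = RHO * CP * ch * u * (theta_s - theta_a)  # sensible heat [W m^-2]
            le = RHO * LV * ce * u * (q_s - q_a)         # latent heat [W m^-2]
            return tau, h, le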

  3. Active Subspaces of Airfoil Shape Parameterizations

    NASA Astrophysics Data System (ADS)

    Grey, Zachary J.; Constantine, Paul G.

    2018-05-01

    Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
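
    A minimal sketch of the standard active-subspace construction: estimate C = E[∇f ∇fᵀ] from sampled gradients of the quantity of interest and keep the leading eigenvectors. The data shapes and the synthetic gradients are illustrative, not taken from the paper:

        import numpy as np

        def active_subspace(grads, k=2):
            """Estimate a k-dimensional active subspace from sampled gradients.

            grads : (M, m) array of gradients of a quantity of interest (e.g.,
                    lift or drag) w.r.t. m shape parameters at M sampled designs.
            Returns the eigenvalues (descending) and the (m, k) subspace basis.
            """
            c = grads.T @ grads / grads.shape[0]  # Monte Carlo estimate of E[g g^T]
            eigvals, eigvecs = np.linalg.eigh(c)  # ascending order
            order = np.argsort(eigvals)[::-1]
            return eigvals[order], eigvecs[:, order[:k]]

        # Illustrative use: synthetic gradients w.r.t. 10 CST-like parameters,
        # with two directions dominating -- mimicking a 2D active subspace.
        rng = np.random.default_rng(0)
        grads = rng.normal(size=(200, 10)) * np.array([5.0, 3.0] + [0.1] * 8)
        vals, basis = active_subspace(grads, k=2)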

  4. Betatron motion with coupling of horizontal and vertical degrees of freedom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. A. Bogacz; V. A. Lebedev

    2002-11-21

    The Courant-Snyder parameterization of one-dimensional linear betatron motion is generalized to two-dimensional coupled linear motion. To represent the 4 × 4 symplectic transfer matrix, the following ten parameters were chosen: four beta-functions, four alpha-functions and two betatron phase advances, which have a meaning similar to the Courant-Snyder parameterization. Such a parameterization works equally well for weak and strong coupling and can be useful for analysis of coupled betatron motion in circular accelerators as well as in transfer lines. Similarly, the transfer matrix, the bilinear form describing the phase space ellipsoid and the second order moments are related to the eigen-vectors. Corresponding equations can be useful in interpreting tracking results and experimental data.
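
    For context, the one-dimensional Courant-Snyder form being generalized writes the one-turn transfer matrix in terms of the lattice functions and the phase advance μ (a standard accelerator-physics result):

        M = \begin{pmatrix}
              \cos\mu + \alpha\sin\mu & \beta\sin\mu \\
              -\gamma\sin\mu          & \cos\mu - \alpha\sin\mu
            \end{pmatrix},
        \qquad \gamma = \frac{1+\alpha^{2}}{\beta}

    The cited work promotes this structure to 4 × 4 symplectic matrices, which is why four beta-functions, four alpha-functions and two phase advances are required.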

  5. Analysis of electromagnetic interference from power system processing and transmission components for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Barber, Peter W.; Demerdash, Nabeel A. O.; Wang, R.; Hurysz, B.; Luo, Z.

    1991-01-01

    The goal is to analyze the potential effects of electromagnetic interference (EMI) originating from power system processing and transmission components for Space Station Freedom. The approach consists of four steps: (1) develop analytical tools (models and computer programs); (2) conduct parameterization studies; (3) predict the global space station EMI environment; and (4) provide a basis for modification of EMI standards.

  6. Alpha-canonical form representation of the open loop dynamics of the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Duyar, Almet; Eldem, Vasfi; Merrill, Walter C.; Guo, Ten-Huei

    1991-01-01

    A parameter and structure estimation technique for multivariable systems is used to obtain a state space representation of open loop dynamics of the space shuttle main engine in alpha-canonical form. The parameterization being used is both minimal and unique. The simplified linear model may be used for fault detection studies and control system design and development.

  7. Incommensurate crystallography without additional dimensions.

    PubMed

    Kocian, Philippe

    2013-07-01

    It is shown that the Euclidean group of translations, when treated as a Lie group, generates translations not only in Euclidean space but on any space, curved or not. Translations are then not necessarily vectors (straight lines); they can be any curve compatible with the parameterization of the considered space. In particular, attention is drawn to the fact that one and only one finite and free module of the Lie algebra of the group of translations can generate both modulated and non-modulated lattices, the modulated character being given only by the parameterization of the space in which the lattice is generated. Moreover, it is shown that the diffraction pattern of a structure is directly linked to the action of that free and finite module. In the Fourier transform of a whole structure, the Fourier transform of the electron density of one unit cell (i.e. the structure factor) appears concretely, whether the structure is modulated or not. Thus, there exists a neat separation: the geometrical aspect on the one hand and the action of the group on the other, without requiring additional dimensions.

  8. Remote Sensing of Soil Moisture: A Comparison of Optical and Thermal Methods

    NASA Astrophysics Data System (ADS)

    Foroughi, H.; Naseri, A. A.; Boroomandnasab, S.; Sadeghi, M.; Jones, S. B.; Tuller, M.; Babaeian, E.

    2017-12-01

    Recent technological advances in satellite and airborne remote sensing have provided new means for large-scale soil moisture monitoring. Traditional methods for soil moisture retrieval require thermal and optical RS observations. In this study we compared the traditional trapezoid model, parameterized based on the land surface temperature-normalized difference vegetation index (LST-NDVI) space, with the recently developed optical trapezoid model OPTRAM, parameterized based on the shortwave infrared transformed reflectance (STR)-NDVI space, for an extensive sugarcane field located in southwestern Iran. Twelve Landsat-8 satellite images were acquired during the sugarcane growth season (April to October 2016). Reference in situ soil moisture data were obtained at 22 locations at different depths via core sampling and oven-drying. The obtained results indicate that the thermal/optical and optical prediction methods are comparable, both with volumetric moisture content estimation errors of about 0.04 cm^3 cm^-3. However, the OPTRAM model is more efficient because it does not require thermal data and can be universally parameterized for a specific location, because unlike the LST-soil moisture relationship, the reflectance-soil moisture relationship does not significantly vary with environmental variables (e.g., air temperature, wind speed, etc.).
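
    A minimal sketch of the OPTRAM retrieval logic: transform SWIR reflectance to STR and normalize between NDVI-dependent dry and wet trapezoid edges. The edge coefficients below are illustrative; in practice they are fitted to the scene's STR-NDVI scatter:

        import numpy as np

        def optram_soil_moisture(swir, ndvi, i_d=0.0, s_d=1.0, i_w=2.0, s_w=4.0):
            """Normalized soil moisture from the optical trapezoid (OPTRAM).

            swir: SWIR reflectance (0-1); ndvi: NDVI.
            (i_d, s_d) and (i_w, s_w): intercept/slope of the dry and wet
            trapezoid edges -- illustrative values, fitted per scene in practice.
            """
            str_ = (1.0 - swir) ** 2 / (2.0 * swir)  # SWIR transformed reflectance
            str_dry = i_d + s_d * ndvi               # dry edge of the trapezoid
            str_wet = i_w + s_w * ndvi               # wet edge of the trapezoid
            w = (str_ - str_dry) / (str_wet - str_dry)
            return np.clip(w, 0.0, 1.0)              # normalized moisture in [0, 1]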

  9. Parametric soil water retention models: a critical evaluation of expressions for the full moisture range

    NASA Astrophysics Data System (ADS)

    Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane

    2018-02-01

    Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not only rely on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
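
    As a concrete instance of the issue, the most widely used pairing is the van Genuchten retention curve with the Mualem conductivity model (standard forms below, with Se the effective saturation). For small n, the bracketed factor gives the conductivity an unbounded slope as saturation is approached, the kind of physically implausible near-saturation behaviour the proposed criterion screens out:

        S_e(h) = \frac{\theta(h)-\theta_r}{\theta_s-\theta_r}
               = \left[1 + (\alpha h)^{n}\right]^{-m}, \qquad m = 1 - 1/n,

        K(S_e) = K_s\, S_e^{L}\left[1 - \left(1 - S_e^{1/m}\right)^{m}\right]^{2}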

  10. New Parameterizations for Neutral and Ion-Induced Sulfuric Acid-Water Particle Formation in Nucleation and Kinetic Regimes

    NASA Astrophysics Data System (ADS)

    Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna

    2018-01-01

    We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used: this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.
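
    A hypothetical convenience wrapper restating the validity ranges quoted above (the numbers come directly from the abstract; the function and its layout are illustrative only):

        # Validity ranges quoted in the abstract: T [K], [H2SO4] [cm^-3], RH [%].
        RANGES = {
            "neutral": {"t": (165.0, 400.0), "acid": (1e4, 1e13), "rh": (0.001, 100.0)},
            "ion":     {"t": (195.0, 400.0), "acid": (1e4, 1e16), "rh": (1e-5, 100.0)},
        }

        def parameterization_applicable(mode, t, acid, rh):
            """Check whether conditions fall inside the stated validity range."""
            r = RANGES[mode]
            return (r["t"][0] <= t <= r["t"][1]
                    and r["acid"][0] <= acid <= r["acid"][1]
                    and r["rh"][0] <= rh <= r["rh"][1])

        print(parameterization_applicable("neutral", 220.0, 1e7, 30.0))  # True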

  11. Transport of Space Environment Electrons: A Simplified Rapid-Analysis Computational Procedure

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Anderson, Brooke M.; Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Chang, C. K.

    2002-01-01

    A computational procedure for describing transport of electrons in condensed media has been formulated for application to effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The procedure is based on earlier parameterizations established from numerous electron beam experiments. New parameterizations have been derived that logically extend the domain of application to low molecular weight (high hydrogen content) materials and higher energies (approximately 50 MeV). The production and transport of high energy photons (bremsstrahlung) generated in the electron transport processes have also been modeled using tabulated values of photon production cross sections. A primary purpose for developing the procedure has been to provide a means for rapidly performing numerous repetitive calculations essential for electron radiation exposure assessments for complex space structures. Several favorable comparisons have been made with previous calculations for typical space environment spectra, which have indicated that accuracy has not been substantially compromised at the expense of computational speed.

  12. A coupled two-dimensional main chain torsional potential for protein dynamics: generation and implementation.

    PubMed

    Li, Yongxiu; Gao, Ya; Zhang, Xuqiang; Wang, Xingyu; Mou, Lirong; Duan, Lili; He, Xiao; Mei, Ye; Zhang, John Z H

    2013-09-01

    Main chain torsions of alanine dipeptide are parameterized into coupled two-dimensional Fourier expansions based on quantum mechanical (QM) calculations at the M06-2X/aug-cc-pVTZ//HF/6-31G** level. Solvation effects are considered by employing the polarizable continuum model. Utilization of the M06-2X functional leads to a precise potential energy surface that is comparable to or even better than the MP2 level, but with much less computational demand. Parameterization of the 2D expansions is performed against the full main chain torsion space instead of just a few low energy conformations. This procedure is similar to that for the development of the AMBER03 force field, except that a unique weighting factor was assigned to each grid point. To avoid inconsistency between the quantum mechanical calculations and molecular modeling, the model peptide is further optimized at the molecular mechanics level with the main chain dihedral angles fixed, before the conformational energy is calculated at the molecular mechanics level at each grid point, during which the generalized Born model is employed. The difference in solvation models at the quantum mechanics and molecular mechanics levels makes this parameterization procedure less straightforward. All force field parameters other than the main chain torsions are taken from the existing AMBER force field. With the new main chain torsion terms, we have studied the main chain dihedral distributions of ALA dipeptide and pentapeptide in aqueous solution. The results demonstrate that the 2D main chain torsion is effective in delineating the energy variation associated with rotations along the main chain dihedrals. This work implies the necessity of a more accurate description of main chain torsions in the future development of ab initio force fields, and it also raises a challenge to the development of quantum mechanical methods, especially quantum mechanical solvation models.
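
    The generic form of such a coupled two-dimensional Fourier expansion in the backbone dihedrals (φ, ψ) is sketched below; the actual coefficient set and truncation order are those fitted against the QM data in the paper:

        V(\phi,\psi) = \sum_{m,n}\big[
            a_{mn}\cos m\phi\,\cos n\psi + b_{mn}\cos m\phi\,\sin n\psi
          + c_{mn}\sin m\phi\,\cos n\psi + d_{mn}\sin m\phi\,\sin n\psi \big]

    The coupling between φ and ψ enters through the cross terms, which the independent one-dimensional torsion terms of conventional force fields cannot represent.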

  13. Variability in mutational fitness effects prevents full lethal transitions in large quasispecies populations

    NASA Astrophysics Data System (ADS)

    Sardanyés, Josep; Simó, Carles; Martínez, Regina; Solé, Ricard V.; Elena, Santiago F.

    2014-04-01

    The distribution of mutational fitness effects (DMFE) is crucial to the evolutionary fate of quasispecies. In this article we analyze the effect of the DMFE on the dynamics of a large quasispecies by means of a phenotypic version of the classic Eigen model that incorporates beneficial, neutral, deleterious, and lethal mutations. By parameterizing the model with available experimental data on the DMFE of Vesicular stomatitis virus (VSV) and Tobacco etch virus (TEV), we found that an increasing mutation rate does not totally push the entire viral quasispecies towards deleterious or lethal regions of the phenotypic sequence space. The probability of finding regions in the parameter space of the general model that result in a quasispecies composed only of lethal phenotypes is extremely small, both at equilibrium and in transient times. The implications of our findings can be extended to other scenarios, such as lethal mutagenesis or genomically unstable cancer, where increased mutagenesis has been suggested as a potential therapy.

  14. Impact of Improvements in Volcanic Implementation on Atmospheric Chemistry and Climate in the GISS-E2 Model

    NASA Technical Reports Server (NTRS)

    Tsigaridis, Kostas; LeGrande, Allegra; Bauer, Susanne

    2015-01-01

    The representation of volcanic eruptions in climate models introduces some of the largest errors when evaluating historical simulations, partly due to crude model parameterizations. We will show preliminary results from the Goddard Institute for Space Studies (GISS)-E2 model comparing the traditional, highly parameterized volcanic implementation (specified aerosol optical depth and effective radius) with deploying the full aerosol microphysics module MATRIX and directly emitting SO2, allowing us to prognostically determine the chemistry and climate impact. We show a reasonable match in aerosol optical depth, effective radius, and forcing between the full aerosol implementation and reconstructions/observations of the Mt. Pinatubo 1991 eruption, with a few areas as targets for future improvement. This allows us to investigate not only the climate impact of the injection of volcanic aerosols, but also influences on regional water vapor, O3, and OH distributions. With the skill of the MATRIX volcano implementation established, we explore (1) how the height of the SO2 injection column influences atmospheric chemistry and the climate response, (2) how the initial condition of the atmosphere, with a particular focus on ENSO and the QBO, influences the climate and chemistry impact of the eruption, and (3) how the coupled chemistry could mitigate the climate signal for much larger eruptions (i.e., the 1258 eruption, reconstructed to be approximately 10x Pinatubo). During each sensitivity experiment we assess the impact on profiles of water vapor, O3, and OH, and how the eruption affects the budget of each.

  15. A Thermal Infrared Radiation Parameterization for Atmospheric Studies

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)

    2001-01-01

    This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes the absorption due to the major gaseous absorbers (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed either using the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of the high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.

  16. Temperature control simulation for a microwave transmitter cooling system. [deep space network

    NASA Technical Reports Server (NTRS)

    Yung, C. S.

    1980-01-01

    The thermal performance of a temperature control system for the antenna microwave transmitter (klystron tube) of the Deep Space Network antenna tracking system is discussed. In particular the mathematical model is presented along with the details of a computer program which is written for the system simulation and the performance parameterization. Analytical expressions are presented.

  17. Adaptive h -refinement for reduced-order models: ADAPTIVE h -refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by 'splitting' a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
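
    A minimal sketch of the offline tree construction under stated assumptions (state variables as rows of a snapshot matrix, scikit-learn's KMeans for the clustering; the data layout and node format are illustrative):

        import numpy as np
        from sklearn.cluster import KMeans

        def build_split_tree(snapshots, dof_indices=None, depth=3, k=2):
            """Recursively cluster state variables (rows = DOFs, cols = snapshots).

            Returns a nested dict; each node stores the DOF indices it covers,
            and its children partition them -- the disjoint supports used to
            split basis vectors online.
            """
            if dof_indices is None:
                dof_indices = np.arange(snapshots.shape[0])
            node = {"dofs": dof_indices, "children": []}
            if depth == 0 or len(dof_indices) < k:
                return node
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(snapshots[dof_indices])
            for c in range(k):
                child_dofs = dof_indices[labels == c]
                node["children"].append(
                    build_split_tree(snapshots, child_dofs, depth - 1, k))
            return node

        # Illustrative use: 100 degrees of freedom, 30 snapshots.
        tree = build_split_tree(np.random.rand(100, 30))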

  18. 3D models mapping optimization through an integrated parameterization approach: cases studies from Ravenna

    NASA Astrophysics Data System (ADS)

    Cipriani, L.; Fantini, F.; Bertacchi, S.

    2014-06-01

    Image-based modelling tools based on SfM algorithms have gained great popularity since several software houses provided applications able to produce 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve a better quality of textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for the achievement of a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without native parameterization have to be "flattened" or "unwrapped" in the (u,v) parameter space, with the main objective of mapping them with a single image. This result can be obtained by using two different strategies: the former automatic and faster, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, producing a sort of "atlas" of the original model in the parameter space that is in many instances not adequate and negatively affects the overall quality of representation. By using different solutions in synergy, ranging from semantic-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.

  19. Analysis of electromagnetic interference from power system processing and transmission components for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Barber, Peter W.; Demerdash, Nabeel A. O.; Hurysz, B.; Luo, Z.; Denny, Hugh W.; Millard, David P.; Herkert, R.; Wang, R.

    1992-01-01

    The goal of this research project was to analyze the potential effects of electromagnetic interference (EMI) originating from power system processing and transmission components for Space Station Freedom. The approach consists of four steps: (1) developing analytical tools (models and computer programs); (2) conducting parameterization (what if?) studies; (3) predicting the global space station EMI environment; and (4) providing a basis for modification of EMI standards.

  20. Comments on “A Unified Representation of Deep Moist Convection in Numerical Modeling of the Atmosphere. Part I”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man

    2015-06-01

    Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ ≪ 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 − σ in order to unify the parameterization for the full range of model resolutions, so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 − σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
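
    The flux form at issue can be sketched as follows (a sketch from the unified-parameterization literature rather than a quotation of AW13; σ is the convective area fraction, subscript c the convective updraft, subscript e the environment):

        \overline{w'\psi'} = \sigma(1-\sigma)\,(w_c - w_e)(\psi_c - \psi_e)

    For σ ≪ 1 this reduces to the conventional mass-flux form, which is why AW13 argue for retaining the factor 1 − σ at high resolution; the comment's point is that this factor is already implicit in the conventional derivation.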

  1. Pion and Kaon Lab Frame Differential Cross Sections for Intermediate Energy Nucleus-Nucleus Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.

    2008-01-01

    Space radiation transport codes require accurate models for hadron production in intermediate energy nucleus-nucleus collisions. Codes require cross sections to be written in terms of lab frame variables and it is important to be able to verify models against experimental data in the lab frame. Several models are compared to lab frame data. It is found that models based on algebraic parameterizations are unable to describe intermediate energy differential cross section data. However, simple thermal model parameterizations, when appropriately transformed from the center of momentum to the lab frame, are able to account for the data.

  2. Sensitivity of CONUS Summer Rainfall to the Selection of Cumulus Parameterization Schemes in NU-WRF Seasonal Simulations

    NASA Technical Reports Server (NTRS)

    Iguchi, Takamichi; Tao, Wei-Kuo; Wu, Di; Peters-Lidard, Christa; Santanello, Joseph A.; Kemp, Eric; Tian, Yudong; Case, Jonathan; Wang, Weile; Ferraro, Robert

    2017-01-01

    This study investigates the sensitivity of daily rainfall rates in regional seasonal simulations over the contiguous United States (CONUS) to different cumulus parameterization schemes. Daily rainfall fields were simulated at 24-km resolution using the NASA-Unified Weather Research and Forecasting (NU-WRF) Model for June-August 2000. Four cumulus parameterization schemes and two options for shallow cumulus components in a specific scheme were tested. The spread in the domain-mean rainfall rates across the parameterization schemes was generally consistent between the entire CONUS and most subregions. The selection of the shallow cumulus component in a specific scheme had more impact than that of the four cumulus parameterization schemes. Regional variability in the performance of each scheme was assessed by calculating optimally weighted ensembles that minimize full root-mean-square errors against reference datasets. The spatial pattern of the seasonally averaged rainfall was insensitive to the selection of cumulus parameterization over mountainous regions because of the topographical pattern constraint, so that the simulation errors were mostly attributed to the overall bias there. In contrast, the spatial patterns over the Great Plains regions as well as the temporal variation over most parts of the CONUS were relatively sensitive to cumulus parameterization selection. Overall, adopting a single simulation result was preferable to generating a better ensemble for the seasonally averaged daily rainfall simulation, as long as their overall biases had the same positive or negative sign. However, an ensemble of multiple simulation results was more effective in reducing errors in the case of also considering temporal variation.

  3. The terminal area simulation system. Volume 1: Theoretical formulation

    NASA Technical Reports Server (NTRS)

    Proctor, F. H.

    1987-01-01

    A three-dimensional numerical cloud model was developed for the general purpose of studying convective phenomena. The model utilizes a time splitting integration procedure in the numerical solution of the compressible nonhydrostatic primitive equations. Turbulence closure is achieved by a conventional first-order diagnostic approximation. Open lateral boundaries are incorporated which minimize wave reflection and which do not induce domain-wide mass trends. Microphysical processes are governed by prognostic equations for potential temperature, water vapor, cloud droplets, ice crystals, rain, snow, and hail. Microphysical interactions are computed by numerous Orville-type parameterizations. A diagnostic surface boundary layer is parameterized assuming Monin-Obukhov similarity theory. The governing equation set is approximated on a staggered three-dimensional grid with quadratic-conservative central space differencing. Time differencing is approximated by the second-order Adams-Bashforth method. The vertical grid spacing may be either linear or stretched. The model domain may translate along with a convective cell, even at variable speeds.
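
    For reference, the second-order Adams-Bashforth time differencing mentioned above is the standard two-level explicit formula:

        u^{n+1} = u^{n} + \frac{\Delta t}{2}\left(3\,f(u^{n}) - f(u^{n-1})\right)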

  4. The QBO in Two GISS Global Climate Models: 1. Generation of the QBO

    NASA Technical Reports Server (NTRS)

    Rind, David; Jonas, Jeffrey A.; Balachandra, Nambath; Schmidt, Gavin A.; Lean, Judith

    2014-01-01

    The adjustment of parameterized gravity waves associated with model convection and finer vertical resolution has made possible the generation of the quasi-biennial oscillation (QBO) in two Goddard Institute for Space Studies (GISS) models, GISS Middle Atmosphere Global Climate Model III and a climate/middle atmosphere version of Model E2. Both extend from the surface to 0.002 hPa, with 2° × 2.5° resolution and 102 layers. Many realistic features of the QBO are simulated, including magnitude and variability of its period and amplitude. The period itself is affected by the magnitude of parameterized convective gravity wave momentum fluxes and interactive ozone (which also affects the QBO amplitude and variability), among other forcings. Although varying sea surface temperatures affect the parameterized momentum fluxes, neither aspect is responsible for the modeled variation in QBO period. Both the parameterized and resolved waves act to produce the respective easterly and westerly wind descent, although their effect is offset in altitude at each level. The modeled and observed QBO influences on tracers in the stratosphere, such as ozone, methane, and water vapor are also discussed. Due to the link between the gravity wave parameterization and the models' convection, and the dependence on the ozone field, the models may also be used to investigate how the QBO may vary with climate change.

  5. Quantum mechanics on space with SU(2) fuzziness

    NASA Astrophysics Data System (ADS)

    Fatollahi, Amir H.; Shariati, Ahmad; Khorrami, Mohammad

    2009-04-01

    Quantum mechanics is considered for models constructed in spaces with Lie-algebra-type commutation relations between the spatial coordinates. The case is specialized to that of the group SU(2), for which the formulation of the problem via the Euler parameterization is also presented. SU(2)-invariant systems are discussed, and the corresponding eigenvalue problem for the Hamiltonian is reduced to an ordinary differential equation, as is the case with such models on commutative spaces.

  6. The Holographic Electron Density Theorem, de-quantization, re-quantization, and nuclear charge space extrapolations of the Universal Molecule Model

    NASA Astrophysics Data System (ADS)

    Mezey, Paul G.

    2017-11-01

    Two strongly related theorems on non-degenerate ground state electron densities serve as the basis of "Molecular Informatics". The Hohenberg-Kohn theorem is a statement on global molecular information, ensuring that the complete electron density contains the complete molecular information. However, the Holographic Electron Density Theorem states more: the local information present in each and every positive volume density fragment is already complete: the information in the fragment is equivalent to the complete molecular information. In other words, the complete molecular information provided by the Hohenberg-Kohn Theorem is already provided, in full, by any positive volume, otherwise arbitrarily small electron density fragment. In this contribution some of the consequences of the Holographic Electron Density Theorem are discussed within the framework of the "Nuclear Charge Space" and the Universal Molecule Model. In the Nuclear Charge Space, the nuclear charges are regarded as continuous variables, and in the more general Universal Molecule Model some other quantized parameters are also allowed to become "de-quantized" and then "re-quantized", leading to interrelations among real molecules through abstract molecules. Here the specific role of the Holographic Electron Density Theorem is discussed within the above context.

  7. Non-perturbational surface-wave inversion: A Dix-type relation for surface waves

    USGS Publications Warehouse

    Haney, Matt; Tsai, Victor C.

    2015-01-01

    We extend the approach underlying the well-known Dix equation in reflection seismology to surface waves. Within the context of surface wave inversion, the Dix-type relation we derive for surface waves allows accurate depth profiles of shear-wave velocity to be constructed directly from phase velocity data, in contrast to perturbational methods. The depth profiles can subsequently be used as an initial model for nonlinear inversion. We provide examples of the Dix-type relation for under-parameterized and over-parameterized cases. In the under-parameterized case, we use the theory to estimate crustal thickness, crustal shear-wave velocity, and mantle shear-wave velocity across the Western U.S. from phase velocity maps measured at 8-, 20-, and 40-s periods. By adopting a thin-layer formalism and an over-parameterized model, we show how a regularized inversion based on the Dix-type relation yields smooth depth profiles of shear-wave velocity. In the process, we quantitatively demonstrate the depth sensitivity of surface-wave phase velocity as a function of frequency and the accuracy of the Dix-type relation. We apply the over-parameterized approach to a near-surface data set within the frequency band from 5 to 40 Hz and find overall agreement between the inverted model and the result of full nonlinear inversion.
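
    For context, the classical Dix equation being generalized converts rms velocities measured at two zero-offset traveltimes into the interval velocity between the corresponding reflectors:

        v_{\mathrm{int}}^{2}
          = \frac{t_{2}\,v_{\mathrm{rms},2}^{2} - t_{1}\,v_{\mathrm{rms},1}^{2}}
                 {t_{2}-t_{1}}

    The surface-wave analogue derived in the paper plays the same role, mapping phase-velocity measurements at different periods directly onto shear-wave velocity over depth intervals.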

  8. Classical dynamics on curved Snyder space

    NASA Astrophysics Data System (ADS)

    Ivetić, B.; Meljanac, S.; Mignemi, S.

    2014-05-01

    We study the classical dynamics of a particle in nonrelativistic Snyder-de Sitter space. We show that for spherically symmetric systems, parameterizing the solutions in terms of an auxiliary time variable, which is a function only of the physical time and of the energy and angular momentum of the particles, one can reduce the problem to the equivalent one in classical mechanics. We also discuss a relativistic extension of these results, and a generalization to the case in which the algebra is realized in flat space.

  9. Numerical Study of the Role of Shallow Convection in Moisture Transport and Climate

    NASA Technical Reports Server (NTRS)

    Seaman, Nelson L.; Stauffer, David R.; Munoz, Ricardo C.

    2001-01-01

    The objective of this investigation was to study the role of shallow convection in the regional water cycle of the Mississippi and Little Washita Basins of the Southern Great Plains (SGP) using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. At the beginning of the study, it was hypothesized that an improved treatment of the regional water cycle could be achieved by using a 3-D mesoscale numerical model having high-quality parameterizations for the key physical processes controlling the water cycle. These included a detailed land-surface parameterization (the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) sub-model of Wetzel and Boone), an advanced boundary-layer parameterization (the 1.5-order turbulent kinetic energy (TKE) predictive scheme of Shafran et al.), and a more complete shallow convection parameterization (the hybrid-closure scheme of Deng et al.) than are available in most current models. PLACE is a product of researchers working at NASA's Goddard Space Flight Center in Greenbelt, MD. The TKE and shallow-convection schemes are the result of model development at Penn State. The long-range goal is to develop an integrated suite of physical sub-models that can be used for regional and perhaps global climate studies of the water budget. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the SGP. These schemes have been tested extensively through the course of this study and the latter two have been improved significantly as a consequence.

  10. Parameter estimation uncertainty: Comparing apples and apples?

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2012-12-01

    Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
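
    A minimal numerical sketch of the null-space Monte Carlo idea under stated assumptions (synthetic Jacobian, Gaussian perturbations): split parameter space with an SVD of the sensitivity matrix, then perturb the calibrated parameters only within the null space so the fit to the data is preserved to first order:

        import numpy as np

        rng = np.random.default_rng(1)
        n_obs, n_par, n_solution = 20, 50, 8   # highly parameterized: n_par > n_obs

        jac = rng.normal(size=(n_obs, n_par))  # sensitivity (Jacobian) at calibration
        p_cal = rng.normal(size=n_par)         # calibrated parameter set

        # The SVD splits parameter space: leading right-singular vectors span the
        # solution space; the remaining vectors span the (approximate) null space.
        _, _, vt = np.linalg.svd(jac)
        v_null = vt[n_solution:].T             # (n_par, n_par - n_solution)

        def nsmc_sample(scale=1.0):
            """One calibration-constrained sample: p_cal plus a null-space move."""
            xi = rng.normal(scale=scale, size=v_null.shape[1])
            return p_cal + v_null @ xi

        samples = np.array([nsmc_sample() for _ in range(100)])
        # To first order, jac @ (sample - p_cal) stays small for every sample.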

  11. The relationship between a deformation-based eddy parameterization and the LANS-α turbulence model

    NASA Astrophysics Data System (ADS)

    Bachman, Scott D.; Anstey, James A.; Zanna, Laure

    2018-06-01

    A recent class of ocean eddy parameterizations proposed by Porta Mana and Zanna (2014) and Anstey and Zanna (2017) modeled the large-scale flow as a non-Newtonian fluid whose subgridscale eddy stress is a nonlinear function of the deformation. This idea, while largely new to ocean modeling, has a history in turbulence modeling dating at least back to Rivlin (1957). The new class of parameterizations results in equations that resemble the Lagrangian-averaged Navier-Stokes-α model (LANS-α, e.g., Holm et al., 1998a). In this note we employ basic tensor mathematics to highlight the similarities between these turbulence models using component-free notation. We extend the Anstey and Zanna (2017) parameterization, which was originally presented in 2D, to 3D, and derive variants of this closure that arise when the full non-Newtonian stress tensor is used. Despite the mathematical similarities between the non-Newtonian and LANS-α models which might provide insight into numerical implementation, the input and dissipation of kinetic energy between these two turbulent models differ.

  12. Reinforced dynamics for enhanced sampling in large atomic and molecular systems

    NASA Astrophysics Data System (ADS)

    Zhang, Linfeng; Wang, Han; E, Weinan

    2018-03-01

    A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.

  13. Gravity Waves Generated by Convection: A New Idealized Model Tool and Direct Validation with Satellite Observations

    NASA Astrophysics Data System (ADS)

    Alexander, M. Joan; Stephan, Claudia

    2015-04-01

    In climate models, gravity waves remain too poorly resolved to be directly modelled. Instead, simplified parameterizations are used to include gravity wave effects on model winds. A few climate models link some of the parameterized waves to convective sources, providing a mechanism for feedback between changes in convection and gravity wave-driven changes in circulation in the tropics and above high-latitude storms. These convective wave parameterizations are based on limited case studies with cloud-resolving models, but they are poorly constrained by observational validation, and tuning parameters have large uncertainties. Our new work distills results from complex, full-physics cloud-resolving model studies to essential variables for gravity wave generation. We use the Weather Research and Forecasting (WRF) model to study relationships between precipitation, latent heating/cooling and other cloud properties and the spectrum of gravity wave momentum flux above midlatitude storm systems. Results show the gravity wave spectrum is surprisingly insensitive to the representation of microphysics in WRF. This is good news for use of these models for gravity wave parameterization development since microphysical properties are a key uncertainty. We further use the full-physics cloud-resolving model as a tool to directly link observed precipitation variability to gravity wave generation. We show that waves in an idealized model forced with radar-observed precipitation can quantitatively reproduce instantaneous satellite-observed features of the gravity wave field above storms, which is a powerful validation of our understanding of waves generated by convection. The idealized model directly links observations of surface precipitation to observed waves in the stratosphere, and the simplicity of the model permits deep/large-area domains for studies of wave-mean flow interactions. This unique validated model tool permits quantitative studies of gravity wave driving of regional circulation and provides a new method for future development of realistic convective gravity wave parameterizations.

  14. Investigation of nuclear structure of 30-44S isotopes using spherical and deformed Skyrme-Hartree-Fock method

    NASA Astrophysics Data System (ADS)

    Alzubadi, A. A.

    2015-06-01

    The nuclear many-body system is usually described by a mean field built upon a nucleon-nucleon effective interaction. In this work, we investigate the ground-state properties of the sulfur isotopes covering a wide range from the line of stability up to the dripline region (30-44S). For this purpose the Hartree-Fock mean-field theory in coordinate space with the Skyrme parameterization SkM* has been utilized. In particular, we calculate the nuclear charge, neutron, proton and mass densities, the associated radii, the neutron skin thickness and the binding energy. The charge form factors have also been investigated using the SkM*, SkO, SkE, SLy4 and Skxs15 Skyrme parameterizations, and the results obtained using the theoretical approach are compared with the available experimental data. To investigate the potential energy surface as a function of the quadrupole deformation for the sulfur isotopic chain, Skyrme-Hartree-Fock-Bogoliubov theory has been adopted with the SLy4 parameterization.

  15. Payload design requirements analysis (study 2.2). Volume 3. Guideline analysis. [economic analysis of payloads for space shuttles and space tugs

    NASA Technical Reports Server (NTRS)

    Shiokari, T.

    1973-01-01

    Payloads to be launched on the space shuttle/space tug/sortie lab combinations are discussed. The payloads are of four types: (1) expendable, (2) ground refurbishable, (3) on-orbit maintainable, and (4) sortie. Economic comparisons are limited to the four types of payloads described. Additional system guidelines were developed by analyzing two payloads parametrically and demonstrating the results on an example satellite. In addition to analyzing the selected guidelines, emphasis was placed on providing economic tradeoff data and identifying payload parameters influencing the low cost approaches.

  16. Towards a more efficient and robust representation of subsurface hydrological processes in Earth System Models

    NASA Astrophysics Data System (ADS)

    Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.

    2017-12-01

    Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales. Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs generally lack the representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers). The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models. However, such parameterizations are often computationally demanding, resulting in limited application for large/global-scale studies. Here, we take a different approach in developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a very robust and established three-dimensional hydrological model to develop a simpler parameterization that represents the aquifer to land surface interactions. The main goal of our developed parameterization is to simultaneously maximize the computational gain (i.e., "efficiency") while minimizing simulation errors in comparison to the full 3D model (i.e., "robustness"), to allow for easy implementation in ESMs globally. Our study focuses primarily on understanding the dynamics of both groundwater recharge and discharge. Preliminary results show that our proposed approach significantly reduces the computational demand, while model deviations from the full 3D model remain small for these processes.

  17. Triple collocation based merging of satellite soil moisture retrievals

    USDA-ARS?s Scientific Manuscript database

    We propose a method for merging soil moisture retrievals from space borne active and passive microwave instruments based on weighted averaging taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...

  18. On the Relationship between Observed NLDN Lightning ...

    EPA Pesticide Factsheets

    Lightning-produced nitrogen oxides (NOX = NO + NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past decade, considerable uncertainties still exist in the quantification of lightning NOX production and distribution in the troposphere. It is even more challenging for regional chemistry and transport models to accurately parameterize lightning NOX production and distribution in time and space. The Community Multiscale Air Quality Model (CMAQ) parameterizes lightning NO emissions using local scaling factors adjusted by the convective precipitation rate predicted by the upstream meteorological model; the adjustment is based on observed lightning strikes from the National Lightning Detection Network (NLDN). For this parameterization to be valid, an a priori reasonable relationship between the observed lightning strikes and the modeled convective precipitation rates must exist. In this study, we will present an analysis leveraging the observed NLDN lightning strikes and CMAQ model simulations over the continental United States for a time period spanning more than a decade. Based on the analysis, a new parameterization scheme for lightning NOX will be proposed and the results will be evaluated. The proposed scheme will be beneficial to modeling exercises where the obs
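
    To make the scaling idea above concrete, here is a minimal Python sketch (not the CMAQ source): a single ratio k linking observed NLDN flash counts to modeled convective precipitation is fitted from historical fields and then used to convert predicted convective precipitation into lightning NO emissions. The function names, the least-squares fit through the origin, and the 250 mol-NO-per-flash figure are illustrative assumptions, not values from the study.

      import numpy as np

      def fit_strike_precip_ratio(precip_hist, strikes_hist):
          # Least-squares slope through the origin: flashes ~ k * convective precip.
          p, s = precip_hist.ravel(), strikes_hist.ravel()
          return float(p @ s) / float(p @ p)

      def lightning_no(conv_precip, k, moles_no_per_flash=250.0):
          # Convert modeled convective precipitation (mm/h) to lightning NO
          # (mol per grid cell per hour) via the observation-based factor k.
          return moles_no_per_flash * k * conv_precip

      # Hypothetical 2x2 historical fields used to calibrate k:
      precip = np.array([[0.0, 2.0], [5.0, 1.0]])
      flashes = np.array([[0.0, 4.0], [10.0, 1.0]])
      k = fit_strike_precip_ratio(precip, flashes)
      print(lightning_no(precip, k))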

  19. Standardizing Navigation Data: A Status Update

    NASA Technical Reports Server (NTRS)

    VanEepoel, John M.; Berry, David S.; Pallaschke, Siegmar; Foliard, Jacques; Kiehling, Reinhard; Ogawa, Mina; Showell, Avanaugh; Fertig, Juergen; Castronuovo, Marco

    2007-01-01

    This paper presents the work of the Navigation Working Group of the Consultative Committee for Space Data Systems (CCSDS) on the development of standards addressing the transfer of orbit, attitude and tracking data for space objects. Much progress has been made since the initial presentation of the standards in 2004, including the progression of the orbit data standard to an accepted standard, and the near completion of the attitude and tracking data standards. The orbit, attitude and tracking standards attempt to address the predominant parameterizations for their respective data, and create a message format that enables communication of the data across space agencies and other entities. The messages detailed in each standard are built upon a keyword = value paradigm: the standard provides a fixed list of keywords with which users specify information about their data, and keywords also encapsulate the data itself. The paper presents a primer on the CCSDS standardization process to put the state of the message standards in context, describes the parameterizations supported in each standard, and then shows examples of these standards for orbit, attitude and tracking data. Finalization of the standards is expected by the end of calendar year 2007.
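
    The keyword = value layout lends itself to very simple tooling. Below is a minimal Python sketch of a reader for such a message; the sample keywords are loosely modeled on the orbit-message flavor of the standard and should be treated as illustrative rather than as the normative set.

      SAMPLE = """\
      CCSDS_OPM_VERS = 2.0
      OBJECT_NAME    = SAMPLESAT
      EPOCH          = 2007-03-01T12:00:00
      X              = 6503.514
      Y              = 1239.647
      Z              = -717.490
      """

      def parse_kvn(text):
          """Parse keyword = value lines into a dict, skipping blanks and comments."""
          fields = {}
          for line in text.splitlines():
              line = line.strip()
              if not line or line.startswith("COMMENT"):
                  continue
              key, _, value = line.partition("=")
              fields[key.strip()] = value.strip()
          return fields

      msg = parse_kvn(SAMPLE)
      print(msg["OBJECT_NAME"], msg["EPOCH"])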

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.

    Regional cloud-permitting model simulations of cloud populations observed during the 2011 ARM Madden-Julian Oscillation Investigation Experiment / Dynamics of the Madden-Julian Oscillation (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to parameters and parameterizations of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates rain rate from large and deep convective cores. Sensitivity runs involving variation of parameters that affect the rain drop or ice particle size distribution (e.g., a more aggressive break-up process) generally reduce the bias in rain-rate and boundary layer temperature statistics as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while it is worsened when run at 4 km grid spacing as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through parameterization of turbulent mixing and break-up of hydrometeors, may provide a potential avenue for correcting cloud statistics and associated boundary layer temperature biases in regional and global cloud-permitting model simulations.

  1. Probability of satellite collision

    NASA Technical Reports Server (NTRS)

    Mccarter, J. W.

    1972-01-01

    A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.
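
    The dependence on miss distance and mission duration can be illustrated with the simplest Poisson encounter model, in which collisions are treated as independent random events. This sketch is a generic textbook formulation, not the specific technique of the report, and the numbers are hypothetical.

      import math

      def collision_probability(flux, cross_section_km2, years):
          """P = 1 - exp(-flux * area * time) for a Poisson encounter model.

          flux -- object flux through the orbit regime (objects / km^2 / yr)
          """
          return 1.0 - math.exp(-flux * cross_section_km2 * years)

      # e.g., an assumed 1e-6 objects/km^2/yr flux, 0.01 km^2 station
      # cross section, and a 10-year mission:
      print(collision_probability(1e-6, 0.01, 10.0))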

  2. Pion Total Cross Section in Nucleon - Nucleon Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.

    2009-01-01

    Total cross section parameterizations for neutral and charged pion production in nucleon - nucleon collisions are compared to experimental data over the projectile momentum range from threshold to 300 GeV. Both proton - proton and proton - neutron reactions are considered. Overall excellent agreement between parameterizations and experiment is found, except for notable disagreements near threshold. In addition, the hypothesis that the neutral pion production cross section can be obtained from the average charged pion cross section is checked. The theoretical formulas presented in the paper obey this hypothesis for projectile momenta below 500 GeV. The results presented provide a test of engineering tools used to calculate the pion component of space radiation.
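
    The averaging hypothesis mentioned above is easy to state in code. The sketch below simply forms the charged-pion average and reports it as the neutral-pion prediction; the cross-section values used are made-up placeholders, not data from the paper.

      def neutral_from_charged(sigma_pip, sigma_pim):
          # Hypothesis tested in the paper:
          # sigma(pi0) ~ (sigma(pi+) + sigma(pi-)) / 2.
          return 0.5 * (sigma_pip + sigma_pim)

      # Placeholder values in millibarns at a single projectile momentum:
      sigma_pi0_pred = neutral_from_charged(20.0, 14.0)
      print(f"predicted sigma(pi0) = {sigma_pi0_pred:.1f} mb")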

  3. A Comparison of Perturbed Initial Conditions and Multiphysics Ensembles in a Severe Weather Episode in Spain

    NASA Technical Reports Server (NTRS)

    Tapiador, Francisco; Tao, Wei-Kuo; Angelis, Carlos F.; Martinez, Miguel A.; Cecilia Marcos; Antonio Rodriguez; Hou, Arthur; Jong Shi, Jain

    2012-01-01

    Ensembles of numerical model forecasts are of interest to operational early warning forecasters as the spread of the ensemble provides an indication of the uncertainty of the alerts, and the mean value is deemed to outperform the forecasts of the individual models. This paper explores two ensembles on a severe weather episode in Spain, aiming to ascertain the relative usefulness of each one. One ensemble uses sensible choices of physical parameterizations (precipitation microphysics, land surface physics, and cumulus physics) while the other follows a perturbed initial conditions approach. The results show that, depending on the parameterizations, large differences can be expected in terms of storm location, spatial structure of the precipitation field, and rain intensity. It is also found that the spread of the perturbed initial conditions ensemble is smaller than the dispersion due to physical parameterizations. This confirms that in severe weather situations operational forecasts should address moist physics deficiencies to realize the full benefits of the ensemble approach, in addition to optimizing initial conditions. The results also provide insights into differences in simulations arising from ensembles of weather models using several combinations of different physical parameterizations.

  4. Data Automata in Scala

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2014-01-01

    The field of runtime verification has during the last decade seen a multitude of systems for monitoring event sequences (traces) emitted by a running system. The objective is to ensure correctness of a system by checking its execution traces against formal specifications representing requirements. A special challenge is data-parameterized events, where monitors have to keep track of the combination of control states as well as data constraints, relating events and the data they carry across time points. This poses a challenge with respect to the efficiency of monitors, as well as the expressiveness of logics. Data automata are a form of automaton whose states are parameterized with data, supporting monitoring of data-parameterized events. We describe the full details of a very simple API in the Scala programming language, an internal DSL (Domain-Specific Language) implementing data automata. The small implementation suggests a design pattern. Data automata allow transition conditions to refer to states other than the source state, and allow target states of transitions to be inlined, offering a temporal-logic-flavored notation. Embedding the logic in a high-level language like Scala additionally allows monitors to be programmed using all of Scala's language constructs, offering the full flexibility of a programming language. The framework is demonstrated on an XML processing scenario previously addressed in related work.
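
    The paper's DSL is in Scala; as a language-neutral illustration of the underlying idea, the following Python sketch keeps one data-parameterized state per datum, so a single monitor tracks an open/close discipline for every resource seen in the trace. The event names and the property checked are invented for the example.

      class OpenCloseMonitor:
          """Checks that every resource that is opened is eventually closed."""
          def __init__(self):
              self.open_resources = set()   # data-parameterized "open" states
              self.errors = []

          def event(self, name, datum):
              if name == "open":
                  self.open_resources.add(datum)
              elif name == "close":
                  if datum not in self.open_resources:
                      self.errors.append(f"close without open: {datum}")
                  self.open_resources.discard(datum)

          def end(self):
              self.errors += [f"never closed: {d}"
                              for d in sorted(self.open_resources)]
              return self.errors

      m = OpenCloseMonitor()
      for e in [("open", "a.txt"), ("open", "b.txt"), ("close", "a.txt")]:
          m.event(*e)
      print(m.end())  # -> ['never closed: b.txt']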

  5. Universal approximators for multi-objective direct policy search in water reservoir management problems: a comparative analysis

    NASA Astrophysics Data System (ADS)

    Giuliani, Matteo; Mason, Emanuele; Castelletti, Andrea; Pianosi, Francesca

    2014-05-01

    The optimal operation of water resources systems is a wide and challenging problem due to non-linearities in the model and the objectives, a high-dimensional state-control space, and strong uncertainties in the hydroclimatic regimes. The application of classical optimization techniques (e.g., SDP, Q-learning, gradient descent-based algorithms) is strongly limited by the dimensionality of the system and by the presence of multiple, conflicting objectives. This study presents a novel approach which combines Direct Policy Search (DPS) and Multi-Objective Evolutionary Algorithms (MOEAs) to solve high-dimensional state and control space problems involving multiple objectives. DPS, also known as parameterization-simulation-optimization in the water resources literature, is a simulation-based approach where the reservoir operating policy is first parameterized within a given family of functions and the parameters are then optimized with respect to the objectives of the management problem. The selection of a suitable class of functions to which the operating policy belongs is a key step, as it might restrict the search for the optimal policy to a subspace of the decision space that does not include the optimal solution. In the water reservoir literature, a number of classes have been proposed. However, many of these rules are based largely on empirical or experimental successes, and they were designed mostly via simulation and for single-purpose reservoirs. In a multi-objective context similar rules cannot easily be inferred from experience, and the use of universal function approximators is generally preferred. In this work, we comparatively analyze two of the most common universal approximators, artificial neural networks (ANN) and radial basis functions (RBF), under different problem settings to estimate their scalability and flexibility in dealing with increasingly complex problems. The multi-purpose Hoa Binh water reservoir in Vietnam, accounting for hydropower production and flood control, is used as a case study. Preliminary results show that the RBF policy parameterization is more effective than the ANN one. In particular, the approximated Pareto front obtained with RBF control policies successfully explores the full tradeoff space between the two conflicting objectives, while most of the ANN solutions turn out to be Pareto-dominated by the RBF ones.
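
    As a concrete picture of the RBF policy parameterization discussed above, the sketch below evaluates a release decision as a weighted sum of Gaussian radial basis functions of the system state. In the actual approach the centers, radii and weights are the decision variables optimized by the MOEA; here they are random placeholders.

      import numpy as np

      def rbf_policy(x, centers, radii, weights):
          """Release decision as a weighted sum of Gaussian radial basis functions.

          x       -- system state (e.g., storage, day of year), shape (d,)
          centers -- RBF centers, shape (n, d)
          radii   -- RBF radii, shape (n, d)
          weights -- output weights, shape (n,)
          """
          z = ((x - centers) / radii) ** 2      # (n, d) squared scaled distances
          phi = np.exp(-z.sum(axis=1))          # (n,) basis activations
          return float(weights @ phi)           # scalar release decision

      rng = np.random.default_rng(0)
      u = rbf_policy(np.array([0.6, 0.3]),
                     rng.random((4, 2)),
                     0.5 + rng.random((4, 2)),
                     rng.random(4))
      print(u)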

  6. The sensitivity of Alpine summer convection to surrogate climate change: an intercomparison between convection-parameterizing and convection-resolving models

    NASA Astrophysics Data System (ADS)

    Keller, Michael; Kröner, Nico; Fuhrer, Oliver; Lüthi, Daniel; Schmidli, Juerg; Stengel, Martin; Stöckli, Reto; Schär, Christoph

    2018-04-01

    Climate models project an increase in heavy precipitation events in response to greenhouse gas forcing. Important elements of such events are rain showers and thunderstorms, which are poorly represented in models with parameterized convection. In this study, simulations with 12 km horizontal grid spacing (convection-parameterizing model, CPM) and 2 km grid spacing (convection-resolving model, CRM) are employed to investigate the change in the diurnal cycle of convection with warmer climate. For this purpose, simulations of 11 days in June 2007 with a pronounced diurnal cycle of convection are compared with surrogate simulations from the same period. The surrogate climate simulations mimic a future climate with increased temperatures but unchanged relative humidity and similar synoptic-scale circulation. Two temperature scenarios are compared: one with homogeneous warming (HW) using a vertically uniform warming and the other with vertically dependent warming (VW) that enables changes in lapse rate. The two sets of simulations with parameterized and explicit convection exhibit substantial differences, some of which are well known from the literature. These include differences in the timing and amplitude of the diurnal cycle of convection, and the frequency of precipitation with low intensities. The response to climate change is much less studied. We can show that stratification changes have a strong influence on the changes in convection. Precipitation is strongly increasing for HW but decreasing for the VW simulations. For cloud type frequencies, virtually no changes are found for HW, but a substantial reduction in high clouds is found for VW. Further, we can show that the climate change signal strongly depends upon the horizontal resolution. In particular, significant differences between CPM and CRM are found in terms of the radiative feedbacks, with CRM exhibiting a stronger negative feedback in the top-of-the-atmosphere energy budget.

  7. Dependence of radiation belt simulations to assumed radial diffusion rates tested for two empirical models of radial transport

    NASA Astrophysics Data System (ADS)

    Drozdov, Alexander; Shprits, Yuri; Aseev, Nikita; Kellerman, Adam; Reeves, Geoffrey

    2017-04-01

    Radial diffusion is one of the dominant physical mechanisms that drive acceleration and loss of radiation belt electrons, which makes it very important for nowcasting and forecasting space weather models. We investigate the sensitivity of long-term radiation belt modeling with the Versatile Electron Radiation Belt (VERB) code to the two parameterizations of radial diffusion of Brautigam and Albert [2000] and Ozeke et al. [2014]. Following Brautigam and Albert [2000] and Ozeke et al. [2014], we first perform 1-D radial diffusion simulations. Comparison of the simulation results with observations shows that the difference between simulations with either radial diffusion parameterization is small. To take into account the effects of local acceleration and loss, we perform 3-D simulations, including pitch-angle, energy and mixed diffusion. We found that the results of 3-D simulations are even less sensitive to the choice of radial diffusion parameterization than the results of 1-D simulations at various energies (from 0.59 to 1.80 MeV). This demonstrates that the inclusion of local acceleration and pitch-angle diffusion can provide a negative feedback effect, such that simulations conducted with different radial diffusion parameterizations are largely indistinguishable. We also perform a number of sensitivity tests by multiplying the radial diffusion rates by constant factors and show that such an approach leads to unrealistic predictions of radiation belt dynamics. References: Brautigam, D. H., and J. M. Albert (2000), Radial diffusion analysis of outer radiation belt electrons during the October 9, 1990, magnetic storm, J. Geophys. Res., 105(A1), 291-309, doi:10.1029/1999ja900344. Ozeke, L. G., I. R. Mann, K. R. Murphy, I. Jonathan Rae, and D. K. Milling (2014), Analytic expressions for ULF wave radiation belt radial diffusion coefficients, J. Geophys. Res. [Space Phys.], 119(3), 1587-1605, doi:10.1002/2013JA019204.
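
    A minimal numerical illustration of 1-D radial diffusion follows, using a Kp-parameterized diffusion coefficient of the Brautigam and Albert [2000] form (the constants are quoted from memory here and should be checked against the original paper). The grid, time step, initial profile and fixed boundary values are arbitrary choices for the sketch, not the VERB configuration.

      import numpy as np

      def dll_ba2000(L, Kp):
          # Magnetic radial diffusion coefficient (1/day) in the Brautigam and
          # Albert [2000] form; constants are indicative, not authoritative.
          return 10.0 ** (0.506 * Kp - 9.325) * L ** 10

      def step_radial_diffusion(f, L, Kp, dt):
          """One explicit step of df/dt = L^2 d/dL ( D_LL / L^2 * df/dL )."""
          dL = L[1] - L[0]
          D = dll_ba2000(L, Kp)
          # D / L^2 averaged onto the interior cell faces:
          Dm = 0.5 * (D[1:] + D[:-1]) / (0.5 * (L[1:] + L[:-1])) ** 2
          flux = Dm * np.diff(f) / dL
          fnew = f.copy()
          fnew[1:-1] += dt * L[1:-1] ** 2 * np.diff(flux) / dL
          return fnew  # boundary values held fixed

      L = np.linspace(2.0, 6.5, 46)
      f = np.exp(-((L - 4.5) / 0.5) ** 2)   # arbitrary initial PSD profile
      for _ in range(1000):
          f = step_radial_diffusion(f, L, Kp=4.0, dt=1e-5)  # dt in days
      print(f.max())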

  8. Coarse graining of entanglement classes in 2 × m × n systems

    NASA Astrophysics Data System (ADS)

    Hebenstreit, M.; Gachechiladze, M.; Gühne, O.; Kraus, B.

    2018-03-01

    We consider three-partite pure states in the Hilbert space C^2 ⊗ C^m ⊗ C^n and investigate to which states a given state can be locally transformed with a nonvanishing probability. Whenever the initial and final states are elements of the same Hilbert space, the problem can be solved via the characterization of the entanglement classes which are determined via stochastic local operations and classical communication (SLOCC). In the particular case considered here, matrix pencil theory can be utilized to address this point. In general, there are infinitely many SLOCC classes. However, when considering transformations from higher- to lower-dimensional Hilbert spaces, an additional hierarchy among the classes can be found. This hierarchy of SLOCC classes coarse grains SLOCC classes which can be reached from a common resource state of higher dimension. We first show that a generic set of states in C^2 ⊗ C^m ⊗ C^n for n = m is the union of infinitely many SLOCC classes, which can be parameterized by m − 3 parameters. However, for n ≠ m there exists a single SLOCC class which is generic. Using this result, we then show that there is a full-measure set of states in C^2 ⊗ C^m ⊗ C^n such that any state within this set can be transformed locally to a full-measure set of states in any lower-dimensional Hilbert space. We also investigate resource states, which can be transformed to any state (not excluding any zero-measure set) in the smaller-dimensional Hilbert space. We explicitly derive a state in C^2 ⊗ C^m ⊗ C^(2m−2) which is the optimal common resource of all states in C^2 ⊗ C^m ⊗ C^m. We also show that for any n < 2m it is impossible to reach all states in C^2 ⊗ C^m ⊗ C^ñ whenever ñ > m.

  9. New Approaches to Parameterizing Convection

    NASA Technical Reports Server (NTRS)

    Randall, David A.; Lappen, Cara-Lyn

    1999-01-01

    Many general circulation models (GCMs) currently use separate schemes for planetary boundary layer (PBL) processes, shallow and deep cumulus (Cu) convection, and stratiform clouds. The conventional distinctions among these processes are somewhat arbitrary. For example, in the stratocumulus-to-cumulus transition region, stratocumulus clouds break up into a combination of shallow cumulus and broken stratocumulus. Shallow cumulus clouds may be considered to reside completely within the PBL, or they may be regarded as starting in the PBL but terminating above it. Deeper cumulus clouds often originate within the PBL but can also originate aloft. To the extent that our models separately parameterize physical processes which interact strongly on small space and time scales, the currently fashionable practice of modularization may be doing more harm than good.

  10. Betatron motion with coupling of horizontal and vertical degrees of freedom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lebedev, V.A.; /Fermilab; Bogacz, S.A.

    Presently, two parameterizations of linear x-y coupled motion are most frequently used in accelerator physics: the Edwards-Teng and Mais-Ripken parameterizations. This article is devoted to an analysis of the close relationship between the two representations, thus adding clarity to their physical meaning. It also discusses the relationship between the eigenvectors, the beta-functions, the second-order moments and the bilinear form representing the particle ellipsoid in the 4D phase space. It then considers a further development of the Mais-Ripken parameterization in which the particle motion is described by 10 parameters: four beta-functions, four alpha-functions and two betatron phase advances. In comparison with the Edwards-Teng parameterization, the chosen parameterization has the advantage that it works equally well for the analysis of coupled betatron motion in circular accelerators and in transfer lines. The relationships considered between second-order moments, eigenvectors and beta-functions can be useful in interpreting tracking results and experimental data. As an example, the developed formalism is applied to the FNAL electron cooler and Derbenev's vertex-to-plane adapter.

  11. Matrix Transfer Function Design for Flexible Structures: An Application

    NASA Technical Reports Server (NTRS)

    Brennan, T. J.; Compito, A. V.; Doran, A. L.; Gustafson, C. L.; Wong, C. L.

    1985-01-01

    The application of matrix transfer function design techniques to the problem of disturbance rejection on a flexible space structure is demonstrated. The design approach is based on parameterizing a class of stabilizing compensators for the plant and formulating the design specifications as a constrained minimization problem in terms of these parameters. The solution yields a matrix transfer function representation of the compensator. A state space realization of the compensator is constructed to investigate performance and stability on the nominal and perturbed models. The application is made to the ACOSSA (Active Control of Space Structures) optical structure.

  12. V and V Efforts of Auroral Precipitation Models: Preliminary Results

    NASA Technical Reports Server (NTRS)

    Zheng, Yihua; Kuznetsova, Masha; Rastaetter, Lutz; Hesse, Michael

    2011-01-01

    Auroral precipitation models have been valuable both in terms of space weather applications and space science research. Yet very limited testing has been performed regarding model performance. A variety of auroral models are available, including empirical models that are parameterized by geomagnetic indices or upstream solar wind conditions, nowcasting models that are based on satellite observations, and those derived from physics-based, coupled global models. In this presentation, we will show our preliminary results regarding V&V efforts for some of these models.

  13. Testing general relativity in space-borne and astronomical laboratories

    NASA Technical Reports Server (NTRS)

    Will, Clifford M.

    1989-01-01

    The current status of space-based experiments and astronomical observations designed to test the theory of general relativity is surveyed. Consideration is given to tests of post-Newtonian gravity, searches for feeble short-range forces and gravitomagnetism, improved measurements of parameterized post-Newtonian parameter values, explorations of post-Newtonian physics, tests of the Einstein equivalence principle, observational tests of post-Newtonian orbital effects, and efforts to detect quadrupole and dipole radiation damping. Recent numerical results are presented in tables.

  14. A Generalized Simple Formulation of Convective Adjustment ...

    EPA Pesticide Factsheets

    Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S., as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36- to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la

  15. Almost but not quite 2D, Non-linear Bayesian Inversion of CSEM Data

    NASA Astrophysics Data System (ADS)

    Ray, A.; Key, K.; Bodin, T.

    2013-12-01

    The geophysical inverse problem can be elegantly stated in a Bayesian framework where a probability distribution can be viewed as a statement of information regarding a random variable. After all, the goal of geophysical inversion is to provide information on the random variables of interest - physical properties of the earth's subsurface. However, though it may be simple to postulate, a practical difficulty of fully non-linear Bayesian inversion is the computer time required to adequately sample the model space and extract the information we seek. As a consequence, in geophysical problems where evaluation of a full 2D/3D forward model is computationally expensive, such as marine controlled source electromagnetic (CSEM) mapping of the resistivity of seafloor oil and gas reservoirs, Bayesian studies have largely been conducted with 1D forward models. While the 1D approximation is indeed appropriate for exploration targets with planar geometry and geological stratification, it only provides a limited, site-specific idea of uncertainty in resistivity with depth. In this work, we extend our fully non-linear 1D Bayesian inversion to a 2D model framework, without requiring the usual regularization of model resistivities in the horizontal or vertical directions used to stabilize quasi-2D inversions. In our approach, we use the reversible jump Markov-chain Monte-Carlo (RJ-MCMC) or trans-dimensional method and parameterize the subsurface in a 2D plane with Voronoi cells. The method is trans-dimensional in that the number of cells required to parameterize the subsurface is variable, and the cells dynamically move around and multiply or combine as demanded by the data being inverted. This approach allows us to expand our uncertainty analysis of resistivity at depth to more than a single site location, allowing for interactions between model resistivities at different horizontal locations along a traverse over an exploration target. While the model is parameterized in 2D, we efficiently evaluate the forward response using 1D profiles extracted from the model at the common midpoints of the EM source-receiver pairs. Since the 1D approximation is locally valid at different midpoint locations, the computation time is far lower than is required by a full 2D or 3D simulation. We have applied this method to both synthetic and real CSEM survey data from the Scarborough gas field on the Northwest shelf of Australia, resulting in a spatially variable quantification of resistivity and its uncertainty in 2D. This Bayesian approach results in a large database of 2D models that comprise a posterior probability distribution, which we can subset to test various hypotheses about the range of model structures compatible with the data. For example, we can subset the model distributions to examine the hypothesis that a resistive reservoir extends over a certain spatial extent. Depending on how this conditions other parts of the model space, light can be shed on the geological viability of the hypothesis. Since tackling spatially variable uncertainty and trade-offs in 2D and 3D is a challenging research problem, the insights gained from this work may prove valuable for subsequent full 2D and 3D Bayesian inversions.
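
    The Voronoi-cell parameterization and the locally 1D forward evaluation can be sketched compactly: each query point inherits the resistivity of its nearest nucleus, and a vertical profile extracted at a source-receiver midpoint feeds the 1D forward solver. The sketch below uses brute-force nearest-neighbor search and invented model values; it is an illustration of the parameterization, not the paper's code.

      import numpy as np

      def nearest_cell(points, nuclei):
          """Index of the Voronoi nucleus closest to each query point."""
          d2 = ((points[:, None, :] - nuclei[None, :, :]) ** 2).sum(-1)
          return d2.argmin(axis=1)

      def local_1d_profile(x_mid, z_grid, nuclei, log10_rho):
          """1D resistivity profile under one source-receiver midpoint, as used
          for the locally 1D forward evaluation described above."""
          pts = np.column_stack([np.full_like(z_grid, x_mid), z_grid])
          return log10_rho[nearest_cell(pts, nuclei)]

      # A 5-cell trans-dimensional model: positions (x, z) and log10 resistivities.
      rng = np.random.default_rng(1)
      nuclei = rng.uniform([0, 0], [10e3, 3e3], size=(5, 2))   # meters
      log10_rho = rng.uniform(0.0, 2.0, size=5)                # log10 ohm-m
      z = np.linspace(0, 3e3, 31)
      print(local_1d_profile(4e3, z, nuclei, log10_rho))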

  16. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    NASA Astrophysics Data System (ADS)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

    An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). As initial and boundary conditions, ERA-40 reanalysis data are used. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely in the finer one. In addition to parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields; hence the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very similar results for both temperature and precipitation, and no configuration seems to outperform the others for the whole region or for every season. Nevertheless, some marked differences between areas within the domain appear when analyzing certain physics options, particularly for precipitation. Some of the physics options, such as radiation, have little impact on model performance with respect to precipitation, and results do not vary when the scheme is modified. On the other hand, cumulus and boundary layer parameterizations are responsible for most of the differences obtained between configurations. Acknowledgements: The Spanish Ministry of Science and Innovation, with additional support from the European Community Funds (FEDER), project CGL2007-61151/CLI, and the Regional Government of Andalusia project P06-RNM-01622, have financed this study. The "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), Universidad de Granada, has provided the computing time. Key words: MM5 mesoscale model, parameterization schemes, temperature and precipitation, South of Spain.

  17. xspec_emcee: XSPEC-friendly interface for the emcee package

    NASA Astrophysics Data System (ADS)

    Sanders, Jeremy

    2018-05-01

    XSPEC_EMCEE is an XSPEC-friendly interface for emcee (ascl:1303.002). It carries out MCMC analyses of X-ray spectra in the X-ray spectral fitting program XSPEC (ascl:9910.005). It can run multiple xspec processes simultaneously, speeding up the analysis, and can switch to parameterizing norm parameters in log space.
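
    The log-space trick mentioned in the record is the standard change of variables for strictly positive normalizations: the walker moves in theta = log10(norm) and the model is evaluated at norm = 10**theta. A minimal sketch follows, with an assumed flat prior range that is a placeholder, not an xspec_emcee default.

      import numpy as np

      def to_linear_norm(theta):
          # Walkers move in theta = log10(norm); the spectral model is
          # evaluated at 10**theta, keeping the normalization positive.
          return 10.0 ** theta

      def log_prior(theta, lo=-8.0, hi=2.0):
          # Flat prior in log space (a Jeffreys-like 1/norm prior in linear
          # space); the range is an assumed placeholder.
          return 0.0 if lo < theta < hi else -np.inf

      print(to_linear_norm(-3.0), log_prior(-3.0), log_prior(5.0))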

  18. Impacts of differing aerodynamic resistance formulae on modeled energy exchange at the above-canopy/within-canopy/soil interface

    USDA-ARS?s Scientific Manuscript database

    Application of the Two-Source Energy Balance (TSEB) Model using land surface temperature (LST) requires aerodynamic resistance parameterizations for the flux exchange above the canopy layer, within the canopy air space and at the soil/substrate surface. There are a number of aerodynamic resistance f...

  19. Camera-pose estimation via projective Newton optimization on the manifold.

    PubMed

    Sarkis, Michel; Diepold, Klaus

    2012-04-01

    Determining the pose of a moving camera is an important task in computer vision. In this paper, we derive a projective Newton algorithm on the manifold to refine the pose estimate of a camera. The main idea is to benefit from the fact that the 3-D rigid motion is described by the special Euclidean group, which is a Riemannian manifold. The latter is equipped with a tangent space defined by the corresponding Lie algebra. This enables us to compute the optimization direction, i.e., the gradient and the Hessian, at each iteration of the projective Newton scheme on the tangent space of the manifold. Then, the motion is updated by projecting back the variables on the manifold itself. We also derive another version of the algorithm that employs homeomorphic parameterization to the special Euclidean group. We test the algorithm on several simulated and real image data sets. Compared with the standard Newton minimization scheme, we are now able to obtain the full numerical formula of the Hessian with a 60% decrease in computational complexity. Compared with Levenberg-Marquardt, the results obtained are more accurate while having a rather similar complexity.
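
    The manifold update at the heart of such schemes can be sketched in a few lines. The version below updates the rotation through the SO(3) exponential map and the translation additively, a common simplification of the full SE(3) retraction used in the paper; computing the Newton direction itself (gradient and Hessian on the tangent space) is omitted.

      import numpy as np
      from scipy.spatial.transform import Rotation

      def retract(R, t, delta):
          """Apply a 6-vector tangent-space step (omega, v) to a pose (R, t).

          Rotation is updated through the SO(3) exponential map; translation
          is updated additively. This is a common retraction, simpler than
          the full SE(3) exponential."""
          omega, v = delta[:3], delta[3:]
          R_new = Rotation.from_rotvec(omega).as_matrix() @ R
          return R_new, t + v

      R0, t0 = np.eye(3), np.zeros(3)
      R1, t1 = retract(R0, t0, np.array([0.0, 0.0, 0.1, 0.5, 0.0, 0.0]))
      print(np.round(R1, 3), t1)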

  20. On the Use and Validation of Mosaic Heterogeneity in Atmospheric Numerical Models

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Atlas, Robert M. (Technical Monitor)

    2001-01-01

    The mosaic land modeling approach allows for the representation of multiple surface types in a single atmospheric general circulation model grid box. Each surface type, collectively called 'tiles,' corresponds to a different set of surface characteristics (e.g., for grass, crop or forest). Typically, the tile-space data are averaged to grid space by weighting the tiles with their fractional cover. While grid-space data are routinely evaluated, little attention has been given to the tile-space data. The present paper explores uses of the tile-space surface data in validation against station observations. The results indicate the limitations of the mosaic heterogeneity parameterization in reproducing variations observed between stations at the Atmospheric Radiation Measurement Southern Great Plains field site.
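
    The tile-to-grid aggregation itself is a one-line weighted average, sketched below with hypothetical fluxes and cover fractions for three tiles.

      import numpy as np

      def tiles_to_grid(tile_values, tile_fractions):
          """Aggregate tile-space fluxes to grid space by fractional-cover
          weighting; fractions are assumed to sum to 1."""
          return float(np.dot(tile_values, tile_fractions))

      # e.g., sensible heat flux (W m^-2) over grass, crop and forest tiles:
      print(tiles_to_grid(np.array([120.0, 90.0, 60.0]),
                          np.array([0.5, 0.3, 0.2])))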

  1. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Taylor; Guo, Yi; Veers, Paul

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally expensive simulations in programs such as FAST, a parameterized fatigue load spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations, in addition to extreme loads, can be brought into a systems engineering optimization.

  2. Increasing Diversity in Global Climate Change, Space Weather and Space Technology Research and Education

    NASA Astrophysics Data System (ADS)

    Johnson, L. P.; Austin, S. A.; Howard, A. M.; Boxe, C.; Jiang, M.; Tulsee, T.; Chow, Y. W.; Zavala-Gutierrez, R.; Barley, R.; Filin, B.; Brathwaite, K.

    2015-12-01

    This presentation describes projects at Medgar Evers College of the City University of New York that contribute to the preparation of a diverse workforce in the areas of ocean modeling, planetary atmospheres, space weather and space technology. Specific projects incorporating both undergraduate and high school students include Assessing Parameterizations of Energy Input to Internal Ocean Mixing, Reaction Rate Uncertainty on Mars Atmospheric Ozone, Remote Sensing of Solar Active Regions and Intelligent Software for Nano-satellites. These projects are accompanied by a newly developed Computational Earth and Space Science course to provide additional background on methodologies and tools for scientific data analysis. This program is supported by NSF award AGS-1359293 REU Site: CUNY/GISS Center for Global Climate Research and the NASA New York State Space Grant Consortium.

  3. Cold Season QPF: Sensitivities to Snow Parameterizations and Comparisons to NASA CloudSat Observations

    NASA Technical Reports Server (NTRS)

    Molthan, A. L.; Haynes, J. A.; Jedlovec, G. L.; Lapenta, W. M.

    2009-01-01

    As operational numerical weather prediction is performed at increasingly finer spatial resolution, precipitation traditionally represented by sub-grid scale parameterization schemes is now being calculated explicitly through the use of single- or multi-moment, bulk water microphysics schemes. As computational resources grow, the real-time application of these schemes is becoming available to a broader audience, ranging from national meteorological centers to their component forecast offices. A need for improved quantitative precipitation forecasts has been highlighted by the United States Weather Research Program, which advised that gains in forecasting skill will draw upon improved simulations of clouds and cloud microphysical processes. Investments in space-borne remote sensing have produced the NASA A-Train of polar orbiting satellites, specially equipped to observe and catalog cloud properties. The NASA CloudSat instrument, a recent addition to the A-Train and the first 94 GHz radar system operated in space, provides a unique opportunity to compare observed cloud profiles to their modeled counterparts. Comparisons are available through the use of a radiative transfer model (QuickBeam), which simulates 94 GHz radar returns based on the microphysics of cloudy model profiles and the prescribed characteristics of their constituent hydrometeor classes. CloudSat observations of snowfall are presented for a case in the central United States, with comparisons made to precipitating clouds as simulated by the Weather Research and Forecasting Model and the Goddard single-moment microphysics scheme. An additional forecast cycle is performed with a temperature-based parameterization of the snow distribution slope parameter, with comparisons to CloudSat observations provided through the QuickBeam simulator.

  4. A Universal Ts-VI Triangle Method for the Continuous Retrieval of Evaporative Fraction From MODIS Products

    NASA Astrophysics Data System (ADS)

    Zhu, Wenbin; Jia, Shaofeng; Lv, Aifeng

    2017-10-01

    The triangle method based on the spatial relationship between remotely sensed land surface temperature (Ts) and vegetation index (VI) has been widely used for estimating evaporative fraction (EF). In the present study, a universal triangle method was proposed by transforming the Ts-VI feature space from a regional scale to a pixel scale. The retrieval of EF is related only to the boundary conditions at the pixel scale, regardless of the Ts-VI configuration over the spatial domain. The boundary conditions of each pixel are composed of the theoretical dry edge, determined by the surface energy balance principle, and the wet edge, determined by the average air temperature of open water. The universal triangle method was validated using EF observations collected by the Energy Balance Bowen Ratio systems in the Southern Great Plains of the United States of America (USA). Two parameterization schemes of EF were used to demonstrate their applicability with Terra Moderate Resolution Imaging Spectroradiometer (MODIS) products over the whole year 2004. The results of this study show that the accuracy produced by both parameterization schemes is comparable to that produced by the traditional triangle method, although the universal triangle method seems specifically suited to the parameterization scheme proposed in our previous research. The independence of the universal triangle method from the regional Ts-VI feature space makes it possible to conduct continuous monitoring of evapotranspiration and soil moisture, a capability the traditional triangle method does not possess.
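
    One common form of the pixel-scale boundary-condition logic can be sketched as follows: EF is set by where the observed surface temperature falls between the theoretical dry edge and the wet edge. The exact parameterization schemes evaluated in the study differ in detail; the function and numbers below are illustrative only.

      def evaporative_fraction(ts, ts_dry, ts_wet, ef_max=1.0):
          """Pixel-scale EF from the position of Ts between the theoretical
          dry edge and the wet edge (one common triangle parameterization)."""
          phi = (ts_dry - ts) / (ts_dry - ts_wet)  # 0 at dry edge, 1 at wet edge
          return ef_max * min(max(phi, 0.0), 1.0)

      # Hypothetical boundary temperatures (K) for one pixel:
      print(evaporative_fraction(ts=305.0, ts_dry=320.0, ts_wet=295.0))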

  5. Turbulence-driven Coronal Heating and Improvements to Empirical Forecasting of the Solar Wind

    NASA Astrophysics Data System (ADS)

    Woolsey, Lauren N.; Cranmer, Steven R.

    2014-06-01

    Forecasting models of the solar wind often rely on simple parameterizations of the magnetic field that ignore the effects of the full magnetic field geometry. In this paper, we present the results of two solar wind prediction models that consider the full magnetic field profile and include the effects of Alfvén waves on coronal heating and wind acceleration. The one-dimensional magnetohydrodynamic code ZEPHYR self-consistently finds solar wind solutions without the need for empirical heating functions. Another one-dimensional code, introduced in this paper (the Efficient Modified-Parker-Equation-Solving Tool, TEMPEST), can act as a smaller, stand-alone code for use in forecasting pipelines. TEMPEST is written in Python and will become a publicly available library of functions that is easy to adapt and expand. We discuss important relations between the magnetic field profile and properties of the solar wind that can be used to independently validate prediction models. ZEPHYR provides the foundation and calibration for TEMPEST, and ultimately we will use these models to predict observations and explain space weather created by the bulk solar wind. We are able to reproduce with both models the general anticorrelation seen in comparisons of observed wind speed at 1 AU and the flux tube expansion factor. There is significantly less spread between the results of the two models than between ZEPHYR and a traditional flux tube expansion relation. We suggest that the new code, TEMPEST, will become a valuable tool in the forecasting of space weather.
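
    As a flavor of what a compact Parker-equation solver involves, the sketch below computes the classic isothermal transonic solution, in which the Mach number u = v/c_s satisfies u^2 - ln u^2 = 4 ln(r/r_c) + 4 r_c/r - 3 with critical radius r_c = GM/(2 c_s^2). TEMPEST solves a modified version of this problem; the sound speed and critical radius used here are rough illustrative values, not calibrated inputs.

      import numpy as np
      from scipy.optimize import brentq

      def parker_wind_speed(r, r_c, c_s):
          """Isothermal Parker wind speed on the transonic branch at radius r."""
          rhs = 4.0 * np.log(r / r_c) + 4.0 * r_c / r - 3.0
          g = lambda u: u * u - np.log(u * u) - rhs
          if r < r_c:                        # subsonic below the critical radius
              return c_s * brentq(g, 1e-8, 1.0)
          return c_s * brentq(g, 1.0, 50.0)  # supersonic beyond it

      c_s = 180e3                 # rough sound speed for a ~2 MK corona (m/s)
      r_c = 6.0 * 6.957e8         # assumed critical radius of ~6 solar radii (m)
      for r_au in (0.1, 0.5, 1.0):
          v = parker_wind_speed(r_au * 1.496e11, r_c, c_s)
          print(r_au, "AU:", round(v / 1e3), "km/s")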

  6. Impact of aerodynamic resistance formulations used in two-source modeling of energy exchange from the soil and vegetation using land surface temperature

    USDA-ARS?s Scientific Manuscript database

    Application of the Two-Source Energy Balance (TSEB) Model using land surface temperature (LST) requires aerodynamic resistance parameterizations for the flux exchange above the canopy layer, within the canopy air space and at the soil/substrate surface. There are a number of aerodynamic resistance f...

  7. Parameterizing ecosystem light use efficiency and water use efficiency to estimate maize gross primary production and evapotranspiration using MODIS EVI

    USDA-ARS?s Scientific Manuscript database

    Quantifying global carbon and water balances requires accurate estimation of gross primary production (GPP) and evapotranspiration (ET), respectively, across space and time. Models that are based on the theory of light use efficiency (LUE) and water use efficiency (WUE) have emerged as efficient met...

  8. TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    DOE PAGES

    Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...

    2015-04-16

    Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators—parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior of the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
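
    The core idea (replacing compute-bound stages by the passage of sampled time, so that algorithmic variants can be compared without running the real code) can be miniaturized as below. This toy compares a serial TAD-like cycle against speculative spawning of its stages; the stage-time distribution and worker count are invented, and the real TADSim uses full parallel discrete event simulation rather than this shortcut.

      import random

      def cycle_times(n_cycles, mean_stage=5.0, n_stages=3, seed=0):
          # Each compute-bound stage is abstracted to a sampled duration.
          rng = random.Random(seed)
          return [[rng.expovariate(1.0 / mean_stage) for _ in range(n_stages)]
                  for _ in range(n_cycles)]

      def serial(cycles):
          # Baseline: stages of each cycle run one after another.
          return sum(sum(c) for c in cycles)

      def speculative(cycles, workers=4):
          # Speculative spawning: stages of a cycle run concurrently, so each
          # cycle costs only its longest stage (assuming enough workers).
          assert workers >= len(cycles[0])
          return sum(max(c) for c in cycles)

      cycles = cycle_times(200)
      print(f"serial:      {serial(cycles):8.1f}")
      print(f"speculative: {speculative(cycles):8.1f}")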

  9. Radiation: Physical Characterization and Environmental Measurements

    NASA Technical Reports Server (NTRS)

    1997-01-01

    In this session, Session WP4, the discussion focuses on the following topics: Production of Neutrons from Interactions of GCR-Like Particles; Solar Particle Event Dose Distributions, Parameterization of Dose-Time Profiles; Assessment of Nuclear Events in the Body Produced by Neutrons and High-Energy Charged Particles; Ground-Based Simulations of Cosmic Ray Heavy Ion Interactions in Spacecraft and Planetary Habitat Shielding Materials; Radiation Measurements in Space Missions; Radiation Measurements in Civil Aircraft; Analysis of the Pre-Flight and Post-Flight Calibration Procedures Performed on the Liulin Space Radiation Dosimeter; and Radiation Environment Monitoring for Astronauts.

  10. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of this work will be a cross-flow turbine actuator line model to be used as an extension to the OpenFOAM computational fluid dynamics (CFD) software framework, which will likely require modifications to commonly-used dynamic stall models, in consideration of the turbines' high angle of attack excursions during normal operation.

  11. A reversible-jump Markov chain Monte Carlo algorithm for 1D inversion of magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Mandolesi, Eric; Ogaya, Xenia; Campanyà, Joan; Piana Agostinetti, Nicola

    2018-04-01

    This paper presents a new computer code developed to solve the 1D magnetotelluric (MT) inverse problem using a Bayesian trans-dimensional Markov chain Monte Carlo algorithm. MT data are sensitive to the depth distribution of rock electric conductivity (or its reciprocal, resistivity). The solution provided is a probability distribution - the so-called posterior probability distribution (PPD) - for the conductivity at depth, together with the PPD of the interface depths. The PPD is sampled via a reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm, using a modified Metropolis-Hastings (MH) rule to accept or discard candidate models along the chains. As the optimal parameterization for the inversion process is generally unknown, a trans-dimensional approach is used to allow the dataset itself to indicate the most probable number of parameters needed to sample the PPD. The algorithm is tested against two simulated datasets and a set of MT data acquired in the Clare Basin (County Clare, Ireland). For the simulated datasets, the correct number of conductive layers at depth and the associated electrical conductivity values are retrieved, together with reasonable estimates of the uncertainties on the investigated parameters. Results from the inversion of field measurements are compared with results obtained using a deterministic method and with well-log data from a nearby borehole. The PPD is in good agreement with the well-log data, showing as a main structure a highly conductive layer associated with the Clare Shale formation. In this study, we demonstrate that our new code goes beyond algorithms developed using a linear inversion scheme, as it can be used (1) to bypass the subjective choices in the 1D parameterization, i.e., the number of horizontal layers, and (2) to estimate realistic uncertainties on the retrieved parameters. The algorithm is implemented using a simple MPI approach, where independent chains run on isolated CPUs, to take full advantage of parallel computer architectures. In the case of a large number of data, a master/slave approach can be used, where the master CPU samples the parameter space and the slave CPUs compute forward solutions.

  12. An empirical test of a diffusion model: predicting clouded apollo movements in a novel environment.

    PubMed

    Ovaskainen, Otso; Luoto, Miska; Ikonen, Iiro; Rekola, Hanna; Meyke, Evgeniy; Kuussaari, Mikko

    2008-05-01

    Functional connectivity is a fundamental concept in conservation biology because it sets the level of migration and gene flow among local populations. However, functional connectivity is difficult to measure, largely because it is hard to acquire and analyze movement data from heterogeneous landscapes. Here we apply a Bayesian state-space framework to parameterize a diffusion-based movement model using capture-recapture data on the endangered clouded apollo butterfly. We test whether the model is able to disentangle the inherent movement behavior of the species from landscape structure and sampling artifacts, which is a necessity if the model is to be used to examine how movements depend on landscape structure. We show that this is the case by demonstrating that the model, parameterized with data from a reference landscape, correctly predicts movements in a structurally different landscape. In particular, the model helps to explain why a movement corridor that was constructed as a management measure failed to increase movement among local populations. We illustrate how the parameterized model can be used to derive biologically relevant measures of functional connectivity, thus linking movement data with models of spatial population dynamics.
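
    The forward model underlying such an analysis can be sketched as a two-dimensional random walk whose step variance is set by a habitat-dependent diffusion coefficient. Everything below (the habitat map, coefficients and time step) is invented for illustration; the paper's state-space formulation additionally handles observation error and capture-recapture sampling.

      import numpy as np

      def simulate_track(x0, n_steps, dt, diffusion_of, seed=42):
          """2D Brownian path with a position-dependent diffusion coefficient."""
          rng = np.random.default_rng(seed)
          xs = [np.asarray(x0, dtype=float)]
          for _ in range(n_steps):
              D = diffusion_of(xs[-1])                      # m^2/s at current site
              step = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=2)
              xs.append(xs[-1] + step)
          return np.array(xs)

      # Habitat-dependent diffusion: slow inside a meadow near the origin,
      # fast in the surrounding matrix (values are made up).
      meadow = lambda x: np.hypot(x[0], x[1]) < 100.0
      track = simulate_track([0.0, 0.0], n_steps=500, dt=60.0,
                             diffusion_of=lambda x: 0.5 if meadow(x) else 5.0)
      print(track[-1])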

  13. Approaches in highly parameterized inversion: bgaPEST, a Bayesian geostatistical approach implementation with PEST: documentation and instructions

    USGS Publications Warehouse

    Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.

    2013-01-01

    The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory, in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed parameter applications.
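
    The geostatistical ingredient is easy to picture: a prior covariance over distributed parameters built from an autocorrelation function of separation distance, which is what enforces smooth structure. A minimal sketch with an exponential model follows; the sill and correlation length are placeholders, not bgaPEST defaults.

      import numpy as np

      def exponential_covariance(coords, sill=1.0, corr_len=500.0):
          # Prior covariance Q_ij = sill * exp(-d_ij / corr_len), where d_ij is
          # the distance between parameter locations i and j.
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          return sill * np.exp(-d / corr_len)

      # 1D transect of 5 parameter locations (meters):
      xy = np.column_stack([np.linspace(0, 2000, 5), np.zeros(5)])
      print(np.round(exponential_covariance(xy), 3))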

  14. Dependence of stratocumulus-topped boundary-layer entrainment on cloud-water sedimentation: Impact on global aerosol indirect effect in GISS ModelE3 single column model and global simulations

    NASA Astrophysics Data System (ADS)

    Ackerman, A. S.; Kelley, M.; Cheng, Y.; Fridlind, A. M.; Del Genio, A. D.; Bauer, S.

    2017-12-01

    Reduction in cloud-water sedimentation induced by increasing droplet concentrations has been shown in large-eddy simulations (LES) and direct numerical simulation (DNS) to enhance boundary-layer entrainment, thereby reducing cloud liquid water path and offsetting the Twomey effect when the overlying air is sufficiently dry, which is typical. Among recent upgrades to ModelE3, the latest version of the NASA Goddard Institute for Space Studies (GISS) general circulation model (GCM), are a two-moment stratiform cloud microphysics treatment with prognostic precipitation and a moist turbulence scheme that includes an option in its entrainment closure of a simple parameterization for the effect of cloud-water sedimentation. Single column model (SCM) simulations are compared to LES results for a stratocumulus case study and show that invoking the sedimentation-entrainment parameterization option indeed reduces the dependence of cloud liquid water path on increasing aerosol concentrations. Impacts of variations of the SCM configuration and the sedimentation-entrainment parameterization will be explored. Its impact on global aerosol indirect forcing in the framework of idealized atmospheric GCM simulations will also be assessed.

  15. Application of a simple power law for transport ratio with bimodal distributions of spherical grains under oscillatory forcing

    NASA Astrophysics Data System (ADS)

    Holway, Kevin; Thaxton, Christopher S.; Calantoni, Joseph

    2012-11-01

    Morphodynamic models of coastal evolution require relatively simple parameterizations of sediment transport for application over larger scales. Calantoni and Thaxton (2008) [6] presented a transport parameterization for bimodal distributions of coarse quartz grains derived from detailed boundary layer simulations for sheet flow and near sheet flow conditions. The simulation results, valid over a range of wave forcing conditions and large- to small-grain diameter ratios, were successfully parameterized with a simple power law that allows for the prediction of the transport rates of each size fraction. Here, we have applied the simple power law to a two-dimensional cellular automaton to simulate sheet flow transport. Model results are validated with experiments performed in the small oscillating flow tunnel (S-OFT) at the Naval Research Laboratory at Stennis Space Center, MS, in which sheet flow transport was generated with a bed composed of a bimodal distribution of non-cohesive grains. The work presented suggests that, under the conditions specified, algorithms that incorporate the power law may correctly reproduce laboratory bed surface measurements of bimodal sheet flow transport while inherently incorporating vertical mixing by size.
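
    A sketch of how such a power-law parameterization is typically used: the large-to-small transport-rate ratio is a power of the grain-diameter ratio, and the total load splits accordingly. The coefficients below are placeholders, not the fitted values of Calantoni and Thaxton (2008).

        def transport_ratio(d_large, d_small, a=1.0, b=2.0):
            # q_large/q_small = a * (d_large/d_small)**b; a, b are placeholder fits
            return a * (d_large / d_small) ** b

        def split_load(q_total, d_large, d_small):
            # partition total transport between the two size fractions
            r = transport_ratio(d_large, d_small)
            return q_total * r / (1.0 + r), q_total / (1.0 + r)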

  16. Parameterizing the Morse Potential for Coarse-Grained Modeling of Blood Plasma

    PubMed Central

    Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan

    2014-01-01

    Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in Molecular Dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. It demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately. PMID:24910470
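
    For reference, the standard Morse pair potential and its force, which a coarse-grained particle integrator would call; the well depth, width, and equilibrium distance are left as generic inputs, since the blood-plasma parameter values are exactly what the paper's inverse search determines.

        import numpy as np

        def morse(r, D_e, alpha, r0):
            # U(r) = D_e * (exp(-2*alpha*(r - r0)) - 2*exp(-alpha*(r - r0)))
            x = np.exp(-alpha * (r - r0))
            return D_e * (x * x - 2.0 * x)

        def morse_force(r, D_e, alpha, r0):
            # F = -dU/dr; zero at r0, attractive beyond it, repulsive inside it
            x = np.exp(-alpha * (r - r0))
            return 2.0 * alpha * D_e * (x * x - x)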

  17. New Directions: Understanding Interactions of Air Quality and Climate Change at Regional Scales

    EPA Science Inventory

    The estimates of the short-lived climate forcers’ (SLCFs) impacts and mitigation effects on the radiation balance have large uncertainty because the current global model set-ups and simulations contain simplified parameterizations and do not completely cover the full range of air...

  18. QCD equation of state at nonzero chemical potential: continuum results with physical quark masses at order μ²

    NASA Astrophysics Data System (ADS)

    Borsányi, Sz.; Endrődi, G.; Fodor, Z.; Katz, S. D.; Krieg, S.; Ratti, C.; Szabó, K. K.

    2012-08-01

    We determine the equation of state of QCD for nonzero chemical potentials via a Taylor expansion of the pressure. The results are obtained for N_f = 2 + 1 flavors of quarks with physical masses, on various lattice spacings. We present results for the pressure, interaction measure, energy density, entropy density, and the speed of sound for small chemical potentials. At low temperatures we compare our results with the Hadron Resonance Gas model. We also express our observables along trajectories of constant entropy over particle number. A simple parameterization is given (the Matlab/Octave script parameterization.m, submitted to the arXiv along with the paper), which can be used to reconstruct the observables as functions of T and μ, or as functions of T and S/N.
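
    At this order, the reconstruction such a script performs reduces to evaluating the truncated Taylor series p(T, μ)/T⁴ = c₀(T) + c₂(T)(μ/T)² + O(μ⁴); a minimal sketch with placeholder coefficient functions standing in for the published parameterization:

        def pressure_over_T4(T, mu, c0, c2):
            # p(T, mu)/T^4 = c0(T) + c2(T) * (mu/T)**2 + O(mu^4)
            return c0(T) + c2(T) * (mu / T) ** 2

        # placeholder coefficients; the real c0, c2 come from the lattice fit
        p = pressure_over_T4(200.0, 100.0, c0=lambda T: 1.0, c2=lambda T: 0.1)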

  19. Near-global climate simulation at 1 km resolution: establishing a performance baseline on 4888 GPUs with COSMO 5.0

    NASA Astrophysics Data System (ADS)

    Fuhrer, Oliver; Chadha, Tarun; Hoefler, Torsten; Kwasniewski, Grzegorz; Lapillonne, Xavier; Leutwyler, David; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph; Schulthess, Thomas C.; Vogt, Hannes

    2018-05-01

    The best hope for reducing long-standing global climate model biases is by increasing resolution to the kilometer scale. Here we present results from an ultrahigh-resolution non-hydrostatic climate model for a near-global setup running on the full Piz Daint supercomputer on 4888 GPUs (graphics processing units). The dynamical core of the model has been completely rewritten using a domain-specific language (DSL) for performance portability across different hardware architectures. Physical parameterizations and diagnostics have been ported using compiler directives. To our knowledge this represents the first complete atmospheric model being run entirely on accelerators on this scale. At a grid spacing of 930 m (1.9 km), we achieve a simulation throughput of 0.043 (0.23) simulated years per day and an energy consumption of 596 MWh per simulated year. Furthermore, we propose a new memory usage efficiency (MUE) metric that considers how efficiently the memory bandwidth - the dominant bottleneck of climate codes - is being used.
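
    A plausible reading of such a metric (a sketch, not a quote of the paper's formula): the fraction of transferred data that was actually necessary, times the fraction of peak memory bandwidth achieved.

        def memory_usage_efficiency(q_necessary, q_actual, bw_achieved, bw_peak):
            # Both factors are in [0, 1]; MUE = 1 means only necessary data moved,
            # at full memory bandwidth. Definition assumed, not taken from the paper.
            return (q_necessary / q_actual) * (bw_achieved / bw_peak)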

  20. Quantum self-gravitating collapsing matter in a quantum geometry

    NASA Astrophysics Data System (ADS)

    Campiglia, Miguel; Gambini, Rodolfo; Olmedo, Javier; Pullin, Jorge

    2016-09-01

    The problem of how space-time responds to gravitating quantum matter in full quantum gravity has been one of the main questions that any program of quantization of gravity should address. Here we analyze this issue by considering the quantization of a collapsing null shell coupled to spherically symmetric loop quantum gravity. We show that the constraint algebra of canonical gravity is Abelian both classically and when quantized using loop quantum gravity techniques. The Hamiltonian constraint is well defined and suitable Dirac observables characterizing the problem were identified at the quantum level. We can write the metric as a parameterized Dirac observable at the quantum level and study the physics of the collapsing shell and black hole formation. We show how the singularity inside the black hole is eliminated by loop quantum gravity and how the shell can traverse it. The construction is compatible with a scenario in which the shell tunnels into a baby universe inside the black hole or one in which it could emerge through a white hole.

  1. Simulating North American mesoscale convective systems with a convection-permitting climate model

    NASA Astrophysics Data System (ADS)

    Prein, Andreas F.; Liu, Changhai; Ikeda, Kyoko; Bullock, Randy; Rasmussen, Roy M.; Holland, Greg J.; Clark, Martyn

    2017-10-01

    Deep convection is a key process in the climate system and the main source of precipitation in the tropics, subtropics, and mid-latitudes during summer. Furthermore, it is related to high impact weather causing floods, hail, tornadoes, landslides, and other hazards. State-of-the-art climate models have to parameterize deep convection due to their coarse grid spacing. These parameterizations are a major source of uncertainty and long-standing model biases. We present a North American scale convection-permitting climate simulation that is able to explicitly simulate deep convection due to its 4-km grid spacing. We apply a feature-tracking algorithm to detect hourly precipitation from Mesoscale Convective Systems (MCSs) in the model and compare it with radar-based precipitation estimates east of the US Continental Divide. The simulation is able to capture the main characteristics of the observed MCSs such as their size, precipitation rate, propagation speed, and lifetime within observational uncertainties. In particular, the model is able to produce realistically propagating MCSs, which was a long-standing challenge in climate modeling. However, the MCS frequency is significantly underestimated in the central US during late summer. We discuss the origin of this frequency bias and suggest strategies for model improvements.
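
    A minimal sketch of the feature-detection step in such tracking: threshold hourly precipitation and label contiguous regions, keeping only large features. The thresholds are illustrative, not the study's MCS criteria.

        import numpy as np
        from scipy import ndimage

        def mcs_candidates(precip, rate_thresh=5.0, min_pixels=100):
            # Label contiguous pixels exceeding the rain-rate threshold, then
            # keep only features large enough to be MCS candidates.
            mask = precip >= rate_thresh
            labels, n = ndimage.label(mask)
            sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
            keep = 1 + np.flatnonzero(sizes >= min_pixels)   # ids of large features
            return labels, keep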

  2. A protocol for parameterization and calibration of RZWQM2 in field research

    USDA-ARS?s Scientific Manuscript database

    Use of agricultural system models in field research requires a full understanding of both the model and the system it simulates. Since the 1960s, agricultural system models have increased tremendously in their complexity due to greater understanding of the processes simulated, their application to r...

  3. Hydrothermal germination models: Improving experimental efficiency by limiting data collection to the relevant hydrothermal range

    USDA-ARS?s Scientific Manuscript database

    Hydrothermal models used to predict germination response in the field are usually parameterized with data from laboratory experiments that examine the full range of germination response to temperature and water potential. Inclusion of low water potential and high and low-temperature treatments, how...

  4. Trends and uncertainties in budburst projections of Norway spruce in Northern Europe.

    PubMed

    Olsson, Cecilia; Olin, Stefan; Lindström, Johan; Jönsson, Anna Maria

    2017-12-01

    Budburst is regulated by temperature conditions, and a warming climate is associated with earlier budburst. A range of phenology models has been developed to assess climate change effects, and they tend to produce different results. This is mainly caused by different model representations of tree physiology processes, selection of observational data for model parameterization, and selection of climate model data to generate future projections. In this study, we applied (i) Bayesian inference to estimate model parameter values to address uncertainties associated with selection of observational data, (ii) selection of climate model data representative of a larger dataset, and (iii) ensembles modeling over multiple initial conditions, model classes, model parameterizations, and boundary conditions to generate future projections and uncertainty estimates. The ensemble projection indicated that the budburst of Norway spruce in northern Europe will on average take place 10.2 ± 3.7 days earlier in 2051-2080 than in 1971-2000, given climate conditions corresponding to RCP 8.5. Three provenances were assessed separately (one early and two late), and the projections indicated that the relationship among provenances will persist in a warmer climate. Structurally complex models were more likely than simple models to fail to predict budburst for some combinations of site and year. However, they contributed to the overall picture of current understanding of climate impacts on tree phenology by capturing additional aspects of temperature response, for example, chilling. Model parameterizations based on single sites were more likely to result in model failure than parameterizations based on multiple sites, highlighting that the model parameterization is sensitive to initial conditions and may not perform well under other climate conditions, whether the change is due to a shift in space or over time. By addressing a range of uncertainties, this study showed that ensemble modeling provides a more robust impact assessment than would a single phenology model run.
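
    A minimal sketch of the simplest model class in such an ensemble, a thermal-time (growing degree day) model; the parameter values are placeholders, and the study's more complex models add chilling and other temperature responses.

        def budburst_day(daily_tmean, t_base=5.0, gdd_req=150.0):
            # Accumulate forcing above t_base; budburst occurs on the day the
            # sum first reaches gdd_req. Values here are placeholders.
            gdd = 0.0
            for day, t in enumerate(daily_tmean, start=1):
                gdd += max(0.0, t - t_base)
                if gdd >= gdd_req:
                    return day
            return None   # no budburst predicted within this series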

  5. Inertial-Electrostatic Confinement (IEC) Fusion for Space Propulsion

    NASA Technical Reports Server (NTRS)

    Nadler, Jon

    1999-01-01

    An Inertial-Electrostatic Confinement (IEC) device was assembled at the Marshall Space Flight Center (MSFC) Propulsion Research Center (PRC) to study the possibility of using IEC technology for deep space propulsion and power. Inertial-Electrostatic Confinement is capable of containing a nuclear fusion plasma in a series of virtual potential wells. These wells would substantially increase plasma confinement, possibly leading towards a high-gain, breakthrough fusion device. A one-foot-diameter IEC vessel was borrowed from the Fusion Studies Laboratory at the University of Illinois at Urbana-Champaign for the summer. This device was used in initial parameterization studies in order to design a larger, actively cooled device for permanent use at the PRC.

  6. Inertial-Electrostatic Confinement (IEC) Fusion For Space Propulsion

    NASA Technical Reports Server (NTRS)

    Nadler, Jon

    1999-01-01

    An Inertial-Electrostatic Confinement (IEC) device was assembled at the Marshall Space Flight Center (MSFC) Propulsion Research Center (PRC) to study the possibility of using IEC technology for deep space propulsion and power. Inertial-Electrostatic Confinement is capable of containing a nuclear fusion plasma in a series of virtual potential wells. These wells would substantially increase plasma confinement, possibly leading towards a high-gain, breakthrough fusion device. A one-foot-diameter IEC vessel was borrowed from the Fusion Studies Laboratory at the University of Illinois at Urbana-Champaign for the summer. This device was used in initial parameterization studies in order to design a larger, actively cooled device for permanent use at the PRC.

  7. Chasing a Comet with a Solar Sail

    NASA Technical Reports Server (NTRS)

    Stough, Robert W.; Heaton, Andrew F.; Whorton, Mark S.

    2008-01-01

    Solar sail propulsion systems enable a wide range of missions that require constant thrust or high delta-V over long mission times. One particularly challenging mission type is a comet rendezvous mission. This paper presents optimal low-thrust trajectory designs for a range of sailcraft performance metrics and mission transit times that enable a comet rendezvous mission. These optimal trajectory results provide a trade space which can be parameterized in terms of mission duration and sailcraft performance parameters such that a design space for a small satellite comet chaser mission is identified. These results show that a feasible space exists for a small satellite to perform a comet chaser mission in a reasonable mission time.

  8. Atmospheric CO2 Concentration Measurements with Clouds from an Airborne Lidar

    NASA Astrophysics Data System (ADS)

    Mao, J.; Abshire, J. B.; Kawa, S. R.; Riris, H.; Allan, G. R.; Hasselbrack, W. E.; Numata, K.; Chen, J. R.; Sun, X.; DiGangi, J. P.; Choi, Y.

    2017-12-01

    Globally distributed atmospheric CO2 concentration measurements with high precision, low bias and full seasonal sampling are crucial to advance carbon cycle sciences. However, two thirds of the Earth's surface is typically covered by clouds, and passive remote sensing approaches from space are limited to cloud-free scenes. NASA Goddard is developing a pulsed, integrated-path differential absorption (IPDA) lidar approach to measure atmospheric column CO2 concentrations, XCO2, from space as a candidate for NASA's ASCENDS mission. Measurements of time-resolved laser backscatter profiles from the atmosphere also allow this technique to estimate XCO2 and range to cloud tops in addition to those to the ground with precise knowledge of the photon path-length. We demonstrate this measurement capability using airborne lidar measurements from the summer 2017 ASCENDS airborne science campaign in Alaska. We show retrievals of XCO2 to ground and to a variety of cloud tops. We will also demonstrate how the partial-column XCO2 to cloud tops and the cloud-slicing approach help resolve vertical and horizontal gradients of CO2 in cloudy conditions. The XCO2 retrievals from the lidar are validated against in situ measurements and compared to the Goddard Parameterized Chemistry Transport Model (PCTM) simulations. Adding this measurement capability to the future lidar mission for XCO2 will provide full global and seasonal data coverage and some information about vertical structure of CO2. This unique facility is expected to benefit atmospheric transport process studies, carbon data assimilation in models, and global and regional carbon flux estimation.
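
    A sketch of the textbook IPDA retrieval underlying such measurements (not the Goddard production algorithm): the one-way differential absorption optical depth from energy-normalized on/off-line returns, divided by an integrated weighting function computed from meteorology and spectroscopy.

        import numpy as np

        def xco2_retrieval(p_on, p_off, e_on, e_off, iwf):
            # One-way differential absorption optical depth; the factor 1/2
            # accounts for the two-way path to the scattering surface.
            daod = 0.5 * np.log((p_off * e_on) / (p_on * e_off))
            return daod / iwf   # iwf: integrated weighting function (OD per mol/mol)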

  9. A synergic simulation-optimization approach for analyzing biomolecular dynamics in living organisms.

    PubMed

    Sadegh Zadeh, Kouroush

    2011-01-01

    A synergic simulation-optimization approach was developed and implemented to study protein-substrate dynamics and binding kinetics in living organisms. The forward problem is a system of several coupled nonlinear partial differential equations which, given a set of kinetics and diffusion parameters, can provide not only the commonly used bleached-area-averaged time series of fluorescence microscopy experiments but also more informative full biomolecular/drug space-time series, and can be used to study the dynamics of both Dirac and Gaussian fluorescence-labeled biomacromolecules in vivo. The incomplete Cholesky preconditioner was coupled with a finite difference discretization scheme and an adaptive time-stepping strategy to solve the forward problem. The proposed approach was validated against analytical as well as reference solutions and used to simulate the dynamics of GFP-tagged glucocorticoid receptor (GFP-GR) in a mouse cancer cell during a fluorescence recovery after photobleaching experiment. Model analysis indicates that the commonly practiced bleach-spot-averaged time series is not an efficient approach for extracting physiological information from fluorescence microscopy protocols. It was recommended that experimental biophysicists use the full space-time series resulting from experimental protocols to study the dynamics of biomacromolecules and drugs in living organisms. It was also concluded that, in the parameterization of biological mass transfer processes, setting the norm of the gradient of the penalty function at the solution to zero is not an efficient stopping rule for the inverse algorithm; theoreticians should use multi-criteria stopping rules to quantify model parameters by optimization.
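
    A minimal sketch of the forward problem's building block, assuming one spatial dimension, periodic boundaries, and explicit time stepping (the paper itself uses preconditioned implicit solvers with adaptive steps): free molecules diffuse while exchanging with a bound state.

        import numpy as np

        def frap_step(f, b, D, k_on, k_off, dx, dt):
            # Free species f diffuses (periodic boundaries via np.roll) and
            # exchanges with bound species b; stable only for dt < dx**2 / (2*D).
            lap = (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx ** 2
            df = D * lap - k_on * f + k_off * b
            db = k_on * f - k_off * b
            return f + dt * df, b + dt * db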

  10. Turbulence-driven coronal heating and improvements to empirical forecasting of the solar wind

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woolsey, Lauren N.; Cranmer, Steven R.

    Forecasting models of the solar wind often rely on simple parameterizations of the magnetic field that ignore the effects of the full magnetic field geometry. In this paper, we present the results of two solar wind prediction models that consider the full magnetic field profile and include the effects of Alfvén waves on coronal heating and wind acceleration. The one-dimensional magnetohydrodynamic code ZEPHYR self-consistently finds solar wind solutions without the need for empirical heating functions. Another one-dimensional code, introduced in this paper (The Efficient Modified-Parker-Equation-Solving Tool, TEMPEST), can act as a smaller, stand-alone code for use in forecasting pipelines. TEMPEST is written in Python and will become a publicly available library of functions that is easy to adapt and expand. We discuss important relations between the magnetic field profile and properties of the solar wind that can be used to independently validate prediction models. ZEPHYR provides the foundation and calibration for TEMPEST, and ultimately we will use these models to predict observations and explain space weather created by the bulk solar wind. We are able to reproduce with both models the general anticorrelation seen in comparisons of observed wind speed at 1 AU and the flux tube expansion factor. There is significantly less spread when comparing the results of the two models with each other than when comparing ZEPHYR with a traditional flux tube expansion relation. We suggest that the new code, TEMPEST, will become a valuable tool in the forecasting of space weather.
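
    A sketch of the kind of empirical speed-expansion relation such models are validated against, in the spirit of Wang-Sheeley-type formulas; the constants are illustrative, not values calibrated by ZEPHYR or TEMPEST.

        def wind_speed_kms(fs, v_slow=350.0, v_fast=750.0, alpha=2.0 / 7.0):
            # Speed at 1 AU falls with flux tube expansion factor fs (>= 1):
            # large expansion -> slow wind, small expansion -> fast wind.
            return v_slow + (v_fast - v_slow) / fs ** alpha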

  11. High Resolution Climate Modeling of the Water Cycle over the Western United States Including Potential Climate Change Impacts

    NASA Astrophysics Data System (ADS)

    Rasmussen, R.; Liu, C.; Ikeda, K.

    2016-12-01

    The NCAR Water System program strives to improve the full representation of the water cycle in both regional and global models. Our previous high-resolution simulations using the WRF model over the Rocky Mountains revealed that proper spatial and temporal depiction of snowfall adequate for water resource and climate change purposes can be achieved with the appropriate choice of model grid spacing (< 6 km horizontal) and parameterizations. The climate sensitivity experiment consistent with expected climate change showed an altered hydrological cycle with increased fraction of rain versus snow, increased snowfall at high altitudes, earlier melting of snowpack, and decreased total runoff. In order to investigate regional differences between the Rockies and other major mountain barriers and to study climate change impacts over other regions of the contiguous U.S. (CONUS), we have expanded our prior CO Headwaters modeling study to encompass most of North America at a horizontal grid spacing of 4 km. A domain expansion provides the opportunity to assess changes in orographic precipitation across different mountain ranges in the western USA. This study will examine the water cycle over seven mountain ranges in the western U.S., including likely changes to the amount of snowpack and the timing of spring melt-off, which are critical to agriculture in the region.

  12. The metric on field space, functional renormalization, and metric–torsion quantum gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reuter, Martin, E-mail: reuter@thep.physik.uni-mainz.de; Schollmeyer, Gregor M., E-mail: schollmeyer@thep.physik.uni-mainz.de

    Searching for new non-perturbatively renormalizable quantum gravity theories, functional renormalization group (RG) flows are studied on a theory space of action functionals depending on the metric and the torsion tensor, the latter parameterized by three irreducible component fields. A detailed comparison with Quantum Einstein–Cartan Gravity (QECG), Quantum Einstein Gravity (QEG), and “tetrad-only” gravity, all based on different theory spaces, is performed. It is demonstrated that, over a generic theory space, the construction of a functional RG equation (FRGE) for the effective average action requires the specification of a metric on the infinite-dimensional field manifold as an additional input. A modified FRGE is obtained if this metric is scale-dependent, as it happens in the metric–torsion system considered.
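
    For orientation, the FRGE in question is of the standard Wetterich form, written here in generic notation; the paper's point is that the supertrace and the Hessian implicitly depend on the chosen field-space metric, which may itself be scale-dependent:

        \partial_t \Gamma_k = \frac{1}{2}\,\mathrm{STr}\!\left[\left(\Gamma_k^{(2)} + \mathcal{R}_k\right)^{-1}\partial_t \mathcal{R}_k\right], \qquad t = \ln k .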

  13. Phenomenological Modeling of Infrared Sources: Recent Advances

    NASA Technical Reports Server (NTRS)

    Leung, Chun Ming; Kwok, Sun (Editor)

    1993-01-01

    Infrared observations from planned space facilities (e.g., ISO (Infrared Space Observatory), SIRTF (Space Infrared Telescope Facility)) will yield a large and uniform sample of high-quality data from both photometric and spectroscopic measurements. To maximize the scientific returns of these space missions, complementary theoretical studies must be undertaken to interpret these observations. A crucial step in such studies is the construction of phenomenological models in which we parameterize the observed radiation characteristics in terms of the physical source properties. In the last decade, models with increasing degree of physical realism (in terms of grain properties, physical processes, and source geometry) have been constructed for infrared sources. Here we review current capabilities available in the phenomenological modeling of infrared sources and discuss briefly directions for future research in this area.

  14. Adaptive multiconfigurational wave functions.

    PubMed

    Evangelista, Francesco A

    2014-03-28

    A method is suggested to build simple multiconfigurational wave functions specified uniquely by an energy cutoff Λ. These are constructed from a model space containing determinants with energy relative to that of the most stable determinant no greater than Λ. The resulting Λ-CI wave function is adaptive, being able to represent both single-reference and multireference electronic states. We also consider a more compact wave function parameterization (Λ+SD-CI), which is based on a small Λ-CI reference and adds a selection of all the singly and doubly excited determinants generated from it. We report two heuristic algorithms to build Λ-CI wave functions. The first is based on an approximate prescreening of the full configuration interaction space, while the second performs a breadth-first search coupled with pruning. The Λ-CI and Λ+SD-CI approaches are used to compute the dissociation curve of N2 and the potential energy curves for the first three singlet states of C2. Special attention is paid to the issue of energy discontinuities caused by changes in the size of the Λ-CI wave function along the potential energy curve. This problem is shown to be solvable by smoothing the matrix elements of the Hamiltonian. Our last example, involving the Cu2O2(2+) core, illustrates an alternative use of the Λ-CI method: as a tool to both estimate the multireference character of a wave function and to create a compact model space to be used in subsequent high-level multireference coupled cluster computations.
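
    The Λ-selection rule itself is compact; a sketch, with the determinant energies taken as given and the paper's prescreening and breadth-first search algorithms omitted:

        def lambda_ci_space(determinants, energies, cutoff):
            # Keep determinants within `cutoff` of the lowest diagonal energy;
            # this is the Lambda-CI selection rule in its simplest form.
            e_min = min(energies)
            return [d for d, e in zip(determinants, energies) if e - e_min <= cutoff]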

  15. Optical Characterization of Deep-Space Object Rotation States

    DTIC Science & Technology

    2014-09-01

    surface bi-directional reflectance distribution function (BRDF), and then estimate the asteroid's shape via a best-fit parameterized model. This hybrid... approach can be used because asteroid BRDFs are relatively well studied, but their shapes are generally unknown [17]. Asteroid shape models range... can be accomplished using a shape-dependent method that employs a model of the shape and reflectance characteristics of the object. Our analysis...

  16. Changes in organic aerosol composition with aging inferred from aerosol mass spectra

    NASA Astrophysics Data System (ADS)

    Ng, N. L.; Canagaratna, M. R.; Jimenez, J. L.; Chhabra, P. S.; Seinfeld, J. H.; Worsnop, D. R.

    2011-07-01

    Organic aerosols (OA) can be separated with factor analysis of aerosol mass spectrometer (AMS) data into hydrocarbon-like OA (HOA) and oxygenated OA (OOA). We develop a new method to parameterize H:C of OOA in terms of f43 (ratio of m/z 43, mostly C2H3O+, to total signal in the component mass spectrum). Such parameterization allows for the transformation of a large database of ambient OOA components from the f44 (mostly CO2+, likely from acid groups) vs. f43 space ("triangle plot") (Ng et al., 2010) into the Van Krevelen diagram (H:C vs. O:C) (Van Krevelen, 1950). Heald et al. (2010) examined the evolution of total OA in the Van Krevelen diagram. In this work total OA is deconvolved into components that correspond to primary (HOA and others) and secondary (OOA) organic aerosols. By deconvolving total OA into different components, we remove physical mixing effects between secondary and primary aerosols, which allows for examination of the evolution of OOA components alone in the Van Krevelen space. This provides a unique means of following ambient secondary OA evolution that is analogous to and can be compared with trends observed in chamber studies of secondary organic aerosol formation. The triangle plot in Ng et al. (2010) indicates that f44 of OOA components increases with photochemical age, suggesting the importance of acid formation in OOA evolution. Once transformed with the new parameterization, the OOA components from all sites occupy an area in Van Krevelen space which follows a ΔH:C/ΔO:C slope of ~ -0.5. This slope suggests that ambient OOA aging results in net changes in chemical composition that are equivalent to the addition of both acid and alcohol/peroxide functional groups without fragmentation (i.e. C-C bond breakage), and/or the addition of acid groups with fragmentation. These results provide a framework for linking the bulk aerosol chemical composition evolution to molecular-level studies.
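
    A sketch of the mapping's structure only, with loudly placeholder coefficients (the fitted values are in the paper and in Aiken-type O:C relations): H:C is quadratic in f43 and O:C is linear in f44.

        def van_krevelen_coords(f43, f44,
                                a=(1.0, 6.0, -16.0),   # placeholder quadratic for H:C(f43)
                                b=(0.08, 3.8)):        # placeholder linear O:C(f44)
            h_to_c = a[0] + a[1] * f43 + a[2] * f43 ** 2
            o_to_c = b[0] + b[1] * f44
            return h_to_c, o_to_c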

  17. Sensitivities of Summertime Mesoscale Circulations in the Coastal Carolinas to Modifications of the Kain-Fritsch Cumulus Parameterization.

    PubMed

    Sims, Aaron P; Alapaty, Kiran; Raman, Sethu

    2017-01-01

    Two mesoscale circulations, the Sandhills circulation and the sea breeze, influence the initiation of deep convection over the Sandhills and the coast in the Carolinas during the summer months. The interaction of these two circulations causes additional convection in this coastal region. Accurate representation of mesoscale convection is difficult, as numerical models have problems with the prediction of the timing, amount, and location of precipitation. To address this issue, the authors have incorporated modifications to the Kain-Fritsch (KF) convective parameterization scheme and evaluated these mesoscale interactions using a high-resolution numerical model. The modifications include changes to the subgrid-scale cloud formulation, the convective turnover time scale, and the formulation of the updraft entrainment rates. The use of a grid-scaling adjustment parameter modulates the impact of the KF scheme as a function of the horizontal grid spacing used in a simulation. Results indicate that this modified cumulus parameterization scheme is more effective on domains with coarser grid sizes. Other results include a decrease in surface and near-surface temperatures in areas of deep convection (due to the inclusion of the effects of subgrid-scale clouds on the radiation), improvement in the timing of convection, and an increase in the strength of deep convection.

  18. Circuit Design Optimization Using Genetic Algorithm with Parameterized Uniform Crossover

    NASA Astrophysics Data System (ADS)

    Bao, Zhiguo; Watanabe, Takahiro

    Evolvable hardware (EHW) is a new research field about the use of Evolutionary Algorithms (EAs) to construct electronic systems. EHW refers in a narrow sense to the use of evolutionary mechanisms as the algorithmic drivers for system design, and in a general sense to the capability of the hardware system to develop and improve itself. Genetic Algorithm (GA) is one typical EA. We propose optimal circuit design using GA with parameterized uniform crossover (GApuc) and a fitness function composed of circuit complexity, power, and signal delay. Parameterized uniform crossover is much more likely to distribute its disruptive trials in an unbiased manner over larger portions of the space; it therefore has more exploratory power than one- and two-point crossover, giving more chances of finding better solutions. Its effectiveness is shown by experiments. From the results, we can see that the best elite fitness, the average fitness of the correct circuits, and the number of correct circuits of GApuc are better than those of GA with one-point or two-point crossover. The best optimal circuits generated by GApuc are 10.18% and 6.08% better in evaluation value than those generated by GA with one-point and two-point crossover, respectively.
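
    Parameterized uniform crossover itself is only a few lines; the sketch below exchanges each gene independently with probability p_swap (0.5 is the unbiased, maximally mixing setting; smaller values are less disruptive).

        import random

        def uniform_crossover(parent_a, parent_b, p_swap=0.5):
            # Exchange each gene independently with probability p_swap;
            # returns two children of the same length as the parents.
            child_a, child_b = list(parent_a), list(parent_b)
            for i in range(len(child_a)):
                if random.random() < p_swap:
                    child_a[i], child_b[i] = child_b[i], child_a[i]
            return child_a, child_b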

  19. A Comparison between High-Energy Radiation Background Models and SPENVIS Trapped-Particle Radiation Models

    NASA Technical Reports Server (NTRS)

    Krizmanic, John F.

    2013-01-01

    We have been assessing the effects of background radiation in low-Earth orbit for the next generation of X-ray and Cosmic-ray experiments, in particular for the International Space Station orbit. Outside the areas of high fluxes of trapped radiation, we have been using parameterizations developed by the Fermi team to quantify the high-energy induced background. For the low-energy background, we have been using the AE8 and AP8 SPENVIS models to determine the orbit fractions where the fluxes of trapped particles are too high to allow for useful operation of the experiment. One area we are investigating is how the SPENVIS flux predictions at higher energies match the fluxes at the low-energy end of our parameterizations. I will summarize our methodology for background determination from the various sources of cosmogenic and terrestrial radiation and how these compare to SPENVIS predictions in overlapping energy ranges.

  20. Solid, liquid, and interfacial properties of TiAl alloys: parameterization of a new modified embedded atom method model

    NASA Astrophysics Data System (ADS)

    Sun, Shoutian; Ramu Ramachandran, Bala; Wick, Collin D.

    2018-02-01

    New interatomic potentials for pure Ti and Al, and binary TiAl were developed utilizing the second nearest neighbour modified embedded-atom method (MEAM) formalism. The potentials were parameterized to reproduce multiple properties spanning bulk solids, solid surfaces, solid/liquid phase changes, and liquid interfacial properties. This was carried out using a newly developed optimization procedure that combined the simple minimization of a fitness function with a genetic algorithm to efficiently span the parameter space. The resulting MEAM potentials gave good agreement with experimental and DFT solid and liquid properties, and reproduced the melting points for Ti, Al, and TiAl. However, the surface tensions from the model consistently underestimated experimental values. Liquid TiAl’s surface was found to be mostly covered with Al atoms, showing that Al has a significant propensity for the liquid/air interface.

  1. Solid, liquid, and interfacial properties of TiAl alloys: parameterization of a new modified embedded atom method model.

    PubMed

    Sun, Shoutian; Ramachandran, Bala Ramu; Wick, Collin D

    2018-02-21

    New interatomic potentials for pure Ti and Al, and binary TiAl were developed utilizing the second nearest neighbour modified embedded-atom method (MEAM) formalism. The potentials were parameterized to reproduce multiple properties spanning bulk solids, solid surfaces, solid/liquid phase changes, and liquid interfacial properties. This was carried out using a newly developed optimization procedure that combined the simple minimization of a fitness function with a genetic algorithm to efficiently span the parameter space. The resulting MEAM potentials gave good agreement with experimental and DFT solid and liquid properties, and reproduced the melting points for Ti, Al, and TiAl. However, the surface tensions from the model consistently underestimated experimental values. Liquid TiAl's surface was found to be mostly covered with Al atoms, showing that Al has a significant propensity for the liquid/air interface.

  2. The Hubbard Dimer: A Complete DFT Solution to a Many-Body Problem

    NASA Astrophysics Data System (ADS)

    Smith, Justin; Carrascal, Diego; Ferrer, Jaime; Burke, Kieron

    2015-03-01

    In this work we explain the relationship between density functional theory and strongly correlated models using the simplest possible example, the two-site asymmetric Hubbard model. We discuss the connection between the lattice and real space and how this is a simple model for stretched H2. We can solve this elementary example analytically, and with that we can illuminate the underlying logic and aims of DFT. While the many-body solution is analytic, the density functional is given only implicitly. We overcome this difficulty by creating a highly accurate parameterization of the exact functional. We use this parameterization to perform benchmark calculations of correlation kinetic energy, the adiabatic connection, etc. We also test Hartree-Fock and the Bethe Ansatz Local Density Approximation. We also discuss and illustrate the derivative discontinuity in the exchange-correlation energy and the infamous gap problem in DFT. DGE-1321846, DE-FG02-08ER46496.
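
    The exactly solvable core is small enough to show: in the two-electron singlet basis {|20>, (|↑↓> − |↓↑>)/√2, |02>} with site energies ∓Δv/2, hopping t, and interaction U, the Hamiltonian is a 3×3 matrix. A minimal sketch (sign and factor conventions vary across texts):

        import numpy as np

        def hubbard_dimer_singlet(t, U, dv):
            # Basis {|20>, singlet, |02>}; site energies -dv/2 and +dv/2.
            s = np.sqrt(2.0) * t
            H = np.array([[U - dv, -s,   0.0],
                          [-s,      0.0, -s],
                          [0.0,    -s,   U + dv]])
            return np.linalg.eigvalsh(H)   # exact two-electron singlet spectrum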

  3. Physically-based parameterization of spatially variable soil and vegetation using satellite multispectral data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1989-01-01

    A stochastic-geometric landsurface reflectance model is formulated and tested for the parameterization of spatially variable vegetation and soil at subpixel scales using satellite multispectral images without ground truth. Landscapes are conceptualized as 3-D Lambertian reflecting surfaces consisting of plant canopies, represented by solid geometric figures, superposed on a flat soil background. A computer simulation program is developed to investigate image characteristics at various spatial aggregations representative of satellite observational scales, or pixels. The evolution of the shape and structure of the red-infrared space, or scattergram, of typical semivegetated scenes is investigated by sequentially introducing model variables into the simulation. The analytical moments of the total pixel reflectance, including the mean, variance, spatial covariance, and cross-spectral covariance, are derived in terms of the moments of the individual fractional cover and reflectance components. The moments are applied to the solution of the inverse problem: The estimation of subpixel landscape properties on a pixel-by-pixel basis, given only one multispectral image and limited assumptions on the structure of the landscape. The landsurface reflectance model and inversion technique are tested using actual aerial radiometric data collected over regularly spaced pecan trees, and using both aerial and LANDSAT Thematic Mapper data obtained over discontinuous, randomly spaced conifer canopies in a natural forested watershed. Different amounts of solar backscattered diffuse radiation are assumed and the sensitivity of the estimated landsurface parameters to those amounts is examined.
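
    The first moment of such a model reduces to an area-weighted mixture; a sketch of that first-moment relation only, since the paper's inversion also exploits variances and covariances:

        def pixel_reflectance(f_canopy, f_shadow, r_canopy, r_shadow, r_soil):
            # Area-weighted Lambertian mixture; the remaining fraction is sunlit soil.
            f_soil = 1.0 - f_canopy - f_shadow
            return f_canopy * r_canopy + f_shadow * r_shadow + f_soil * r_soil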

  4. The Unbiased Velocity Distribution of Neutron Stars from a Simulation of Pulsar Surveys

    NASA Astrophysics Data System (ADS)

    Arzoumanian, Z.; Cordes, J. M.; Chernoff, D.

    1997-12-01

    We present the results of a new simulation of the Galactic population of neutron stars: their birthrate, velocity distribution, luminosities, beaming characteristics, and spin evolution. The many simulations in the literature differ from one another primarily in their treatment of the selection effects associated with pulsar detection. Our method, the most realistic to date, goes beyond earlier efforts by retaining the full kinematic, rotational, luminosity, and beaming evolution of each simulated star: "Monte-Carlo" neutron stars are created according to assumed distributions (at birth) in spatial coordinates, kick velocity, and magnitudes and orientations of the spin and magnetic field vectors. The neutron stars spin down following an assumed braking law, and their Galactic trajectories are traced to the present epoch. For each star, a pulse waveform is generated using a phenomenological radio-beam model, obviating the need for an arbitrary beaming fraction. Luminosity is assumed to be a parameterized function of period and spin-down rate, with no intrinsic spread, and a parameterized death-line is applied. Interstellar dispersion and scattering consistent with survey instrumentation and the galactic locales of the neutron stars are applied to the pulse waveforms, which are Fourier analyzed and tested for detection following the techniques of real-world surveys. A unique algorithm is used to compare the populations of simulated and known, non-millisecond, pulsars in the multi-dimensional space of observables (any subset of galactic coordinates, dispersion measure, period, spin-down rate, flux, and proper motion). Model parameters are varied, and statistically independent neutron star populations are created until a maximum likelihood model is found. The highlight of this effort is an unbiased determination of the velocity distribution of neutron stars. We discuss the implications of our results for supernova physics, binary evolution, and the nature of gamma-ray transients.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reister, D.B.; Pin, F.G.

    This paper addresses the problem of time-optimal motions for a mobile platform in a planar environment. The platform has two non-steerable independently driven wheels. The overall mission of the robot is expressed in terms of a sequence of via points at which the platform must be at rest in a given configuration (position and orientation). The objective is to plan time-optimal trajectories between these configurations assuming an unobstructed environment. Using Pontryagin's maximum principle (PMP), we formally demonstrate that all time-optimal motions of the platform for this problem occur for bang-bang controls on the wheels (at each instant, the acceleration on each wheel is either at its upper or lower limit). The PMP, however, only provides necessary conditions for time optimality. To find the time-optimal robot trajectories, we first parameterize the bang-bang trajectories using the switch times on the wheels (the times at which the wheel accelerations change sign). With this parameterization, we can fully search the robot trajectory space and find the switch times that will produce particular paths to a desired final configuration of the platform. We show numerically that robot trajectories with three switch times (two on one wheel, one on the other) can reach any position, while trajectories with four switch times can reach any configuration. By numerical comparison with other trajectories involving similar or greater numbers of switch times, we then identify the sets of time-optimal trajectories. These are uniquely defined using ranges of the parameters, and consist of subsets of trajectories with three switch times for the problem when the final orientation of the robot is not specified, and four switch times when a full final configuration is specified. We conclude with a description of the use of the method for trajectory planning for one of our robots.
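
    A sketch of the switch-time parameterization: wheel accelerations are ±a_max, flipping sign after each listed switch time, and the platform pose follows by integration. The wheelbase and sign conventions below are illustrative.

        import numpy as np

        def rollout(switches_left, switches_right, a_max=1.0, b=0.5, t_final=5.0, dt=1e-3):
            # Pose (x, y, heading) of a differential-drive platform under bang-bang
            # wheel accelerations; b is the half wheel separation (illustrative).
            x = y = th = vl = vr = 0.0
            for t in np.arange(0.0, t_final, dt):
                sl = (-1.0) ** sum(s <= t for s in switches_left)    # accel sign, left
                sr = (-1.0) ** sum(s <= t for s in switches_right)   # accel sign, right
                vl += sl * a_max * dt
                vr += sr * a_max * dt
                v, w = 0.5 * (vl + vr), (vr - vl) / (2.0 * b)        # speed, turn rate
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
                th += w * dt
            return x, y, th

        # e.g. three switch times: two on the left wheel, one on the right
        print(rollout(switches_left=[1.0, 3.0], switches_right=[2.0]))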

  7. Building alternate protein structures using the elastic network model.

    PubMed

    Yang, Qingyi; Sharp, Kim A

    2009-02-15

    We describe a method for efficiently generating ensembles of alternate, all-atom protein structures that (a) differ significantly from the starting structure, (b) have good stereochemistry (bonded geometry), and (c) have good steric properties (absence of atomic overlap). The method uses reconstruction from a series of backbone framework structures that are obtained from a modified elastic network model (ENM) by perturbation along low-frequency normal modes. To ensure good quality backbone frameworks, the single force parameter ENM is modified by introducing two more force parameters to characterize the interaction between consecutive carbon alphas and those within the same secondary structure domain. The relative stiffness of the three parameters is parameterized to reproduce B-factors, while maintaining good bonded geometry. After parameterization, violations of experimental Calpha-Calpha distances and Calpha-Calpha-Calpha pseudo angles along the backbone are reduced to less than 1%. Simultaneously, the average B-factor correlation coefficient improves to R = 0.77. Two applications illustrate the potential of the approach. (1) 102,051 protein backbones spanning a conformational space of 15 Å root-mean-square deviation were generated from 148 nonredundant proteins in the PDB database, and all-atom models with minimal bonded and nonbonded violations were produced from this ensemble of backbone structures using the SCWRL side chain building program. (2) Improved backbone templates for homology modeling. Fifteen query sequences were each modeled on two targets. For each of the 30 target frameworks, dozens of improved templates could be produced. In all cases, improved full-atom homology models resulted, of which 50% could be identified blind using the D-Fire statistical potential.
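
    A sketch of the elastic-network ingredient, simplified to an isotropic Kirchhoff (connectivity) matrix with a stiffer spring for consecutive residues, echoing the paper's multi-parameter scheme; the paper's actual model and parameter values differ.

        import numpy as np

        def enm_modes(coords, cutoff=10.0, k_backbone=10.0, k_default=1.0):
            # Isotropic elastic-network (Kirchhoff) matrix over C-alpha positions;
            # consecutive residues get a stiffer spring (placeholder values).
            n = len(coords)
            K = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(coords[i] - coords[j]) < cutoff:
                        k = k_backbone if j == i + 1 else k_default
                        K[i, j] = K[j, i] = -k
            K[np.diag_indices(n)] = -K.sum(axis=1)
            vals, vecs = np.linalg.eigh(K)   # one zero mode; the next columns are
            return vals, vecs                # the low-frequency modes to perturb along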

  8. Uncertainty Quantification and Regional Sensitivity Analysis of Snow-related Parameters in the Canadian LAnd Surface Scheme (CLASS)

    NASA Astrophysics Data System (ADS)

    Badawy, B.; Fletcher, C. G.

    2017-12-01

    The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values, and hence of their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
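
    A minimal sketch of the emulator-plus-importance workflow, with synthetic stand-ins for the CLASS cases (names, sizes, and the toy response are illustrative):

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, size=(400, 6))      # sampled parameter vectors (stand-in)
        y = X[:, 0] ** 2 + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(400)  # toy output metric

        emulator = SVR(kernel="rbf", C=10.0).fit(X, y)     # cheap surrogate for the land model
        X_big = rng.uniform(0.0, 1.0, size=(20000, 6))     # many more cases via the emulator
        y_big = emulator.predict(X_big)

        # Rank parameter influence with random-forest permutation importance
        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_big, y_big)
        imp = permutation_importance(rf, X_big[:2000], y_big[:2000], n_repeats=5, random_state=0)
        print(np.argsort(imp.importances_mean)[::-1])      # most influential parameters first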

  9. Holistic approach for automated background EEG assessment in asphyxiated full-term infants

    NASA Astrophysics Data System (ADS)

    Matic, Vladimir; Cherian, Perumpillichira J.; Koolen, Ninah; Naulaers, Gunnar; Swarte, Renate M.; Govaert, Paul; Van Huffel, Sabine; De Vos, Maarten

    2014-12-01

    Objective. To develop an automated algorithm to quantify background EEG abnormalities in full-term neonates with hypoxic ischemic encephalopathy. Approach. The algorithm classifies 1 h of continuous neonatal EEG (cEEG) into a mild, moderate or severe background abnormality grade. These classes are well established in the literature and a clinical neurophysiologist labeled 272 1 h cEEG epochs selected from 34 neonates. The algorithm is based on adaptive EEG segmentation and mapping of the segments into the so-called segments’ feature space. Three features are suggested and further processing is obtained using a discretized three-dimensional distribution of the segments’ features represented as a 3-way data tensor. Further classification has been achieved using recently developed tensor decomposition/classification methods that reduce the size of the model and extract a significant and discriminative set of features. Main results. Effective parameterization of cEEG data has been achieved resulting in high classification accuracy (89%) to grade background EEG abnormalities. Significance. For the first time, the algorithm for the background EEG assessment has been validated on an extensive dataset which contained major artifacts and epileptic seizures. The demonstrated high robustness, while processing real-case EEGs, suggests that the algorithm can be used as an assistive tool to monitor the severity of hypoxic insults in newborns.

  10. LAMPS software and mesoscale prediction studies

    NASA Technical Reports Server (NTRS)

    Perkey, D. J.

    1985-01-01

    The full-physics version of the LAMPS model has been implemented on the Perkin-Elmer computer. In addition, LAMPS graphics processors have been rewritten to run on the Perkin-Elmer, and they are currently undergoing final testing. Numerical experiments investigating the impact of parameterized convective latent heat release on the evolution of a precipitating storm have been performed, and the results are currently being evaluated. Current efforts include the continued evaluation of the impact of initial conditions on LAMPS model results. This work will help define measurement requirements for future research field projects as well as for observations in support of operational forecasts. Work on the impact of parameterized latent heat release on the evolution of precipitating systems is also continuing. This research is in support of NASA's proposed Earth Observation Mission (EOM).

  11. A moist aquaplanet variant of the Held–Suarez test for atmospheric model dynamical cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thatcher, Diana R.; Jablonowski, Christiane

    A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held–Suarez (HS) test that was developed for dry simulations on “a flat Earth” and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics–physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics–dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach and suggestions are made for further application areas. Furthermore, the new moist variant of the HS test can be considered a test case of intermediate complexity.

  12. A moist aquaplanet variant of the Held–Suarez test for atmospheric model dynamical cores

    DOE PAGES

    Thatcher, Diana R.; Jablonowski, Christiane

    2016-04-04

    A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held–Suarez (HS) test that was developed for dry simulations on “a flat Earth” and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics–physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics–dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach and suggestions are made for further application areas. Furthermore, the new moist variant of the HS test can be considered a test case of intermediate complexity.
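
    For reference, the dry HS forcing that the MITC keeps has this simple form (a sketch with the standard HS constants exposed as arguments, not the CAM implementation):

        def newtonian_relaxation(T, T_eq, tau_days=40.0):
            # dT/dt = -(T - T_eq)/tau : relax temperature toward the equilibrium profile
            return -(T - T_eq) / (tau_days * 86400.0)

        def rayleigh_damping(u, sigma, sigma_b=0.7, k_f_days=1.0):
            # du/dt = -k(sigma) * u : damp winds below sigma_b, strongest at the surface
            k = max(0.0, (sigma - sigma_b) / (1.0 - sigma_b)) / (k_f_days * 86400.0)
            return -k * u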

  13. Removing Shape-Preserving Transformations in Square-Root Elastic (SRE) Framework for Shape Analysis of Curves

    PubMed Central

    Joshi, Shantanu H.; Klassen, Eric; Srivastava, Anuj; Jermyn, Ian

    2011-01-01

    This paper illustrates and extends an efficient framework, called the square-root-elastic (SRE) framework, for studying shapes of closed curves, that was first introduced in [2]. This framework combines the strengths of two important ideas - elastic shape metric and path-straightening methods - for finding geodesics in shape spaces of curves. The elastic metric allows for optimal matching of features between curves while path-straightening ensures that the algorithm results in geodesic paths. This paper extends this framework by removing two important shape preserving transformations: rotations and re-parameterizations, by forming quotient spaces and constructing geodesics on these quotient spaces. These ideas are demonstrated using experiments involving 2D and 3D curves. PMID:21738385
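
    The rotation part of the quotient has a closed-form optimizer; a sketch via the orthogonal Procrustes problem, with the re-parameterization group (optimized by dynamic programming or gradient search over diffeomorphisms) omitted:

        import numpy as np

        def optimal_rotation(q1, q2):
            # Orthogonal Procrustes: the rotation R in SO(d) minimizing ||q1 - R q2||,
            # where q1, q2 are d x N arrays of corresponding curve samples.
            U, _, Vt = np.linalg.svd(q1 @ q2.T)
            S = np.eye(U.shape[0])
            S[-1, -1] = np.sign(np.linalg.det(U @ Vt))   # exclude reflections
            return U @ S @ Vt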

  14. Using High Resolution Design Spaces for Aerodynamic Shape Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Li, Wu; Padula, Sharon

    2004-01-01

    This paper explains why high resolution design spaces encourage traditional airfoil optimization algorithms to generate noisy shape modifications, which lead to inaccurate linear predictions of aerodynamic coefficients and potential failure of descent methods. By using auxiliary drag constraints for a simultaneous drag reduction at all design points and the least shape distortion to achieve the targeted drag reduction, an improved algorithm generates relatively smooth optimal airfoils with no severe off-design performance degradation over a range of flight conditions, in high resolution design spaces parameterized by cubic B-spline functions. Simulation results using FUN2D in Euler flows are included to show the capability of the robust aerodynamic shape optimization method over a range of flight conditions.
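
    To make the "high resolution design space" concrete: with a cubic B-spline parameterization, each control-point coefficient is one design variable, and raising their number refines the space. A minimal sketch follows; the number of control points and the perturbed coefficient are arbitrary illustrative choices, not the paper's setup.

        import numpy as np
        from scipy.interpolate import BSpline

        x = np.linspace(0.0, 1.0, 200)       # chordwise coordinate
        n_ctrl = 12                          # design-space resolution
        # Clamped cubic knot vector: n_ctrl + 4 knots in total.
        knots = np.concatenate(([0.0] * 4,
                                np.linspace(0.0, 1.0, n_ctrl - 2)[1:-1],
                                [1.0] * 4))
        coeffs = np.zeros(n_ctrl)
        coeffs[5] = 0.002                    # bump one design variable
        bump = BSpline(knots, coeffs, k=3)(x)  # smooth shape modification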

  15. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which together with a common gridding approach restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy and design consistency.

  16. Grassmann matrix quantum mechanics

    DOE PAGES

    Anninos, Dionysios; Denef, Frederik; Monten, Ruben

    2016-04-21

    We explore quantum mechanical theories whose fundamental degrees of freedom are rectangular matrices with Grassmann valued matrix elements. We study particular models where the low energy sector can be described in terms of a bosonic Hermitian matrix quantum mechanics. We describe the classical curved phase space that emerges in the low energy sector. The phase space lives on a compact Kähler manifold parameterized by a complex matrix, of the type discovered some time ago by Berezin. The emergence of a semiclassical bosonic matrix quantum mechanics at low energies requires that the original Grassmann matrices be in the long rectangular limit. In conclusion, we discuss possible holographic interpretations of such matrix models which, by construction, are endowed with a finite dimensional Hilbert space.

  17. Application of New Chorus Wave Model from Van Allen Probe Observations in Earth's Radiation Belt Modeling

    NASA Astrophysics Data System (ADS)

    Wang, D.; Shprits, Y.; Spasojevic, M.; Zhu, H.; Aseev, N.; Drozdov, A.; Kellerman, A. C.

    2017-12-01

    In situ satellite observations, theoretical studies and model simulations suggest that chorus waves play a significant role in the dynamic evolution of relativistic electrons in the Earth's radiation belts. In this study, we developed new wave frequency and amplitude models for upper-band and lower-band chorus waves that depend on Magnetic Local Time (MLT), L-shell, latitude and geomagnetic conditions indexed by Kp, using measurements from the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrument onboard the Van Allen Probes. Utilizing the quasi-linear full diffusion code, we calculated the corresponding diffusion coefficients in each MLT sector (1 hour resolution) for upper-band and lower-band chorus waves according to the newly developed wave models. Compared with former parameterizations of chorus waves, the new parameterizations result in differences in diffusion coefficients that depend on energy and pitch angle. Utilizing the obtained diffusion coefficients, the lifetime of energetic electrons is parameterized accordingly. In addition, to investigate the effects of the obtained diffusion coefficients in different MLT sectors and under different geomagnetic conditions, we performed four-dimensional Versatile Electron Radiation Belt simulations and validated the results against observations.

  18. Scanning Backscatter Lidar Observations for Characterizing 4-D Cloud and Aerosol Fields to Improve Radiative Transfer Parameterizations

    NASA Technical Reports Server (NTRS)

    Schwemmer, Geary K.; Miller, David O.

    2005-01-01

    Clouds have a powerful influence on atmospheric radiative transfer and hence are crucial to understanding and interpreting the exchange of radiation between the Earth's surface, the atmosphere, and space. Because clouds are highly variable in space, time and physical makeup, it is important to be able to observe them in three dimensions (3-D) with sufficient resolution that the data can be used to generate and validate parameterizations of cloud fields at the resolution scale of global climate models (GCMs). Simulations of photon transport in three-dimensionally inhomogeneous cloud fields show that spatial inhomogeneities tend to decrease cloud reflection and absorption and increase direct and diffuse transmission. It is therefore an important task to characterize cloud spatial structures in three dimensions on the scale of GCM grid elements. In order to validate cloud parameterizations that represent the ensemble, or mean and variance, of cloud properties within a GCM grid element, measurements of the parameters must be obtained on a much finer scale so that the statistics on those measurements are truly representative. High spatial sampling resolution is required, on the order of 1 km or less. Since the radiation fields respond almost instantaneously to changes in the cloud field, and cloud changes occur on scales of seconds or less when viewed at scales of approximately 100 m, cloud properties should be measured and characterized on second time scales. GCM time steps are typically on the order of an hour, but in order to obtain sufficient statistical representations of cloud properties in the parameterizations that are used as model inputs, averaged values of cloud properties should be calculated on time scales on the order of 10-100 s. The Holographic Airborne Rotating Lidar Instrument Experiment (HARLIE) provides exceptional temporal (100 ms) and spatial (30 m) resolution measurements of aerosol and cloud backscatter in three dimensions. HARLIE was used in a ground-based configuration in several recent field campaigns. Principal data products include aerosol backscatter profiles, boundary layer heights, entrainment zone thickness, cloud fraction as a function of altitude, and horizontal wind vector profiles based on correlating the motions of clouds and aerosol structures across portions of the scan. Comparisons will be made between various cloud detecting instruments to develop a baseline performance metric.

  19. Scale dependency of regional climate modeling of current and future climate extremes in Germany

    NASA Astrophysics Data System (ADS)

    Tölle, Merja H.; Schefczyk, Lukas; Gutjahr, Oliver

    2017-11-01

    A warmer climate is projected for mid-Europe, with less precipitation in summer, but with intensified extremes of precipitation and near-surface temperature. However, the extent and magnitude of such changes are associated with considerable uncertainty because of the limitations of model resolution and parameterizations. Here, we present the results of convection-permitting regional climate model simulations for Germany integrated with the COSMO-CLM using a horizontal grid spacing of 1.3 km, and additional 4.5- and 7-km simulations with convection parameterized. Of particular interest is how the temperature and precipitation fields and their extremes depend on the horizontal resolution for current and future climate conditions. The spatial variability of precipitation increases with resolution because of more realistic orography and physical parameterizations, but values are overestimated in summer and over mountain ridges in all simulations compared to observations. The spatial variability of temperature is improved at a resolution of 1.3 km, but the results are cold-biased, especially in summer. The increase in resolution from 7/4.5 km to 1.3 km is accompanied by less future warming in summer, by 1 °C. Modeled future precipitation extremes will be more severe, and temperature extremes will not exclusively increase with higher resolution. Although the differences between the resolutions considered (7/4.5 km and 1.3 km) are small, we find that the differences in the changes in extremes are large. High-resolution simulations require further studies, with effective parameterizations and tunings for different topographic regions. Impact models and assessment studies may benefit from such high-resolution model results, but should account for the impact of model resolution on model processes and climate change.

  20. Evaluation of NASA GISS post-CMIP5 single column model simulated clouds and precipitation using ARM Southern Great Plains observations

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Dong, Xiquan; Kennedy, Aaron; Xi, Baike; Li, Zhanqing

    2017-03-01

    The planetary boundary layer turbulence and moist convection parameterizations have been modified recently in the NASA Goddard Institute for Space Studies (GISS) Model E2 atmospheric general circulation model (GCM; post-CMIP5, hereafter P5). In this study, single column model (SCM P5) simulated cloud fractions (CFs), cloud liquid water paths (LWPs) and precipitation were compared with Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) ground-based observations made during the period 2002-08. CMIP5 SCM simulations and GCM outputs over the ARM SGP region were also used in the comparison, to identify whether cloud and precipitation biases resulted from the physical parameterizations or from the dynamic scheme. The comparison showed that the CMIP5 SCM has difficulties in simulating the vertical structure and seasonal variation of low-level clouds. The new scheme implemented in the turbulence parameterization led to significantly improved cloud simulations in P5. It was found that the SCM is sensitive to the relaxation time scale. When the relaxation time increased from 3 to 24 h, SCM P5-simulated CFs and LWPs showed a moderate increase (10%-20%) but precipitation increased significantly (56%), which agreed better with observations despite the less accurate atmospheric state. Annual averages among the GCM and SCM simulations were almost the same, but their respective seasonal variations were out of phase. This suggests that the same physical cloud parameterization can generate similar statistical results over a long time period, but different dynamics drive the differences in seasonal variations. This study can potentially provide guidance for the further development of the GISS model.

  1. A physically-based approach of treating dust-water cloud interactions in climate models

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Karydis, V.; Barahona, D.; Sokolik, I. N.; Nenes, A.

    2011-12-01

    All aerosol-cloud-climate assessment studies to date assume that the ability of dust (and other insoluble species) to act as Cloud Condensation Nuclei (CCN) is determined solely by their dry size and amount of soluble material. Recent evidence, however, clearly shows that dust can act as efficient CCN (even if lacking appreciable amounts of soluble material) through adsorption of water vapor onto the surface of the particle. This "inherent" CCN activity is augmented as the dust accumulates soluble material through atmospheric aging. A comprehensive treatment of dust-cloud interactions therefore requires including both of these sources of CCN activity in atmospheric models. This study presents a "unified" theory of CCN activity that considers both effects of adsorption and solute. The theory is corroborated and constrained with experiments of CCN activity of mineral aerosols generated from clays, calcite, quartz, dry lake beds and desert soil samples from Northern Africa, East Asia/China, and Northern America. The unified activation theory is then included within the mechanistic droplet activation parameterization of Kumar et al. (2009) (including the giant CCN correction of Barahona et al., 2010) for a comprehensive treatment of dust impacts on global CCN and cloud droplet number. The parameterization is demonstrated with the NASA Global Modeling Initiative (GMI) Chemical Transport Model using wind fields computed with the Goddard Institute for Space Studies (GISS) general circulation model. References: Barahona, D., et al. (2010), Comprehensively Accounting for the Effect of Giant CCN in Cloud Activation Parameterizations, Atmos. Chem. Phys., 10, 2467-2473; Kumar, P., Sokolik, I.N., and Nenes, A. (2009), Parameterization of cloud droplet formation for global and regional models: including adsorption activation from insoluble CCN, Atmos. Chem. Phys., 9, 2517-2532.

  2. Cross-flow turbines: physical and numerical model studies towards improved array simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2015-12-01

    Cross-flow, or vertical-axis turbines, show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts for maximizing overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to explore the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g. actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to their standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices. An additional sub-model is considered for injecting turbulence model scalar quantities based on actuator line element loading. Results are presented for the simulation of performance and wake dynamics of axial- and cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET grant 1150797.

  3. Trajectory Optimization for Helicopter Unmanned Aerial Vehicles (UAVs)

    DTIC Science & Technology

    2010-06-01

    the Nth-order derivative of the Legendre polynomial L_N(t). Using this method, the range of integration is transformed universally to [-1,+1], ... which is the interval for Legendre polynomials. Although the LGL interpolation points are not evenly spaced, they are symmetric about the midpoint 0. ... the vehicle's kinematic constraints are parameterized in terms of polynomials of sufficient order, (2) a collision-free criterion is developed and
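
    For reference, the Legendre-Gauss-Lobatto (LGL) points are the endpoints ±1 together with the roots of the derivative of the Legendre polynomial L_N(t), which is why they are not evenly spaced yet symmetric about 0. A minimal sketch of computing them:

        import numpy as np
        from numpy.polynomial import legendre as L

        def lgl_points(N):
            """Legendre-Gauss-Lobatto points: the endpoints -1 and +1
            together with the roots of the derivative of L_N."""
            cN = np.zeros(N + 1); cN[N] = 1.0   # Legendre-basis coeffs of L_N
            dcoef = L.legder(cN)                # coefficients of L_N'
            interior = L.legroots(dcoef)        # roots of L_N'
            return np.concatenate(([-1.0], np.sort(interior), [1.0]))

        print(lgl_points(4))  # 5 collocation points on [-1, +1]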

  4. Uncertainty Assessment of Space-Borne Passive Soil Moisture Retrievals

    NASA Technical Reports Server (NTRS)

    Quets, Jan; De Lannoy, Gabrielle; Reichle, Rolf; Cosh, Michael; van der Schalie, Robin; Wigneron, Jean-Pierre

    2017-01-01

    The uncertainty associated with passive soil moisture retrieval is hard to quantify, and is known to be underlain by various, diverse, and complex causes. Factors affecting space-borne retrieved soil moisture estimation include: (i) the optimization or inversion method applied to the radiative transfer model (RTM), such as the Single Channel Algorithm (SCA) or the Land Parameter Retrieval Model (LPRM), (ii) the selection of the observed brightness temperatures (Tbs), e.g. polarization and incidence angle, (iii) the definition of the cost function and the impact of prior information in it, and (iv) the RTM parameterization (e.g. the parameterizations officially used by the SMOS L2 and SMAP L2 retrieval products, the ECMWF-based SMOS assimilation product, and the SMAP L4 assimilation product, and perturbations from those configurations). This study aims at disentangling the relative importance of the above-mentioned sources of uncertainty by carrying out soil moisture retrieval experiments using SMOS Tb observations in different settings, some of which are mentioned above. The ensemble uncertainties are evaluated at 11 reference CalVal sites over a time period of more than 5 years. These experimental retrievals were inter-compared and further confronted with in situ soil moisture measurements and operational SMOS L2 retrievals, using commonly used skill metrics to quantify the temporal uncertainty in the retrievals.

  5. Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
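
    A minimal sketch of the kind of z-plane parameterization described: specifying zeros and poles by radius and angle, with radii held inside the unit circle so the resulting biquad is stable and minimum-phase. The particular radii and angles below are illustrative, not a flight-qualified design.

        import numpy as np
        from scipy import signal

        def biquad_from_polar(r_z, th_z, r_p, th_p):
            """Build a biquad from zero/pole radius-angle pairs. Restricting
            both radii to [0, 1) guarantees stability and minimum phase."""
            assert 0.0 <= r_z < 1.0 and 0.0 <= r_p < 1.0
            b = np.poly([r_z * np.exp(1j * th_z), r_z * np.exp(-1j * th_z)]).real
            a = np.poly([r_p * np.exp(1j * th_p), r_p * np.exp(-1j * th_p)]).real
            return b, a

        # Example: notch-like shaping near 0.2*pi rad/sample.
        b, a = biquad_from_polar(0.98, 0.2 * np.pi, 0.90, 0.2 * np.pi)
        w, h = signal.freqz(b, a, worN=512)
        phase = np.unwrap(np.angle(h))  # inspect the phase-shaping behavior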

  6. Assessment of fine-scale parameterizations of turbulent dissipation rates in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Takahashi, A.; Hibiya, T.

    2016-12-01

    To sustain the global overturning circulation, more mixing is required in the ocean than has been observed. The most likely candidates for this missing mixing are breaking of wind-induced near-inertial waves and bottom-generated internal lee waves in the sparsely observed Southern Ocean. Nevertheless, there is a paucity of direct microstructure measurements in the Southern Ocean where energy dissipation rates have been estimated mostly using fine-scale parameterizations. In this study, we assess the validity of the existing fine-scale parameterizations in the Antarctic Circumpolar Current (ACC) region using the data obtained from simultaneous full-depth measurements of micro-scale turbulence and fine-scale shear/strain carried out south of Australia during January 17 to February 2, 2016. Although the fine-scale shear/strain ratio (Rω) is close to the Garrett-Munk (GM) value at the station north of Subtropical Front, the values of Rω at the stations south of Subantarctic Front well exceed the GM value, suggesting that the local internal wave spectra are significantly biased to lower frequencies. We find that not all of the observed energy dissipation rates at these locations are well predicted using Gregg-Henyey-Polzin (GHP; Gregg et al., 2003) and Ijichi-Hibiya (IH; Ijichi and Hibiya, 2015) parameterizations, both of which take into account the spectral distortion in terms of Rω; energy dissipation rates at some locations are obviously overestimated by GHP and IH, although only the strain-based Wijesekera (Wijesekera et al., 1993) parameterization yields fairly good predictions. One possible explanation for this result is that a significant portion of the observed shear variance at these locations might be attributed to kinetic-energy-dominant small-scale eddies associated with the ACC, so that fine-scale strain rather than Rω becomes a more appropriate parameter to characterize the actual internal wave field.

  7. Electromagnetic processes in nucleus-nucleus collisions relating to space radiation research

    NASA Technical Reports Server (NTRS)

    Norbury, John W.

    1992-01-01

    Most of the papers within this report deal with electromagnetic processes in nucleus-nucleus collisions which are of concern in the space radiation program. In particular, the removal of one and two nucleons via both electromagnetic and strong interaction processes has been extensively investigated. The theory of relativistic Coulomb fission has also been developed. Several papers on quark models also appear. Finally, note that the theoretical methods developed in this work have been directly applied to the task of radiation protection of astronauts. This has been done by parameterizing the theoretical formalism in such a fashion that it can be used in cosmic ray transport codes.

  8. Advanced local area network concepts

    NASA Technical Reports Server (NTRS)

    Grant, Terry

    1985-01-01

    Development of a good model of the data traffic requirements for Local Area Networks (LANs) onboard the Space Station is the driving problem in this work. A parameterized workload model is under development. An analysis contract has been started specifically to capture the distributed processing requirements for the Space Station and then to develop a top-level model to simulate how various processing scenarios can handle the workload and what data communication patterns result. A summary of the Local Area Network Extensible Simulator 2 Requirements Specification and excerpts from a grant report on the topological design of fiber optic local area networks with application to Expressnet are given.

  9. Spectral bidirectional reflectance of Antarctic snow: Measurements and parameterization

    NASA Astrophysics Data System (ADS)

    Hudson, Stephen R.; Warren, Stephen G.; Brandt, Richard E.; Grenfell, Thomas C.; Six, Delphine

    2006-09-01

    The bidirectional reflectance distribution function (BRDF) of snow was measured from a 32-m tower at Dome C, at latitude 75°S on the East Antarctic Plateau. These measurements were made at 96 solar zenith angles between 51° and 87° and cover wavelengths 350-2400 nm, with 3- to 30-nm resolution, over the full range of viewing geometry. The BRDF at 900 nm had previously been measured at the South Pole; the Dome C measurement at that wavelength is similar. At both locations the natural roughness of the snow surface causes the anisotropy of the BRDF to be less than that of flat snow. The inherent BRDF of the snow is nearly constant in the high-albedo part of the spectrum (350-900 nm), but the angular distribution of reflected radiance becomes more isotropic at the shorter wavelengths because of atmospheric Rayleigh scattering. Parameterizations were developed for the anisotropic reflectance factor using a small number of empirical orthogonal functions. Because the reflectance is more anisotropic at wavelengths at which ice is more absorptive, albedo rather than wavelength is used as a predictor in the near infrared. The parameterizations cover nearly all viewing angles and are applicable to the high parts of the Antarctic Plateau that have small surface roughness and, at viewing zenith angles less than 55°, elsewhere on the plateau, where larger surface roughness affects the BRDF at larger viewing angles. The root-mean-squared error of the parameterized reflectances is between 2% and 4% at wavelengths less than 1400 nm and between 5% and 8% at longer wavelengths.

  10. Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.

    PubMed

    Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

    The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²) and the regression slope between simulated and measured annualized loads across all site years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential.
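
    Since the evaluation above hinges on the Nash-Sutcliffe efficiency, here is a minimal implementation of that metric; the example loads are made-up numbers, not data from the study.

        import numpy as np

        def nash_sutcliffe(observed, simulated):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
            model does no better than the mean of the observations."""
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            sse = np.sum((observed - simulated) ** 2)
            variance = np.sum((observed - observed.mean()) ** 2)
            return 1.0 - sse / variance

        # Hypothetical annualized loads.
        obs = np.array([12.0, 30.5, 8.2, 22.1])
        sim = np.array([10.8, 28.0, 9.5, 25.0])
        print(round(nash_sutcliffe(obs, sim), 3))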

  11. A Nonlinear Interactions Approximation Model for Large-Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Haliloglu, Mehmet U.; Akhavan, Rayhaneh

    2003-11-01

    A new approach to LES modelling is proposed based on direct approximation of the nonlinear terms $\overline{u_i u_j}$ in the filtered Navier-Stokes equations, instead of the subgrid-scale stress, τ_ij. The proposed model, which we call the Nonlinear Interactions Approximation (NIA) model, uses graded filters and deconvolution to parameterize the local interactions across the LES cutoff, and a Smagorinsky eddy viscosity term to parameterize the distant interactions. A dynamic procedure is used to determine the unknown eddy viscosity coefficient, rendering the model free of adjustable parameters. The proposed NIA model has been applied to LES of turbulent channel flows at Re_τ ≈ 210 and Re_τ ≈ 570. The results show good agreement with DNS not only for the mean and resolved second-order turbulence statistics but also for the full (resolved plus subgrid) Reynolds stress and turbulence intensities.

  12. Pion, Kaon, Proton and Antiproton Production in Proton-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.

    2008-01-01

    Inclusive pion, kaon, proton, and antiproton production from proton-proton collisions is studied at a variety of proton energies. Various available parameterizations of Lorentz-invariant differential cross sections as a function of transverse momentum and rapidity are compared with experimental data. The Badhwar and Alper parameterizations are moderately satisfactory for charged pion production. The Badhwar parameterization provides the best fit for charged kaon production. For proton production, the Alper parameterization is best, and for antiproton production the Carey parameterization works best. However, no parameterization is able to fully account for all the data.

  13. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
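
    The BMA decomposition described here - a weighted mean plus within- and between-parameterization variances - can be written in a few lines. A minimal sketch; the estimates and weights below are hypothetical.

        import numpy as np

        def bma_moments(means, variances, weights):
            """Bayesian model averaging over parameterization methods:
            returns the BMA mean, the within-parameterization variance
            and the between-parameterization variance."""
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()
            mu = np.sum(w * means)
            within = np.sum(w * variances)
            between = np.sum(w * (means - mu) ** 2)
            return mu, within, between

        # Three hypothetical GP estimates of log10(K) at one location.
        mu, var_w, var_b = bma_moments(means=np.array([-3.1, -2.8, -3.4]),
                                       variances=np.array([0.04, 0.09, 0.05]),
                                       weights=np.array([0.5, 0.3, 0.2]))
        print(mu, var_w + var_b)  # conditional mean and total variance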

  14. Relativistic astrophysics [studies of gravitational radiation in asymptotic de Sitter space and the post-Newtonian approximation]

    NASA Technical Reports Server (NTRS)

    Smalley, L. L.

    1975-01-01

    The coordinate independence of gravitational radiation and the parameterized post-Newtonian approximation from which it is extended are described. The general consistency of the field equations with Bianchi identities, gauge conditions, and the Newtonian limit of the perfect fluid equations of hydrodynamics are studied. A technique of modification is indicated for application to vector-metric or double metric theories, as well as to scalar-tensor theories.

  15. Importance of Winds and Soil Moistures to the US Summertime Drought of 1988: A GCM Simulation Study

    NASA Technical Reports Server (NTRS)

    Mocko, David M.; Sud, Y. C.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    The climate version of NASA's GEOS 2 GCM did not simulate a realistic 1988 summertime drought in the central United States (Mocko et al., 1999). Despite several new upgrades to the model's parameterizations, as well as finer grid spacing from 4x5 degrees to 2x2.5 degrees, no significant improvements were noted in the model's simulation of the U.S. drought.

  16. Impact of physical parameterizations on idealized tropical cyclones in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Reed, K. A.; Jablonowski, C.

    2011-02-01

    This paper explores the impact of the physical parameterization suite on the evolution of an idealized tropical cyclone within the National Center for Atmospheric Research's (NCAR) Community Atmosphere Model (CAM). The CAM versions 3.1 and 4 are used to study the development of an initially weak vortex in an idealized environment over a 10-day simulation period within an aqua-planet setup. The main distinction between CAM 3.1 and CAM 4 lies within the physical parameterization of deep convection. CAM 4 now includes a dilute plume Convective Available Potential Energy (CAPE) calculation and Convective Momentum Transport (CMT). The finite-volume dynamical core with 26 vertical levels in aqua-planet mode is used at horizontal grid spacings of 1.0°, 0.5° and 0.25°. It is revealed that CAM 4 produces stronger and larger tropical cyclones by day 10 at all resolutions, with a much earlier onset of intensification when compared to CAM 3.1. At the highest resolution CAM 4 also accounts for changes in the storm's vertical structure, such as an increased outward slope of the wind contours with height, when compared to CAM 3.1. An investigation concludes that the new dilute CAPE calculation in CAM 4 is largely responsible for the changes observed in the development, strength and structure of the tropical cyclone.

  17. Speeding up low-mass planetary microlensing simulations and modeling: The caustic region of influence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penny, Matthew T., E-mail: penny@astronomy.ohio-state.edu

    2014-08-01

    Extensive simulations of planetary microlensing are necessary both before and after a survey is conducted: before to design and optimize the survey and after to understand its detection efficiency. The major bottleneck in such computations is the computation of light curves. However, for low-mass planets, most of these computations are wasteful, as most light curves do not contain detectable planetary signatures. In this paper, I develop a parameterization of the binary microlens that is conducive to avoiding light curve computations. I empirically find analytic expressions describing the limits of the parameter space that contain the vast majority of low-mass planet detections. Through a large-scale simulation, I measure the (in)completeness of the parameterization and the speed-up it is possible to achieve. For Earth-mass planets in a wide range of orbits, it is possible to speed up simulations by a factor of ∼30-125 (depending on the survey's annual duty-cycle) at the cost of missing ∼1% of detections (which is actually a smaller loss than for the arbitrary parameter limits typically applied in microlensing simulations). The benefits of the parameterization probably outweigh the costs for planets below 100 M⊕. For planets at the sensitivity limit of AFTA-WFIRST, simulation speed-ups of a factor ∼1000 or more are possible.

  18. Pair production in classical Stueckelberg-Horwitz-Piron electrodynamics

    NASA Astrophysics Data System (ADS)

    Land, Martin

    2015-05-01

    We calculate pair production from bremsstrahlung as a classical effect in Stueckelberg-Horwitz electrodynamics. In this framework, worldlines are traced out dynamically through the evolution of events x^μ(τ) parameterized by a chronological time τ that is independent of the spacetime coordinates. These events, defined in an unconstrained 8D phase space, interact through five τ-dependent gauge fields induced by the event evolution. The resulting theory differs in its underlying mechanics from conventional electromagnetism, but coincides with Maxwell theory in an equilibrium limit. In particular, the total mass-energy-momentum of particles and fields is conserved, but the mass-shell constraint is lifted from individual interacting events, so that the Feynman-Stueckelberg interpretation of pair creation/annihilation is implemented in classical mechanics. We consider a three-stage interaction which when parameterized by the laboratory clock x^0 appears as (1) particle-1 scatters on a heavy nucleus to produce bremsstrahlung, (2) the radiation field produces a particle/antiparticle pair, (3) the antiparticle is annihilated with particle-2 in the presence of a second heavy nucleus. When parameterized in chronological time τ, the underlying process develops as (1) particle-2 scatters on the second nucleus and begins evolving backward in time with negative energy, (2) particle-1 scatters on the first nucleus and releases bremsstrahlung, (3) particle-2 absorbs radiation which returns it to forward time evolution with positive energy.

  19. Application Analysis of BIM Technology in Metro Rail Transit

    NASA Astrophysics Data System (ADS)

    Liu, Bei; Sun, Xianbin

    2018-03-01

    As urban road networks develop rapidly, subway rail transit construction is an effective way to alleviate urban traffic congestion, but such projects face limited site space, complex resource allocation, tight schedules, and complicated underground pipeline networks. BIM technology, with its advantages of three-dimensional visualization, parameterization, and virtual simulation, can effectively address these technical problems. Using the Shenzhen Metro Line 9 project as a case study, this paper investigates the application of BIM technology across the full project lifecycle, a context in which it is still rarely used. The model information file is imported into Navisworks for four-dimensional animation simulation to determine the optimal construction scheme for the shield machine. A subway construction management platform based on BIM and private cloud technology uses cameras and sensors to achieve integrated, dynamic monitoring of the operation and maintenance of underground facilities. Making full use of these advantages of BIM improves the engineering quality and construction efficiency of subway rail transit projects through construction, operation, and maintenance.

  20. Adaptive multiconfigurational wave functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evangelista, Francesco A., E-mail: francesco.evangelista@emory.edu

    2014-03-28

    A method is suggested to build simple multiconfigurational wave functions specified uniquely by an energy cutoff Λ. These are constructed from a model space containing determinants with energy relative to that of the most stable determinant no greater than Λ. The resulting Λ-CI wave function is adaptive, being able to represent both single-reference and multireference electronic states. We also consider a more compact wave function parameterization (Λ+SD-CI), which is based on a small Λ-CI reference and adds a selection of all the singly and doubly excited determinants generated from it. We report two heuristic algorithms to build Λ-CI wave functions. The first is based on an approximate prescreening of the full configuration interaction space, while the second performs a breadth-first search coupled with pruning. The Λ-CI and Λ+SD-CI approaches are used to compute the dissociation curve of N₂ and the potential energy curves for the first three singlet states of C₂. Special attention is paid to the issue of energy discontinuities caused by changes in the size of the Λ-CI wave function along the potential energy curve. This problem is shown to be solvable by smoothing the matrix elements of the Hamiltonian. Our last example, involving the Cu₂O₂²⁺ core, illustrates an alternative use of the Λ-CI method: as a tool to both estimate the multireference character of a wave function and to create a compact model space to be used in subsequent high-level multireference coupled cluster computations.
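
    The model-space selection rule itself is simple to state in code: keep every determinant whose energy lies within Λ of the lowest-energy one. A minimal sketch with made-up determinant labels and energies:

        def lambda_ci_space(determinant_energies, cutoff):
            """Select the Λ-CI model space: all determinants whose energy
            is within `cutoff` of the most stable determinant."""
            e_min = min(determinant_energies.values())
            return {d for d, e in determinant_energies.items()
                    if e - e_min <= cutoff}

        # Hypothetical determinant energies (hartree) and a 0.1 Eh cutoff.
        energies = {"|2200>": -108.95, "|2110>": -108.90, "|2020>": -108.70}
        print(lambda_ci_space(energies, cutoff=0.10))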

  1. Forcing variables in simulation of transpiration of water stressed plants determined by principal component analysis

    NASA Astrophysics Data System (ADS)

    Durigon, Angelica; Lier, Quirijn de Jong van; Metselaar, Klaas

    2016-10-01

    Measuring plant transpiration at the canopy scale remains laborious, so numerical modelling is used to estimate it at high temporal frequency. The model by Jacobs (1994) needs to be reparameterized to simulate transpiration of water-stressed plants. We compare the importance of the model variables affecting simulated transpiration of water-stressed plants. A systematic literature review was performed to recover existing parameterizations to be tested in the model. Data from a field experiment with common bean under full and deficit irrigation were used to correlate estimates to forcing variables by principal component analysis. The new parameterizations resulted in a moderate reduction of prediction errors and in an increase in model performance. The Ags model was sensitive to changes in the mesophyll conductance and leaf angle distribution parameterizations, allowing model improvement. Simulated transpiration could be separated into temporal components. The daily, afternoon-depression and long-term components for the fully irrigated treatment were related mainly to atmospheric forcing variables (specific humidity deficit between stomata and air, relative air humidity and canopy temperature). The daily and afternoon-depression components for the deficit-irrigated treatment were related to both atmospheric forcing and soil dryness, and the long-term component was related to soil dryness.
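
    Correlating simulated transpiration components with forcing variables via principal component analysis can be sketched as below; the variable names and the random stand-in data are illustrative, not the experiment's dataset.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        names = ["humidity_deficit", "rel_humidity",
                 "canopy_temp", "soil_dryness"]
        X = rng.standard_normal((300, len(names)))   # stand-in forcing data
        pca = PCA(n_components=2).fit(X)             # PCA centers internally
        # Rank forcing variables by their loadings on the first component.
        for name, load in zip(names, pca.components_[0]):
            print(f"{name}: PC1 loading = {load:+.2f}")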

  2. Assessing the performance of wave breaking parameterizations in shallow waters in spectral wave models

    NASA Astrophysics Data System (ADS)

    Lin, Shangfei; Sheng, Jinyu

    2017-12-01

    Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave-breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are their representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations have reasonable performance in parameterizing depth-induced wave breaking in shallow waters, but with their own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) has a drawback of underpredicting the SWHs in locally generated wave conditions and overpredicting them in remotely generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 had relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization with a dependence of the breaker index on the normalized water depth in deep waters similar to SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2%, in comparison with the three best performing existing parameterizations, whose average scatter indices lie between 9.2% and 13.6%.
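
    To illustrate what a breaker-index parameterization does, the sketch below caps the local wave height at H_max = γ·d with a nonlinear (tanh) slope dependence of γ. The functional form and the constants gamma_flat and c are illustrative assumptions, not the fitted values of the paper.

        import numpy as np

        def max_wave_height(depth, bottom_slope, gamma_flat=0.73, c=2.0):
            """Hypothetical breaker-index model: H_max = gamma * depth,
            with gamma depending nonlinearly on the local bottom slope."""
            gamma = gamma_flat * (1.0 + c * np.tanh(bottom_slope) ** 2)
            return gamma * depth

        print(max_wave_height(depth=2.0, bottom_slope=0.02))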

  3. Parameterization and scaling of arctic ice conditions in the context of ice-atmospheric processes

    NASA Technical Reports Server (NTRS)

    Barry, R. G.; Steffen, K.; Heinrichs, J. F.; Key, J. R.; Maslanik, J. A.; Serreze, M. C.; Weaver, R. L.

    1995-01-01

    The goals of this project are to observe how the open water/thin ice fraction in a high-concentration ice pack responds to different short-period atmospheric forcings, and how this response is represented at different scales of observation. The objectives can be summarized as follows: determine the feasibility and accuracy of ice concentration and ice typing by ERS-1 SAR backscatter data, and whether SAR data might be used to calibrate concentration estimates from optical and passive-microwave sensors; investigate methods to integrate SAR data with other satellite data for turbulent heat flux parameterization at the ocean/atmosphere interface; determine how the development and evolution of open water/thin ice areas within the interior ice pack vary under different atmospheric synoptic regimes; compare how open-water/thin ice fractions estimated from large-area divergence measurements differ from fractions determined by summing localized openings in the pack; and relate these questions of scale and process to methods of observation, modeling, and averaging over time and space.

  4. Equations on knot polynomials and 3d/5d duality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mironov, A.; Morozov, A.; ITEP, Moscow

    2012-09-24

    We briefly review the current situation with various relations between knot/braid polynomials (Chern-Simons correlation functions), ordinary and extended, considered as functions of the representation and of the knot topology. These include linear skein relations, quadratic Plucker relations, as well as 'differential' and (quantum) A-polynomial structures. We pay special attention to the identity between the A-polynomial equations for knots and Baxter equations for quantum relativistic integrable systems, related through Seiberg-Witten theory to 5d super-Yang-Mills models and through the AGT relation to the q-Virasoro algebra. This identity is an important ingredient of an emerging 3d-5d generalization of the AGT relation. The shape of the Baxter equation (including the values of coefficients) depends on the choice of the knot/braid. Thus, as in the case of KP integrability, where (some, so far torus) knots parameterize particular points of the Universal Grassmannian, in this relation they parameterize particular points in the moduli space of many-body integrable systems of relativistic type.

  5. Estimation of vegetation photosynthetic capacity from space-based measurements of chlorophyll fluorescence for terrestrial biosphere models.

    PubMed

    Zhang, Yongguang; Guanter, Luis; Berry, Joseph A; Joiner, Joanna; van der Tol, Christiaan; Huete, Alfredo; Gitelson, Anatoly; Voigt, Maximilian; Köhler, Philipp

    2014-12-01

    Photosynthesis simulations by terrestrial biosphere models are usually based on Farquhar's model, in which the maximum rate of carboxylation (Vcmax) is a key control parameter of photosynthetic capacity. Even though Vcmax is known to vary substantially in space and time in response to environmental controls, it is typically parameterized in models with tabulated values associated with plant functional types. Remote sensing can be used to produce a spatially continuous and temporally resolved view of photosynthetic efficiency, but traditional vegetation observations based on spectral reflectance lack a direct link to plant photochemical processes. Alternatively, recent space-borne measurements of sun-induced chlorophyll fluorescence (SIF) can offer an observational constraint on photosynthesis simulations. Here, we show that top-of-canopy SIF measurements from space are sensitive to Vcmax at the ecosystem level, and present an approach to invert Vcmax from SIF data. We use the Soil-Canopy Observation of Photosynthesis and Energy (SCOPE) balance model to derive empirical relationships between seasonal Vcmax and SIF which are used to solve the inverse problem. We evaluate our Vcmax estimation method at six agricultural flux tower sites in the midwestern US using space-based SIF retrievals. Our Vcmax estimates agree well with literature values for corn and soybean plants (average values of 37 and 101 μmol m⁻² s⁻¹, respectively) and show plausible seasonal patterns. The effect of the updated seasonally varying Vcmax parameterization on simulated gross primary productivity (GPP) is tested by comparing to simulations with fixed Vcmax values. Validation against flux tower observations demonstrates that simulations of GPP and light use efficiency improve significantly when our time-resolved Vcmax estimates from SIF are used, with R² for GPP comparisons increasing from 0.85 to 0.93, and for light use efficiency from 0.44 to 0.83. Our results support the use of space-based SIF data as a proxy for photosynthetic capacity and suggest the potential for global, time-resolved estimates of Vcmax.
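
    The inversion step amounts to mapping a SIF time series through an empirical SIF-Vcmax relation. The sketch below assumes a linear relation with made-up coefficients a and b, standing in for the SCOPE-derived curves of the paper.

        import numpy as np

        a, b = 55.0, 10.0  # illustrative coefficients, not from the paper

        def invert_vcmax(sif):
            """Map a seasonal SIF series to Vcmax via an assumed
            linear empirical relation Vcmax = a * SIF + b."""
            return a * np.asarray(sif, dtype=float) + b

        sif_series = np.array([0.4, 0.9, 1.6, 1.2, 0.5])  # spring..autumn
        print(invert_vcmax(sif_series))  # time-resolved Vcmax estimates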

  6. The response of the SSM/I to the marine environment. Part 2: A parameterization of the effect of the sea surface slope distribution on emission and reflection

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.; Katsaros, Kristina B.

    1994-01-01

    Based on a geometric optics model and the assumption of an isotropic Gaussian surface slope distribution, the component of ocean surface microwave emissivity variation due to large-scale surface roughness is parameterized for the frequencies and approximate viewing angle of the Special Sensor Microwave/Imager. Independent geophysical variables in the parameterization are the effective (microwave frequency dependent) slope variance and the sea surface temperature. Using the same physical model, the change in the effective zenith angle of reflected sky radiation arising from large-scale roughness is also parameterized. Independent geophysical variables in this parameterization are the effective slope variance and the atmospheric optical depth at the frequency in question. Both of the above model-based parameterizations are intended for use in conjunction with empirical parameterizations relating effective slope variance and foam coverage to near-surface wind speed. These empirical parameterizations are the subject of a separate paper.

  7. LEOrbit: A program to calculate parameters relevant to modeling Low Earth Orbit spacecraft-plasma interaction

    NASA Astrophysics Data System (ADS)

    Marchand, R.; Purschke, D.; Samson, J.

    2013-03-01

    Understanding the physics of interaction between satellites and the space environment is essential in planning and exploiting space missions. Several computer models have been developed over the years to study this interaction. In all cases, simulations are carried out in the reference frame of the spacecraft, and effects such as charging and the formation of electrostatic sheaths and wakes are calculated for given conditions of the space environment. In this paper we present a program used to compute magnetic fields and a number of space plasma and space environment parameters relevant to Low Earth Orbit (LEO) spacecraft-plasma interaction modeling. Magnetic fields are obtained from the International Geophysical Reference Field (IGRF) and plasma parameters are obtained from the International Reference Ionosphere (IRI) model. All parameters are computed in the spacecraft frame of reference as a function of its six Keplerian elements. They are presented in a format that can be used directly in most spacecraft-plasma interaction models.
    Catalogue identifier: AENY_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENY_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 270308
    No. of bytes in distributed program, including test data, etc.: 2323222
    Distribution format: tar.gz
    Programming language: FORTRAN 90
    Computer: Non-specific
    Operating system: Non-specific
    RAM: 7.1 MB
    Classification: 19, 4.14
    External routines: IRI, IGRF (included in the package)
    Nature of problem: Compute magnetic field components, direction of the sun, sun visibility factor and approximate plasma parameters in the reference frame of a Low Earth Orbit satellite.
    Solution method: Orbit integration, calls to the IGRF and IRI libraries, and transformation of coordinates from the geocentric to the spacecraft frame of reference.
    Restrictions: Low Earth orbits, altitudes between 150 and 2000 km.
    Running time: Approximately two seconds to parameterize a full orbit with 1000 points.
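
    Since all quantities are expressed as functions of the six Keplerian elements, the core geometric step is mapping elements to an inertial position. A minimal, self-contained Python sketch of that step (not LEOrbit's FORTRAN implementation):

        import numpy as np

        def kepler_to_eci(a, e, i, raan, argp, M):
            """Inertial position from six Keplerian elements (angles in
            radians): solve Kepler's equation, form the perifocal position,
            then rotate by R3(-raan) R1(-i) R3(-argp)."""
            E = M
            for _ in range(20):  # Newton iterations for E - e sin(E) = M
                E = E - (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
            nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                                  np.sqrt(1 - e) * np.cos(E / 2))
            r = a * (1.0 - e * np.cos(E))
            r_pf = r * np.array([np.cos(nu), np.sin(nu), 0.0])
            cO, sO = np.cos(raan), np.sin(raan)
            ci, si = np.cos(i), np.sin(i)
            cw, sw = np.cos(argp), np.sin(argp)
            R = np.array([[cO*cw - sO*sw*ci, -cO*sw - sO*cw*ci,  sO*si],
                          [sO*cw + cO*sw*ci, -sO*sw + cO*cw*ci, -cO*si],
                          [sw*si,             cw*si,             ci]])
            return R @ r_pf

        # ~400 km circular LEO sampled at one point along the orbit.
        print(kepler_to_eci(a=6778e3, e=0.001, i=np.radians(51.6),
                            raan=0.3, argp=0.0, M=1.2))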

  8. Nonequilibrium Phase Transitions in Supercooled Water

    NASA Astrophysics Data System (ADS)

    Limmer, David; Chandler, David

    2012-02-01

    We present results of a simulation study of water driven out of equilibrium. Using transition path sampling, we can probe stationary path distributions parameterized by order parameters that are extensive in space and time. We find that by coupling external fields to these parameters, we can drive water through a first-order dynamical phase transition into amorphous ice. By varying the initial equilibrium distributions we can probe pathways for the creation of amorphous ices of low and high densities.

  9. Water Quality Monitoring for Lake Constance with a Physically Based Algorithm for MERIS Data.

    PubMed

    Odermatt, Daniel; Heege, Thomas; Nieke, Jens; Kneubühler, Mathias; Itten, Klaus

    2008-08-05

    A physically based algorithm is used for automatic processing of MERIS level 1B full resolution data. The algorithm is originally used with input variables for optimization with different sensors (i.e. channel recalibration and weighting), aquatic regions (i.e. specific inherent optical properties) or atmospheric conditions (i.e. aerosol models). For operational use, however, a lake-specific parameterization is required, representing an approximation of the spatio-temporal variation in atmospheric and hydro-optical conditions, and accounting for sensor properties. The algorithm performs atmospheric correction with a LUT for at-sensor radiance, and a downhill simplex inversion of chl-a, sm and y from subsurface irradiance reflectance. These outputs are enhanced by a selective filter, which makes use of the retrieval residuals. Regular chl-a sampling measurements by the lake's protection authority coinciding with MERIS acquisitions were used for parameterization, training and validation.
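
    The downhill simplex inversion step can be illustrated with SciPy's Nelder-Mead implementation. Everything below - the toy forward model, band set and coefficients - is a hypothetical stand-in for the physically based model and MERIS configuration described above.

        import numpy as np
        from scipy.optimize import minimize

        def forward_reflectance(params, wavelengths):
            """Toy forward model mapping (chl-a, sm, y) to reflectance."""
            chla, sm, y = params
            return (0.02 + 0.001 * sm
                    - 0.0005 * chla * np.exp(-wavelengths / 560.0)
                    - 0.0003 * y * np.exp(-wavelengths / 440.0))

        def residual(params, wavelengths, observed):
            return np.sum((forward_reflectance(params, wavelengths)
                           - observed) ** 2)

        wl = np.linspace(412.0, 708.0, 12)              # MERIS-like bands [nm]
        obs = forward_reflectance([5.0, 2.0, 0.3], wl)  # synthetic "truth"
        fit = minimize(residual, x0=[1.0, 1.0, 0.1], args=(wl, obs),
                       method="Nelder-Mead")
        print(fit.x)  # retrieved chl-a, sm, y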

  10. Modeling of Thermospheric Neutral Density Variations in Response to Geomagnetic Forcing using GRACE Accelerometer Data

    NASA Astrophysics Data System (ADS)

    Calabia, A.; Matsuo, T.; Jin, S.

    2017-12-01

    The upper atmospheric expansion refers to an increase in the temperature and density of Earth's thermosphere due to increased geomagnetic and space weather activity, producing anomalous atmospheric drag on LEO spacecraft. Increased drag decelerates satellites, moving their orbits closer to Earth, decreasing the lifespan of satellites, and making satellite orbit determination difficult. In this study, thermospheric neutral density variations due to geomagnetic forcing are investigated from 10 years (2003-2013) of GRACE accelerometer-based estimates. In order to isolate the variations produced by geomagnetic forcing, 99.8% of the total variability has been modeled and removed through the parameterization of annual, LST, and solar-flux variations included in the primary Empirical Orthogonal Functions. The residual disturbances of the neutral density variations have been investigated further in order to unravel their relationship to several geomagnetic indices and space weather activity indicators. Stronger fluctuations have been found in the southern polar cap, following the dipole-tilt angle variations. While the parameterization of the residual disturbances in terms of the Dst index results in the best fit to training data, the use of the merging electric field as a predictor leads to the best forecasting performance. An important finding is that the modeling of neutral density variations in response to geomagnetic forcing can be improved by accounting for the latitude-dependent delay. Our data-driven modeling results are further compared to modeling with TIEGCM.
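
    The first step above - modeling and removing the bulk of the variability with leading Empirical Orthogonal Functions - can be sketched with an SVD. The data matrix below is random stand-in data; only the 99.8% target mirrors the fraction quoted in the record.

        import numpy as np

        def leading_eofs(X, variance_target=0.998):
            """EOF decomposition of a data matrix X (time x space): keep
            the leading modes capturing the target variance fraction and
            return the modeled part and the residual."""
            anomalies = X - X.mean(axis=0)
            U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
            frac = np.cumsum(s ** 2) / np.sum(s ** 2)
            k = int(np.searchsorted(frac, variance_target)) + 1
            modeled = U[:, :k] * s[:k] @ Vt[:k]
            return modeled, anomalies - modeled

        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 40))
        modeled, residual = leading_eofs(X)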

  11. Kalman filter parameter estimation for a nonlinear diffusion model of epithelial cell migration using stochastic collocation and the Karhunen-Loeve expansion.

    PubMed

    Barber, Jared; Tanase, Roxana; Yotov, Ivan

    2016-06-01

    Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise parameterized by the Karhunen-Loeve (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data with and without noise, as well as data from a laboratory experiment. While it is observed that all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, it is illustrated that the algorithms that employ SC and KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. In the case of SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling. In the case of KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and KL expansion. Copyright © 2016 Elsevier Inc. All rights reserved.
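
    As a minimal sketch of the KL idea (the squared-exponential covariance, grid, and truncation level are illustrative assumptions, not the paper's setup), correlated noise on an n-point grid can be generated from only k stochastic dimensions:

      # Sketch: truncated Karhunen-Loeve expansion of correlated noise.
      import numpy as np

      n = 100
      x = np.linspace(0.0, 1.0, n)
      ell = 0.2                                    # assumed correlation length
      C = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)

      w, V = np.linalg.eigh(C)                     # ascending eigenvalues
      w, V = w[::-1], V[:, ::-1]                   # sort descending
      k = int(np.searchsorted(np.cumsum(w) / w.sum(), 0.99)) + 1

      rng = np.random.default_rng(1)
      xi = rng.standard_normal(k)                  # k stochastic dimensions, not n
      noise = V[:, :k] @ (np.sqrt(w[:k]) * xi)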

  12. Probabilistic inversion of electrical resistivity data from bench-scale experiments: On model parameterization for CO2 sequestration monitoring

    NASA Astrophysics Data System (ADS)

    Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.

    2013-12-01

    Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.
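
    A minimal sketch of parameterization (2), representing a field by a truncated block of discrete cosine transform coefficients (field and truncation sizes are illustrative):

      # Sketch: DCT parameterization of a 2-D field.
      import numpy as np
      from scipy.fft import dctn, idctn

      rng = np.random.default_rng(2)
      field = rng.random((64, 64))                 # stand-in saturation field

      coeffs = dctn(field, norm='ortho')
      k = 8                                        # keep low-order coefficients only
      kept = np.zeros_like(coeffs)
      kept[:k, :k] = coeffs[:k, :k]                # 64 parameters instead of 4096

      smooth = idctn(kept, norm='ortho')           # reduced-dimension model field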

  13. Global Measurements of Stratospheric Mountain Waves from Space

    NASA Technical Reports Server (NTRS)

    Eckermann, Stephen D.; Preusse, Peter; Jackman, Charles H. (Technical Monitor)

    1999-01-01

    Temperatures acquired by the Cryogenic Infrared Spectrometers and Telescopes for the Atmosphere (CRISTA) during shuttle mission STS-66 have provided measurements of stratospheric mountain waves from space. Large-amplitude, long-wavelength mountain waves at heights of 15 to 30 kilometers above the southern Andes Mountains were observed and characterized, with vigorous wave breaking inferred above 30 kilometers. Mountain waves also occurred throughout the stratosphere (15 to 45 kilometers) over a broad mountainous region of central Eurasia. The global distribution of mountain wave activity accords well with predictions from a mountain wave model. The findings demonstrate that satellites can provide the global data needed to improve mountain wave parameterizations and hence global climate and forecast models.

  14. Validation of whitecap fraction and breaking wave parameters from WAVEWATCH-III using in situ and remote-sensing data

    NASA Astrophysics Data System (ADS)

    Leckler, F.; Hanafin, J. A.; Ardhuin, F.; Filipot, J.; Anguelova, M. D.; Moat, B. I.; Yelland, M.; Prytherch, J.

    2012-12-01

    Whitecaps are the main sink of wave energy. Although the exact processes are still unknown, it is clear that they play a significant role in momentum exchange between atmosphere and ocean, and also influence gas and aerosol exchange. Recently, modeling of whitecap properties was implemented in the spectral wave model WAVEWATCH-III®. This modeling takes place in the context of the Oceanflux-Greenhouse Gas project, to provide a climatology of breaking waves for gas transfer studies. We present here a validation study for two different wave breaking parameterizations implemented in the spectral wave model WAVEWATCH-III®. The model parameterizations use different approaches related to the steepness of the carrying waves to estimate breaking wave probabilities. That of Ardhuin et al. (2010) is based on the hypothesis that breaking probabilities become significant when the saturation spectrum exceeds a threshold, and includes a modification to allow for greater breaking in the mean wave direction, to agree with observations. It also includes suppression of shorter waves by longer breaking waves. In the second (Filipot and Ardhuin, 2012), breaking probabilities are defined at different scales using wave steepness, and the breaking wave height distribution is then integrated over all scales. We also propose an adaptation of the latter to make it self-consistent. The breaking probabilities parameterized by Filipot and Ardhuin (2012) are much larger for dominant waves than those from the other parameterization, and show better agreement with modeled statistics of breaking crest lengths measured during the FAIRS experiment. This stronger breaking also has an impact on the shorter waves due to the parameterization of short wave damping associated with large breakers, and results in a different distribution of the breaking crest lengths. Converted to whitecap coverage using Reul and Chapron (2003), both parameterizations agree reasonably well with commonly used empirical fits of whitecap coverage against wind speed (Monahan and Woolf, 1989) and with the global whitecap coverage of Anguelova and Webster (2006), derived from space-borne radiometry. This is mainly due to the fact that the breaking of larger waves in the parameterization by Filipot and Ardhuin (2012) is compensated for by the intense breaking of smaller waves in that of Ardhuin et al. (2010). Comparison with in situ data collected during research ship cruises in the North and South Atlantic (SEASAW, DOGEE and WAGES) and the Norwegian Sea (HiWASE) between 2006 and 2011 also shows good agreement. However, as large-scale breakers produce a thicker foam layer, modeled mean foam thickness clearly depends on the scale of the breakers. Foam thickness is thus a more interesting parameter for calibrating and validating breaking wave parameterizations, as the differences in scale can be determined. With this in mind, we present the initial results of validation using an estimation of mean foam thickness using multiple radiometric bands from the satellites SMOS and AMSR-E.

  15. Degeneration of Bethe subalgebras in the Yangian of gl_n

    NASA Astrophysics Data System (ADS)

    Ilin, Aleksei; Rybnikov, Leonid

    2018-04-01

    We study degenerations of Bethe subalgebras B(C) in the Yangian Y(gl_n), where C is a regular diagonal matrix. We show that the closure of the parameter space of the family of Bethe subalgebras, which parameterizes all possible degenerations, is the Deligne-Mumford moduli space of stable rational curves \overline{M_{0,n+2}}. All subalgebras corresponding to points of \overline{M_{0,n+2}} are free and maximal commutative. We describe explicitly the "simplest" degenerations and show that every degeneration is the composition of the simplest ones. The Deligne-Mumford space \overline{M_{0,n+2}} generalizes to other root systems as some De Concini-Procesi resolution of some toric variety. We state a conjecture generalizing our results to Bethe subalgebras in the Yangian of an arbitrary simple Lie algebra in terms of this De Concini-Procesi resolution.

  16. Integrated approach to estimate the ocean's time variable dynamic topography including its covariance matrix

    NASA Astrophysics Data System (ADS)

    Müller, Silvia; Brockmann, Jan Martin; Schuh, Wolf-Dieter

    2015-04-01

    The ocean's dynamic topography as the difference between the sea surface and the geoid reflects many characteristics of the general ocean circulation. Consequently, it provides valuable information for evaluating or tuning ocean circulation models. The sea surface is directly observed by satellite radar altimetry while the geoid cannot be observed directly. The satellite-based gravity field determination requires different measurement principles (satellite-to-satellite tracking (e.g. GRACE), satellite-gravity-gradiometry (GOCE)). In addition, hydrographic measurements (salinity, temperature and pressure; near-surface velocities) provide information on the dynamic topography. The observation types have different representations and spatial as well as temporal resolutions. Therefore, the determination of the dynamic topography is not straightforward. Furthermore, the integration of the dynamic topography into ocean circulation models requires not only the dynamic topography itself but also its inverse covariance matrix on the ocean model grid. We developed a rigorous combination method in which the dynamic topography is parameterized in space as well as in time. The altimetric sea surface heights are expressed as a sum of geoid heights represented in terms of spherical harmonics and the dynamic topography parameterized by a finite element method which can be directly related to the particular ocean model grid. Besides the difficult task of combining altimetry data with a gravity field model, a major aspect is the consistent combination of satellite data and in-situ observations. The particular characteristics and the signal content of the different observations must be adequately considered, requiring the introduction of auxiliary parameters. Within our model the individual observation groups are combined in terms of normal equations considering their full covariance information; i.e. a rigorous variance/covariance propagation from the original measurements to the final product is accomplished. In conclusion, the developed integrated approach allows for estimating the dynamic topography and its inverse covariance matrix on arbitrary grids in space and time. The inverse covariance matrix contains the appropriate weights for model-data misfits in least-squares ocean model inversions. The focus of this study is on the North Atlantic Ocean. We will present the conceptual design and dynamic topography estimates based on time variable data from seven satellite altimeter missions (Jason-1, Jason-2, TOPEX/Poseidon, Envisat, ERS-2, GFO, CryoSat-2) in combination with the latest GOCE gravity field model and in-situ data from the Argo floats and near-surface drifting buoys.
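
    A minimal sketch of combining observation groups in terms of normal equations with rigorous covariance propagation (random stand-in design matrices and diagonal covariances for brevity; not the authors' implementation):

      # Sketch: combining observation groups via weighted normal equations.
      import numpy as np

      rng = np.random.default_rng(3)
      m = 20                                       # unknown parameters

      N = np.zeros((m, m))                         # combined normal matrix
      b = np.zeros(m)
      for n_obs in (50, 30):                       # two observation groups
          A = rng.standard_normal((n_obs, m))      # group design matrix
          C = np.diag(rng.uniform(0.5, 2.0, n_obs))  # group covariance
          y = rng.standard_normal(n_obs)
          W = np.linalg.inv(C)
          N += A.T @ W @ A                         # accumulate normal equations
          b += A.T @ W @ y

      x_hat = np.linalg.solve(N, b)                # combined estimate
      Cov_x = np.linalg.inv(N)                     # propagated covariance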

  17. Simulation of the Summer Monsoon Rainfall over East Asia using the NCEP GFS Cumulus Parameterization at Different Horizontal Resolutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Kyo-Sun; Hong, Song You; Yoon, Jin-Ho

    2014-10-01

    The most recent version of the Simplified Arakawa-Schubert (SAS) cumulus scheme in the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) (GFS SAS) has been implemented into the Weather Research and Forecasting (WRF) model, with the triggering condition and convective mass flux modified to depend on the model's horizontal grid spacing. The East Asian Summer Monsoon of 2006, from June to August, is selected to evaluate the performance of the modified GFS SAS scheme. Simulated monsoon rainfall with the modified GFS SAS scheme shows better agreement with observations compared to the original GFS SAS scheme. The original GFS SAS scheme simulates a similar ratio of subgrid-scale precipitation, which is calculated from the cumulus scheme, to total precipitation regardless of the model's horizontal grid spacing. This is counter-intuitive because the portion of resolved clouds in a grid box should increase as the model grid spacing decreases. This counter-intuitive behavior of the original GFS SAS scheme is alleviated by the modified GFS SAS scheme. Further, three different cumulus schemes (Grell and Freitas, Kain and Fritsch, and Betts-Miller-Janjic) are chosen to investigate the role of horizontal resolution in simulated monsoon rainfall. The performance of high-resolution modeling is not always enhanced as the spatial resolution becomes higher. Even though the improvement of the probability density function of rain rate and longwave fluxes by the higher-resolution simulation is robust regardless of the choice of cumulus parameterization scheme, the overall skill score of surface rainfall does not increase monotonically with spatial resolution.

  18. Planck 2015 results. XIV. Dark energy and modified gravity

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Battye, R.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Heavens, A.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huang, Z.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Ma, Y.-Z.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Marchini, A.; Maris, M.; Martin, P. G.; Martinelli, M.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Narimani, A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Salvatelli, V.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Schaefer, B. M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Viel, M.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    We study the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario. We start with cases where the DE only directly affects the background evolution, considering Taylor expansions of the equation of state w(a), as well as principal component analysis and parameterizations related to the potential of a minimally coupled DE scalar field. When estimating the density of DE at early times, we significantly improve present constraints and find that it has to be below ~2% (at 95% confidence) of the critical density, even when forced to play a role for z < 50 only. We then move to general parameterizations of the DE or MG perturbations that encompass both effective field theories and the phenomenology of gravitational potentials in MG models. Lastly, we test a range of specific models, such as k-essence, f(R) theories, and coupled DE. In addition to the latest Planck data, for our main analyses, we use background constraints from baryonic acoustic oscillations, type-Ia supernovae, and local measurements of the Hubble constant. We further show the impact of measurements of the cosmological perturbations, such as redshift-space distortions and weak gravitational lensing. These additional probes are important tools for testing MG models and for breaking degeneracies that are still present in the combination of Planck and background data sets. All results that include only background parameterizations (expansion of the equation of state, early DE, general potentials in minimally-coupled scalar fields or principal component analysis) are in agreement with ΛCDM. When testing models that also change perturbations (even when the background is fixed to ΛCDM), some tensions appear in a few scenarios: the maximum one found is ~2σ for Planck TT+lowP when parameterizing observables related to the gravitational potentials with a chosen time dependence; the tension increases to, at most, 3σ when external data sets are included. It however disappears when including CMB lensing.

  19. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy.

    PubMed

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-07

    Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits the application scope of these packages. The purpose of this paper is to develop a module to model parameterized geometry and integrate it in GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data were stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and from 0.69 to 1.23 times for photon-only transport.
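
    A minimal sketch of the navigation primitive such a geometry module needs: the distance along a flight direction to a bounding surface parameterized by a quadratic function f(p) = p·Qp + q·p + r (the unit-sphere coefficients and tolerance are illustrative; the package's GPU data layout and kernels are not reproduced):

      # Sketch: smallest positive distance to a quadric surface along direction u.
      import numpy as np

      Q = np.eye(3)                                # quadratic part
      q = np.zeros(3)                              # linear part
      r = -1.0                                     # constant part (unit sphere)

      def distance_to_surface(pos, u):
          a = u @ Q @ u
          b = 2.0 * (pos @ Q @ u) + q @ u
          c = pos @ Q @ pos + q @ pos + r
          disc = b * b - 4.0 * a * c
          if disc < 0.0:
              return np.inf                        # no intersection
          roots = [(-b - np.sqrt(disc)) / (2 * a),
                   (-b + np.sqrt(disc)) / (2 * a)]
          hits = [t for t in roots if t > 1e-12]
          return min(hits) if hits else np.inf

      print(distance_to_surface(np.zeros(3), np.array([1.0, 0.0, 0.0])))  # 1.0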

  20. Adaptive Aft Signature Shaping of a Low-Boom Supersonic Aircraft Using Off-Body Pressures

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Li, Wu

    2012-01-01

    The design and optimization of a low-boom supersonic aircraft using state-of-the-art off-body aerodynamics and sonic boom analysis has long been a challenging problem. The focus of this paper is to demonstrate an effective geometry parameterization scheme and a numerical optimization approach for the aft shaping of a low-boom supersonic aircraft using off-body pressure calculations. A gradient-based numerical optimization algorithm that models the objective and constraints as response surface equations is used to drive the aft ground signature toward a ramp shape. The design objective is the minimization of the variation between the ground signature and the target signature, subject to several geometric and signature constraints. The target signature is computed by using a least-squares regression of the aft portion of the ground signature. The parameterization and the deformation of the geometry are performed with a NASA in-house shaping tool. The optimization algorithm uses the shaping tool to drive the geometric deformation of a horizontal tail with a parameterization scheme that consists of seven camber design variables and an additional design variable that describes the spanwise location of the midspan section. The demonstration cases show that numerical optimization using the state-of-the-art off-body aerodynamic calculations is not only feasible and repeatable but also allows the exploration of complex design spaces for which a knowledge-based design method becomes less effective.

  1. Short‐term time step convergence in a climate model

    PubMed Central

    Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane

    2015-01-01

    Abstract This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities. PMID:27660669
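
    For readers unfamiliar with how such a rate is diagnosed, the observed convergence rate is the slope of log error against log time step; a minimal sketch with illustrative numbers (not the paper's data):

      # Sketch: estimating an observed temporal convergence rate.
      import numpy as np

      dts = np.array([1800.0, 900.0, 450.0, 225.0])  # time steps (s)
      errs = 0.01 * dts**0.4                         # stand-in solution errors

      rate = np.polyfit(np.log(dts), np.log(errs), 1)[0]
      print(f"observed convergence rate: {rate:.2f}")  # ~0.4 here by construction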

  2. Demonstration of Effects on Tropical Cyclone Forecasts with a High Resolution Global Model from Variation in Cumulus Convection Parameterization

    NASA Technical Reports Server (NTRS)

    Miller, Timothy L.; Robertson, Franklin R.; Cohen, Charles; Mackaro, Jessica

    2009-01-01

    The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models developed at Goddard Space Flight Center to support NASA's earth science research in data analysis, observing system modeling and design, climate and weather prediction, and basic research. The work presented used GEOS-5 with 0.25° horizontal resolution and 72 vertical levels (up to 0.01 hPa), resolving both the troposphere and stratosphere, with closer packing of the levels near the surface. The model includes explicit (grid-scale) moist physics, as well as convective parameterization schemes. Results will be presented that demonstrate a strong dependence of the modeled development of a strong hurricane on the type of convective parameterization scheme used. The previous standard (default) option in the model was the Relaxed Arakawa-Schubert (RAS) scheme, which uses a quasi-equilibrium closure. In the cases shown, this scheme does not permit the efficient development of a strong storm in comparison with observations. When this scheme is replaced by a modified version of the Kain-Fritsch scheme, which was originally developed for use on grids with intervals of order 25 km such as the present one, the storm is able to develop to a much greater extent, closer to that of reality. Details of the two cases will be shown in order to elucidate the differences in the two modeled storms.

  3. Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong

    2018-06-01

    This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.
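
    A minimal sketch of the principal-component proposal scheme (a toy 2-parameter covariance stands in for the unit-lag model covariance; birth/death steps and parallel tempering are not shown):

      # Sketch: MCMC perturbations in principal-component space.
      import numpy as np

      rng = np.random.default_rng(4)
      cov = np.array([[1.0, 0.9],
                      [0.9, 1.0]])                 # correlated parameters
      w, V = np.linalg.eigh(cov)                   # eigen-decomposition

      def propose(model, step=0.5):
          i = rng.integers(len(w))                 # pick one principal component
          return model + step * np.sqrt(w[i]) * rng.standard_normal() * V[:, i]

      print(propose(np.zeros(2)))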

  4. Adaptively Parameterized Tomography of the Western Hellenic Subduction Zone

    NASA Astrophysics Data System (ADS)

    Hansen, S. E.; Papadopoulos, G. A.

    2017-12-01

    The Hellenic subduction zone (HSZ) is the most seismically active region in Europe and plays a major role in the active tectonics of the eastern Mediterranean. This complicated environment has the potential to generate both large magnitude (M > 8) earthquakes and tsunamis. Situated above the western end of the HSZ, Greece faces a high risk from these geologic hazards, and characterizing this risk requires detailed understanding of the geodynamic processes occurring in this area. However, despite previous investigations, the kinematics of the HSZ are still controversial. Regional tomographic studies have yielded important information about the shallow seismic structure of the HSZ, but these models only image down to 150 km depth within small geographic areas. Deeper structure is constrained by global tomographic models but with coarser resolution (~200-300 km). Additionally, current tomographic models focused on the HSZ were generated with regularly-spaced gridding, and this type of parameterization often over-emphasizes poorly sampled regions of the model or under-represents small-scale structure. Therefore, we are developing a new, high-resolution image of the mantle structure beneath the western HSZ using an adaptively parameterized seismic tomography approach. By combining multiple, regional travel-time datasets in the context of a global model, with adaptable gridding based on the sampling density of high-frequency data, this method generates a composite model of mantle structure that is being used to better characterize geodynamic processes within the HSZ, thereby allowing for improved hazard assessment. Preliminary results will be shown.

  5. Nonrotating Convective Self-Aggregation in a Limited Area AGCM

    NASA Astrophysics Data System (ADS)

    Arnold, Nathan P.; Putman, William M.

    2018-04-01

    We present nonrotating simulations with the Goddard Earth Observing System (GEOS) atmospheric general circulation model (AGCM) in a square limited area domain over uniform sea surface temperature. As in previous studies, convection spontaneously aggregates into humid clusters, driven by a combination of radiative and moisture-convective feedbacks. The aggregation is qualitatively independent of resolution, with horizontal grid spacing from 3 to 110 km, with both explicit and parameterized deep convection. A budget for the spatial variance of column moist static energy suggests that longwave radiative and surface flux feedbacks help establish aggregation, while the shortwave feedback contributes to its maintenance. Mechanism-denial experiments confirm that aggregation does not occur without interactive longwave radiation. Ice cloud radiative effects help support the humid convecting regions but are not essential for aggregation, while liquid clouds have a negligible effect. Removing the dependence of parameterized convection on tropospheric humidity reduces the intensity of aggregation but does not prevent the formation of dry regions. In domain sizes less than (5,000 km)^2, the aggregation forms a single cluster, while larger domains develop multiple clusters. Larger domains initialized with a single large cluster are unable to maintain it, suggesting an upper size limit. Surface wind speed increases with domain size, implying that maintenance of the boundary layer winds may limit cluster size. As cluster size increases, large boundary layer temperature anomalies develop to maintain the surface pressure gradient, leading to an increase in the depth of parameterized convective heating and an increase in gross moist stability.
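
    As a hedged illustration of the variance budget mentioned above (the form below follows common usage in the self-aggregation literature, e.g. Wing and Emanuel 2014, and is an assumption rather than the authors' exact formulation), with hats denoting column integrals and primes anomalies from the domain mean:

      \frac{1}{2}\frac{\partial \hat{h}'^2}{\partial t}
        = \hat{h}'F_K' + \hat{h}'N_{LW}' + \hat{h}'N_{SW}'
          - \nabla_h \cdot \left( \hat{h}'\,\widehat{\mathbf{u}h}' \right)

    Here \hat{h} is the column moist static energy, F_K the surface enthalpy flux, and N_{LW}, N_{SW} the column longwave and shortwave radiative flux convergences; positive covariance terms on the right identify feedbacks that amplify the spatial variance and hence support aggregation.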

  6. Dynamic Biological Functioning Important for Simulating and Stabilizing Ocean Biogeochemistry

    NASA Astrophysics Data System (ADS)

    Buchanan, P. J.; Matear, R. J.; Chase, Z.; Phipps, S. J.; Bindoff, N. L.

    2018-04-01

    The biogeochemistry of the ocean exerts a strong influence on the climate by modulating atmospheric greenhouse gases. In turn, ocean biogeochemistry depends on numerous physical and biological processes that change over space and time. Accurately simulating these processes is fundamental for accurately simulating the ocean's role within the climate. However, our simulation of these processes is often simplistic, despite a growing understanding of underlying biological dynamics. Here we explore how new parameterizations of biological processes affect simulated biogeochemical properties in a global ocean model. We combine 6 different physical realizations with 6 different biogeochemical parameterizations (36 unique ocean states). The biogeochemical parameterizations, all previously published, aim to more accurately represent the response of ocean biology to changing physical conditions. We make three major findings. First, oxygen, carbon, alkalinity, and phosphate fields are more sensitive to changes in the ocean's physical state. Only nitrate is more sensitive to changes in biological processes, and we suggest that assessment protocols for ocean biogeochemical models formally include the marine nitrogen cycle to assess their performance. Second, we show that dynamic variations in the production, remineralization, and stoichiometry of organic matter in response to changing environmental conditions benefit the simulation of ocean biogeochemistry. Third, dynamic biological functioning reduces the sensitivity of biogeochemical properties to physical change. Carbon and nitrogen inventories were 50% and 20% less sensitive to physical changes, respectively, in simulations that incorporated dynamic biological functioning. These results highlight the importance of a dynamic biology for ocean properties and climate.

  7. Nucleon-Nucleon Total Cross Section

    NASA Technical Reports Server (NTRS)

    Norbury, John W.

    2008-01-01

    The total proton-proton and neutron-proton cross sections currently used in the transport code HZETRN show significant disagreement with experiment in the GeV and EeV energy ranges. The GeV range is near the region of maximum cosmic ray intensity. It is therefore important to correct these cross sections, so that predictions of space radiation environments will be accurate. Parameterizations of nucleon-nucleon total cross sections are developed which are accurate over the entire energy range of the cosmic ray spectrum.

  8. Explicitly Stochastic Parameterization of Nonorographic Gravity-Wave Drag

    DTIC Science & Technology

    2010-01-01

    Naval Research Laboratory, Space Science Division, 4555 Overlook Avenue SW, Washington, DC, 20375. The parameterized momentum-flux spectrum takes the Gaussian form

      \tau_b \exp[-(c - c_{off})^2 / c_w^2],  (1)

    with

      \tau_b = \tau_b^* F(\phi, t),  (2)

    and a phase-speed width c_w = 30 m s^-1. \tau_b is the "background" momentum flux and is

  9. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    NASA Astrophysics Data System (ADS)

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

    A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux at the top of the atmosphere and the net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  10. Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions

    NASA Astrophysics Data System (ADS)

    Nelson, K.; Mechem, D. B.

    2014-12-01

    Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes that include stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement this parameterization in a regional forecast model (NRL COAMPS) and test it. Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus on both the relative performance of the three parameterizations and on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
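
    For reference, the KK2000 warm-rain process rates are commonly quoted in the power-law forms below; the coefficients follow the widely cited versions and the inputs are arbitrary, so this sketch should be checked against Khairoutdinov and Kogan (2000) before any serious use:

      # Sketch: commonly quoted KK2000 bulk warm-rain rates (units: kg kg^-1 s^-1).
      def kk2000_autoconversion(qc, nc):
          """qc: cloud water mixing ratio (kg/kg); nc: droplet number (cm^-3)."""
          return 1350.0 * qc**2.47 * nc**-1.79

      def kk2000_accretion(qc, qr):
          """qr: rain water mixing ratio (kg/kg)."""
          return 67.0 * (qc * qr)**1.15

      print(kk2000_autoconversion(5e-4, 100.0), kk2000_accretion(5e-4, 1e-4))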

  11. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...

    2015-06-30

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to the number of subcolumns.

  12. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; ...

    2015-12-01

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. In conclusion, the new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to the number of subcolumns.

  13. Automated Simplification of Full Chemical Mechanisms

    NASA Technical Reports Server (NTRS)

    Norris, A. T.

    1997-01-01

    A code has been developed to automatically simplify full chemical mechanisms. The method employed is based on the Intrinsic Low Dimensional Manifold (ILDM) method of Maas and Pope. The ILDM method is a dynamical systems approach to the simplification of large chemical kinetic mechanisms. By identifying low-dimensional attracting manifolds, the method allows complex full mechanisms to be parameterized by just a few variables, in effect generating reduced chemical mechanisms by an automatic procedure. These resulting mechanisms, however, still retain all the species used in the full mechanism. Full and skeletal mechanisms for various fuels are simplified to a two-dimensional manifold, and the resulting mechanisms are found to compare well with the full mechanisms and show significant improvement over global one-step mechanisms, such as those by Westbrook and Dryer. In addition, by using an ILDM reaction mechanism in a CFD code, a considerable improvement in turn-around time can be achieved.
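
    A minimal sketch of the eigen-decomposition idea underlying ILDM, using a linear toy system in place of the Jacobian of a real chemical source term (the timescale separation here is built in by construction):

      # Sketch: splitting fast and slow subspaces from a Jacobian.
      import numpy as np

      J = np.array([[-1.0,   0.5],                 # one slow mode (~1 s)
                    [ 0.0, -1e3]])                 # one fast mode (~1 ms)

      w, V = np.linalg.eig(J)
      order = np.argsort(np.abs(w.real))           # slowest first
      slow = V[:, order[0]]                        # direction of the 1-D manifold

      print("timescales (s):", 1.0 / np.abs(w.real[order]))
      print("slow subspace:", slow)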

  14. Assessment of the Weather Research and Forecasting (WRF) model for simulation of extreme rainfall events in the upper Ganga Basin

    NASA Astrophysics Data System (ADS)

    Chawla, Ila; Osuri, Krishna K.; Mujumdar, Pradeep P.; Niyogi, Dev

    2018-02-01

    Reliable estimates of extreme rainfall events are necessary for an accurate prediction of floods. Most of the global rainfall products are available at a coarse resolution, rendering them less desirable for extreme rainfall analysis. Therefore, regional mesoscale models such as the advanced research version of the Weather Research and Forecasting (WRF) model are often used to provide rainfall estimates at fine grid spacing. Modelling heavy rainfall events is an enduring challenge, as such events depend on multi-scale interactions, and the model configurations such as grid spacing, physical parameterization and initialization. With this background, the WRF model is implemented in this study to investigate the impact of different processes on extreme rainfall simulation, by considering a representative event that occurred during 15-18 June 2013 over the Ganga Basin in India, which is located at the foothills of the Himalayas. This event is simulated with ensembles involving four different microphysics (MP), two cumulus (CU) parameterizations, two planetary boundary layers (PBLs) and two land surface physics options, as well as different resolutions (grid spacing) within the WRF model. The simulated rainfall is evaluated against the observations from 18 rain gauges and the Tropical Rainfall Measuring Mission Multi-Satellite Precipitation Analysis (TMPA) 3B42RT version 7 data. From the analysis, it should be noted that the choice of MP scheme influences the spatial pattern of rainfall, while the choice of PBL and CU parameterizations influences the magnitude of rainfall in the model simulations. Further, the WRF run with Goddard MP, Mellor-Yamada-Janjic PBL and Betts-Miller-Janjic CU scheme is found to perform best in simulating this heavy rain event. The selected configuration is evaluated for several heavy to extremely heavy rainfall events that occurred across different months of the monsoon season in the region. The model performance improved through incorporation of detailed land surface processes involving prognostic soil moisture evolution in Noah scheme compared to the simple Slab model. To analyse the effect of model grid spacing, two sets of downscaling ratios - (i) 1 : 3, global to regional (G2R) scale and (ii) 1 : 9, global to convection-permitting scale (G2C) - are employed. Results indicate that a higher downscaling ratio (G2C) causes higher variability and consequently large errors in the simulations. Therefore, G2R is adopted as a suitable choice for simulating heavy rainfall event in the present case study. Further, the WRF-simulated rainfall is found to exhibit less bias when compared with the NCEP FiNaL (FNL) reanalysis data.

  15. Finite frequency shear wave splitting tomography: a model space search approach

    NASA Astrophysics Data System (ADS)

    Mondal, P.; Long, M. D.

    2017-12-01

    Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common method for characterizing upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, the interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study aims to develop a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov Chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and infer the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ~50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.

  16. Enceladus-Mimas paradox: a result of different early evolutions of satellites?

    NASA Astrophysics Data System (ADS)

    Czechowski, Leszek; Witek, Piotr

    2015-04-01

    Summary: Thermal history of Mimas and Enceladus is investigated from the beginning of accretion to 400 Myr. The following heat sources are included: short-lived and long-lived radioactive isotopes, accretion, serpentinization, and phase changes. We find that the temperature of Mimas' interior was significantly lower than that of Enceladus. Comparison of thermal models of Mimas and Enceladus indicates that conditions favorable for starting tidal heating lasted for a short time (~10^7 yr) in Mimas and for ~10^8 yr in Enceladus. This could explain the Mimas-Enceladus paradox.

    1. Numerical model: In our calculations we use the numerical model developed by Czechowski (2012) (see e.g. description in [1]). The model is based on the parameterized theory of convection combined with the 1-dimensional heat transfer equation in spherical coordinates:

      ρ c_p ∂T(r,t)/∂t = div(k(r,T) grad T(r,t)) + Q(r,T),

    where r is the radial distance (spherical coordinate), ρ is the density [kg m^-3], c_p [J kg^-1 K^-1] is the specific heat, Q [W kg^-1] is the heating rate, and k [W m^-1 K^-1] is the thermal conductivity. Q(r,t) includes sources and sinks of heat. The equation is solved in the time-dependent region [0, R(t)]. During accretion the radius R(t) increases in time according to R(t) = a·t for t_ini < t < t_ini + t_ac, and remains constant afterwards, i.e. after the accretion (see e.g. [2]), where t_ini denotes the beginning of accretion and t_ac the duration of this process. If the Rayleigh number in the considered layer exceeds its critical value Ra_cr, then convection starts, leading to effective heat transfer. The full description of convection is given by a velocity field and temperature distribution; however, we are interested in convection as a process of heat transport only. For solid-state convection (SSC), heat transport can be described by the dimensionless Nusselt number Nu. We use the following definition of Nu: Nu = (true total surface heat flow)/(total heat flow without convection). The heat transport by SSC is modelled simply by multiplying the coefficient of heat conduction in the considered layer, i.e. k_conv = Nu·k. This approach is used successfully in the parameterized theory of convection for SSC in the Earth and other planets (e.g. [3], [4]). Parameterization of liquid-state convection (LSC) is even simpler. Ra in a molten region is very high (usually higher than 10^16). The LSC can be very intensive, resulting in an almost adiabatic temperature gradient given by

      dT/dr = g·α_m·T / c_pm,

    where α_m and c_pm are the thermal expansion coefficient and specific heat in the molten region, and g is the local gravity. In Enceladus and Mimas the adiabatic gradient is low, and therefore the LSC region is almost isothermal.

    2. Results: Comparison of thermal models of Mimas and Enceladus indicates that conditions favorable for starting tidal heating (an interior hot enough) lasted for a short time (~10^7 yr) in Mimas and for ~10^8 yr in Enceladus. This could explain the Mimas-Enceladus paradox.

    3. Conclusions: The Mimas-Enceladus paradox is probably the result of the short time during which Mimas was hot enough to allow for substantial tidal heating. The Mimas-Tethys resonance formed later, when Mimas was already cool (see also [1, 4]). The full text of the paper will be published in Acta Geophysica [5]. Acknowledgements: The research is partly supported by the National Science Centre (grant 2011/01/B/ST10/06653).

    References: [1] Czechowski, L. (2014) Some remarks on the early evolution of Enceladus. Planet. Sp. Sc. 104, 185-199. [2] Merk, R., Breuer, D., Spohn, T. (2002) Numerical modeling of 26Al induced radioactive melting of asteroids concerning accretion. Icarus 199, 183-191. [3] Sharpe, H.N., Peltier, W.R. (1978) Parameterized mantle convection and the Earth's thermal history. Geophys. Res. Lett. 5, 737-740. [4] Czechowski, L. (2006) Parameterized model of convection driven by tidal and radiogenic heating. Adv. Space Res. 38, 788-793. [5] Czechowski, L., Witek, P. (2015) Comparisons of early evolutions of Mimas and Enceladus. Submitted to Acta Geophysica.
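
    A minimal sketch of the k_conv = Nu·k device described above, with an assumed Nu ~ (Ra/Ra_cr)^(1/3) scaling that is common in parameterized convection but is not stated in the abstract:

      # Sketch: Nusselt-boosted conductivity for parameterized convection.
      def effective_conductivity(k, Ra, Ra_cr=1000.0, beta=1.0/3.0):
          """Return k_conv = Nu * k, with Nu = max(1, (Ra/Ra_cr)**beta)."""
          Nu = max(1.0, (Ra / Ra_cr)**beta)
          return Nu * k

      print(effective_conductivity(k=3.0, Ra=1e6))    # convecting layer
      print(effective_conductivity(k=3.0, Ra=500.0))  # sub-critical: unchanged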

  17. On the estimation of stellar parameters with uncertainty prediction from Generative Artificial Neural Networks: application to Gaia RVS simulated spectra

    NASA Astrophysics Data System (ADS)

    Dafonte, C.; Fustes, D.; Manteiga, M.; Garabato, D.; Álvarez, M. A.; Ulla, A.; Allende Prieto, C.

    2016-10-01

    Aims: We present an innovative artificial neural network (ANN) architecture, called Generative ANN (GANN), that computes the forward model; that is, it learns the function that relates the unknown outputs (stellar atmospheric parameters, in this case) to the given inputs (spectra). Such a model can be integrated in a Bayesian framework to estimate the posterior distribution of the outputs. Methods: The architecture of the GANN follows the same scheme as a normal ANN, but with the inputs and outputs inverted. We train the network with the set of atmospheric parameters (Teff, log g, [Fe/H] and [α/Fe]), obtaining the stellar spectra for such inputs. The residuals between the spectra in the grid and the estimated spectra are minimized using a validation dataset to keep solutions as general as possible. Results: The performance of both conventional ANNs and GANNs to estimate the stellar parameters as a function of the star brightness is presented and compared for different Galactic populations. GANNs provide significantly improved parameterizations for early and intermediate spectral types with rich and intermediate metallicities. The behaviour of both algorithms is very similar for our sample of late-type stars, obtaining residuals in the derivation of [Fe/H] and [α/Fe] below 0.1 dex for stars with Gaia magnitude Grvs < 12, which accounts for a number on the order of four million stars to be observed by the Radial Velocity Spectrograph of the Gaia satellite. Conclusions: Uncertainty estimation of computed astrophysical parameters is crucial for the validation of the parameterization itself and for the subsequent exploitation by the astronomical community. GANNs produce not only the parameters for a given spectrum, but also a goodness-of-fit between the observed spectrum and the predicted one for a given set of parameters. Moreover, they allow us to obtain the full posterior distribution over the astrophysical parameter space once a noise model is assumed. This can be used for novelty detection and quality assessment.
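
    A minimal sketch of how a forward model can be wrapped in a Bayesian update to yield a full posterior (a toy forward function and a grid evaluation stand in for the trained GANN and its noise model):

      # Sketch: posterior over one parameter given a forward model f(theta).
      import numpy as np

      def forward(theta):
          wave = np.linspace(0.0, 1.0, 50)
          return np.exp(-(wave - theta)**2 / 0.01)  # toy "spectrum"

      rng = np.random.default_rng(5)
      sigma = 0.05                                  # assumed noise level
      observed = forward(0.4) + sigma * rng.standard_normal(50)

      grid = np.linspace(0.0, 1.0, 201)
      log_post = np.array([-0.5 * np.sum((observed - forward(t))**2) / sigma**2
                           for t in grid])
      post = np.exp(log_post - log_post.max())
      post /= np.trapz(post, grid)                  # normalized posterior

      print("posterior mean:", np.trapz(grid * post, grid))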

  18. Precipitation characteristics of CAM5 physics at mesoscale resolution during MC3E and the impact of convective timescale choice

    DOE PAGES

    Gustafson, William I.; Ma, Po-Lun; Singh, Balwinder

    2014-12-17

    The physics suite of the Community Atmosphere Model version 5 (CAM5) has recently been implemented in the Weather Research and Forecasting (WRF) model to explore the behavior of the parameterization suite at high resolution and in the more controlled setting of a limited area model. The initial paper documenting this capability characterized the behavior for northern high latitude conditions. This present paper characterizes the precipitation characteristics for continental, mid-latitude, springtime conditions during the Midlatitude Continental Convective Clouds Experiment (MC3E) over the central United States. This period exhibited a range of convective conditions, from those driven strongly by large-scale synoptic regimes to more locally driven convection. The study focuses on the precipitation behavior at 32 km grid spacing to better anticipate how the physics will behave in the global model when used at similar grid spacing in the coming years. Importantly, one change to the Zhang-McFarlane deep convective parameterization when implemented in WRF was to make the convective timescale parameter an explicit function of grid spacing. This study examines the sensitivity of the precipitation to the default value of the convective timescale in WRF, which is 600 seconds for 32 km grid spacing, relative to the value of 3600 seconds used for 2 degree grid spacing in CAM5. For comparison, an infinite convective timescale is also used. The results show that the 600 second timescale gives the most accurate precipitation over the central United States in terms of rain amount. However, this setting has the worst precipitation diurnal cycle, with the convection too tightly linked to the daytime surface heating. Longer timescales greatly improve the diurnal cycle but result in less precipitation and produce a low bias. An analysis of rain rates shows that the accurate precipitation amount with the shorter timescale is assembled from an overabundance of drizzle combined with too few heavy rain events. With longer timescales one can improve the distribution, particularly for the extreme rain rates. Ultimately, without changing other aspects of the physics, one must choose between accurate diurnal timing and rain amount when choosing an appropriate convective timescale.

  20. The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor

    DTIC Science & Technology

    2015-06-13

    The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor. Christopher Celio, David Patterson, and Krste Asanović, University of California, Berkeley, California 94720. BOOM is a synthesizable, parameterized, superscalar out-of-order RISC-V core designed to serve as the prototypical baseline processor

  1. Designing manufacturable filters for a 16-band plenoptic camera using differential evolution

    NASA Astrophysics Data System (ADS)

    Doster, Timothy; Olson, Colin C.; Fleet, Erin; Yetzbacher, Michael; Kanaev, Andrey; Lebow, Paul; Leathers, Robert

    2017-05-01

    A 16-band plenoptic camera allows for the rapid exchange of filter sets via a 4x4 filter array on the lens's front aperture. This ability to change out filters allows an operator to quickly adapt to different locales or threat intelligence. Typically, such a system incorporates a default set of 16 equally spaced flat-topped filters. Knowing the operating theater or the likely targets of interest, it becomes advantageous to tune the filters. We propose using a modified beta distribution to parameterize the different possible filters and differential evolution (DE) to search over the space of possible filter designs. The modified beta distribution allows us to jointly optimize the width, taper and wavelength center of each single- or multi-pass filter in the set over a number of evolutionary steps. Further, by constraining the function parameters we can develop solutions which are not just theoretical but manufacturable. We examine two independent tasks: general spectral sensing and target detection. In the general spectral sensing task we utilize the theory of compressive sensing (CS) and find filters that generate codings which minimize the CS reconstruction error based on a fixed spectral dictionary of endmembers. For the target detection task and a set of known targets, we train the filters to optimize the separation of the background and target signature. We compare our results to the default 16 flat-topped non-overlapping filter set which comes with the plenoptic camera and to full hyperspectral resolution data that were previously acquired.
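
    A minimal sketch of this design loop, assuming a single filter whose transmission is a scaled beta density and a synthetic target/background pair; scipy's differential_evolution plays the role of the DE search. The spectra, bounds, and objective below are placeholders for the paper's compressive-sensing and detection objectives.

      import numpy as np
      from scipy.optimize import differential_evolution
      from scipy.stats import beta

      # One filter parameterized by a shifted, scaled beta density over the
      # band; a and b control taper/width, lo and width place the support.
      wl = np.linspace(400.0, 1000.0, 300)
      rng = np.random.default_rng(1)
      background = 1.0 + 0.1 * rng.normal(size=wl.size)
      target = background + np.exp(-0.5 * ((wl - 720.0) / 15.0) ** 2)  # toy signature

      def transmission(params):
          a, b, lo, width = params
          x = np.clip((wl - lo) / width, 1e-6, 1 - 1e-6)   # map band onto (0, 1)
          t = beta.pdf(x, a, b)
          return t / t.max()                               # peak transmission = 1

      def neg_separation(params):
          t = transmission(params)
          return -abs(np.dot(t, target - background))      # filtered contrast

      bounds = [(1.1, 10.0), (1.1, 10.0), (400.0, 900.0), (50.0, 600.0)]
      result = differential_evolution(neg_separation, bounds, seed=0, maxiter=100)
      print("best filter parameters:", result.x, "separation:", -result.fun)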

  2. Digital data processing system dynamic loading analysis

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Tucker, A. E.

    1976-01-01

    Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated flight and postseparation flight phases of the space shuttle's approach and landing test configuration were modeled utilizing the Information Management System Interpretative Model (IMSIM) in a computerized simulation modeling of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.

  3. Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Becker, D. A.

    1977-01-01

    Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the ascent test (OFT) configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation modeling of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.

  4. Constraints on interacting dark energy models from Planck 2015 and redshift-space distortion data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, André A.; Abdalla, E.; Xu, Xiao-Dong

    2017-01-01

    We investigate phenomenological interactions between dark matter and dark energy and constrain these models by employing the most recent cosmological data, including the cosmic microwave background radiation anisotropies from Planck 2015, Type Ia supernovae, baryon acoustic oscillations, the Hubble constant and redshift-space distortions. We find that an interaction in the dark sector parameterized as an energy transfer from dark matter to dark energy is strongly suppressed by the full updated cosmological dataset. On the other hand, an interaction between the dark sectors with the energy flow from dark energy to dark matter proves to be in better agreement with the available cosmological observations. This coupling between the dark sectors is needed to alleviate the coincidence problem.
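
    A sketch of the kind of phenomenological coupling being constrained: an energy exchange Q = xi * H * rho_de added to the two dark-sector continuity equations, written per e-fold N = ln(a) so the Hubble factors drop out. The values of w_de and xi are illustrative, not the paper's constraints.

      import numpy as np
      from scipy.integrate import solve_ivp

      # xi > 0 moves energy from dark energy to dark matter.
      w_de, xi = -0.98, 0.05

      def rhs(n, y):
          rho_dm, rho_de = y
          return [-3.0 * rho_dm + xi * rho_de,
                  -3.0 * (1.0 + w_de) * rho_de - xi * rho_de]

      # Integrate backward from today (N = 0, densities in critical units).
      sol = solve_ivp(rhs, [0.0, -7.0], [0.26, 0.69], rtol=1e-8)
      rho_dm_early, rho_de_early = sol.y[:, -1]
      print(f"at a ~ 1e-3: rho_dm / rho_de ~ {rho_dm_early / rho_de_early:.2e}")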

  5. Probabilistic modeling of anatomical variability using a low dimensional parameterization of diffeomorphisms.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2017-10-01

    We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of high dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than state-of-the-art methods such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA), which operate in the high dimensional image space.

  6. Scientific investigations planned for the Lidar in-Space Technology Experiment (LITE)

    NASA Technical Reports Server (NTRS)

    Mccormick, M. P.; Winker, D. M.; Browell, E. V.; Coakley, J. A.; Gardner, C. S.; Hoff, R. M.; Kent, G. S.; Melfi, S. H.; Menzies, R. T.; Platt, C. M. R.

    1993-01-01

    The Lidar In-Space Technology Experiment (LITE) is being developed by NASA/Langley Research Center for a series of flights on the space shuttle beginning in 1994. Employing a three-wavelength Nd:YAG laser and a 1-m-diameter telescope, the system is a test-bed for the development of technology required for future operational spaceborne lidars. The system has been designed to observe clouds, tropospheric and stratospheric aerosols, characteristics of the planetary boundary layer, and stratospheric density and temperature perturbations with much greater resolution than is available from current orbiting sensors. In addition to providing unique datasets on these phenomena, the data obtained will be useful in improving retrieval algorithms currently in use. Observations of clouds and the planetary boundary layer will aid in the development of global climate model (GCM) parameterizations. This article briefly describes the LITE program and discusses the types of scientific investigations planned for the first flight.

  7. Mars Radiation Surface Model

    NASA Astrophysics Data System (ADS)

    Alzate, N.; Grande, M.; Matthiae, D.

    2017-09-01

    Planetary Space Weather Services (PSWS) within the Europlanet H2020 Research Infrastructure have been developed following protocols and standards available in Astrophysical, Solar Physics and Planetary Science Virtual Observatories. Several VO-compliant functionalities have been implemented in various tools. The PSWS extends the concepts of space weather and space situational awareness to other planets in our Solar System and in particular to spacecraft that voyage through it. One of the five toolkits developed as part of these services is a model dedicated to the Mars environment. This model has been developed at Aberystwyth University and the Institut für Luft- und Raumfahrtmedizin (DLR Cologne) using modeled average conditions available from Planetocosmics. It is available for tracing propagation of solar events through the Solar System and modeling the response of the Mars environment. The results have been synthesized into look-up tables parameterized to variable solar wind conditions at Mars.

  8. Evaluation of different parameterizations of the spatial heterogeneity of subsurface storage capacity for hourly runoff simulation in boreal mountainous watershed

    NASA Astrophysics Data System (ADS)

    Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur

    2015-03-01

    Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies specifically aimed at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments, hereafter called elements) and a distributed (1 × 1 km2 grid) setup. We evaluated representations of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods up to 0.84 and 0.86, respectively, and similarly for the log-transformed streamflow up to 0.85 and 0.90. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or subgrid heterogeneity provided simulation performance equivalent to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from a denser network of precipitation stations than is required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in identifying parameterizations based only on calibration to catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data. In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.
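
    One common way to encode such probabilistic subgrid heterogeneity is a Xinanjiang/VIC-type distribution of point storage capacities, sketched below; the functional form and parameter values are illustrative and are not taken from the study's calibration.

      import numpy as np

      # Point storage capacities c over an element follow the capacity curve
      # F(c) = 1 - (1 - c / c_max)**b, so uniform filling to depth i saturates
      # the fraction F(i) of the element (the fast-runoff-producing area).
      def saturated_fraction(i, c_max=150.0, b=0.4):
          i = np.clip(i, 0.0, c_max)
          return 1.0 - (1.0 - i / c_max) ** b

      for depth in (10.0, 50.0, 100.0, 150.0):        # mm of filled storage
          print(f"fill depth {depth:5.1f} mm -> saturated fraction "
                f"{saturated_fraction(depth):.2f}")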

  9. Comprehensive assessment of parameterization methods for estimating clear-sky surface downward longwave radiation

    NASA Astrophysics Data System (ADS)

    Guo, Yamin; Cheng, Jie; Liang, Shunlin

    2018-02-01

    Surface downward longwave radiation (SDLR) is a key variable for calculating the earth's surface radiation budget. In this study, we evaluated seven widely used clear-sky parameterization methods using ground measurements collected from 71 globally distributed fluxnet sites. The Bayesian model averaging (BMA) method was also introduced to obtain a multi-model ensemble estimate. As a whole, the parameterization method of Carmona et al. (2014) performs the best, with an average BIAS, RMSE, and R2 of -0.11 W/m2, 20.35 W/m2, and 0.92, respectively, followed by the parameterization methods of Idso (1981), Prata (Q J R Meteorol Soc 122:1127-1151, 1996), Brunt (Q J R Meteorol Soc 58:389-420, 1932), and Brutsaert (Water Resour Res 11:742-744, 1975). The accuracy of the BMA is close to that of the parameterization method of Carmona et al. (2014) and comparable to that of the parameterization method of Idso (1981). The advantage of the BMA is that it achieves balanced results compared to the individual parameterization methods. To fully assess the performance of the parameterization methods, the effects of climate type, land cover, and surface elevation were also investigated. The five parameterization methods and the BMA all failed over land with a tropical climate, where water vapor is high, and performed poorly over forest, wetland, and ice. These methods achieved better results over desert, bare land, cropland, and grass, and had acceptable accuracies for sites at different elevations, except for the parameterization method of Carmona et al. (2014) over high elevation sites. Thus, a method that can be successfully applied everywhere does not exist.
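
    For concreteness, two of the classic clear-sky emissivity schemes evaluated here can be written in a few lines; the Brunt coefficients below are one commonly quoted calibration and should be treated as indicative only.

      import numpy as np

      SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m-2 K-4

      def brunt_1932(e_hpa):
          # Brunt-type emissivity from screen-level vapor pressure (hPa)
          return 0.52 + 0.065 * np.sqrt(e_hpa)

      def brutsaert_1975(e_hpa, t_k):
          return 1.24 * (e_hpa / t_k) ** (1.0 / 7.0)

      def sdlr(emissivity, t_k):
          return emissivity * SIGMA * t_k ** 4        # W m-2

      t_air, e_vap = 288.15, 12.0                     # 15 degC, ~12 hPa vapor pressure
      for name, eps in [("Brunt", brunt_1932(e_vap)),
                        ("Brutsaert", brutsaert_1975(e_vap, t_air))]:
          print(f"{name:9s}: emissivity = {eps:.3f}, SDLR = {sdlr(eps, t_air):5.1f} W/m2")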

  10. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and the numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated with the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; that numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and that numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
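
    A parameterized runup model of this kind is a closed-form function of offshore wave height, period, and beach slope. The sketch below implements the widely cited Stockdon et al. (2006) formulation as an assumed example of such a parameterization.

      import numpy as np

      def stockdon_r2(h0, t0, beta_f, g=9.81):
          """2% exceedance runup, Stockdon et al. (2006).
          h0: deep-water significant wave height (m), t0: peak period (s),
          beta_f: foreshore beach slope (dimensionless)."""
          l0 = g * t0 ** 2 / (2.0 * np.pi)              # deep-water wavelength
          setup = 0.35 * beta_f * np.sqrt(h0 * l0)      # wave-induced setup
          swash = np.sqrt(h0 * l0 * (0.563 * beta_f ** 2 + 0.004)) / 2.0
          return 1.1 * (setup + swash)

      # Storm-like conditions: 4 m waves, 12 s period, 1:10 foreshore slope.
      print(f"R2% = {stockdon_r2(4.0, 12.0, 0.1):.2f} m")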

  11. Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan

    2012-05-15

    Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning, which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. In particular, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2x over the version that used default optimizations but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.
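
    The flavor of parameterized micro-benchmarking can be sketched with a tiny parameter sweep: time a stand-in kernel over a small implementation-parameter space and keep the timings to guide implementation choice. The row-blocked matrix product below is a placeholder for the paper's tensor-contraction kernels, not their benchmark.

      import time
      import numpy as np

      # Blocked accumulation: the block size is the swept micro-benchmark
      # parameter; real use would cache timings per (kernel, parameter) pair.
      def kernel(a, b, block):
          c = np.zeros_like(a)
          for i0 in range(0, a.shape[0], block):
              c[i0:i0 + block] = a[i0:i0 + block] @ b
          return c

      rng = np.random.default_rng(0)
      a = rng.normal(size=(512, 512))
      b = rng.normal(size=(512, 512))
      for block in (32, 128, 512):
          t0 = time.perf_counter()
          kernel(a, b, block)
          print(f"block = {block:3d}: {(time.perf_counter() - t0) * 1e3:6.1f} ms")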

  12. Correction of Excessive Precipitation over Steep and High Mountains in a GCM: A Simple Method of Parameterizing the Thermal Effects of Subgrid Topographic Variation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.

    2015-01-01

    The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.

  13. A CPT for Improving Turbulence and Cloud Processes in the NCEP Global Models

    NASA Astrophysics Data System (ADS)

    Krueger, S. K.; Moorthi, S.; Randall, D. A.; Pincus, R.; Bogenschutz, P.; Belochitski, A.; Chikira, M.; Dazlich, D. A.; Swales, D. J.; Thakur, P. K.; Yang, F.; Cheng, A.

    2016-12-01

    Our Climate Process Team (CPT) is based on the premise that the NCEP (National Centers for Environmental Prediction) global models can be improved by installing an integrated, self-consistent description of turbulence, clouds, deep convection, and the interactions between clouds and radiative and microphysical processes. The goal of our CPT is to unify the representation of turbulence and subgrid-scale (SGS) cloud processes and to unify the representation of SGS deep convective precipitation and grid-scale precipitation as the horizontal resolution decreases. We aim to improve the representation of small-scale phenomena by implementing a PDF-based SGS turbulence and cloudiness scheme that replaces the boundary layer turbulence scheme, the shallow convection scheme, and the cloud fraction schemes in the GFS (Global Forecast System) and CFS (Climate Forecast System) global models. We intend to improve the treatment of deep convection by introducing a unified parameterization that scales continuously between the simulation of individual clouds when and where the grid spacing is sufficiently fine and the behavior of a conventional parameterization of deep convection when and where the grid spacing is coarse. We will endeavor to improve the representation of the interactions of clouds, radiation, and microphysics in the GFS/CFS by using the additional information provided by the PDF-based SGS cloud scheme. The team is evaluating the impacts of the model upgrades with metrics used by the NCEP short-range and seasonal forecast operations.

  14. New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations

    NASA Technical Reports Server (NTRS)

    Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.

    2012-01-01

    In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.

  15. Changes in organic aerosol composition with aging inferred from aerosol mass spectra

    NASA Astrophysics Data System (ADS)

    Ng, N. L.; Canagaratna, M. R.; Jimenez, J. L.; Chhabra, P. S.; Seinfeld, J. H.; Worsnop, D. R.

    2011-03-01

    Organic aerosols (OA) can be separated with factor analysis of aerosol mass spectrometer (AMS) data into hydrocarbon-like OA (HOA) and oxygenated OA (OOA). We develop a new method to parameterize H:C of OOA in terms of f43 (the ratio of m/z 43, mostly C2H3O+, to the total signal in the component mass spectrum). This parameterization allows the transformation of a large database of ambient OOA components from the f44 (mostly CO2+, likely from acid groups) vs. f43 space ("triangle plot") (Ng et al., 2010) into the Van Krevelen diagram (H:C vs. O:C). Heald et al. (2010) suggested that the bulk composition of OA lines up in the Van Krevelen diagram with a slope of ~-1; such a slope can potentially arise from the physical mixing of HOA and OOA, and/or from chemical aging of these components. In this study, we find that the OOA components from all sites occupy an area in the Van Krevelen space, with the evolution of OOA following a shallower slope of ~-0.5, consistent with the addition of both acid and alcohol functional groups without fragmentation, and/or the addition of acid groups with C-C bond breakage. The importance of acid formation in OOA evolution is consistent with the increase of f44 in the triangle plot with photochemical age. These results provide a framework for linking the evolution of bulk aerosol chemical composition to molecular-level studies.

  16. Approaching Pharmacological Space: Events and Components.

    PubMed

    Vistoli, Giulio; Pedretti, Alessandro; Mazzolari, Angelica; Testa, Bernard

    2018-01-01

    With a view to introducing the concept of pharmacological space and its potential applications in investigating and predicting the toxic mechanisms of xenobiotics, this opening chapter describes the logical relations between conformational behavior, physicochemical properties and binding spaces, which are seen as the three key elements composing the pharmacological space. While the concept of conformational space is routinely used to encode molecular flexibility, the concepts of property spaces and, particularly, of binding spaces are more innovative. Indeed, their descriptors can find fruitful applications (a) in describing the dynamic adaptability a given ligand experiences when inserted into a specific environment, and (b) in parameterizing the flexibility a ligand retains when bound to a biological target. Overall, these descriptors can conveniently account for the often disregarded entropic factors, and as such they prove successful when inserted in ligand- or structure-based predictive models. Notably, and although binding space parameters can clearly be derived from MD simulations, the chapter will illustrate how docking calculations, despite their static nature, are able to evaluate a ligand's flexibility by analyzing several poses for each ligand. Such an approach, which represents the founding core of the binding space concept, can find various applications in which the related descriptors show an impressive enhancing effect on the statistical performances of the resulting predictive models.

  17. Twofold symmetries of the pure gravity action

    DOE PAGES

    Cheung, Clifford; Remmen, Grant N.

    2017-01-25

    Here, we recast the action of pure gravity into a form that is invariant under a twofold Lorentz symmetry. To derive this representation, we construct a general parameterization of all theories equivalent to the Einstein-Hilbert action up to a local field redefinition and gauge fixing. We then exploit this freedom to eliminate all interactions except those exhibiting two sets of independently contracted Lorentz indices. The resulting action is local, remarkably simple, and naturally expressed in a field basis analogous to the exponential parameterization of the nonlinear sigma model. The space of twofold Lorentz invariant field redefinitions then generates an infinite class of equivalent representations. By construction, all off-shell Feynman diagrams are twofold Lorentz invariant while all on-shell tree amplitudes are automatically twofold gauge invariant. We extend our results to curved spacetime and calculate the analogue of the Einstein equations. Finally, while these twofold invariances are hidden in the canonical approach of graviton perturbation theory, they are naturally expected given the double copy relations for scattering amplitudes in gauge theory and gravity.

  18. Cloud-System Resolving Models: Status and Prospects

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitch

    2008-01-01

    Cloud-system resolving models (CRMs), which are based on the nonhydrostatic equations of motion and typically have a grid spacing of about a kilometer, originated as cloud-process models in the 1970s. This paper reviews the status and prospects of CRMs across a wide range of issues, such as microphysics and precipitation; interaction between clouds and radiation; and the effects of boundary-layer and surface processes on cloud systems. Since CRMs resolve organized convection, tropical waves and the large-scale circulation, there is the prospect of several advances in basic knowledge of scale interaction requisite to parameterizing mesoscale processes in climate models. In superparameterization, CRMs represent convection explicitly, replacing many of the assumptions necessary in contemporary parameterization. Global CRMs have been run on an experimental basis, giving prospect to a new generation of weather prediction models within a decade, and climate models in due course. CRMs play a major role in the retrieval of surface rain and latent heating from satellite measurements. Finally, the enormously wide dynamic range of CRM simulations presents new challenges for model validation against observations.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Na; Zhang, Peng; Kang, Wei

    Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as counter-Poiseuille and Couette flows. This demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately.
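
    For reference, the unmodified Morse pair potential and the radial force derived from it are sketched below; the well depth, stiffness, and equilibrium distance are illustrative, not the paper's calibrated values.

      import numpy as np

      # Morse pair interaction: V(r) = De * (exp(-2a(r - r0)) - 2 exp(-a(r - r0))).
      def morse_potential(r, de=1.0, a=2.0, r0=1.0):
          x = np.exp(-a * (r - r0))
          return de * (x ** 2 - 2.0 * x)

      def morse_force(r, de=1.0, a=2.0, r0=1.0):
          # F = -dV/dr, the radial pair force used in particle dynamics
          x = np.exp(-a * (r - r0))
          return 2.0 * a * de * (x ** 2 - x)

      r = np.linspace(0.8, 3.0, 5)
      print(np.column_stack([r, morse_potential(r), morse_force(r)]))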

  1. Betatron motion with coupling of horizontal and vertical degrees of freedom

    DOE PAGES

    Lebedev, V. A.; Bogacz, S. A.

    2010-10-21

    Presently, there are two frequently used parameterizations of linear x-y coupled motion in accelerator physics: the Edwards-Teng and Mais-Ripken parameterizations. The article is devoted to an analysis of the close relationship between the two representations, adding clarity to their physical meaning. It also discusses the relationship between the eigen-vectors, the beta-functions, second-order moments and the bilinear form representing the particle ellipsoid in the 4D phase space. It then considers a further development of the Mais-Ripken parameterization in which the particle motion is described by 10 parameters: four beta-functions, four alpha-functions and two betatron phase advances. In comparison with the Edwards-Teng parameterization, the chosen parametrization has the advantage that it works equally well for analysis of coupled betatron motion in circular accelerators and in transfer lines. In addition, the considered relationship between second-order moments, eigen-vectors and beta-functions can be useful in interpreting tracking results and experimental data. As an example, the developed formalism is applied to the FNAL electron cooler and Derbenev's vertex-to-plane adapter.
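
    A small sketch of the eigen-analysis underlying both parameterizations: build a 4D one-turn matrix with a weak x-y coupling perturbation and read the eigen-tunes off the complex phases of its eigenvalues. The tunes and coupling strength are hypothetical numbers, not a real lattice.

      import numpy as np

      # Uncoupled 2x2 phase-space rotations for x and y, plus a small
      # illustrative coupling entry linking the two planes.
      def rot(mu):
          c, s = np.cos(mu), np.sin(mu)
          return np.array([[c, s], [-s, c]])

      m = np.zeros((4, 4))
      m[:2, :2] = rot(2 * np.pi * 0.28)
      m[2:, 2:] = rot(2 * np.pi * 0.31)
      m[0, 2] = m[2, 0] = 0.01                  # weak coupling perturbation

      eigvals = np.linalg.eigvals(m)
      tunes = sorted(set(np.round(np.abs(np.angle(eigvals)) / (2 * np.pi), 6)))
      print("eigen-tunes:", tunes)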

  3. Atmospheric form drag over Arctic sea ice derived from high-resolution IceBridge elevation data

    NASA Astrophysics Data System (ADS)

    Petty, A.; Tsamados, M.; Kurtz, N. T.

    2016-02-01

    Here we present a detailed analysis of atmospheric form drag over Arctic sea ice, using high resolution, three-dimensional surface elevation data from the NASA Operation IceBridge Airborne Topographic Mapper (ATM) laser altimeter. Surface features in the sea ice cover are detected using a novel feature-picking algorithm. We derive information regarding the height, spacing and orientation of unique surface features from 2009-2014 across both first-year and multiyear ice regimes. The topography results are used to explicitly calculate atmospheric form drag coefficients, utilizing existing form drag parameterizations. The atmospheric form drag coefficients show strong regional variability, mainly due to variability in ice type/age. The transition from a perennial to a seasonal ice cover therefore suggests a decrease in the atmospheric form drag coefficients over Arctic sea ice in recent decades. These results are also being used to calibrate a recent form drag parameterization scheme included in the sea ice model CICE, to improve the representation of form drag over Arctic sea ice in global climate models.

  4. Evaluation of scale-aware subgrid mesoscale eddy models in a global eddy-rich model

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank

    2017-07-01

    Two parameterizations for horizontal mixing of momentum and tracers by subgrid mesoscale eddies are implemented in a high-resolution global ocean model. These parameterizations follow on the techniques of large eddy simulation (LES). The theory underlying one parameterization (2D Leith due to Leith, 1996) is that of enstrophy cascades in two-dimensional turbulence, while the other (QG Leith) is designed for potential enstrophy cascades in quasi-geostrophic turbulence. Simulations using each of these parameterizations are compared with a control simulation using standard biharmonic horizontal mixing. Simulations using the 2D Leith and QG Leith parameterizations are more realistic than those using biharmonic mixing. In particular, the 2D Leith and QG Leith simulations have more energy in resolved mesoscale eddies, have a spectral slope more consistent with turbulence theory (an inertial enstrophy or potential enstrophy cascade), have bottom drag and vertical viscosity as the primary sinks of energy instead of lateral friction, and have isoneutral parameterized mesoscale tracer transport. The parameterization choice also affects mass transports, but the impact varies regionally in magnitude and sign.
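
    The 2D Leith closure itself is compact: the eddy viscosity scales as the cube of the grid scale times the magnitude of the vorticity gradient, nu = (C dx)^3 |grad zeta|, with the QG variant using the quasi-geostrophic potential vorticity gradient instead. A sketch on a synthetic field, with an illustrative O(1) constant:

      import numpy as np

      def leith_viscosity(zeta, dx, c_leith=1.0):
          # nu = (C * dx)**3 * |grad(vorticity)|; m2/s for zeta in 1/s, dx in m
          dzdx, dzdy = np.gradient(zeta, dx, dx)
          return (c_leith * dx) ** 3 * np.hypot(dzdx, dzdy)

      dx = 10e3                                   # 10 km grid spacing
      x = np.arange(64) * dx
      xx, yy = np.meshgrid(x, x)
      zeta = 1e-5 * np.sin(2 * np.pi * xx / x[-1]) * np.cos(2 * np.pi * yy / x[-1])
      print(f"max Leith viscosity: {leith_viscosity(zeta, dx).max():.1f} m2/s")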

  5. From Neutron Star Observables to the Equation of State. II. Bayesian Inference of Equation of State Pressures

    NASA Astrophysics Data System (ADS)

    Raithel, Carolyn A.; Özel, Feryal; Psaltis, Dimitrios

    2017-08-01

    One of the key goals of observing neutron stars is to infer the equation of state (EoS) of the cold, ultradense matter in their interiors. Here, we present a Bayesian statistical method of inferring the pressures at five fixed densities, from a sample of mock neutron star masses and radii. We show that while five polytropic segments are needed for maximum flexibility in the absence of any prior knowledge of the EoS, regularizers are also necessary to ensure that simple underlying EoS are not over-parameterized. For ideal data with small measurement uncertainties, we show that the pressure at roughly twice the nuclear saturation density, ρ_sat, can be inferred to within 0.3 dex for many realizations of potential sources of uncertainties. The pressures of more complicated EoS with significant phase transitions can also be inferred to within ~30%. We also find that marginalizing the multi-dimensional parameter space of pressure to infer a mass-radius relation can lead to biases of nearly 1 km in radius, toward larger radii. Using the full, five-dimensional posterior likelihoods avoids this bias.
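
    A sketch of the piecewise-polytropic construction underlying such parameterized EoS inference: pressures at fixed fiducial densities are connected by polytropes P = K rho^Gamma, with K and Gamma fixed by continuity across each segment. The node densities and pressures below are illustrative numbers only, not the paper's fiducial values.

      import numpy as np

      rho_nodes = np.array([1.0, 1.4, 2.2, 3.3, 4.9, 7.4]) * 2.7e14   # g/cm^3
      p_nodes = np.array([3e33, 8e33, 4e34, 2e35, 6e35, 1.5e36])      # dyn/cm^2

      def pressure(rho):
          # locate the segment, then apply P = K * rho**Gamma within it
          i = np.clip(np.searchsorted(rho_nodes, rho) - 1, 0, len(rho_nodes) - 2)
          gamma = (np.log(p_nodes[i + 1] / p_nodes[i])
                   / np.log(rho_nodes[i + 1] / rho_nodes[i]))
          k = p_nodes[i] / rho_nodes[i] ** gamma
          return k * rho ** gamma

      rho = 2.0 * 2.7e14   # roughly twice nuclear saturation density
      print(f"P(2 rho_sat) ~ {pressure(rho):.2e} dyn/cm^2")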

  6. Existence and numerical simulation of periodic traveling wave solutions to the Casimir equation for the Ito system

    NASA Astrophysics Data System (ADS)

    Abbasbandy, S.; Van Gorder, R. A.; Hajiketabi, M.; Mesrizadeh, M.

    2015-10-01

    We consider traveling wave solutions to the Casimir equation for the Ito system (a two-field extension of the KdV equation). These traveling waves are governed by a nonlinear initial value problem with an interesting nonlinearity (which actually amplifies in magnitude as the size of the solution becomes small). The nonlinear problem is parameterized by two initial constant values, and we demonstrate that the existence of solutions is strongly tied to these parameter values. For our interests, we are concerned with positive, bounded, periodic wave solutions. We are able to classify parameter regimes which admit such solutions in full generality, thereby obtaining a nice existence result. Using the existence result, we are then able to numerically simulate the positive, bounded, periodic solutions. We elect to employ a group preserving scheme in order to numerically study these solutions, and an outline of this approach is provided. The numerical simulations serve to illustrate the properties of these solutions predicted analytically through the existence result. Physically, these results demonstrate the existence of a type of space-periodic structure in the Casimir equation for the Ito model, which propagates as a traveling wave.

  7. Operational Ocean Modelling with the Harvard Ocean Prediction System

    DTIC Science & Technology

    2008-11-01

    tno.nl. TNO report TNO-DV2008 A417, November 2008. Authors: dr. F.P.A. Lam, dr. ir. M.W. Schouten, dr. L.A. te Raa ...area of theory and implementation of numerical schemes and parameterizations, ocean models have grown from experimental tools to full-blown ocean... sound propagation through mesoscale features using 3-D coupled mode theory, Thesis, Naval Postgraduate School, Monterey, USA, 1992. [9] Robinson

  8. Polarimetric Intensity Parameterization of Radar and Other Remote Sensing Sources for Advanced Exploitation and Data Fusion: Theory

    DTIC Science & Technology

    2008-10-01

    is theoretically similar to the concept of “partial or compact polarimetry”, yields comparable results to full or quadrature-polarized systems by...to the emerging “compact polarimetry” methodology [9]-[13] that exploits scattering system response to an incomplete set of input EM field components...a scattering operator or matrix. Although as theoretically discussed earlier, performance of such fully-polarized radar system (i.e., quadrature

  9. Short-term Time Step Convergence in a Climate Model

    DOE PAGES

    Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...

    2015-02-11

    A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4, in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
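
    The convergence diagnostic reduces to fitting the slope of log(error) against log(dt); a sketch with synthetic errors exhibiting the order-0.4 behavior reported here:

      import numpy as np

      def convergence_order(dts, errors):
          # slope of log(error) vs log(dt) estimates the convergence order
          slope, _ = np.polyfit(np.log(dts), np.log(errors), 1)
          return slope

      dts = np.array([1800.0, 900.0, 450.0, 225.0])   # time step sizes (s)
      errors = 1e-4 * dts ** 0.4                      # synthetic RMS temperature differences
      print(f"estimated convergence order: {convergence_order(dts, errors):.2f}")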

  10. Characterizing the growth to detonation in HNS with small-scale PDV "cutback" experiments

    NASA Astrophysics Data System (ADS)

    Wixom, Ryan R.; Yarrington, Cole D.; Knepper, Robert; Tappan, Alexander S.; Olles, Joseph D.; Damm, David L.

    2017-01-01

    For many decades, cutback experiments have been used to characterize the equation of state and growth to steady detonation in explosive formulations. More recently, embedded gauges have been used to capture the growth to steady detonation in gas-gun impacted samples. Data resulting from these experiments are extremely valuable for parameterizing equation of state and reaction models used in hydrocode simulations. Due to the extremely fast growth to detonation in typical detonator explosives, cutback and embedded gauge experiments are particularly difficult, if not impossible. Using frequency shifted photonic Doppler velocimetry (PDV) we have measured particle velocity histories from vapor-deposited explosive films impacted with electrically driven flyers. By varying the sample thickness and impact conditions we were able to capture the growth from inert shock to full detonation pressure within distances as short as 100 µm. These data are being used to assess and improve burn-model parameterization and equations of state for simulating shock initiation.

  11. Methane Sensitivity to Perturbations in Tropospheric Oxidizing Capacity

    NASA Technical Reports Server (NTRS)

    Yegorova, Elena; Duncan, Bryan

    2011-01-01

    Methane is an important greenhouse gas and has a 25 times greater global warming potential than CO2 on a century timescale. Yet there are considerable uncertainties in the magnitude and variability of its sources and sinks. The response of the coupled non-linear methane-carbon monoxide-hydroxyl radical (OH) system is important in determining the tropospheric oxidizing capacity. Using the NASA Goddard Earth Observing System, Version 5 (GEOS-5) chemistry climate model, we study the response of methane to perturbations of OH and wetland emissions. We use a computationally-efficient option of the GEOS-5 CCM that includes an OH parameterization that accurately represents OH predicted by a full chemical mechanism. The OH parameterization allows for studying non-linear CH4-CO-OH feedbacks in computationally fast sensitivity experiments. We compare our results with surface observations (GMD) and discuss the range of uncertainty in OH and wetland emissions required to bring modeling results in better agreement with surface observations. Our results can be used to improve projections of methane emissions and methane growth.

  12. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    DOE PAGES

    Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...

    2017-09-14

    Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50% in the LowP sub-basin and by up to 10% in the HighP sub-basin relative to the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in Alaskan sub-arctic watersheds.

  14. Numerical simulations of Hurricane Katrina (2005) in the turbulent gray zone

    NASA Astrophysics Data System (ADS)

    Green, Benjamin W.; Zhang, Fuqing

    2015-03-01

    Current numerical simulations of tropical cyclones (TCs) use a horizontal grid spacing as small as Δx = 10^3 m, with all boundary layer (BL) turbulence parameterized. Eventually, TC simulations can be conducted at Large Eddy Simulation (LES) resolution, which requires Δx to fall in the inertial subrange (often <10^2 m) to adequately resolve the large, energy-containing eddies. Between the two lies the so-called "terra incognita" because some of the assumptions used by mesoscale models and LES to treat BL turbulence are invalid. This study performs several 4-6 h simulations of Hurricane Katrina (2005) without a BL parameterization at extremely fine Δx [333, 200, and 111 m, hereafter "Large Eddy Permitting (LEP) runs"] and compares with mesoscale simulations with BL parameterizations (Δx = 3 km, 1 km, and 333 m, hereafter "PBL runs"). There are profound differences in the hurricane BL structure between the PBL and LEP runs: the former have a deeper inflow layer and secondary eyewall formation, whereas the latter have a shallow inflow layer without a secondary eyewall. Among the LEP runs, decreased Δx yields weaker subgrid-scale vertical momentum fluxes, but the sum of subgrid-scale and "grid-scale" fluxes remain similar. There is also evidence that the size of the prevalent BL eddies depends upon Δx, suggesting that convergence to true LES has not yet been reached. Nevertheless, the similarities in the storm-scale BL structure among the LEP runs indicate that the net effect of the BL on the rest of the hurricane may be somewhat independent of Δx.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, May Wai San; Ovchinnikov, Mikhail; Wang, Minghuai

    Potential ways of parameterizing vertical turbulent fluxes of hydrometeors are examined using a high-resolution cloud-resolving model. The cloud-resolving model uses the Morrison microphysics scheme, which contains prognostic variables for rain, graupel, ice, and snow. A benchmark simulation of a deep convection case with a horizontal grid spacing of 250 m is carried out to evaluate three different ways of parameterizing the turbulent vertical fluxes of hydrometeors: an eddy-diffusion approximation, a quadrant-based decomposition, and a scaling method that accounts for within-quadrant (subplume) correlations. Results show that the down-gradient nature of the eddy-diffusion approximation tends to transport mass away from concentrated regions, whereas the benchmark simulation indicates that the vertical transport tends to move mass from below the level of maximum concentration to aloft. Unlike the eddy-diffusion approach, the quadrant-based decomposition is able to capture the signs of the flux gradient but underestimates the magnitudes. The scaling approach is shown to perform the best by accounting for within-quadrant correlations, and improves the results for all hydrometeors except snow. A sensitivity study is performed to examine how vertical transport may affect the microphysics of the hydrometeors; the vertical transport of each hydrometeor type is artificially suppressed in each test. Results from the sensitivity tests show that cloud-droplet-related processes are most sensitive to suppressed rain or graupel transport. In particular, suppressing rain or graupel transport has a strong impact on the production of snow and ice aloft. Lastly, a viable subgrid-scale hydrometeor transport scheme in an assumed probability density function parameterization is discussed.
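
    A quadrant-based decomposition of a resolved flux is straightforward to sketch: partition samples by the signs of the perturbations and sum each quadrant's contribution to the total. The synthetic w and q fields below stand in for cloud-resolving model output.

      import numpy as np

      rng = np.random.default_rng(2)
      w = rng.normal(0.0, 1.0, 10000)                 # vertical velocity (m/s)
      q = 1e-4 + 5e-5 * (0.6 * w + 0.8 * rng.normal(size=w.size))  # mixing ratio (kg/kg)

      wp, qp = w - w.mean(), q - q.mean()             # perturbations from the mean
      total = np.mean(wp * qp)                        # resolved flux w'q'
      for label, mask in [("updraft,   q' > 0", (wp > 0) & (qp > 0)),
                          ("updraft,   q' < 0", (wp > 0) & (qp <= 0)),
                          ("downdraft, q' > 0", (wp <= 0) & (qp > 0)),
                          ("downdraft, q' < 0", (wp <= 0) & (qp <= 0))]:
          part = np.sum(wp[mask] * qp[mask]) / wp.size    # quadrant contribution
          print(f"{label}: {part / total:+.2f} of total flux")

      print(f"total w'q' = {total:.3e}")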

  16. Evaluation of the WRF-Urban Modeling System Coupled to Noah and Noah-MP Land Surface Models Over a Semiarid Urban Environment

    NASA Astrophysics Data System (ADS)

    Salamanca, Francisco; Zhang, Yizhou; Barlage, Michael; Chen, Fei; Mahalov, Alex; Miao, Shiguang

    2018-03-01

    We have augmented the existing capabilities of the integrated Weather Research and Forecasting (WRF)-urban modeling system by coupling three urban canopy models (UCMs) available in the WRF model with the new community Noah with multiparameterization options (Noah-MP) land surface model (LSM). The WRF-urban modeling system's performance has been evaluated by conducting six numerical experiments at high spatial resolution (1 km horizontal grid spacing) during a 15 day clear-sky summertime period for a semiarid urban environment. To assess the relative importance of representing urban surfaces, three different urban parameterizations are used with the Noah and Noah-MP LSMs, respectively, over the two major metropolitan areas of Arizona: Phoenix and Tucson. Our results demonstrate that Noah-MP reproduces the daily evolution of surface skin temperature, near-surface air temperature (especially nighttime temperature), and wind speed somewhat better than Noah. Concerning the urban areas, the bulk urban parameterization overestimates nighttime 2 m air temperature compared to the single-layer and multilayer UCMs, which reproduce the daily evolution of near-surface air temperature more accurately. Regarding near-surface wind speed, only the multilayer UCM was able to reproduce the daily evolution of wind speed realistically, although maximum winds were slightly overestimated, while both the single-layer and bulk urban parameterizations overestimated wind speed considerably. Based on these results, this paper demonstrates that the new community Noah-MP LSM coupled to a UCM is a promising physics-based predictive modeling tool for urban applications.

  17. Characteristics of Mesoscale Organization in WRF Simulations of Convection during TWP-ICE

    NASA Technical Reports Server (NTRS)

    Del Genio, Anthony D.; Wu, Jingbo; Chen, Yonghua

    2013-01-01

    Compared to satellite-derived heating profiles, the Goddard Institute for Space Studies general circulation model (GCM) convective heating is too deep and its stratiform upper-level heating is too weak. This deficiency highlights the need for GCMs to parameterize the mesoscale organization of convection. Cloud-resolving model simulations of convection near Darwin, Australia, in weak wind shear environments of different humidities are used to characterize mesoscale organization processes and to provide parameterization guidance. Downdraft cold pools appear to stimulate further deep convection through their effect on both eddy size and vertical velocity. Anomalously humid air surrounds updrafts, reducing the efficacy of entrainment. Recovery of cold pool properties to ambient conditions over 5-6 h proceeds differently over land and ocean. Over ocean, increased surface fluxes restore the cold pool to prestorm conditions. Over land, surface fluxes are suppressed in the cold pool region; temperature decreases and humidity increases, and both then remain nearly constant while the undisturbed environment cools diurnally. The upper-troposphere stratiform rain region area lags convection by 5-6 h under humid active monsoon conditions but by only 1-2 h during drier break periods, suggesting that mesoscale organization is more readily sustained in a humid environment. Stratiform region hydrometeor mixing ratio lags convection by 0-2 h, suggesting that it is strongly influenced by detrainment from convective updrafts. Small stratiform region temperature anomalies suggest that a mesoscale updraft parameterization initialized with properties of buoyant detrained air and evolving to a balance between diabatic heating and adiabatic cooling might be a plausible approach for GCMs.

  18. Linear and non-linear Modified Gravity forecasts with future surveys

    NASA Astrophysics Data System (ADS)

    Casas, Santiago; Kunz, Martin; Martinelli, Matteo; Pettorino, Valeria

    2017-12-01

    Modified Gravity theories generally affect the Poisson equation and the gravitational slip in an observable way that can be parameterized by two generic functions (η and μ) of time and space. We bin their time dependence in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. In this work we neglect the information from the cross correlation of these observables and treat them as independent. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further apply a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We extend the analysis to two particular parameterizations of μ and η and consider, in addition to Euclid, also SKA1, SKA2, and DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 2-5% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.
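
    The Zero-phase Component Analysis step mentioned above amounts to whitening the parameter covariance with W = C^(-1/2); the rows of W give the combinations of binned η and μ amplitudes that the data constrain independently. A toy numpy sketch (the covariance values are invented):

        import numpy as np

        C = np.array([[1.0, 0.8, 0.3],
                      [0.8, 1.0, 0.5],
                      [0.3, 0.5, 1.0]])    # toy covariance of binned MG amplitudes

        evals, evecs = np.linalg.eigh(C)
        W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # ZCA whitening matrix C^(-1/2)

        # Whitened parameters p' = W p have identity covariance:
        print(np.allclose(W @ C @ W.T, np.eye(3)))     # True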

  19. EXPLORING BIASES OF ATMOSPHERIC RETRIEVALS IN SIMULATED JWST TRANSMISSION SPECTRA OF HOT JUPITERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocchetto, M.; Waldmann, I. P.; Tinetti, G.

    2016-12-10

    With a scheduled launch in 2018 October, the James Webb Space Telescope (JWST) is expected to revolutionize the field of atmospheric characterization of exoplanets. The broad wavelength coverage and high sensitivity of its instruments will allow us to extract far more information from exoplanet spectra than has been possible with current observations. In this paper, we investigate whether current retrieval methods will still be valid in the era of JWST, exploring common approximations used when retrieving transmission spectra of hot Jupiters. To assess biases, we use 1D photochemical models to simulate typical cloud-free hot Jupiter atmospheres and generate synthetic observations for a range of carbon-to-oxygen ratios. We then retrieve these spectra using TauREx, a Bayesian retrieval tool, with two methodologies: one assuming an isothermal atmosphere, and one assuming a parameterized temperature profile. Both methods assume constant-with-altitude abundances. We found that the isothermal approximation biases the retrieved parameters considerably, overestimating the abundances by about one order of magnitude. The retrieved abundances using the parameterized profile are usually within 1σ of the true state, and we found the retrieved uncertainties to be generally larger compared to the isothermal approximation. Interestingly, we found that by using the parameterized temperature profile we could place tight constraints on the temperature structure. This opens the possibility of characterizing the temperature profile of the terminator region of hot Jupiters. Lastly, we found that assuming a constant-with-altitude mixing ratio profile is a good approximation for most of the atmospheres under study.

  20. Extensions and applications of a second-order landsurface parameterization

    NASA Technical Reports Server (NTRS)

    Andreou, S. A.; Eagleson, P. S.

    1983-01-01

    Extensions and applications of a second-order land surface parameterization proposed by Andreou and Eagleson are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested using the model. A sensitivity analysis of the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.

  1. Development and Testing of Coupled Land-surface, PBL and Shallow/Deep Convective Parameterizations within the MM5

    NASA Technical Reports Server (NTRS)

    Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.

    2000-01-01

    The objective of this investigation was to study the role of shallow convection in the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study, and the latter two have been improved significantly to extend their capabilities.

  2. A Simple Parameterization of 3 x 3 Magic Squares

    ERIC Educational Resources Information Center

    Trenkler, Götz; Schmidt, Karsten; Trenkler, Dietrich

    2012-01-01

    In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.
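
    Whether the article's exact notation matches is an assumption, but every 3 x 3 magic square can be written with three parameters: the center c (the magic sum is 3c) and two offsets a and b. A short sketch that builds such squares and checks the row, column, and diagonal sums:

        import numpy as np

        def magic3(c, a, b):
            """3x3 magic square with center c and offsets a, b; magic sum 3c."""
            return np.array([[c + a,     c - a - b, c + b],
                             [c - a + b, c,         c + a - b],
                             [c - b,     c + a + b, c - a]])

        M = magic3(5, -1, -3)   # reproduces the Luoshu: [[4,9,2],[3,5,7],[8,1,6]]
        print(M.sum(axis=0), M.sum(axis=1), np.trace(M), np.trace(np.fliplr(M)))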

  3. Implementation and Testing of Advanced Surface Boundary Conditions Over Complex Terrain in A Semi-idealized Model

    NASA Astrophysics Data System (ADS)

    Li, Y.; Epifanio, C.

    2017-12-01

    In numerical prediction models, the interaction between the Earth's surface and the atmosphere is typically accounted for in terms of surface layer parameterizations, whose main job is to specify turbulent fluxes of heat, moisture, and momentum across the lower boundary of the model domain. In the case of a domain with complex geometry, implementing the flux conditions (particularly the tensor stress condition) at the boundary can be somewhat subtle, and there has been a notable history of confusion in the CFD community over how to formulate and impose such conditions generally. In the atmospheric case, modelers have largely been able to avoid these complications, at least until recently, by assuming that the terrain resolved at typical model resolutions is fairly gentle, in the sense of having relatively shallow slopes. This in turn allows the flux conditions to be imposed as if the lower boundary were essentially flat. Unfortunately, while this flat-boundary assumption is acceptable for coarse resolutions, as grids become more refined and the geometry of the resolved terrain becomes more complex, the approach is less justified. With this in mind, the goal of our present study is to explore the implementation and usage of the full, unapproximated version of the turbulent flux/stress conditions in atmospheric models, thus taking full account of the complex geometry of the resolved terrain. We propose to implement the conditions using a semi-idealized model developed by Epifanio (2007), in which the discretized boundary conditions are reduced to a large, sparse-matrix problem. The emphasis will be on fluxes of momentum, as the tensor nature of this flux makes the associated stress condition more difficult to impose, although the flux conditions for heat and moisture will be considered as well. At a resolution of 90 meters, results show that typical differences between the flat-boundary and full-stress cases are on the order of 10% of typical disturbance wind speeds, with extreme cases reaching as high as 30%; this difference drops by a factor of six between grid spacings of 90 meters and 240 meters. It would thus appear that the need to apply the full stress condition is limited to relatively high-resolution modeling, with grid spacings on the order of 250 meters or less.
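
    To make the geometric point concrete, here is an illustrative sketch (invented terrain slopes, drag coefficient, and winds; not the Epifanio (2007) formulation) contrasting a bulk drag law applied along the true terrain-tangential direction with the flat-boundary shortcut:

        import numpy as np

        dhdx, dhdy = 0.3, 0.1                      # resolved terrain slopes
        n = np.array([-dhdx, -dhdy, 1.0])
        n /= np.linalg.norm(n)                     # unit normal of the terrain

        u = np.array([8.0, 2.0, 0.0])              # near-surface wind (m/s)
        u_tan = u - (u @ n) * n                    # slope-tangential wind
        rho, Cd = 1.2, 4e-3

        tau_full = -rho * Cd * np.linalg.norm(u_tan) * u_tan   # full stress condition
        tau_flat = -rho * Cd * np.linalg.norm(u[:2]) * u[:2]   # flat-boundary version

        print(tau_full, tau_flat)   # the difference vanishes as the slopes go to zero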

  4. Dynamically Consistent Parameterization of Mesoscale Eddies

    NASA Astrophysics Data System (ADS)

    Berloff, P. S.

    2016-12-01

    This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with an explicitly resolved, vigorous eddy field and in a non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces the eastward jet extension of the western boundary currents and its adjacent recirculation zones. The parameterization locally approximates transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced eddy forcing exerted on the large-scale flow. We find that the spatial pattern and amplitude of each footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. Thus, the assumed ensemble of plunger solutions can be viewed as a simple model for the cumulative effect of the stochastic eddy forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.

  5. Dissecting the accountability of parameterized and parameter-free single-hybrid and double-hybrid functionals for photophysical properties of TADF-based OLEDs

    NASA Astrophysics Data System (ADS)

    Alipour, Mojtaba; Karimi, Niloofar

    2017-06-01

    Organic light emitting diodes (OLEDs) based on thermally activated delayed fluorescence (TADF) emitters are an attractive category of materials that have witnessed a booming development in recent years. In the present contribution, we scrutinize the accountability of parameterized and parameter-free single-hybrid (SH) and double-hybrid (DH) functionals through the two formalisms, full time-dependent density functional theory (TD-DFT) and Tamm-Dancoff approximation (TDA), for the estimation of photophysical properties like absorption energy, emission energy, zero-zero transition energy, and singlet-triplet energy splitting of TADF molecules. According to our detailed analyses on the performance of SHs based on TD-DFT and TDA, the TDA-based parameter-free SH functionals, PBE0 and TPSS0, with one-third of exact-like exchange turned out to be the best performers in comparison to other functionals from various rungs to reproduce the experimental data of the benchmarked set. Such affordable SH approximations can thus be employed to predict and design the TADF molecules with low singlet-triplet energy gaps for OLED applications. From another perspective, considering this point that both the nonlocal exchange and correlation are essential for a more reliable description of large charge-transfer excited states, applicability of the functionals incorporating these terms, namely, parameterized and parameter-free DHs, has also been evaluated. Perusing the role of exact-like exchange, perturbative-like correlation, solvent effects, and other related factors, we find that the parameterized functionals B2π-PLYP and B2GP-PLYP and the parameter-free models PBE-CIDH and PBE-QIDH have respectable performance with respect to others. Lastly, besides the recommendation of reliable computational protocols for the purpose, hopefully this study can pave the way toward further developments of other SHs and DHs for theoretical explorations in the field of OLEDs technology.

  6. Gradient-based adaptation of general gaussian kernels.

    PubMed

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.
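
    A hedged sketch of the parameterization idea (assumed notation, not the paper's code): write a general gaussian kernel as k(x, y) = exp(-(x - y)^T M (x - y)) with M = expm(A) for symmetric A, so that unconstrained gradient steps in A always yield a symmetric positive-definite M, i.e. a valid scaling and rotation of the input space:

        import numpy as np
        from scipy.linalg import expm

        def kernel(x, y, A):
            M = expm((A + A.T) / 2.0)   # exponential map onto the SPD matrices
            d = x - y
            return np.exp(-d @ M @ d)

        A = np.array([[0.1, 0.3],
                      [0.3, -0.2]])     # unconstrained kernel parameters
        x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
        print(kernel(x, y, A))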

  7. Analysis and parameterization of absorption properties of northern Norwegian coastal water

    NASA Astrophysics Data System (ADS)

    Nima, Ciren; Frette, Øyvind; Hamre, Børge; Erga, Svein Rune; Chen, Yi-Chun; Zhao, Lu; Sørensen, Kai; Norli, Marit; Stamnes, Knut; Muyimbwa, Dennis; Ssenyonga, Taddeo; Ssebiyonga, Nicolausi; Stamnes, Jakob J.

    2017-02-01

    Coastal water bodies are generally classified as Case 2 water, in which non-algal particles (NAP) and colored dissolved organic matter (CDOM) contribute significantly to the optical properties in addition to phytoplankton. These three constituents vary independently in Case 2 water and tend to be highly variable in space and time. We present data from measurements and analyses of the spectral absorption due to CDOM, total suspended matter (TSM), phytoplankton, and NAP in high-latitude northern Norwegian coastal water based on samples taken in spring, summer, and autumn.

  8. Kinematic functions for the 7 DOF robotics research arm

    NASA Technical Reports Server (NTRS)

    Kreutz, K.; Long, M.; Seraji, Homayoun

    1989-01-01

    The Robotics Research Model K-1207 manipulator is a redundant 7R serial link arm with offsets at all joints. To uniquely determine joint angles for a given end-effector configuration, the redundancy is parameterized by a scalar variable which corresponds to the angle between the manipulator elbow plane and the vertical plane. The forward kinematic mappings from joint-space to end-effector configuration and elbow angle, and the augmented Jacobian matrix which gives end-effector and elbow angle rates as a function of joint rates, are also derived.
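
    A minimal numeric sketch of the augmented-Jacobian idea (placeholder matrices, not the K-1207 kinematics): stacking the 6 x 7 end-effector Jacobian with the 1 x 7 gradient of the scalar elbow angle gives a square system that yields unique joint rates away from singularities:

        import numpy as np

        rng = np.random.default_rng(1)
        J_task = rng.normal(size=(6, 7))    # end-effector Jacobian (pose rates)
        g_elbow = rng.normal(size=(1, 7))   # d(elbow angle)/d(joint angles)

        J_aug = np.vstack([J_task, g_elbow])          # augmented 7x7 Jacobian
        xdot = np.concatenate([np.ones(6), [0.2]])    # pose rates + elbow rate

        qdot = np.linalg.solve(J_aug, xdot)           # unique joint rates
        print(np.allclose(J_aug @ qdot, xdot))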

  9. Development of a two-dimensional zonally averaged statistical-dynamical model. III - The parameterization of the eddy fluxes of heat and moisture

    NASA Technical Reports Server (NTRS)

    Stone, Peter H.; Yao, Mao-Sung

    1990-01-01

    A number of perpetual January simulations are carried out with a two-dimensional zonally averaged model employing various parameterizations of the eddy fluxes of heat (potential temperature) and moisture. The parameterizations are evaluated by comparing these results with the eddy fluxes calculated in a parallel simulation using a three-dimensional general circulation model with zonally symmetric forcing. The three-dimensional model's performance in turn is evaluated by comparing its results using realistic (nonsymmetric) boundary conditions with observations. Branscome's parameterization of the meridional eddy flux of heat and Leovy's parameterization of the meridional eddy flux of moisture simulate the seasonal and latitudinal variations of these fluxes reasonably well, while somewhat underestimating their magnitudes. New parameterizations of the vertical eddy fluxes are developed that take into account the enhancement of the eddy mixing slope in a growing baroclinic wave due to condensation, and also the effect of eddy fluctuations in relative humidity. The new parameterizations, when tested in the two-dimensional model, simulate the seasonal, latitudinal, and vertical variations of the vertical eddy fluxes quite well, when compared with the three-dimensional model, and only underestimate the magnitude of the fluxes by 10 to 20 percent.

  10. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    NASA Astrophysics Data System (ADS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-02-01

    In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations, or combinations of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated by the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed profiles similar to the measured PRFs. OSEM-reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of the H matrix acquired by a full 3D grid-scan experiment. Relative to a full 1.0-mm grid scan, the simplified grid pattern reduced the acquisition time by factors of about 15.2 and 62.2 on 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing, interpolated by the proposed method, would shorten the acquisition time by an additional factor of 8.
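
    A hypothetical sketch of the interpolation step (the coefficient layout and weighting form are assumptions, not the published DW-GIMGPE details): each measured voxel contributes its fitted 2D-Gaussian coefficients to a missing voxel with an inverse-distance weight:

        import numpy as np

        positions = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # measured voxels (mm)
        coeffs = np.array([[1.00, 5.0, 5.0, 1.20],                  # [amp, cx, cy, sigma]
                           [0.90, 7.1, 5.0, 1.30],
                           [0.95, 5.0, 7.2, 1.25]])

        def interp_coeffs(target, positions, coeffs, p=2.0, eps=1e-9):
            d = np.linalg.norm(positions - target, axis=1)
            w = 1.0 / (d ** p + eps)                 # inverse-distance weights
            return (w[:, None] * coeffs).sum(axis=0) / w.sum()

        print(interp_coeffs(np.array([1.0, 1.0]), positions, coeffs))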

  11. Resolution-dependent behavior of subgrid-scale vertical transport in the Zhang-McFarlane convection parameterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Heng; Gustafson, Jr., William I.; Hagos, Samson M.

    2015-04-18

    To better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution dependence of subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km².

  12. Parameterizing by the Number of Numbers

    NASA Astrophysics Data System (ADS)

    Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.

    The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
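
    To see why the number of distinct integers is a natural parameter, consider Subset Sum: an input with only k distinct values v_i, each of multiplicity m_i, is solved by choosing counts x_i in {0, ..., m_i} with sum x_i * v_i = t, an integer program whose dimension depends on k rather than on the multiset's length. A brute-force sketch of that reformulation (the FPT result cited above replaces the enumeration with ILP feasibility):

        from itertools import product

        def subset_sum_by_distinct(values, mults, t):
            """Search count vectors (x_i) with sum x_i * v_i == t."""
            for counts in product(*(range(m + 1) for m in mults)):
                if sum(c * v for c, v in zip(counts, values)) == t:
                    return dict(zip(values, counts))
            return None

        # the multiset {3, 3, 3, 5, 5, 7} has k = 3 distinct numbers
        print(subset_sum_by_distinct([3, 5, 7], [3, 2, 1], 13))   # {3: 1, 5: 2, 7: 0}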

  13. Collaborative Research: Reducing tropical precipitation biases in CESM — Tests of unified parameterizations with ARM observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent; Gettelman, Andrew; Morrison, Hugh

    In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.

  14. Treatment of temporal aliasing effects in the context of next generation satellite gravimetry missions

    NASA Astrophysics Data System (ADS)

    Daras, Ilias; Pail, Roland

    2017-09-01

    Temporal aliasing effects have a large impact on the gravity field accuracy of current gravimetry missions and are also expected to dominate the error budget of Next Generation Gravimetry Missions (NGGMs). This paper focuses on aspects concerning their treatment in the context of Low-Low Satellite-to-Satellite Tracking NGGMs. Closed-loop full-scale simulations are performed for a two-pair Bender-type Satellite Formation Flight (SFF), by taking into account error models of new generation instrument technology. The enhanced spatial sampling and error isotropy enable a further reduction of temporal aliasing errors from the processing perspective. A parameterization technique is adopted where the functional model is augmented by low-resolution gravity field solutions coestimated at short time intervals, while the remaining higher-resolution gravity field solution is estimated at a longer time interval. Fine-tuning the parameterization choices leads to significant reduction of the temporal aliasing effects. The investigations reveal that the parameterization technique in case of a Bender-type SFF can successfully mitigate aliasing effects caused by undersampling of high-frequency atmospheric and oceanic signals, since their most significant variations can be captured by daily coestimated solutions. This amounts to a "self-dealiasing" method that differs significantly from the classical dealiasing approach used nowadays for Gravity Recovery and Climate Experiment processing, enabling NGGMs to retrieve the complete spectrum of Earth's nontidal geophysical processes, including, for the first time, high-frequency atmospheric and oceanic variations.
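
    A schematic numpy sketch of the co-estimation ("self-dealiasing") idea, with stand-in matrices rather than a real gravity-field observation model: daily low-resolution parameter blocks absorb high-frequency signal while one high-resolution set is estimated over the full interval:

        import numpy as np

        rng = np.random.default_rng(2)
        n_obs, n_hi, n_lo, n_days = 400, 30, 5, 4

        A_hi = rng.normal(size=(n_obs, n_hi))      # high-res field, full interval
        A_lo = np.zeros((n_obs, n_lo * n_days))    # low-res field, one block per day
        per_day = n_obs // n_days
        for d in range(n_days):
            rows = slice(d * per_day, (d + 1) * per_day)
            A_lo[rows, d * n_lo:(d + 1) * n_lo] = rng.normal(size=(per_day, n_lo))

        A = np.hstack([A_hi, A_lo])                # augmented functional model
        x_true = rng.normal(size=A.shape[1])
        y = A @ x_true + 1e-3 * rng.normal(size=n_obs)

        x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
        print(np.allclose(x_hat, x_true, atol=1e-2))   # both sets recovered jointly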

  15. Analysis of Surface Heterogeneity Effects with Mesoscale Terrestrial Modeling Platforms

    NASA Astrophysics Data System (ADS)

    Simmer, C.

    2015-12-01

    An improved understanding of the full variability in the weather and climate system is crucial for reducing the uncertainty in weather forecasting and climate prediction, and to aid policy makers in developing adaptation and mitigation strategies. A yet unknown part of the uncertainty in predictions from numerical models is caused by the neglect of non-resolved land surface heterogeneity and sub-surface dynamics and their potential impact on the state of the atmosphere. At the same time, mesoscale numerical models using finer horizontal grid resolution [O(1) km] can suffer from inconsistencies and neglected scale-dependencies in ABL parameterizations and from non-resolved effects of integrated surface-subsurface lateral flow at this scale. Our present knowledge suggests large-eddy simulation (LES) as an eventual solution to overcome the inadequacy of the physical parameterizations of the atmosphere in this transition scale, yet we are constrained by computational resources, memory management, and big data when using LES for regional domains. For the present, there is a need for scale-aware parameterizations not only in the atmosphere but also in the land surface and subsurface model components. In this study, we use the recently developed Terrestrial Systems Modeling Platform (TerrSysMP) as a numerical tool to analyze the uncertainty in the simulation of surface exchange fluxes and boundary layer circulations at grid resolutions on the order of 1 km, and explore the sensitivity of atmospheric boundary layer evolution and convective rainfall processes to land surface heterogeneity.

  16. Ice-nucleating particle emissions from photochemically aged diesel and biodiesel exhaust

    NASA Astrophysics Data System (ADS)

    Schill, G. P.; Jathar, S. H.; Kodros, J. K.; Levin, E. J. T.; Galang, A. M.; Friedman, B.; Link, M. F.; Farmer, D. K.; Pierce, J. R.; Kreidenweis, S. M.; DeMott, P. J.

    2016-05-01

    Immersion-mode ice-nucleating particle (INP) concentrations from an off-road diesel engine were measured using a continuous-flow diffusion chamber at -30°C. Both petrodiesel and biodiesel were utilized, and the exhaust was aged up to 1.5 photochemically equivalent days using an oxidative flow reactor. We found that aged and unaged diesel exhaust of both fuels is not likely to contribute to atmospheric INP concentrations at mixed-phase cloud conditions. To explore this further, a new limit-of-detection parameterization for ice nucleation on diesel exhaust was developed. Using a global-chemical transport model, potential black carbon INP (INPBC) concentrations were determined using a current literature INPBC parameterization and the limit-of-detection parameterization. Model outputs indicate that the current literature parameterization likely overemphasizes INPBC concentrations, especially in the Northern Hemisphere. These results highlight the need to integrate new INPBC parameterizations into global climate models as generalized INPBC parameterizations are not valid for diesel exhaust.

  17. Radiative flux and forcing parameterization error in aerosol-free clear skies

    DOE PAGES

    Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...

    2015-07-03

    This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m², while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.

  18. Parameterization Interactions in Global Aquaplanet Simulations

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João

    2018-02-01

    Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.

  19. On testing two major cumulus parameterization schemes using the CSU Regional Atmospheric Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.

    1993-10-01

    One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection in an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes in reproducing the growth and life cycle of a convective system can be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations which have the model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).

  20. Brain Surface Conformal Parameterization Using Riemann Surface Structure

    PubMed Central

    Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung

    2011-01-01

    In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
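
    For reference, the simplification the abstract alludes to can be stated compactly (a standard differential-geometry fact, not quoted from the paper): in conformal coordinates (u, v) with induced metric g = λ(u, v)(du² + dv²), the surface Laplacian reduces to a scaled flat Laplacian,

        \Delta_g f = \frac{1}{\lambda(u,v)} \left( \frac{\partial^2 f}{\partial u^2} + \frac{\partial^2 f}{\partial v^2} \right),

    so a PDE such as Δ_g f = 0 on the cortical surface becomes an ordinary planar problem on the parameter domain.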

  1. Flexible climate modeling systems: Lessons from Snowball Earth, Titan and Mars

    NASA Astrophysics Data System (ADS)

    Pierrehumbert, R. T.

    2007-12-01

    Climate models are only useful to the extent that real understanding can be extracted from them. Most leading-edge problems in climate change, paleoclimate and planetary climate require a high degree of flexibility in terms of incorporating model physics -- for example in allowing methane or CO2 to be a condensible substance instead of water vapor. This puts a premium on model design that allows easy modification, and on physical parameterizations that are close to fundamentals with as little empirical ad-hoc formulation as possible. I will provide examples from two approaches to this problem we have been using at the University of Chicago. The first is the FOAM general circulation model, which is a clean single-executable Fortran-77/C code supported by auxiliary applications in Python and Java. The second is a new approach based on using Python as a shell for assembling compiled-code building blocks into full models. Applications to Snowball Earth, Titan and Mars, as well as pedagogical uses, will be discussed. One painful lesson we have learned is that Fortran-95 is a major impediment to portability and cross-language interoperability; in this light the trend toward Fortran-95 in major modeling groups is seen as a significant step backwards. In this talk, I will focus on modeling projects employing a full representation of atmospheric fluid dynamics, rather than "intermediate complexity" models in which the associated transports are parameterized.

  2. Impact of Apex Model parameterization strategy on estimated benefit of conservation practices

    USDA-ARS?s Scientific Manuscript database

    Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on clay pan soils were developed with two objectives: (1) evaluate model performance of three parameterization strategies on a validation watershed; and (2) compare predictions of water quality benefi...

  3. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somerville, R.C.J.; Iacobellis, S.F.

    2005-03-18

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.

  5. Wave modeling for the Beaufort and Chukchi Seas

    NASA Astrophysics Data System (ADS)

    Rogers, W.; Thomson, J.; Shen, H. H.; Posey, P. G.; Hebert, D. A.

    2016-02-01

    In this presentation, we will discuss the development and application of numerical models for prediction of wind-generated surface gravity waves in the Arctic Ocean, specifically the Beaufort and Chukchi Seas, for which the Office of Naval Research (ONR) has supported two major field campaigns in 2014 and 2015. The modeling platform is the spectral wave model WAVEWATCH III (R) (WW3). We will begin by reviewing progress with the model numerics in 2007 and 2008 that permits efficient application at high latitudes. Then, we will discuss more recent progress (2012 to 2015) adding new physics to WW3 for ice effects. The latter include two parameterizations for dissipation by turbulence at the ice/water interface, and a more complex parameterization which treats the ice as a viscoelastic fluid. With these new physics, the primary challenge is to find observational data suitable for calibration of the parameterizations, and there are concerns about the validity of applying any calibration to the wide variety of ice types that exist in the Arctic (or Southern Ocean). Quality of input is another major challenge, for which some recent progress has been made (at least in the context of ice concentration and ice edge) with data-assimilative ice modeling at NRL. We will discuss our recent work to invert for dissipation rate using data from a 2012 mooring in the Beaufort Sea, how the results vary by season (ice retreat vs. advance), and what this tells us in the context of the complex physical parameterizations used by the model. We will summarize plans for further development of the model, such as adding scattering by floes, through collaboration with IFREMER (France), and improving on the simple "proportional scaling" treatment of the open water source functions in the presence of partial ice cover. Finally, we will discuss lessons learned for wave modeling from the autumn 2015 R/V Sikuliaq cruise supported by ONR.

  6. A Framework for Characterizing how Ice Crystal Size Distributions, Mass-Dimensional and Area-Dimensional Relations Vary with Environmental and Aerosol Properties

    NASA Astrophysics Data System (ADS)

    McFarquhar, G. M.; Finlon, J.; Um, J.; Nesbitt, S. W.; Borque, P.; Chase, R.; Wu, W.; Morrison, H.; Poellot, M.

    2017-12-01

    Parameterizations of fall speed-dimension (V-D), mass (m)-D and projected area (A)-D relationships are needed for development of model parameterization and remote sensing retrieval schemes. An approach for deriving such relations is discussed here that improves upon previously developed schemes in the following aspects: 1) surfaces are used to characterize uncertainties in derived coefficients; 2) all derived relations are internally consistent; and 3) multiple bulk measures are used to derive parameter coefficients. In this study, data collected by two-dimensional optical array probes (OAPs) installed on the University of North Dakota Citation aircraft during the Mid-Latitude Continental Convective Clouds Experiment (MC3E) and during the Olympic Mountains Experiment (OLYMPEX) are used in conjunction with data from a Nevzorov total water content (TWC) probe and ground-based radar data at S-band to test a novel approach that determines m-D relationships for a variety of environments. A surface of equally realizable a and b coefficients, where m = aD^b, in (a,b) phase space is determined using a technique that minimizes the chi-squared difference between both the TWC and radar reflectivity Z derived from the size distributions measured by the OAPs and those directly measured by a TWC probe and radar, accepting as valid all coefficients within a specified tolerance of the minimum chi-squared difference. Because both A and perimeter P can be directly measured by OAPs, coefficients characterizing these relationships are derived using only one bulk parameter constraint derived from the appropriate images. Because terminal velocity parameterizations depend on both A and m, V-D relations can be derived from these self-consistent relations. Using this approach, changes in parameters associated with varying environmental conditions and varying aerosol amounts and compositions can be isolated from changes associated with statistical noise or measurement errors. The applicability of the derived coefficients for a stochastic framework that employs an observationally-constrained dataset to account for coefficient variability within microphysics parameterization schemes is discussed.
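
    A hedged sketch of the coefficient-surface idea with synthetic inputs (one TWC constraint only, whereas the study also uses radar reflectivity): scan m = aD^b coefficients and accept every (a, b) pair within a tolerance of the minimum chi-squared misfit:

        import numpy as np

        D = np.linspace(0.1e-3, 5e-3, 50)      # particle maximum dimension (m)
        N = 1e6 * np.exp(-2e3 * D)             # synthetic size distribution
        dD = D[1] - D[0]
        twc_obs = 1e-4                         # "measured" TWC (kg m^-3)

        a_grid = np.logspace(-3, 1, 80)
        b_grid = np.linspace(1.5, 3.0, 80)
        chi2 = np.empty((a_grid.size, b_grid.size))
        for i, a in enumerate(a_grid):
            for j, b in enumerate(b_grid):
                twc_mod = np.sum(a * D ** b * N) * dD   # TWC implied by (a, b)
                chi2[i, j] = ((twc_mod - twc_obs) / twc_obs) ** 2

        accepted = chi2 <= chi2.min() + 0.01   # "equally realizable" coefficients
        print(accepted.sum(), "of", chi2.size, "(a, b) pairs accepted")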

  7. Planck 2015 results: XIV. Dark energy and modified gravity

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...

    2016-09-20

    For this research, we study the implications of Planck data for models of dark energy (DE) and modified gravity (MG) beyond the standard cosmological constant scenario. We start with cases where the DE only directly affects the background evolution, considering Taylor expansions of the equation of state w(a), as well as principal component analysis and parameterizations related to the potential of a minimally coupled DE scalar field. When estimating the density of DE at early times, we significantly improve present constraints and find that it has to be below ~2% (at 95% confidence) of the critical density, even when forced to play a role for z < 50 only. We then move to general parameterizations of the DE or MG perturbations that encompass both effective field theories and the phenomenology of gravitational potentials in MG models. Lastly, we test a range of specific models, such as k-essence, f(R) theories, and coupled DE. In addition to the latest Planck data, for our main analyses, we use background constraints from baryonic acoustic oscillations, type-Ia supernovae, and local measurements of the Hubble constant. We further show the impact of measurements of the cosmological perturbations, such as redshift-space distortions and weak gravitational lensing. These additional probes are important tools for testing MG models and for breaking degeneracies that are still present in the combination of Planck and background data sets. All results that include only background parameterizations (expansion of the equation of state, early DE, general potentials in minimally-coupled scalar fields or principal component analysis) are in agreement with ΛCDM. Finally, when testing models that also change perturbations (even when the background is fixed to ΛCDM), some tensions appear in a few scenarios: the maximum one found is ~2σ for Planck TT+lowP when parameterizing observables related to the gravitational potentials with a chosen time dependence; the tension increases to, at most, 3σ when external data sets are included. It however disappears when including CMB lensing.

  8. Importance of ensembles in projecting regional climate trends

    NASA Astrophysics Data System (ADS)

    Arritt, Raymond; Daniel, Ariele; Groisman, Pavel

    2016-04-01

    We have performed an ensemble of simulations using RegCM4 to examine the ability to reproduce observed trends in precipitation intensity and to project future changes through the 21st century for the central United States. We created a matrix of simulations over the CORDEX North America domain for 1950-2099 by driving the regional model with two different global models (HadGEM2-ES and GFDL-ESM2M, both for RCP8.5), by performing simulations at both 50 km and 25 km grid spacing, and by using three different convective parameterizations. The result is a set of 12 simulations (two GCMs by two resolutions by three convective parameterizations) that can be used to systematically evaluate the influence of simulation design on predicted precipitation. The two global models were selected to bracket the range of climate sensitivity in the CMIP5 models: HadGEM2-ES has the highest equilibrium climate sensitivity of the CMIP5 models, while GFDL-ESM2M has one of the lowest. Our evaluation metrics differ from many other RCM studies in that we focus on the skill of the models in reproducing past trends rather than the mean climate state. Trends in frequency of extreme precipitation (defined as amounts exceeding 76.2 mm/day) for most simulations are similar to the observed trend, but with notable variations depending on RegCM4 configuration and on the driving GCM. There are complex interactions among resolution, choice of convective parameterization, and the driving GCM that carry over into the future climate projections. We also note that biases in the current climate do not correspond to biases in trends. As an example of these points, the Emanuel scheme is consistently "wet" (positive bias in precipitation) yet it produced the smallest precipitation increase of the three convective parameterizations when used in simulations driven by HadGEM2-ES. However, it produced the largest increase when driven by GFDL-ESM2M. These findings reiterate that ensembles using multiple RCM configurations and driving GCMs are essential for projecting regional climate change, even when a single RCM is used. This research was sponsored by the U.S. Department of Agriculture National Institute of Food and Agriculture.

  9. Tropical Cumulus Convection and Upward Propagating Waves in Middle Atmospheric GCMs

    NASA Technical Reports Server (NTRS)

    Horinouchi, T.; Pawson, S.; Shibata, K.; Langematz, U.; Manzini, E.; Giorgetta, M. A.; Sassi, F.; Wilson, R. J.; Hamilton, K. P.; deGranpre, J.; hide

    2002-01-01

    It is recognized that the resolved tropical wave spectrum can vary considerably between general circulation models (GCMs) and that these differences can have an important impact on the simulated climate. A comprehensive comparison of the waves is presented for the December-January-February period using high-frequency (three-hourly) data archives from eight GCMs and one simple model participating in the GCM Reality Intercomparison Project for SPARC (GRIPS). Quantitative measures of the structure and causes of the wavenumber-frequency structure of resolved waves and their impacts on the climate are given. Space-time spectral analysis reveals that the wave spectrum throughout the middle atmosphere is linked to variability of convective precipitation, which is determined by the parameterized convection. The variability of the precipitation spectrum differs by more than an order of magnitude between the models, with additional changes in the spectral distribution (especially the frequency). These differences can be explained primarily by the choice of different cumulus parameterizations: quasi-equilibrium mass-flux schemes tend to produce small variability, while the moist-convective adjustment scheme is most active. Comparison with observational estimates of precipitation variability suggests that the model values are scattered around the truth. This result indicates that a significant portion of the forcing of the equatorial quasi-biennial oscillation (QBO) is provided by waves with scales that are not resolved in present-day GCMs, since only the moist convective adjustment scheme (which has the largest transient variability) can force a QBO in models that have no parameterization of non-stationary gravity waves. Parameterized cumulus convection also impacts the nonmigrating tides in the equatorial region. In most of the models, momentum transport by diurnal nonmigrating tides in the mesosphere is larger than that by Kelvin waves, being more significant than has been thought. It is shown that the equatorial semi-annual oscillation in the models examined is driven mainly by gravity waves with periods shorter than three days, with at least some contribution from parameterized gravity waves; the contribution from the ultra-fast zonal wavenumber-1 Kelvin waves is negligible.
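
    The space-time spectral analysis referred to above can be sketched compactly. The following is a minimal illustration (not the GRIPS diagnostic code), assuming a 2-D array of three-hourly, equatorially averaged precipitation indexed by (time, longitude); eastward- and westward-moving components appear in opposite (wavenumber, frequency) quadrants.

        import numpy as np

        def wavenumber_frequency_spectrum(field, dt_hours=3.0):
            """Power in zonal wavenumber-frequency space for a (time, lon) field.

            Returns frequencies in cycles per day, integer zonal wavenumbers,
            and the shifted 2-D power spectrum.
            """
            nt, nlon = field.shape
            anom = field - field.mean(axis=0, keepdims=True)  # remove time mean
            coeffs = np.fft.fft2(anom) / (nt * nlon)          # lon -> k, time -> freq
            power = np.fft.fftshift(np.abs(coeffs) ** 2)
            freq = np.fft.fftshift(np.fft.fftfreq(nt, d=dt_hours / 24.0))
            wavenum = np.fft.fftshift(np.fft.fftfreq(nlon, d=1.0 / nlon))
            return freq, wavenum, power

        # Synthetic test: an eastward-propagating wavenumber-1 wave at 0.25 cpd.
        t_days = np.arange(240)[:, None] * (3.0 / 24.0)
        lon = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)[None, :]
        precip = 1.0 + 0.5 * np.cos(lon - 2.0 * np.pi * 0.25 * t_days)
        freq, wavenum, power = wavenumber_frequency_spectrum(precip)
        print(wavenum[power.max(axis=0).argmax()])   # peak at |k| = 1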

  10. Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil

    USDA-ARS?s Scientific Manuscript database

    The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...

  11. Climate and the equilibrium state of land surface hydrology parameterizations

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation for such climatic conditions serves as a measure of the land surface parameterization state under a given forcing. The equilibrium value of this variable for alternate parameterizations of land surface hydrology is determined as a function of climate, and the sensitivity of the surface to shifts and changes in climatic forcing is estimated.

  12. Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.

    2002-01-01

    Ranft has provided parameterizations of Lorentz-invariant differential cross sections for pion and nucleon production in pion-proton collisions, which are compared to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formulae for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed are for charged pions in the initial state and both charged and neutral pions in the final state.
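
    The integration step described here is straightforward to reproduce schematically. In the sketch below, the invariant cross section is an arbitrary placeholder shape (not Ranft's parameterization); the point is only the reduction from E d^3σ/dp^3 to a spectral cross section dσ/dy and a total cross section.

        import numpy as np

        # Placeholder invariant cross section E d^3(sigma)/dp^3, an arbitrary
        # falling shape standing in for the Ranft formulae (units: mb GeV^-2 c^3).
        def invariant_xsec(pT, y):
            return 10.0 * np.exp(-3.0 * pT - 0.5 * y ** 2)

        # d^3p / E = pT dpT dy dphi, so integrating over pT and phi (-> 2*pi)
        # gives the spectral cross section d(sigma)/dy.
        pT = np.linspace(1e-3, 10.0, 2000)
        dpT = pT[1] - pT[0]
        def dsigma_dy(y):
            return 2.0 * np.pi * np.sum(invariant_xsec(pT, y) * pT) * dpT

        # Integrating the spectral cross section over rapidity gives the total.
        y = np.linspace(-4.0, 4.0, 401)
        dy = y[1] - y[0]
        sigma_tot = np.sum([dsigma_dy(yi) for yi in y]) * dy
        print(f"total cross section ~ {sigma_tot:.3f} mb")

    Repeating such integrals over beam energies and fitting the results is what yields the closed-form transport-code expressions the abstract mentions.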

  13. Anisotropic Shear Dispersion Parameterization for Mesoscale Eddy Transport

    NASA Astrophysics Data System (ADS)

    Reckinger, S. J.; Fox-Kemper, B.

    2016-02-01

    The effects of mesoscale eddies are universally treated isotropically in general circulation models. However, the processes that the parameterization approximates, such as shear dispersion, typically have strongly anisotropic characteristics. The Gent-McWilliams/Redi mesoscale eddy parameterization is extended for anisotropy and tested using 1-degree Community Earth System Model (CESM) simulations. The sensitivity of the model to anisotropy includes a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. The parameterization is further extended to include the effects of unresolved shear dispersion, which sets the strength and direction of the anisotropy. The resulting shear dispersion parameterization resembles drifter observations in the spatial distribution of diffusivity, and high-resolution model diagnoses in the distribution of eddy flux orientation.

  14. Controllers, observers, and applications thereof

    NASA Technical Reports Server (NTRS)

    Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)

    2011-01-01

    Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.

  15. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally-representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  16. Electronegativity Equalization Method: Parameterization and Validation for Large Sets of Organic, Organohalogen and Organometal Molecules

    PubMed Central

    Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav

    2007-01-01

    The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for quality of the parameters. We have improved EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically: C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that were not parameterized yet, specifically for Br, I, Fe and Zn. We have also performed crossover validation of all obtained parameters using all training sets that included relevant elements and confirmed that calculated parameters provide accurate charges.
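
    At its core, EEM reduces charge calculation to one linear solve. The sketch below implements the textbook equalization equations with made-up per-atom parameters A_i and B_i (standing in for the fitted electronegativity- and hardness-like coefficients); it shows the structure of the method, not the paper's parameter sets.

        import numpy as np

        def eem_charges(A, B, coords, total_charge=0.0, kappa=1.0):
            """Solve the EEM equalization equations for atomic charges.

            Condition: A_i + B_i*q_i + kappa * sum_{j!=i} q_j / R_ij equals the
            same molecular electronegativity for every atom i, subject to
            sum_i q_i = total_charge. Unknowns: N charges + the equalized value.
            """
            n = len(A)
            R = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            M = np.zeros((n + 1, n + 1))
            rhs = np.zeros(n + 1)
            for i in range(n):
                for j in range(n):
                    M[i, j] = B[i] if i == j else kappa / R[i, j]
                M[i, n] = -1.0        # column for the molecular electronegativity
                rhs[i] = -A[i]
            M[n, :n] = 1.0            # total-charge constraint row
            rhs[n] = total_charge
            return np.linalg.solve(M, rhs)[:n]

        # Toy 3-atom fragment with invented parameters (units glossed over).
        A = np.array([8.5, 7.0, 7.0])
        B = np.array([9.0, 13.0, 13.0])
        xyz = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
        print(eem_charges(A, B, xyz))

    The parameterization task the abstract describes is then the inverse problem: choosing A and B per element so that these charges reproduce ab initio reference charges across the training sets.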

  17. Spectral cumulus parameterization based on cloud-resolving model

    NASA Astrophysics Data System (ADS)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate, which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation, i.e., it reduced the positive bias of precipitation in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were found to derive from the modified entrainment-rate parameterization, which suppressed the excessive increase of entrainment and, in turn, of low-level clouds.

  18. On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Rincon, Rafael; Liao, Liang

    2003-01-01

    Although most parameterizations of the drop size distributions (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° × 5° × 1 month space-time boxes. To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site, where as many as 4 disdrometers have been used to acquire data over a 2 km path. These data should help quantify the temporal form of the covariance matrix at this site.
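
    The linearity property is easy to verify. For a log-normal DSD N(D) = N_T/(√(2π) σ D) exp(-(ln D - μ)²/(2σ²)), the n-th moment is M_n = N_T exp(nμ + n²σ²/2), so ln M_n is linear in (ln N_T, μ, σ²). The sketch below, with illustrative parameter values, checks the closed form against direct integration.

        import numpy as np

        def lognormal_moment(n, NT, mu, sigma):
            """n-th moment of a log-normal DSD: NT * exp(n*mu + n^2 sigma^2 / 2)."""
            return NT * np.exp(n * mu + 0.5 * (n * sigma) ** 2)

        NT, mu, sigma = 8000.0, np.log(1.2), 0.45     # illustrative; D in mm
        Z6 = lognormal_moment(6.0, NT, mu, sigma)     # ~ reflectivity factor

        # Check against brute-force integration of D^6 N(D) dD.
        D = np.linspace(1e-4, 30.0, 200_000)
        dD = D[1] - D[0]
        nD = NT / (np.sqrt(2 * np.pi) * sigma * D) \
             * np.exp(-(np.log(D) - mu) ** 2 / (2 * sigma ** 2))
        print(Z6, np.sum(D ** 6 * nD) * dD)           # should agree closely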

  19. Why is the simulated climatology of tropical cyclones so sensitive to the choice of cumulus parameterization scheme in the WRF model?

    NASA Astrophysics Data System (ADS)

    Zhang, Chunxi; Wang, Yuqing

    2018-01-01

    The sensitivity of simulated tropical cyclones (TCs) to the choice of cumulus parameterization (CP) scheme in the advanced Weather Research and Forecasting Model (WRF-ARW) version 3.5 is analyzed based on ten seasonal simulations with 20-km horizontal grid spacing over the western North Pacific. Results show that the simulated frequency and intensity of TCs are very sensitive to the choice of the CP scheme. The sensitivity can be explained well by the difference in the low-level circulation in a height and sorted-moisture space. By transporting moist static energy from dry to moist regions, the low-level circulation is important to convective self-aggregation, which is believed to be related to the genesis of TC-like vortices (TCLVs) and TCs in idealized settings. The radiative and evaporative cooling associated with low-level clouds and shallow convection in dry regions is found to play a crucial role in driving the moisture-sorted low-level circulation. With shallow convection turned off in a CP scheme, relatively strong precipitation occurs frequently in dry regions; in this case, the diabatic cooling can still drive the low-level circulation, but its strength is reduced and thus TCLV/TC genesis is suppressed. The inclusion of cumulus momentum transport (CMT) in a CP scheme can considerably suppress the genesis of TCLVs/TCs, while changes in the moisture-sorted low-level circulation and the horizontal distribution of precipitation are negligible, indicating that the CMT modulates TCLV/TC activity in the model by mechanisms other than the horizontal transport of moist static energy.

  20. Detection of image structures using the Fisher information and the Rao metric.

    PubMed

    Maybank, Stephen J

    2004-12-01

    In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.
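
    For the line-detection case, the ingredients can be made concrete. Below is a minimal sketch, assuming lines parameterized as x cos θ + y sin θ = ρ and Gaussian perpendicular noise of known standard deviation σ (a simplification of the paper's asymptotic treatment):

        import numpy as np

        def line_fisher_info(points, theta, rho, sigma=0.05):
            """Fisher information (= Rao metric) at line parameters (theta, rho).

            Each point contributes a residual d = x cos(theta) + y sin(theta) - rho
            modeled as N(0, sigma^2), so its information is
            grad(d) grad(d)^T / sigma^2 with grad(d) = (-x sin + y cos, -1).
            """
            x, y = points[:, 0], points[:, 1]
            J = np.stack([-x * np.sin(theta) + y * np.cos(theta),
                          -np.ones_like(x)], axis=1)
            return J.T @ J / sigma ** 2

        rng = np.random.default_rng(0)
        pts = rng.uniform(-1.0, 1.0, size=(100, 2))
        I = line_fisher_info(pts, theta=0.3, rho=0.1)
        # The Rao volume element sqrt(det I) sets the local sampling density:
        # more candidate (theta, rho) samples are placed where it is large.
        print(np.sqrt(np.linalg.det(I)))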

  1. High Resolution Climate Modeling of the Water Cycle over the Contiguous United States Including Potential Climate Change Scenarios

    NASA Astrophysics Data System (ADS)

    Rasmussen, R.; Ikeda, K.; Liu, C.; Gochis, D.; Chen, F.; Barlage, M. J.; Dai, A.; Dudhia, J.; Clark, M. P.; Gutmann, E. D.; Li, Y.

    2015-12-01

    The NCAR Water System program strives to improve the full representation of the water cycle in both regional and global models. Our previous high-resolution simulations using the WRF model over the Rocky Mountains revealed that proper spatial and temporal depiction of snowfall adequate for water resource and climate change purposes can be achieved with the appropriate choice of model grid spacing (< 6 km horizontal) and parameterizations. The climate sensitivity experiment consistent with expected climate change showed an altered hydrological cycle with increased fraction of rain versus snow, increased snowfall at high altitudes, earlier melting of snowpack, and decreased total runoff. In order to investigate regional differences between the Rockies and other major mountain barriers and to study climate change impacts over other regions of the contiguous U.S. (CONUS), we have expanded our prior CO Headwaters modeling study to encompass most of North America at a horizontal grid spacing of 4 km. A domain expansion provides the opportunity to assess changes in orographic precipitation across different mountain ranges in the western USA, as well as the very dominant role of convection in the eastern half of the USA. The high resolution WRF-downscaled climate change data will also become a valuable community resource for many university groups who are interested in studying regional climate changes and impacts but unable to perform such long-duration and high-resolution WRF-based downscaling simulations of their own. The scientific goals and details of the model dataset will be presented including some preliminary results.

  2. FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard C. J. Somerville

    2009-02-27

    Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.

  3. How certain are the process parameterizations in our models?

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard

    2016-04-01

    Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), process parameterization and parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise, the parameter values are the only elements of a model that can vary, while the rest of the modeling elements are often fixed a priori and therefore not subject to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process; the only flexibility comes from the changing parameter values, which enable these models to reproduce the desired observation. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. However, what remains unexplored, in our view, is to what extent the process parameterization and system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model, given its system architecture and data, when little or no assumption is made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms from those that would otherwise have been chosen. This comparison provides an assessment of how uncertain our perception of model process parameterization is, relative to what the data can actually support.

  4. Sensitivity of Pacific Cold Tongue and Double-ITCZ Bias to Convective Parameterization

    NASA Astrophysics Data System (ADS)

    Woelfle, M.; Bretherton, C. S.; Pritchard, M. S.; Yu, S.

    2016-12-01

    Many global climate models struggle to accurately simulate annual mean precipitation and sea surface temperature (SST) fields in the tropical Pacific basin. Precipitation biases are dominated by the double intertropical convergence zone (ITCZ) bias, where models exhibit precipitation maxima straddling the equator while only a single Northern Hemispheric maximum exists in observations. The major SST bias is the enhancement of the equatorial cold tongue. A series of coupled model simulations are used to investigate the sensitivity of the bias development to convective parameterization. Model components are initialized independently prior to coupling to allow analysis of the transient response of the system directly following coupling. These experiments show precipitation and SST patterns to be highly sensitive to convective parameterization. Simulations in which the deep convective parameterization is disabled, forcing all convection to be resolved by the shallow convection parameterization, showed a degradation in both the cold tongue and double-ITCZ biases as precipitation becomes focused into off-equatorial regions of local SST maxima. Simulations using superparameterization in place of traditional cloud parameterizations showed a reduced cold tongue bias at the expense of additional precipitation biases. The equatorial SST responses to changes in convective parameterization are driven by changes in near-equatorial zonal wind stress. The sensitivity of convection to SST is important in determining the precipitation and wind stress fields. However, differences in convective momentum transport also play a role. While no significant improvement is seen in these simulations of the double-ITCZ, the system's sensitivity to these changes reaffirms that improved convective parameterizations may provide an avenue for improving simulations of tropical Pacific precipitation and SST.

  5. Parameterization of ALMANAC crop simulation model for non-irrigated dry bean in semi-arid temperate areas in Mexico

    USDA-ARS?s Scientific Manuscript database

    Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...

  6. A parameterization scheme for the x-ray linear attenuation coefficient and energy absorption coefficient.

    PubMed

    Midgley, S M

    2004-01-21

    A novel parameterization of x-ray interaction cross-sections is developed, and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy dependent coefficients describe the Z-direction curvature of the cross-sections. The composition dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV), with an accuracy of better than 2% using a parameterization containing not more than five coefficients. For the biologically important elements 1 ≤ Z ≤ 20, and the energy range 30-150 keV, the parameterization utilizes four coefficients. At higher energies, the parameterization uses fewer coefficients, with only two coefficients needed at megavoltage energies.
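
    The flavor of such a scheme can be illustrated with a toy fit. The sketch below fits a five-coefficient polynomial in Z to synthetic per-electron cross sections at one energy and applies it to a mixture; the functional form and numbers are placeholders, not Midgley's actual parameterization.

        import numpy as np

        Z = np.arange(1, 21, dtype=float)            # elements 1 <= Z <= 20
        sigma_e = 0.20 + 0.015 * Z + 3e-4 * Z ** 3   # synthetic stand-in data

        coeffs = np.polyfit(Z, sigma_e, deg=4)       # five coefficients
        rel_err = np.max(np.abs(np.polyval(coeffs, Z) - sigma_e) / sigma_e)
        print(f"max relative error over Z = 1..20: {rel_err:.1e}")

        # A mixture enters through its electron density and the distribution of
        # its elements; e.g., an electron-fraction-weighted cross section:
        frac = np.array([0.2, 0.5, 0.3])             # hypothetical H/O/Ca mixture
        mix_Z = np.array([1.0, 8.0, 20.0])
        print(np.sum(frac * np.polyval(coeffs, mix_Z)))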

  7. Investigating the scale-adaptivity of a shallow cumulus parameterization scheme with LES

    NASA Astrophysics Data System (ADS)

    Brast, Maren; Schemann, Vera; Neggers, Roel

    2017-04-01

    In this study we investigate the scale-adaptivity of a new parameterization scheme for shallow cumulus clouds in the gray zone. The Eddy-Diffusivity Multiple Mass-Flux (or ED(MF)n ) scheme is a bin-macrophysics scheme, in which subgrid transport is formulated in terms of discretized size densities. While scale-adaptivity in the ED-component is achieved using a pragmatic blending approach, the MF-component is filtered such that only the transport by plumes smaller than the grid size is maintained. For testing, ED(MF)n is implemented in a large-eddy simulation (LES) model, replacing the original subgrid-scheme for turbulent transport. LES thus plays the role of a non-hydrostatic testing ground, which can be run at different resolutions to study the behavior of the parameterization scheme in the boundary-layer gray zone. In this range convective cumulus clouds are partially resolved. We find that at high resolutions the clouds and the turbulent transport are predominantly resolved by the LES, and the transport represented by ED(MF)n is small. This partitioning changes towards coarser resolutions, with the representation of shallow cumulus clouds becoming exclusively carried by the ED(MF)n. The way the partitioning changes with grid-spacing matches the results of previous LES studies, suggesting some scale-adaptivity is captured. Sensitivity studies show that a scale-inadaptive ED component stays too active at high resolutions, and that the results are fairly insensitive to the number of transporting updrafts in the ED(MF)n scheme. Other assumptions in the scheme, such as the distribution of updrafts across sizes and the value of the area fraction covered by updrafts, are found to affect the location of the gray zone.

  8. A revised radiation package of G-packed McICA and two-stream approximation: Performance evaluation in a global weather forecasting model

    NASA Astrophysics Data System (ADS)

    Baek, Sunghye

    2017-07-01

    For more efficient and accurate computation of radiative flux, improvements have been achieved in two aspects: the integration of the radiative transfer equation over space and over angle. First, the treatment of the Monte Carlo-independent column approximation (McICA) is modified with a focus on efficiency, using a reduced number of random samples ("G-packed") within a reconstructed and unified radiation package. The original McICA takes 20% of the radiation CPU time in the Global/Regional Integrated Model system (GRIMs). The CPU time consumption of McICA is reduced by 70% without compromising accuracy. Second, parameterizations of shortwave two-stream approximations are revised to reduce errors with respect to the 16-stream discrete ordinate method. The delta-scaled two-stream approximation (TSA) is almost universally used in general circulation models (GCMs) but contains systematic errors that overestimate forward-peak scattering as solar elevation decreases. These errors are alleviated by adjusting the parameterizations for each scattering element: aerosol, liquid, ice and snow cloud particles. Parameterizations are determined with 20,129 atmospheric columns of the GRIMs data and tested with 13,422 independent data columns. The result shows that the root-mean-square error (RMSE) over all atmospheric layers is decreased by 39% on average without a significant increase in computational time. The revised TSA, developed and validated with a separate one-dimensional model, is mounted on GRIMs for mid-term numerical weather forecasting. Monthly averaged global forecast skill scores are unchanged with the revised TSA, but the temperature at lower levels of the atmosphere (pressure ≥ 700 hPa) is slightly increased (< 0.5 K) with the corrected atmospheric absorption.
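
    For reference, the conventional delta scaling that such revisions start from (the Joseph et al. 1976 delta-Eddington form, with forward-peak fraction f = g²; the revised per-species fits in the abstract go beyond this) transforms the optical properties before the two-stream solve:

        def delta_scale(tau, omega, g):
            """Delta-scale optical depth, single-scattering albedo, asymmetry factor."""
            f = g * g                                  # forward-peak fraction
            tau_p = (1.0 - omega * f) * tau
            omega_p = (1.0 - f) * omega / (1.0 - omega * f)
            g_p = (g - f) / (1.0 - f)
            return tau_p, omega_p, g_p

        # Example: a strongly forward-scattering cloud layer.
        print(delta_scale(tau=10.0, omega=0.999, g=0.85))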

  9. Assessing uncertainty and sensitivity of model parameterizations and parameters in WRF affecting simulated surface fluxes and land-atmosphere coupling over the Amazon region

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.

    2016-12-01

    This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 of the 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate over interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
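
    The variance-decomposition idea can be sketched in a few lines. The toy below builds a synthetic full-factorial ensemble over the four scheme families and reports, for each family, the fraction of total variance explained by the between-group spread of a diagnostic; the study's stratified 120-member design and multi-way ANOVA are more elaborate.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(1)
        # Synthetic additive effects for 6 microphysics, 3 convection, 6 PBL,
        # and 3 land-surface options (a full factorial here for simplicity).
        sizes = [6, 3, 6, 3]
        spreads = [1.0, 2.0, 0.5, 0.3]
        effects = [rng.normal(0.0, s, n) for s, n in zip(spreads, sizes)]
        design = list(product(*[range(n) for n in sizes]))
        y = np.array([sum(e[i] for e, i in zip(effects, run)) for run in design])
        y += rng.normal(0.0, 0.2, y.size)          # residual/interaction noise

        for name, axis in zip(["microphysics", "convection", "PBL", "land"], range(4)):
            means = np.array([y[[run[axis] == lev for run in design]].mean()
                              for lev in range(sizes[axis])])
            print(f"{name:12s} ~{100.0 * means.var() / y.var():5.1f}% of variance")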

  10. Confronting Models with Data: The GEWEX Cloud Systems Study

    NASA Technical Reports Server (NTRS)

    Randall, David; Curry, Judith; Duynkerke, Peter; Krueger, Steven; Moncrieff, Mitchell; Ryan, Brian; Starr, David OC.; Miller, Martin; Rossow, William; Tselioudis, George

    2002-01-01

    The GEWEX Cloud System Study (GCSS; GEWEX is the Global Energy and Water Cycle Experiment) was organized to promote development of improved parameterizations of cloud systems for use in climate and numerical weather prediction models, with an emphasis on the climate applications. The strategy of GCSS is to use two distinct kinds of models to analyze and understand observations of the behavior of several different types of cloud systems. Cloud-system-resolving models (CSRMs) have high enough spatial and temporal resolutions to represent individual cloud elements, but cover a wide enough range of space and time scales to permit statistical analysis of simulated cloud systems. Results from CSRMs are compared with detailed observations, representing specific cases based on field experiments, and also with statistical composites obtained from satellite and meteorological analyses. Single-column models (SCMs) are the surgically extracted column physics of atmospheric general circulation models. SCMs are used to test cloud parameterizations in an uncoupled mode, by comparison with field data and statistical composites. In the original GCSS strategy, data are collected in various field programs and provided to the CSRM community, which uses the data to "certify" the CSRMs as reliable tools for the simulation of particular cloud regimes, and then uses the CSRMs to develop parameterizations, which are provided to the GCM community. We report here the results of a re-thinking of the scientific strategy of GCSS, which takes into account the practical issues that arise in confronting models with data. The main elements of the proposed new strategy are a more active role for the large-scale modeling community, and an explicit recognition of the importance of data integration.

  11. Experimental study of H2SO4 aerosol nucleation at high ionization levels

    NASA Astrophysics Data System (ADS)

    Tomicic, Maja; Bødker Enghoff, Martin; Svensmark, Henrik

    2018-04-01

    One hundred and ten direct measurements of aerosol nucleation rate at high ionization levels were performed in an 8 m^3 reaction chamber. Neutral and ion-induced particle formation from sulfuric acid (H2SO4) was studied as a function of ionization and H2SO4 concentration. Other species that could have participated in the nucleation, such as NH3 or organic compounds, were not measured but assumed constant, and their concentration was estimated based on the parameterization by Gordon et al. (2017). Our parameter space is thus [H2SO4] = 4×10^6-3×10^7 cm^-3, [NH3 + org] = 2.2 ppb, T = 295 K, RH = 38%, and ion concentrations of 1700-19,000 cm^-3. The ion concentrations, which correspond to levels caused by a nearby supernova, were achieved with gamma ray sources. Nucleation rates were directly measured with a particle size magnifier (PSM Airmodus A10) at a size close to the critical cluster size (mobility diameter of ~1.4 nm), and formation rates at a mobility diameter of ~4 nm were measured with a CPC (TSI model 3775). The measurements show that nucleation increases by around an order of magnitude when the ionization increases from background to supernova levels under fixed gas conditions. The results expand the parameterization presented in Dunne et al. (2016) and Gordon et al. (2017) (for [NH3 + org] = 2.2 ppb and T = 295 K) to lower sulfuric acid concentrations and higher ionization levels.
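
    Parameterizations of this kind are often summarized as power laws in the precursor and ion concentrations. The sketch below fits log J = log A + a log[H2SO4] + b log[ions] to synthetic data spanning the experiment's ranges; the exponents and prefactor are invented, and the actual Dunne et al. (2016) parameterization has a more detailed functional form.

        import numpy as np

        rng = np.random.default_rng(2)
        h2so4 = rng.uniform(4e6, 3e7, 110)        # cm^-3, measured range
        ions = rng.uniform(1.7e3, 1.9e4, 110)     # cm^-3, gamma-source range
        a_true, b_true, logA = 2.5, 1.0, -18.0    # invented "truth"
        logJ = logA + a_true * np.log10(h2so4) + b_true * np.log10(ions)
        logJ += rng.normal(0.0, 0.1, logJ.size)   # measurement scatter

        # Ordinary least squares in log space recovers the exponents.
        X = np.column_stack([np.ones_like(logJ), np.log10(h2so4), np.log10(ions)])
        coef, *_ = np.linalg.lstsq(X, logJ, rcond=None)
        print(f"fitted exponents: a = {coef[1]:.2f}, b = {coef[2]:.2f}")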

  12. Closed Loop System Identification with Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.

    2004-01-01

    High performance control design for a flexible space structure is challenging since high fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. Closed loop system identification is often required to obtain a multivariable open loop plant model based on closed-loop response data. In order to provide an accurate initial plant model to guarantee convergence for standard local optimization methods, this paper presents a global parameter optimization method using genetic algorithms. A minimal representation of the state space dynamics is employed to mitigate the non-uniqueness and over-parameterization of general state space realizations. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance.
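
    A stripped-down version of the idea, assuming a known-order ARX-style model and open-loop data rather than the paper's closed-loop, control-relevant setup, shows how a GA can search the parameter space of a minimal model representation:

        import numpy as np

        rng = np.random.default_rng(3)
        true = np.array([1.5, -0.7, 0.5, 0.3])        # a1, a2, b1, b2
        u = rng.normal(size=300)                      # excitation input

        def simulate(p, u):
            """y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]."""
            y = np.zeros_like(u)
            for k in range(2, u.size):
                y[k] = p[0]*y[k-1] + p[1]*y[k-2] + p[2]*u[k-1] + p[3]*u[k-2]
            return y

        y_meas = simulate(true, u) + rng.normal(0.0, 0.05, u.size)

        def fitness(p):
            mse = np.mean((y_meas - simulate(p, u)) ** 2)
            return -mse if np.isfinite(mse) else -np.inf   # unstable -> worst

        pop = rng.uniform(-1.9, 1.9, size=(60, 4))         # initial population
        for _ in range(200):
            scores = np.array([fitness(p) for p in pop])
            elite = pop[np.argsort(scores)[-20:]]          # keep best third
            kids = elite[rng.integers(0, 20, 40)] + rng.normal(0.0, 0.05, (40, 4))
            pop = np.vstack([elite, kids])                 # mutated offspring

        print(pop[np.argmax([fitness(p) for p in pop])], " vs true", true)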

  13. Methods of testing parameterizations: Vertical ocean mixing

    NASA Technical Reports Server (NTRS)

    Tziperman, Eli

    1992-01-01

    The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean is of scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.

  14. Pointwise regularity of parameterized affine zipper fractal curves

    NASA Astrophysics Data System (ADS)

    Bárány, Balázs; Kiss, Gergely; Kolossváry, István

    2018-05-01

    We study the pointwise regularity of zipper fractal curves generated by affine mappings. Under the assumption of dominated splitting of index-1, we calculate the Hausdorff dimension of the level sets of the pointwise Hölder exponent for a subinterval of the spectrum. We give an equivalent characterization for the existence of a regular pointwise Hölder exponent for Lebesgue almost every point. In this case, we extend the multifractal analysis to the full spectrum. In particular, we apply our results to de Rham's curve.

  15. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    NASA Astrophysics Data System (ADS)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

  16. Biological engineering applications of feedforward neural networks designed and parameterized by genetic algorithms.

    PubMed

    Ferentinos, Konstantinos P

    2005-09-01

    Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by an evolutionary method based on the evolutionary process of genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for the encoding of NN topologies and training parameters into genes of the genetic algorithm (GA). Some a priori knowledge of the demands in network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to some reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA also encoded the type of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm that was used by the backpropagation algorithm for the training of the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach that is usually used for these tasks.

  17. The anisotropic Wilson gauge action

    NASA Astrophysics Data System (ADS)

    Klassen, Timothy R.

    1998-11-01

    Anisotropic lattices, with a temporal lattice spacing smaller than the spatial one, allow precision Monte Carlo calculations of problems that are difficult to study otherwise: heavy quarks, glueballs, hybrids, and high temperature thermodynamics, for example. We here perform the first step required for such studies with the (quenched) Wilson gauge action, namely, the determination of the renormalized anisotropy Ξ as a function of the bare anisotropy Ξ0 and the coupling. By, essentially, comparing the finite-volume heavy quark potential where the quarks are separated along a spatial direction with that where they are separated along the time direction, we determine the relation between Ξ and Ξ0 to a fraction of 1% for weak and to 1% for strong coupling. We present a simple parameterization of this relation for 1 ⩽ Ξ ⩽ 6 and 5.5 ⩽ β ⩽ ∞, which incorporates the known one-loop result and reproduces our non-perturbative determinations within errors. Besides solving the problem of how to choose the bare anisotropies if one wants to take the continuum limit at fixed renormalized anisotropy, this parameterization also yields accurate estimates of the derivative ∂Ξ0/∂Ξ needed in thermodynamic studies.

  18. Triadic split-merge sampler

    NASA Astrophysics Data System (ADS)

    van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap

    2018-04-01

    In machine vision, typical heuristic methods to extract parameterized objects out of raw data points are the Hough transform and RANSAC. Bayesian models carry the promise of optimally extracting such parameterized objects given a correct definition of the model and the type of noise at hand. A category of solvers for Bayesian models are Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. To address this, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, that performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster that is composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in this machine vision context, its applicability extends to the very general domain of statistical inference.
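
    The mechanics of the proposals are simple to sketch on a label vector; the Metropolis-Hastings accept/reject step, which needs the model's likelihood and the proposal densities, is omitted here. The triadic variant extends this by acting on up to three clusters at once, so a proposed cluster can be assembled from points of two existing ones.

        import numpy as np

        rng = np.random.default_rng(4)

        def propose_split(labels, cluster, new_label):
            """Randomly bisect `cluster` into itself and `new_label`."""
            out = labels.copy()
            members = np.flatnonzero(labels == cluster)
            out[members[rng.random(members.size) < 0.5]] = new_label
            return out

        def propose_merge(labels, keep, absorb):
            """Relabel every point of `absorb` into `keep`."""
            out = labels.copy()
            out[out == absorb] = keep
            return out

        labels = rng.integers(0, 3, size=12)
        print("initial:   ", labels)
        print("split 0->3:", propose_split(labels, cluster=0, new_label=3))
        print("merge 2->1:", propose_merge(labels, keep=1, absorb=2))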

  19. Multiscale Cloud System Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell W.

    2009-01-01

    The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.

  20. Comparison of Gravity Wave Temperature Variances from Ray-Based Spectral Parameterization of Convective Gravity Wave Drag with AIRS Observations

    NASA Technical Reports Server (NTRS)

    Choi, Hyun-Joo; Chun, Hye-Yeong; Gong, Jie; Wu, Dong L.

    2012-01-01

    The realism of ray-based spectral parameterization of convective gravity wave drag, which considers the updated moving speed of the convective source and multiple wave propagation directions, is tested against the Atmospheric Infrared Sounder (AIRS) onboard the Aqua satellite. Offline parameterization calculations are performed using the global reanalysis data for January and July 2005, and gravity wave temperature variances (GWTVs) are calculated at z = 2.5 hPa (unfiltered GWTV). AIRS-filtered GWTV, which is directly compared with AIRS, is calculated by applying the AIRS visibility function to the unfiltered GWTV. A comparison between the parameterization calculations and AIRS observations shows that the spatial distribution of the AIRS-filtered GWTV agrees well with that of the AIRS GWTV. However, the magnitude of the AIRS-filtered GWTV is smaller than that of the AIRS GWTV. When an additional cloud top gravity wave momentum flux spectrum with longer horizontal wavelength components that were obtained from the mesoscale simulations is included in the parameterization, both the magnitude and spatial distribution of the AIRS-filtered GWTVs from the parameterization are in good agreement with those of the AIRS GWTVs. The AIRS GWTV can be reproduced reasonably well by the parameterization not only with multiple wave propagation directions but also with two wave propagation directions of 45 degrees (northeast-southwest) and 135 degrees (northwest-southeast), which are optimally chosen for computational efficiency.

  1. The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures

    NASA Technical Reports Server (NTRS)

    Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

    The microphysical parameterization of clouds and rain-cells plays a central role in atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the raindrop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 0.55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that in general all but two parameterizations produce calculated T_B's that fall within the observed clear-air minima and maxima. The exceptions are parameterizations that enhance the scattering characteristics of frozen hydrometeors.

  2. The search for di-lepton signatures from squarks and gluinos in antiproton-proton collisions at 1.8 TeV

    NASA Astrophysics Data System (ADS)

    Genik, Richard Joyner, II

    1998-12-01

    A search for Supergravity squark and gluino decays into di-leptons is presented. A novel search strategy of optimizing kinematic thresholds at each point in the three-dimensional space of m0-m1/2-tan β is employed. The model space is randomly scanned using a parameterized fast Monte Carlo. No events are observed above the Standard Model background in 107.6 pb-1 of Tevatron data collected by the DØ detector between 1993-96. Exclusion contours are presented in the m0-m1/2 plane. At the 95% confidence level, a lower limit is set on the mass of gluinos of 129 GeV/c2 and on the mass of squarks of 138 GeV/c2 for all tan β < 10.

  3. Simulating visibility under reduced acuity and contrast sensitivity.

    PubMed

    Thompson, William B; Legge, Gordon E; Kersten, Daniel J; Shakespeare, Robert A; Lei, Quan

    2017-04-01

    Architects and lighting designers have difficulty designing spaces that are accessible to those with low vision, since the complex nature of most architectural spaces requires a site-specific analysis of the visibility of mobility hazards and key landmarks needed for navigation. We describe a method that can be utilized in the architectural design process for simulating the effects of reduced acuity and contrast on visibility. The key contribution is the development of a way to parameterize the simulation using standard clinical measures of acuity and contrast sensitivity. While these measures are known to be imperfect predictors of visual function, they provide a way of characterizing general levels of visual performance that is familiar to both those working in low vision and our target end-users in the architectural and lighting-design communities. We validate the simulation using a letter-recognition task.
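
    One common way to drive such a simulation from a clinical acuity score, assumed here for illustration (the paper's calibration may differ), is to map logMAR acuity to a cutoff spatial frequency and blur accordingly; reduced contrast sensitivity can then be approximated by additionally scaling image contrast.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def simulate_reduced_acuity(image, logmar, pixels_per_degree):
            """Blur an image to approximate a given letter acuity.

            Assumed mapping: MAR = 10**logmar arcmin, cutoff frequency
            f_c = 30 / MAR cycles per degree; the Gaussian is chosen so its
            modulation transfer falls to 0.5 at f_c.
            """
            f_c = 30.0 / (10.0 ** logmar)                 # cycles per degree
            sigma_deg = np.sqrt(np.log(2.0) / 2.0) / (np.pi * f_c)
            return gaussian_filter(image, sigma=sigma_deg * pixels_per_degree)

        img = np.zeros((128, 128))
        img[32:96, 60:68] = 1.0                           # a bar target
        print(simulate_reduced_acuity(img, logmar=1.0, pixels_per_degree=30).max())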

  4. Simulating Visibility Under Reduced Acuity and Contrast Sensitivity

    PubMed Central

    Thompson, William B.; Legge, Gordon E.; Kersten, Daniel J.; Shakespeare, Robert A.; Lei, Quan

    2017-01-01

    Architects and lighting designers have difficulty designing spaces that are accessible to those with low vision, since the complex nature of most architectural spaces requires a site-specific analysis of the visibility of mobility hazards and key landmarks needed for navigation. We describe a method that can be utilized in the architectural design process for simulating the effects of reduced acuity and contrast on visibility. The key contribution is the development of a way to parameterize the simulation using standard clinical measures of acuity and contrast sensitivity. While these measures are known to be imperfect predictors of visual function, they provide a way of characterizing general levels of visual performance that is familiar to both those working in low vision and our target end-users in the architectural and lighting design communities. We validate the simulation using a letter recognition task. PMID:28375328

  5. Integrated control-structure design

    NASA Technical Reports Server (NTRS)

    Hunziker, K. Scott; Kraft, Raymond H.; Bossi, Joseph A.

    1991-01-01

    A new approach for the design and control of flexible space structures is described. The approach integrates the structure and controller design processes, thereby providing extra opportunities for avoiding some of the disastrous effects of control-structure interaction and for discovering new, unexpected avenues of future structural design. A control formulation based on Boyd's implementation of the Youla parameterization is employed. Control design parameters are coupled with structural design variables to produce a set of integrated-design variables which are selected through optimization-based methodology. A performance index reflecting spacecraft mission goals and constraints is formulated and optimized with respect to the integrated design variables. Initial studies have been concerned with achieving mission requirements with a lighter, more flexible space structure. Details of the formulation of the integrated-design approach are presented and results are given from a study involving the integrated redesign of a flexible geostationary platform.
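
    For context, in the simplest (stable-plant, unity-feedback) case the Youla parameterization takes the form below; Boyd's formulation generalizes this to unstable plants via coprime factorizations:

        K(s) = Q(s)\,\bigl(I - P(s)\,Q(s)\bigr)^{-1}, \qquad Q \in \mathcal{RH}_\infty,

    and every closed-loop map is then affine in the free stable parameter Q; for example, the reference-to-output map is T_{ry}(s) = P(s)\,Q(s). This affine dependence is what makes it natural to optimize the controller parameters jointly with structural design variables, as done here.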

  6. The application of depletion curves for parameterization of subgrid variability of snow

    Treesearch

    C. H. Luce; D. G. Tarboton

    2004-01-01

    Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snow-covered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...

  7. Robust coordinated control of a dual-arm space robot

    NASA Astrophysics Data System (ADS)

    Shi, Lingling; Kayastha, Sharmila; Katupitiya, Jay

    2017-09-01

    Dual-arm space robots are more capable of implementing complex space tasks than single-arm space robots. However, the dynamic coupling between the arms and the base has a serious impact on the spacecraft attitude and on the hand motion of each arm. Instead of treating one arm as the mission arm and the other as the balance arm, in this work both arms of the space robot serve as mission arms aimed at accomplishing secure capture of a floating target. The paper investigates coordinated control of the base's attitude and the arms' motion in the task space in the presence of system uncertainties. Two types of controllers, a Sliding Mode Controller (SMC) and a nonlinear Model Predictive Controller (MPC), are verified and compared with a conventional Computed-Torque Controller (CTC) through numerical simulations in terms of control accuracy and system robustness. Both controllers eliminate the need to linearly parameterize the dynamic equations. The MPC is shown to achieve higher accuracy than the CTC and SMC in the absence of system uncertainties, under the condition that they consume comparable energy. When system uncertainties are included, the SMC and CTC exhibit greater robustness than the MPC. Specifically, in a case where the system inertia increases, the SMC delivers higher accuracy than the CTC and costs the least amount of energy.
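
    For reference, the conventional computed-torque law used as the benchmark controller has a standard textbook form; the sketch below is a generic implementation in which the dynamics terms M, C, and g are user-supplied callables (placeholders, not the paper's free-floating space-robot model, where the gravity term would essentially vanish).

```python
import numpy as np

def computed_torque(q, dq, q_des, dq_des, ddq_des, M, C, g, Kp, Kd):
    """Generic computed-torque (inverse dynamics) control law:
    tau = M(q) (ddq_des + Kd (dq_des - dq) + Kp (q_des - q)) + C(q,dq) dq + g(q).

    M, C, g are callables returning the inertia matrix, Coriolis matrix,
    and gravity vector of the user's manipulator model.
    """
    e, de = q_des - q, dq_des - dq
    v = ddq_des + Kd @ de + Kp @ e        # stabilized reference acceleration
    return M(q) @ v + C(q, dq) @ dq + g(q)
```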

  8. A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals

    NASA Technical Reports Server (NTRS)

    VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.

    2014-01-01

    A parameterization is presented that provides the extinction cross section σe, single-scattering albedo ω, and asymmetry parameter g of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave. Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion levels. In the standard geometric optics implementation used here, σe is always twice the particle projected area. It is shown that ω is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of g with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and ω are investigated and factors are determined to scale the parameterized g to provide values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 µm, revealing absolute differences with reference calculations of ω and g that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences with reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.

  9. Domain-averaged snow depth over complex terrain from flat field measurements

    NASA Astrophysics Data System (ADS)

    Helbig, Nora; van Herwijnen, Alec

    2017-04-01

    Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Ground-measured snow depth at individual stations offers an opportunity for data assimilation to improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these ground measurement networks provide a wealth of information, various studies have questioned the representativity of such flat-field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easy-to-derive topographic parameters. To derive the two parameterizations we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km, using highly resolved snow depth maps at the peak of winter from two distinct climatic regions, in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor. We evaluated both parameterizations by comparing domain-averaged snow depth derived from nearby flat-field measurements against the domain averages of the highly resolved snow depth maps. This revealed an overall improved performance for the parameterization combining a power-law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest that this parameterization could be used to assimilate flat-field snow depth into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.
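
    The structure of the better-performing parameterization can be sketched as follows: a power-law elevation trend scaled by the subgrid mean sky view factor. The exponent and scaling below are hypothetical placeholders; the calibrated forms are given in the study.

```python
import numpy as np

def domain_mean_snow_depth(hs_flat, z_flat, z_mean, sky_view_mean,
                           b=0.8, c=1.0):
    """Illustrative form of the second parameterization: a power-law
    elevation trend scaled by the subgrid mean sky view factor.

    hs_flat        : snow depth measured at a flat-field station [m]
    z_flat, z_mean : station elevation and domain mean elevation [m]
    b, c           : hypothetical exponent/scaling constants (the
                     calibrated values are given in the study).
    """
    elevation_trend = (z_mean / z_flat) ** b
    return hs_flat * elevation_trend * (sky_view_mean ** c)
```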

  10. Parameter Trade Studies For Coherent Lidar Wind Measurements of Wind from Space

    NASA Technical Reports Server (NTRS)

    Kavaya, Michael J.; Frehlich, Rod G.

    2007-01-01

    The design of an orbiting wind-profiling lidar requires selection of dozens of lidar, measurement scenario, and mission geometry parameters, in addition to prediction of atmospheric parameters. Typical mission designs do not include a thorough trade optimization of all of these parameters. We report here the integration of a recently published parameterization of coherent lidar wind velocity measurement performance with an orbiting coherent wind lidar computer simulation, and the use of these combined tools to perform some preliminary parameter trades. We use the 2006 NASA Global Wind Observing Sounder mission design as the starting point for the trades.

  11. Mapping Global Ocean Surface Albedo from Satellite Observations: Models, Algorithms, and Datasets

    NASA Astrophysics Data System (ADS)

    Li, X.; Fan, X.; Yan, H.; Li, A.; Wang, M.; Qu, Y.

    2018-04-01

    Ocean surface albedo (OSA) is one of the important parameters in the surface radiation budget (SRB). It is usually considered a controlling factor of the heat exchange between the atmosphere and the ocean. The temporal and spatial dynamics of OSA determine the energy absorption of upper-level ocean water and influence oceanic currents, atmospheric circulations, and the transport of material and energy in the hydrosphere. Various parameterizations and models have therefore been developed for describing the dynamics of OSA. However, it has been demonstrated that the currently available OSA datasets cannot fulfill the requirements of global climate change studies. In this study, we present a literature review on mapping global OSA from satellite observations. The models (parameterizations, the coupled ocean-atmosphere radiative transfer (COART) model, and the three-component ocean water albedo (TCOWA) model), algorithms (the estimation method based on reanalysis data, and the direct-estimation algorithm), and datasets (the cloud, albedo and radiation (CLARA) surface albedo product, the dataset derived with the TCOWA model, and the global land surface satellite (GLASS) phase-2 surface broadband albedo product) for OSA are discussed separately.

  12. Parameterization of eddy sensible heat transports in a zonally averaged dynamic model of the atmosphere

    NASA Technical Reports Server (NTRS)

    Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean

    1990-01-01

    A Charney-Branscome-based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.

  13. Predictive Compensator Optimization for Head Tracking Lag in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Adelstein, Barnard D.; Jung, Jae Y.; Ellis, Stephen R.

    2001-01-01

    We examined the perceptual impact of plant noise parameterization for Kalman filter predictive compensation of the time delays intrinsic to head-tracked virtual environments (VEs). Subjects were tested on their ability to discriminate between the VE system's minimum latency and conditions in which artificially added latency was predictively compensated back to the system minimum. Two head-tracking predictors were parameterized off-line according to cost functions that minimized prediction errors in (1) rotation and (2) rotation projected into translational displacement, with emphasis on higher-frequency human operator noise. These predictors were compared with a parameterization obtained from the VE literature for cost function (1). Results from 12 subjects showed that both the parameterization type and the amount of compensated latency affected discrimination. Analysis of the head motion used in the parameterizations and the subsequent discriminability results suggests that higher-frequency predictor artifacts are contributory cues for discriminating the presence of predictive compensation.
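
    A minimal sketch of the kind of predictor studied: a constant-velocity Kalman filter whose plant (process) noise variance q is the parameterization in question, extrapolating the filtered head angle ahead by the compensated latency. The noise values here are illustrative only.

```python
import numpy as np

def predict_head_angle(z_history, dt, lead_time, q=1e-2, r=1e-4):
    """Constant-velocity Kalman filter that extrapolates a head-tracker
    angle 'lead_time' seconds ahead to compensate display latency.

    z_history : sequence of angle measurements [rad]
    q, r      : plant (process) and measurement noise variances --
                the parameterization studied; values are illustrative.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # observe angle only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],  # process noise covariance
                      [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.array([z_history[0], 0.0])          # [angle, angular rate]
    P = np.eye(2)
    for z in z_history[1:]:
        x, P = F @ x, F @ P @ F.T + Q          # predict step
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)    # update step
        P = (np.eye(2) - K @ H) @ P
    return x[0] + x[1] * lead_time             # extrapolate ahead
```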

  14. Parameterization of spectral baseline directly from short echo time full spectra in 1H-MRS.

    PubMed

    Lee, Hyeong Hun; Kim, Hyeonjin

    2017-09-01

    To investigate the feasibility of parameterizing macromolecule (MM) resonances directly from short echo time (TE) spectra, rather than from pre-acquired, T1-weighted, metabolite-nulled spectra, in 1H-MRS. Initial line parameters for metabolites and MMs were set for rat brain spectra acquired at 9.4 Tesla based on a priori knowledge. MM line parameters were then optimized over several steps with fixed metabolite line parameters. The proposed method was tested by estimating metabolite T1, and the results were compared with those obtained with two existing methods. Furthermore, subject-specific, spin density-weighted MM model spectra were generated according to the MM line parameters from the proposed method for metabolite quantification. The results were compared with those obtained with subject-specific, T1-weighted, metabolite-nulled spectra. The metabolite T1 were largely in close agreement among the three methods. The spin density-weighted MM resonances from the proposed method were in good agreement with the T1-weighted, metabolite-nulled spectra except for the MM resonance at ∼3.2 ppm. The metabolite concentrations estimated by incorporating these two different spectral baselines were also in good agreement except for several metabolites with resonances at ∼3.2 ppm. MM parameterization directly from short-TE spectra is therefore feasible. Further development of the method may allow for better representation of the spectral baseline with negligible T1-weighting. Magn Reson Med 78:836-847, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  15. Evaluating the impacts of agricultural land management practices on water resources: A probabilistic hydrologic modeling approach.

    PubMed

    Prada, A F; Chu, M L; Guzman, J A; Moriasi, D N

    2017-05-15

    Evaluating the effectiveness of agricultural land management practices in minimizing environmental impacts using models is challenged by the presence of inherent uncertainties during the model development stage. One issue faced during the model development stage is the uncertainty involved in model parameterization. Using a single optimized set of parameters (one snapshot) to represent baseline conditions of the system limits the applicability and robustness of the model to properly represent future or alternative scenarios. The objective of this study was to develop a framework that facilitates model parameter selection while evaluating uncertainty, in order to assess the impacts of land management practices at the watershed scale. The framework was applied to the Lake Creek watershed located in southwestern Oklahoma, USA. A two-step probabilistic approach was implemented to parameterize the Agricultural Policy/Environmental eXtender (APEX) model, using global uncertainty and sensitivity analysis to estimate the full spectrum of total monthly water yield (WYLD) and total monthly nitrogen loads (N) in the watershed under different land management practices. Twenty-seven models were found to represent the baseline scenario, in which uncertainty of up to 29% in WYLD and up to 400% in N is plausible. Converting the land cover to pasture produced the largest decrease in N, up to 30% for full pasture coverage, while full winter wheat cover can increase N by up to 11%. The methodology developed in this study was able to quantify the full spectrum of system responses, the uncertainty associated with them, and the most important parameters that drive their variability. Results from this study can be used to develop strategic decisions on the risks and tradeoffs associated with different management alternatives that aim to increase productivity while also minimizing environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.
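
    A schematic of the two-step probabilistic idea, using a toy stand-in for the watershed model: sample the parameter space, keep every "behavioral" parameter set whose baseline error is acceptable rather than a single optimum, then propagate the whole ensemble through an alternative scenario. The model, thresholds, and ranges below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(params, forcing):
    """Stand-in for a watershed model such as APEX (hypothetical)."""
    k, n = params
    return k * forcing ** n

# 1. Sample the parameter space uniformly within plausible ranges.
samples = rng.uniform(low=[0.1, 0.5], high=[2.0, 1.5], size=(10_000, 2))

# 2. Keep every parameter set whose baseline error is acceptable,
#    rather than one optimum, to carry parameter uncertainty forward.
forcing = np.linspace(1.0, 10.0, 12)          # e.g. monthly rainfall
observed = 1.2 * forcing ** 0.9               # synthetic "observations"
behavioral = [p for p in samples
              if np.sqrt(np.mean((toy_model(p, forcing) - observed) ** 2)) < 1.0]

# 3. The spread of predictions across behavioral sets quantifies the
#    plausible range of responses under a management scenario.
predictions = np.array([toy_model(p, forcing * 0.8) for p in behavioral])
low, high = np.percentile(predictions, [2.5, 97.5], axis=0)
```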

  16. Euclidean sections of protein conformation space and their implications in dimensionality reduction

    PubMed Central

    Duan, Mojie; Li, Minghai; Han, Li; Huo, Shuanghong

    2014-01-01

    Dimensionality reduction is widely used in searching for the intrinsic reaction coordinates of protein conformational changes. We find that dimensionality-reduction methods using the pairwise root-mean-square deviation (RMSD) as the local distance metric face a challenge, and we use Isomap as an example to illustrate the problem. We believe there is an implied assumption in dimensionality-reduction approaches that aim to preserve the geometric relations between objects: both the original space and the reduced space must have the same kind of geometry, such as Euclidean vs. Euclidean or spherical vs. spherical. When the protein free energy landscape is mapped onto a 2D plane or 3D space, the reduced space is Euclidean, so the original space should also be Euclidean. For a protein with N atoms, its conformation space is a subset of the 3N-dimensional Euclidean space R3N. We formally define the protein conformation space as the quotient space of R3N by the equivalence relation of rigid motions. Whether the quotient space is Euclidean or not depends on how it is parameterized. When the pairwise RMSD is employed as the local distance metric, implicit representations are used for the protein conformation space, leading to no direct correspondence to a Euclidean set. We have demonstrated that an explicit Euclidean-based representation of protein conformation space, and the local distance metric associated with it, improve the quality of dimensionality reduction in tetra-peptide and β-hairpin systems. PMID:24913095

  17. Constraints on the coupling between dark energy and dark matter from CMB data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murgia, R.; Gariazzo, S.; Fornengo, N., E-mail: riccardo.murgia@sissa.it, E-mail: gariazzo@to.infn.it, E-mail: fornengo@to.infn.it

    2016-04-01

    We investigate a phenomenological non-gravitational coupling between dark energy and dark matter, where the interaction in the dark sector is parameterized as an energy transfer either from dark matter to dark energy or the opposite. The models are constrained by a whole host of updated cosmological data: cosmic microwave background temperature anisotropies and polarization, high-redshift supernovae, baryon acoustic oscillations, redshift space distortions, and gravitational lensing. Both models are found to be compatible with all cosmological observables, but in the case where dark matter decays into dark energy, the tension with the independent determinations of H0 and σ8, already present for standard cosmology, increases: this model in fact predicts lower H0 and higher σ8, mostly as a consequence of the higher amount of dark matter at early times, leading to stronger clustering during the evolution. Instead, when dark matter is fed by dark energy, the reconstructed values of H0 and σ8 nicely agree with their local determinations, with a full reconciliation between high- and low-redshift observations. A non-zero coupling between dark energy and dark matter, with an energy flow from the former to the latter, therefore appears to be in better agreement with cosmological data.

  18. Improvement of the GEOS-5 AGCM upon Updating the Air-Sea Roughness Parameterization

    NASA Technical Reports Server (NTRS)

    Garfinkel, C. I.; Molod, A.; Oman, L. D.; Song, I.-S.

    2011-01-01

    The impact of an air-sea roughness parameterization over the ocean that more closely matches recent observations of air-sea exchange is examined in the NASA Goddard Earth Observing System, version 5 (GEOS-5) atmospheric general circulation model. Surface wind biases in the GEOS-5 AGCM are decreased by up to 1.2 m/s. The new parameterization also has implications aloft, as improvements extend into the stratosphere. Many other GCMs (both for operational weather forecasting and for climate) use a similar class of parameterization for their air-sea roughness scheme. We therefore expect that the results from GEOS-5 are relevant to other models as well.

  19. Observational and Modeling Studies of Clouds and the Hydrological Cycle

    NASA Technical Reports Server (NTRS)

    Somerville, Richard C. J.

    1997-01-01

    Our approach involved validating parameterizations directly against measurements from field programs, and using this validation to tune existing parameterizations and to guide the development of new ones. We used a single-column model (SCM) to make the link between observations and parameterizations of clouds, including explicit cloud microphysics (e.g., prognostic cloud liquid water used to determine cloud radiative properties). Surface and satellite radiation measurements were used to provide an initial evaluation of the performance of the different parameterizations. The results of this evaluation were then used to develop improved cloud and cloud-radiation schemes, which were tested in GCM experiments.

  20. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of forecast error, caused in part by considerable uncertainty about the optimal values of the parameters within each scheme (parametric uncertainty). Furthermore, there is uncertainty regarding the best choice for the overarching structure of the parameterization scheme (structural uncertainty). Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
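
    A minimal sketch of the Bayesian machinery such a framework relies on: random-walk Metropolis sampling of the parameters of a power-law process-rate term against synthetic observations. The rate form, priors, and step sizes are illustrative assumptions, not the BOSS formulation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta, obs, obs_err):
    """Log-posterior for a hypothetical power-law process rate
    rate = a * M**b (a stand-in for a parameterized process term),
    with flat priors inside physical bounds."""
    a, b = theta
    if not (0.0 < a < 10.0 and 0.0 < b < 3.0):
        return -np.inf
    moments = np.linspace(0.1, 1.0, obs.size)
    model = a * moments ** b
    return -0.5 * np.sum(((model - obs) / obs_err) ** 2)

# Synthetic observations of the process rate with noise.
obs = 2.0 * np.linspace(0.1, 1.0, 20) ** 1.5 + rng.normal(0, 0.05, 20)

theta = np.array([1.0, 1.0])
lp = log_posterior(theta, obs, 0.05)
chain = []
for _ in range(20_000):                        # random-walk Metropolis
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_posterior(prop, obs, 0.05)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
posterior = np.array(chain[5000:])             # discard burn-in
```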

  1. Evaluating and Improving Wind Forecasts over South China: The Role of Orographic Parameterization in the GRAPES Model

    NASA Astrophysics Data System (ADS)

    Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia

    2018-06-01

    Unresolved small-scale orographic (SSO) drag is parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drag is represented by adding a sink term to the momentum equations. The maximum height of the mountains within a grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as feedbacks to the momentum tendencies on the first model level in the planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is used to examine the performance of the SSOP scheme over the complex terrain in the southwest of Guangdong. The verification results show that the surface wind speed bias is substantially alleviated by adopting the SSOP scheme, in addition to a reduction of the wind bias in the lower troposphere. Target verification over Xinyi shows that simulations with the SSOP scheme provide improved wind estimates over the complex regions in the southwest of Guangdong.
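
    The sink-term idea can be illustrated with the usual implicit discretization of a linear momentum drag, which keeps the update unconditionally stable; the drag coefficient c_d, which in the scheme would be built from the subgrid orography, is schematic here.

```python
def apply_sso_drag(u, v, c_d, dt):
    """Implicit update for a linear momentum sink  du/dt = -c_d * u,
    a common way to keep a drag term unconditionally stable.

    c_d would be constructed from the subgrid orography (e.g. the
    maximum mountain height in the grid box); its form is schematic.
    """
    factor = 1.0 / (1.0 + c_d * dt)
    return u * factor, v * factor
```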

  2. The zonally averaged transport characteristics of the atmosphere as determined by a general circulation model

    NASA Technical Reports Server (NTRS)

    Plumb, R. A.

    1985-01-01

    Two-dimensional modeling has become an established technique for simulating the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three-dimensional general circulation models, while avoiding some of the gross simplifications of one-dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory, and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of these GCM-derived coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.

  3. A statistical comparison of cirrus particle size distributions measured using the 2-D stereo probe during the TC4, SPARTICUS, and MACPEX flight campaigns with historical cirrus datasets

    NASA Astrophysics Data System (ADS)

    Schwartz, M. Christian

    2017-08-01

    This paper addresses two straightforward questions. First, how similar are the statistics of cirrus particle size distribution (PSD) datasets collected using the Two-Dimensional Stereo (2D-S) probe to cirrus PSD datasets collected using older Particle Measuring Systems (PMS) 2-D Cloud (2DC) and 2-D Precipitation (2DP) probes? Second, how similar are the datasets when shatter-correcting post-processing is applied to the 2DC datasets? To answer these questions, a database of measured and parameterized cirrus PSDs - constructed from measurements taken during the Small Particles in Cirrus (SPARTICUS); Mid-latitude Airborne Cirrus Properties Experiment (MACPEX); and Tropical Composition, Cloud, and Climate Coupling (TC4) flight campaigns - is used. Bulk cloud quantities are computed from the 2D-S database in three ways: first, directly from the 2D-S data; second, by applying the 2D-S data to ice PSD parameterizations developed using sets of cirrus measurements collected using the older PMS probes; and third, by applying the 2D-S data to a similar parameterization developed using the 2D-S data themselves. This is done so that measurements of the same cloud volumes by parameterized versions of the 2DC and 2D-S can be compared with one another. It is thereby seen - given the same cloud field and given the same assumptions concerning ice crystal cross-sectional area, density, and radar cross section - that the parameterized 2D-S and the parameterized 2DC predict similar distributions of inferred shortwave extinction coefficient, ice water content, and 94 GHz radar reflectivity. However, the parameterization of the 2DC based on uncorrected data predicts a statistically significantly higher number of total ice crystals and a larger ratio of small ice crystals to large ice crystals than does the parameterized 2D-S. The 2DC parameterization based on shatter-corrected data also predicts statistically different numbers of ice crystals than does the parameterized 2D-S, but the comparison between the two is nevertheless more favorable. It is concluded that the older datasets continue to be useful for scientific purposes, with certain caveats, and that continuing field investigations of cirrus with more modern probes are desirable.

  4. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    NASA Astrophysics Data System (ADS)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14). The evaluation was performed in three dimensions: the ability of each model to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in the modeled Vd, and to quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. A Sobol' sensitivity analysis was conducted with the objective of ranking the parameters from most to least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except coniferous forest, for which it is second most accurate. From the Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17%, 12%, 13%, 16%, and 27% for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. The Sobol' sensitivity results suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.

  5. Shortwave radiation parameterization scheme for subgrid topography

    NASA Astrophysics Data System (ADS)

    Helbig, N.; Löwe, H.

    2012-02-01

    Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered. The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeMott, PJ; Suski, KJ; Hill, TCJ

    The first-ever ice nucleating particle (INP) measurements to be collected at the Southern Great Plains site were made during a period from late April to June 2014, as a trial for possible longer-term measurements at the site. These measurements will also be used to lay the foundation for understanding and parameterizing (for cloud-resolving modeling) the sources of these climatically important aerosols, as well as to augment the existing database containing this knowledge. Siting the measurements during the spring was intended to capture INP sources in or to this region from plant, soil, long-range transported dust, biomass burning, and pollution aerosols at a time when they may influence warm-season convective clouds and precipitation. Data have been archived of real-time measurements of INP number concentrations as a function of processing conditions (temperature and relative humidity) during 18 days of sampling that spanned two distinctly different weather situations: a warm, dry and windy period with regional dust and biomass burning influences in early May, and a cooler period of frequent precipitation during early June. Precipitation delayed winter wheat harvesting, preventing intended sampling during that perturbation on atmospheric aerosols. INP concentrations were highest and most variable at all temperatures in the dry period, where we attribute the INP activity primarily to soil dust emissions. Additional offline INP analyses are underway to extend the characterization of INP to cover the entire mixed-phase cloud regime from -5°C to -35°C during the full study. Initial comparisons between methods on four days show good agreement and excellent future promise. The additional offline immersion freezing data will be archived as soon as completed under separate funding. Analyses of additional specialized studies for specific attribution of INP to biological and smoke sources are continuing via National Science Foundation and National Aeronautics and Space Administration funding that helped support the instrumentation used for the measurements described herein. Aerosol Observing System aerosol data will be vital to the interpretation and parameterization of results as part of analyses for publications in preparation.

  7. New Satellite Estimates of Mixed-Phase Cloud Properties: A Synergistic Approach for Application to Global Satellite Imager Data

    NASA Astrophysics Data System (ADS)

    Smith, W. L., Jr.; Spangenberg, D.; Fleeger, C.; Sun-Mack, S.; Chen, Y.; Minnis, P.

    2016-12-01

    Determining accurate cloud properties horizontally and vertically over a full range of time and space scales is currently next to impossible using data from either active or passive remote sensors or from modeling systems. Passive satellite imagers provide horizontal and temporal resolution of clouds, but little direct information on vertical structure. Active sensors provide vertical resolution but limited spatial and temporal coverage. Cloud models embedded in NWP can produce realistic clouds but often not at the right time or location. Thus, empirical techniques that integrate information from multiple observing and modeling systems are needed to more accurately characterize clouds and their impacts. Such a strategy is employed here in a new cloud water content profiling technique developed for application to satellite imager cloud retrievals based on VIS, IR and NIR radiances. Parameterizations are developed to relate imager retrievals of cloud top phase, optical depth, effective radius and temperature to ice and liquid water content profiles. The vertical structure information contained in the parameterizations is characterized climatologically from cloud model analyses, aircraft observations, ground-based remote sensing data, and from CloudSat and CALIPSO. Thus, realistic cloud-type dependent vertical structure information (including guidance on cloud phase partitioning) circumvents poor assumptions regarding vertical homogeneity that plague current passive satellite retrievals. This paper addresses mixed phase cloud conditions for clouds with glaciated tops including those associated with convection and mid-latitude storm systems. Novel outcomes of our approach include (1) simultaneous retrievals of ice and liquid water content and path, which are validated with active sensor, microwave and in-situ data, and yield improved global cloud climatologies, and (2) new estimates of super-cooled LWC, which are demonstrated in aviation safety applications and validated with icing PIREPS. The initial validation is encouraging for single-layer cloud conditions. More work is needed to test and refine the method for global application in a wider range of cloud conditions. A brief overview of our current method, applications, verification, and plans for future work will be presented.

  8. Inversion of Surface Wave Phase Velocities for Radial Anisotropy to a Depth of 1200 km

    NASA Astrophysics Data System (ADS)

    Xing, Z.; Beghein, C.; Yuan, K.

    2012-12-01

    This study aims to evaluate three-dimensional radial anisotropy to a depth of 1200 km. Radial anisotropy describes the difference in velocity between horizontally polarized shear waves, constrained by Love waves, and vertically polarized shear waves, constrained by Rayleigh waves. Its presence in the uppermost 200 km of the mantle has been well documented by different groups and has been regarded as an indicator of mantle convection, which aligns the intrinsically anisotropic minerals, largely olivine, to form large-scale anisotropy. However, there is no global agreement on whether anisotropy exists in the region below 200 km. Recent models also associate a fast vertically polarized shear wave with vertical upwelling mantle flow. The data used in this study are the global isotropic phase velocity models of fundamental- and higher-mode Love and Rayleigh waves (Visser, 2008). The inclusion of higher-mode surface wave phase velocities provides sensitivity to structure at depths extending to below the transition zone. While the data are the same as used by Visser (2008), a quite different parameterization is applied. All six parameters, the five elastic parameters A, C, F, L, N and density, are now regarded as independent, which rules out the biased conclusions that can be induced by the scaling relations used in several previous studies to reduce the number of parameters, partly due to limited computing resources. The data need to be modified by crustal corrections (Crust2.0) because we want to look at mantle structure only. We do this by eliminating the perturbation in surface wave phase velocity caused by the difference in crustal structure with respect to the reference model PREM. Sambridge's Neighborhood Algorithm is used to search the parameter space. Such a direct search technique has clear advantages over traditional inversion methods, which require regularization or other unnecessary a priori restrictions on the model space. In contrast, the new method searches the full model space, providing the probability density function of each anisotropic parameter and the corresponding resolution.

  9. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application of such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. To overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation of the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated with the proposed sparse representation method.

  10. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application of such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. To overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation of the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated with the proposed sparse representation method.
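
    Of the two optimal sampling strategies named above, proper orthogonal decomposition is the more standard; a minimal snapshot-POD sketch follows (the greedy selection and the mixed GMsFE assembly are not reproduced here).

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition of solution snapshots.

    snapshots : (n_dof, n_samples) array of full-order solutions
                computed at training parameter samples.
    Returns the leading left singular vectors capturing the requested
    fraction of snapshot 'energy' -- a parameter-independent reduced
    basis in the spirit of the reduced GMsFE construction.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r]

# Example: 500 snapshots of a 10^4-dof model compress to a small basis.
rng = np.random.default_rng(1)
X = rng.standard_normal((10_000, 20)) @ rng.standard_normal((20, 500))
basis = pod_basis(X)                  # here rank <= 20 by construction
```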

  11. Sensitivity of Glacier Mass Balance Estimates to the Selection of WRF Cloud Microphysics Parameterization in the Indus River Watershed

    NASA Astrophysics Data System (ADS)

    Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.

    2017-12-01

    Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Glacier mass balance (MB) estimates that use these climate model outputs as inputs are therefore likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting (WRF) model. Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rates of graupel and snow production, their fall speed assumptions, the rates at which hydrometeors convert from one type to another, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to the importance of glaciers for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a⁻¹. The different schemes also impact the radiative budgets over the glacierized areas. Our results show that glacier MB estimates can differ by up to 45% depending on the chosen cloud microphysics scheme. These findings highlight the need to better account for uncertainties in meteorological inputs to glacier energy and mass balance models.

  12. SU-F-BRB-16: A Spreadsheet Based Automatic Trajectory GEnerator (SAGE): An Open Source Tool for Automatic Creation of TrueBeam Developer Mode Robotic Trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etmektzoglou, A; Mishra, P; Svatos, M

    Purpose: To automate the creation and delivery of robotic linac trajectories with TrueBeam Developer Mode, an open source spreadsheet-based trajectory generation tool has been developed, tested and made freely available. The computing power inherent in a spreadsheet environment, plus additional functions programmed into the tool, insulate users from the underlying schema tedium and allow easy calculation, parameterization, graphical visualization, validation and, finally, automatic generation of Developer Mode XML scripts which are directly loadable on a TrueBeam linac. Methods: The robotic control system platform that allows total coordination of potentially all linac moving axes with beam (continuous, step-and-shoot, or a combination thereof) becomes available in TrueBeam Developer Mode. Many complex trajectories are either geometric or can be described in analytical form, making the computational power, graphing and programmability available in a spreadsheet environment an easy and ideal vehicle for automatic trajectory generation. The spreadsheet environment also allows for parameterization of trajectories, thus enabling the creation of entire families of trajectories using only a few variables. Standard spreadsheet functionality has been extended for powerful movie-like dynamic graphic visualization of the gantry, table, MLC, room, lasers, 3D observer placement and beam centerline, all as a function of MU or time, for analysis of the motions before requiring actual linac time. Results: We used the tool to generate and deliver extended-SAD "virtual isocenter" trajectories of various shapes, such as parameterized circles and ellipses. We also demonstrated use of the tool in generating linac couch motions that simulate respiratory motion using analytical parameterized functions. Conclusion: The SAGE tool is a valuable resource for experimenting with families of complex geometric trajectories on a TrueBeam linac. It makes Developer Mode more accessible as a vehicle to quickly translate research ideas into machine-readable scripts without programming knowledge. As an open source initiative, it also enables researcher collaboration on future developments. I am a full-time employee at Varian Medical Systems, Palo Alto, California.

  13. Framework for Probabilistic Projections of Energy-Relevant Streamflow Indicators under Climate Change Scenarios for the U.S.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagener, Thorsten; Mann, Michael; Crane, Robert

    2014-04-29

    This project focuses on uncertainty in streamflow forecasting under climate change conditions. The objective is to develop easy-to-use methodologies that can be applied across a range of river basins to estimate changes in water availability for realistic projections of climate change. There are three major components to the project: empirical downscaling of regional climate change projections from a range of global climate models; developing a methodology that uses present-day information on the climate controls on the parameterizations in streamflow models to adjust those parameterizations under future climate conditions (a trading-space-for-time approach); and demonstrating a bottom-up approach to establishing streamflow vulnerabilities to climate change. The results reinforce the need for downscaling of climate data for regional applications, further demonstrate the challenges of using raw GCM data to make local projections, and reinforce the need to make projections across a range of global climate models. The project demonstrates the potential for improving streamflow forecasts by using model parameters that are adjusted for future climate conditions, but suggests that even with improved streamflow models and reduced climate uncertainty through the use of downscaled data, there is still large uncertainty in the streamflow projections. The most useful output from the project is the bottom-up, vulnerability-driven approach to examining possible climate and land use change impacts on streamflow. Here, we demonstrate an inexpensive and easy-to-apply methodology that uses Classification and Regression Trees (CART) to define the climate and environmental parameter space that can produce vulnerabilities in the system, and then feeds in the downscaled projections to determine the probability of transitioning to a vulnerable state. Vulnerabilities, in this case, are defined by the end user.
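
    A minimal sketch of the CART-based vulnerability step, using scikit-learn with a toy vulnerability rule and made-up parameter ranges: fit a tree that partitions the parameter space into vulnerable and non-vulnerable regions, then push downscaled projections through it.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: sampled climate/land-use parameters and a
# binary label marking whether the simulated streamflow was vulnerable
# (e.g. low flow below a user-defined threshold).
X = rng.uniform(low=[-2.0, 0.8, 0.0],     # [delta_T (K), precip scale,
                high=[4.0, 1.2, 1.0],     #  fraction urbanized]
                size=(5_000, 3))
vulnerable = (X[:, 0] > 2.0) & (X[:, 1] < 0.95)   # toy rule, for demo only

tree = DecisionTreeClassifier(max_depth=3).fit(X, vulnerable)

# Feeding downscaled projections through the tree estimates the
# probability of transitioning into the vulnerable region.
projections = rng.normal([2.5, 0.93, 0.4], [0.5, 0.03, 0.1], size=(1_000, 3))
p_vulnerable = tree.predict(projections).mean()
```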

  14. Resolution analysis of marine seismic full waveform data by Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Ray, A.; Sekar, A.; Hoversten, G. M.; Albertin, U.

    2015-12-01

    The Bayesian posterior density function (PDF) of earth models that fit full waveform seismic data conveys information on the uncertainty with which the elastic model parameters are resolved. In this work, we apply the trans-dimensional reversible jump Markov chain Monte Carlo (RJ-MCMC) method to the 1D inversion of noisy synthetic full-waveform seismic data in the frequency-wavenumber domain. While seismic full waveform inversion (FWI) is a powerful method for characterizing subsurface elastic parameters, the uncertainty in the inverted models has remained poorly known, if quantified at all, and is highly dependent on the initial model. The Bayesian method we use is trans-dimensional in that the number of model layers is not fixed, and flexible in that the layer boundaries are free to move around. The resulting parameterization does not require regularization to stabilize the inversion. Depth resolution is traded off against the number of layers, providing an estimate of uncertainty in the elastic parameters (compressional and shear velocities Vp and Vs, as well as density) with depth. We find that in the absence of additional constraints, Bayesian inversion can result in a wide range of posterior PDFs on Vp, Vs, and density. Depending on the particular data and target geometry, these PDFs range from being clustered around the true model to containing little resolution of any features other than those in the near surface. We present results for a suite of different frequencies and offset ranges, examining the differences in the posterior model densities thus derived. Though these results are for a 1D earth, they are applicable to areas with simple, layered geology and provide valuable insight into the resolving capabilities of FWI, as well as highlighting the challenges in solving a highly nonlinear problem. The RJ-MCMC method also presents a tantalizing possibility for extension to 2D and 3D Bayesian inversion of full waveform seismic data in the future, as it objectively tackles the problem of model selection (i.e., the number of layers or cells in the parameterization), which could ease the computational burden of evaluating forward models with many parameters.

  15. Testing a common ice-ocean parameterization with laboratory experiments

    NASA Astrophysics Data System (ADS)

    McConnochie, C. D.; Kerr, R. C.

    2017-07-01

    Numerical models of ice-ocean interactions typically rely upon a parameterization for the transport of heat and salt to the ice face that has not been satisfactorily validated by observational or experimental data. We compare laboratory experiments of ice-saltwater interactions to a common numerical parameterization and find a significant disagreement in the dependence of the melt rate on the fluid velocity. We suggest a resolution to this disagreement based on a theoretical analysis of the boundary layer next to a vertical heated plate, which results in a threshold fluid velocity of approximately 4 cm/s at driving temperatures between 0.5 and 4°C, above which the form of the parameterization should be valid.
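
    For orientation, bulk ice-ocean melt parameterizations of the kind tested here make the melt rate linear in the fluid speed through a turbulent transfer coefficient. The sketch below shows that structure with illustrative constants (and neglects the water/ice density ratio); the paper's laboratory result is precisely that this linear-in-velocity scaling breaks down below roughly 4 cm/s.

```python
def melt_rate(u, T_water, T_freeze, gamma_T=1.1e-2, c_w=3974.0,
              L_i=3.34e5, c_i=2009.0, T_ice=-15.0):
    """Bulk melt-rate estimate of the kind used in ice-ocean models:
    the turbulent heat flux to the ice face scales linearly with the
    fluid speed u through a transfer coefficient gamma_T.

    m ~ c_w * gamma_T * u * (T_water - T_freeze)
        / (L_i + c_i * (T_freeze - T_ice))

    gamma_T and the ice interior temperature T_ice are illustrative
    values only; the water/ice density ratio is taken as 1.
    """
    thermal_driving = T_water - T_freeze
    heat_sink = L_i + c_i * (T_freeze - T_ice)  # latent + warming the ice
    return c_w * gamma_T * u * thermal_driving / heat_sink

# Example: 10 cm/s flow with 2 degC thermal driving (above threshold).
m = melt_rate(u=0.10, T_water=2.0, T_freeze=0.0)   # ice melt rate [m/s]
```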

  16. Parameterization of light absorption by components of seawater in optically complex coastal waters of the Crimea Peninsula (Black Sea).

    PubMed

    Dmitriev, Egor V; Khomenko, Georges; Chami, Malik; Sokolov, Anton A; Churilova, Tatyana Y; Korotaev, Gennady K

    2009-03-01

    The absorption of sunlight by oceanic constituents contributes significantly to the spectral distribution of the water-leaving radiance. Here it is shown that current parameterizations of absorption coefficients do not apply to the optically complex waters of the Crimea Peninsula. Based on in situ measurements, parameterizations of the phytoplankton, nonalgal, and total particulate absorption coefficients are proposed. Their performance is evaluated using a log-log regression combined with a low-pass filter and the nonlinear least-squares method. The statistical significance of the estimated parameters is verified using the bootstrap method. The parameterizations are relevant for chlorophyll a concentrations ranging from 0.45 up to 2 mg/m³.
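
    Parameterizations of this kind typically take the power-law form a_ph = A·chl^E fitted in log-log space; a generic sketch of that regression step follows, with synthetic data standing in for the Crimean measurements (the fitted values in the paper are not reproduced).

```python
import numpy as np

def fit_power_law(chl, a_ph):
    """Fit a_ph = A * chl**E in log-log space -- the standard form for
    chlorophyll-based phytoplankton absorption parameterizations.
    Returns (A, E); this sketch shows only the regression step.
    """
    E, logA = np.polyfit(np.log(chl), np.log(a_ph), deg=1)
    return np.exp(logA), E

# Synthetic example over the stated validity range 0.45-2 mg/m^3:
rng = np.random.default_rng(3)
chl = rng.uniform(0.45, 2.0, 100)
a_ph = 0.05 * chl ** 0.7 * rng.lognormal(0.0, 0.1, 100)  # fake data
A, E = fit_power_law(chl, a_ph)
```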

  17. A second-order Budyko-type parameterization of land-surface hydrology

    NASA Technical Reports Server (NTRS)

    Andreou, S. A.; Eagleson, P. S.

    1982-01-01

    A simple, second-order parameterization of the water fluxes at a land surface was developed for use as the appropriate boundary condition in general circulation models of the global atmosphere. The derived parameterization incorporates the strong nonlinearities in the relationship between near-surface soil moisture and the evaporation, runoff, and percolation fluxes. Based on the one-dimensional statistical-dynamical derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. The suggested parameterization is compared with other existing techniques and available measurements. A thermodynamic coupling is applied in order to obtain estimates of the surface ground temperature.
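
    The second-order character of the scheme comes from the Taylor expansion mentioned above; written out generically (symbols schematic, following the description in the abstract):

```latex
% Second-order Taylor expansion of a moisture flux F (evaporation,
% runoff, or percolation) about the average annual soil moisture \bar{s}:
F(s) \approx F(\bar{s})
  + \left.\frac{\partial F}{\partial s}\right|_{\bar{s}} (s - \bar{s})
  + \frac{1}{2}\left.\frac{\partial^{2} F}{\partial s^{2}}\right|_{\bar{s}} (s - \bar{s})^{2}
```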

  18. Electron Impact Ionization: A New Parameterization for 100 eV to 1 MeV Electrons

    NASA Technical Reports Server (NTRS)

    Fang, Xiaohua; Randall, Cora E.; Lummerzheim, Dirk; Solomon, Stanley C.; Mills, Michael J.; Marsh, Daniel; Jackman, Charles H.; Wang, Wenbin; Lu, Gang

    2008-01-01

    Low, medium and high energy electrons can penetrate to the thermosphere (90-400 km; 55-240 miles) and mesosphere (50-90 km; 30-55 miles). These precipitating electrons ionize that region of the atmosphere, creating positively charged atoms and molecules and knocking off other negatively charged electrons. The precipitating electrons also create nitrogen-containing compounds along with other constituents. Since the electron precipitation amounts change within minutes, it is necessary to have a rapid method of computing the ionization and production of nitrogen-containing compounds for inclusion in computationally-demanding global models. A new methodology has been developed, which has parameterized a more detailed model computation of the ionizing impact of precipitating electrons over the very large range of 100 eV up to 1,000,000 eV. This new parameterization method is more accurate than a previous parameterization scheme, when compared with the more detailed model computation. Global models at the National Center for Atmospheric Research will use this new parameterization method in the near future.

  19. Parameterization of Shortwave Cloud Optical Properties for a Mixture of Ice Particle Habits for use in Atmospheric Models

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Based on single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the effective particle size of a mixture of ice habits, the ice water amount, and the spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of each individual ice habit.
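
    Parameterizations of this type typically express band-averaged bulk optical properties as low-order expansions in the effective particle size De, e.g. extinction per unit ice water path varying as a0 + a1/De. A generic sketch follows; the coefficients are placeholders, not the paper's fitted values.

        def ice_cloud_optics(iwp, de, coeff):
            """Bulk shortwave optics for ice water path iwp (g/m²) and effective size de (µm).

            coeff = (a0, a1, b0, b1, c0, c1) are placeholder fit coefficients for one band:
            optical thickness tau = iwp*(a0 + a1/de); co-albedo and asymmetry factor are
            taken as linear in de.
            """
            a0, a1, b0, b1, c0, c1 = coeff
            tau = iwp * (a0 + a1 / de)
            ssa = 1.0 - (b0 + b1 * de)      # single-scattering albedo
            g = c0 + c1 * de                # asymmetry factor
            return tau, ssa, g

        tau, ssa, g = ice_cloud_optics(iwp=120.0, de=60.0,
                                       coeff=(3.3e-4, 2.5, 1.0e-5, 2.0e-6, 0.74, 1.0e-3))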

  20. Anisotropic shear dispersion parameterization for ocean eddy transport

    NASA Astrophysics Data System (ADS)

    Reckinger, Scott; Fox-Kemper, Baylor

    2015-11-01

    The effects of mesoscale eddies are universally treated isotropically in global ocean general circulation models. However, observations and simulations demonstrate that the mesoscale processes the parameterization is intended to represent, such as shear dispersion, are typified by strong anisotropy. We extend the Gent-McWilliams/Redi mesoscale eddy parameterization to include anisotropy and test the effects of varying levels of anisotropy in 1-degree Community Earth System Model (CESM) simulations. Anisotropy has many effects on the simulated climate, including a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, impacts on the meridional overturning circulation and on ocean energy and tracer uptake, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. A process-based parameterization approximating the effects of unresolved shear dispersion is also used to set the strength and direction of the anisotropy. The shear dispersion parameterization agrees with drifter observations in the spatial distribution of diffusivity, and with high-resolution model diagnoses in the distribution of eddy flux orientation.

  1. Implementation of a Parameterization Framework for Cybersecurity Laboratories

    DTIC Science & Technology

    2017-03-01

    The goal is to provide the designer of laboratory exercises with tools to parameterize labs for each student, and to automate some aspects of the grading of laboratory exercises, including verifying that students performed the lab exercises.

  2. How to assess the impact of a physical parameterization in simulations of moist convection?

    NASA Astrophysics Data System (ADS)

    Grabowski, Wojciech

    2017-04-01

    A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or a large-eddy simulation model) consists of a fluid flow solver combined with the required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and the dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation describes a novel modeling methodology, piggybacking, that allows studying the impact of a physical parameterization on cloud dynamics with confidence. The focus is on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach include simulations concerning the hypothesized deep convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
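
    Structurally, piggybacking runs two sets of thermodynamic fields through the same simulation: a "driver" set whose microphysics feeds back on the flow, and a "piggybacking" set that is carried by the identical flow but exerts no feedback, so differences between the two sets isolate the microphysical impact. The toy loop below captures only that structure; the microphysics and "dynamics" are deliberately trivial stand-ins.

        import numpy as np

        def qsat(T):
            """Toy saturation mixing ratio (Tetens-like, T in kelvin)."""
            return 3.8e-3 * np.exp(17.27 * (T - 273.15) / (T - 35.86))

        def microphysics(state, rate):
            """Toy scheme: condense a fraction `rate` of the supersaturation per step."""
            cond = np.maximum(state["qv"] - qsat(state["T"]), 0.0) * rate
            return (2.5e6 / 1004.0) * cond, cond   # heating (K), condensate increment

        rng = np.random.default_rng(3)
        drv = {"T": 285.0 + rng.normal(0, 1, 64), "qv": 0.009 + rng.normal(0, 5e-4, 64)}
        pig = {k: v.copy() for k, v in drv.items()}   # piggybacker starts identical

        for step in range(100):
            dT_d, dq_d = microphysics(drv, rate=1.0)  # driver scheme A
            dT_p, dq_p = microphysics(pig, rate=0.5)  # piggybacking scheme B
            ascent = -0.01   # shared "dynamics": cooling controlled by the driver only
            drv["T"] += ascent + dT_d; drv["qv"] -= dq_d
            pig["T"] += ascent + dT_p; pig["qv"] -= dq_p
        print(np.mean(drv["T"] - pig["T"]))   # microphysical impact, same flow realization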

  3. Handwriting: Feature Correlation Analysis for Biometric Hashes

    NASA Astrophysics Data System (ADS)

    Vielhauer, Claus; Steinmetz, Ralf

    2004-12-01

    In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published specific algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to which degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We will briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different size. The method presented will allow determination of effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.
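
    The three evaluation components named above lend themselves to a direct sketch: per-feature intrapersonal scatter, interpersonal entropy of the hash feature values across users, and the correlation between the two profiles. The data here are random stand-ins for real handwriting features.

        import numpy as np
        from scipy.stats import entropy, pearsonr

        rng = np.random.default_rng(4)
        # features[u, s, f]: hash feature f for user u, writing sample s (synthetic)
        features = rng.integers(0, 32, size=(20, 10, 8))

        intra = features.std(axis=1).mean(axis=0)   # intrapersonal scatter, per feature
        inter = np.array([entropy(np.bincount(features[:, :, f].ravel()))
                          for f in range(features.shape[2])])  # interpersonal entropy
        r, p = pearsonr(intra, inter)               # correlation between both measures
        print(intra.round(2), inter.round(2), round(r, 2))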

  4. A self-calibrated angularly continuous 2D GRAPPA kernel for propeller trajectories

    PubMed Central

    Skare, Stefan; Newbould, Rexford D; Nordell, Anders; Holdsworth, Samantha J; Bammer, Roland

    2008-01-01

    The k-space readout of propeller-type sequences may be accelerated by the use of parallel imaging (PI). For PROPELLER, the main benefits are reduced blurring due to T2 decay and SAR reduction, while for EPI-based propeller acquisitions such as Turbo-PROP and SAP-EPI, the faster k-space traversal alleviates geometric distortions. In this work, the feasibility of calculating a 2D GRAPPA kernel on only the undersampled propeller blades themselves is explored, using the matching orthogonal undersampled blade. It is shown that the GRAPPA kernel varies slowly across blades; an angularly continuous 2D GRAPPA kernel is therefore proposed, in which the angular variation of the weights is parameterized. This new angularly continuous kernel formulation greatly increases the numerical stability of the GRAPPA weight estimation, allowing the generation of fully sampled diagnostic quality images using only the undersampled propeller data. PMID:19025911
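
    In miniature, the "angularly continuous" idea is to stop estimating kernel weights blade-by-blade and instead fit a smooth function of blade angle across all blades at once. The sketch below does this for a single scalar weight with a truncated Fourier series and least squares; a real implementation fits full complex GRAPPA weight sets, which this placeholder does not attempt.

        import numpy as np

        rng = np.random.default_rng(5)
        angles = np.deg2rad(np.arange(0, 180, 15))   # one acquisition angle per blade
        # Synthetic per-blade weight estimates with noise (stand-ins for GRAPPA fits).
        w_blade = 0.8 + 0.1 * np.cos(angles) + 0.05 * np.sin(angles) \
                  + rng.normal(0.0, 0.01, angles.size)

        # Pool all blades and solve for smooth Fourier coefficients in one least squares.
        A = np.column_stack([np.ones_like(angles), np.cos(angles), np.sin(angles)])
        coef, *_ = np.linalg.lstsq(A, w_blade, rcond=None)

        def w_at(theta):
            """Angularly continuous kernel weight, evaluable at any blade angle."""
            return coef[0] + coef[1] * np.cos(theta) + coef[2] * np.sin(theta)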

  5. A field theoretic generalization of Hajicek and Kuchar's quantization scheme in 3+1 canonical quantum gravity

    NASA Astrophysics Data System (ADS)

    Melas, Evangelos

    2011-07-01

    The 3+1 (canonical) decomposition of all geometries admitting two-dimensional space-like surfaces is exhibited as a generalization of a previous work. A proposal, consisting of a specific re-normalization Assumption and an accompanying Requirement, which has been put forward in the 2+1 case, is now generalized to 3+1 dimensions. This enables the canonical quantization of these geometries through a generalization of Kuchař's quantization scheme in the case of infinite degrees of freedom. The resulting Wheeler-DeWitt equation is based on a re-normalized manifold parameterized by three smooth scalar functionals. The entire space of solutions to this equation is analytically given, a fact that is entirely new for the present case. This is made possible by exploiting the freedom left by the imposition of the Requirement and contained in the third functional.

  6. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems II. Extension to the thermal infrared: equations and methods

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Lomheim, Terrence S.; Florio, Christopher J.; Harbold, Jeffrey M.; Muto, B. Michael; Schoolar, Richard B.; Wintz, Daniel T.; Keller, Robert A.

    2011-10-01

    In a previous paper in this series, we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) tool may be used to model space and airborne imaging systems operating in the visible to near-infrared (VISNIR). PICASSO is a systems-level tool, representative of a class of such tools used throughout the remote sensing community. It is capable of modeling systems over a wide range of fidelity, anywhere from the conceptual design level (where it can serve as an integral part of the systems engineering process) to as-built hardware (where it can serve as part of the verification process). In the present paper, we extend the discussion of PICASSO to the modeling of Thermal Infrared (TIR) remote sensing systems, presenting the equations and methods necessary for modeling in that regime.

  7. Cosmic-Ray Background Flux Model based on a Gamma-Ray Large-Area Space Telescope Balloon Flight Engineering Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mizuno, T

    2004-09-03

    Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (~10 MeV to 100 GeV) and the abundant components (proton, alpha, e⁻, e⁺, μ⁻, μ⁺ and gamma). It is expressed in analytic functions in which modulations due to the solar activity and the Earth's geomagnetism are parameterized. Although the model is intended to be used primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high-energy astrophysical missions. The model has been validated via comparison with the data of the GLAST Balloon Experiment.

  9. Advancing X-ray scattering metrology using inverse genetic algorithms.

    PubMed

    Hannon, Adam F; Sunday, Daniel F; Windover, Donald; Kline, R Joseph

    2016-01-01

    We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real space structure in periodic gratings measured using critical dimension small angle X-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real space structure of our nanogratings. The study shows that for X-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.
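
    As a schematic of the optimization setup, the snippet below minimizes a mean-absolute-error-of-log-intensity objective with SciPy's differential evolution (one of the two genetic strategies compared above; the CMA-ES variant would swap in a different optimizer). The two-parameter "simulator" is a toy form factor, not an electron-density grating model.

        import numpy as np
        from scipy.optimize import differential_evolution

        q = np.linspace(0.01, 0.5, 200)

        def diffract(params, q):
            """Toy stand-in for the CD-SAXS diffraction simulator."""
            width, height = params
            return (np.sinc(q * width) * np.sinc(q * height))**2 + 1e-8

        measured = diffract([35.0, 80.0], q) \
                   * np.random.default_rng(6).lognormal(0.0, 0.05, q.size)

        def mae_log(params):   # goodness-of-fit: mean absolute error of log intensities
            return np.mean(np.abs(np.log(diffract(params, q)) - np.log(measured)))

        res = differential_evolution(mae_log, bounds=[(10, 60), (40, 120)], seed=0)
        print(res.x, res.fun)   # recovered (width, height) and final misfit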

  10. Sensitivity of Cirrus and Mixed-phase Clouds to the Ice Nuclei Spectra in McRAS-AC: Single Column Model Simulations

    NASA Technical Reports Server (NTRS)

    Betancourt, R. Morales; Lee, D.; Oreopoulos, L.; Sud, Y. C.; Barahona, D.; Nenes, A.

    2012-01-01

    The salient features of mixed-phase and ice clouds in a GCM cloud scheme are examined using the ice formation parameterizations of Liu and Penner (LP) and Barahona and Nenes (BN). The performance of the LP and BN ice nucleation parameterizations was assessed in the GEOS-5 AGCM using the McRAS-AC cloud microphysics framework in single-column mode. Four-dimensional assimilated data from the intensive observation period of the ARM TWP-ICE campaign were used to drive the fluxes and lateral forcing. Simulation experiments were established to test the impact of each parameterization on the resulting cloud fields. Three commonly used IN spectra were utilized in the BN parameterization to describe the availability of IN for heterogeneous ice nucleation. The results show large similarities in the cirrus cloud regime between all the schemes tested, in which ice crystal concentrations were within a factor of 10 regardless of the parameterization used. In mixed-phase clouds there are some persistent differences in cloud particle number concentration and size, as well as in cloud fraction, ice water mixing ratio, and ice water path. Contact freezing in the simulated mixed-phase clouds contributed to transferring liquid to ice efficiently, so that on average the clouds were fully glaciated at T ≈ 260 K, irrespective of the ice nucleation parameterization used. Comparison of simulated ice water path to available satellite-derived observations was also performed, finding that all the schemes tested with the BN parameterization predicted average values of IWP within ±15% of the observations.

  11. Impacts of subgrid-scale orography parameterization on simulated atmospheric fields over Korea using a high-resolution atmospheric forecast model

    NASA Astrophysics Data System (ADS)

    Lim, Kyo-Sun Sunny; Lim, Jong-Myoung; Shin, Hyeyum Hailey; Hong, Jinkyu; Ji, Young-Yong; Lee, Wanno

    2018-06-01

    A substantial over-prediction bias at low-to-moderate wind speeds in the Weather Research and Forecasting (WRF) model has been reported in previous studies. Low-level wind fields play an important role in the dispersion of air pollutants, including radionuclides, in a high-resolution WRF framework. By implementing two subgrid-scale orography parameterizations (Jimenez and Dudhia in J Appl Meteorol Climatol 51:300-316, 2012; Mass and Ovens in WRF model physics: problems, solutions and a new paradigm for progress. Preprints, 2010 WRF Users' Workshop, NCAR, Boulder, Colo. http://www.mmm.ucar.edu/wrf/users/workshops/WS2010/presentations/session%204/4-1_WRFworkshop2010Final.pdf, 2010), we compared the performance of the parameterizations and sought to enhance the forecast skill for low-level wind fields over the central western part of South Korea. Even though both subgrid-scale orography parameterizations significantly alleviated the positive bias in 10-m wind speed, the parameterization by Jimenez and Dudhia showed better forecast skill for wind speed under our modeling configuration. Implementation of the subgrid-scale orography parameterizations in the model did not affect the forecast skill for other meteorological fields, including 10-m wind direction. Our study also highlighted the discrepancy in the definition of the "10-m" wind between model physics parameterizations and observations, which can cause overestimated winds in model simulations. The overestimation was larger in stable conditions than in unstable conditions, indicating that the weak diurnal cycle in the model could be attributed to this representation error.

  12. Finescale parameterizations of energy dissipation in a region of strong internal tides and sheared flow, the Lucky-Strike segment of the Mid-Atlantic Ridge

    NASA Astrophysics Data System (ADS)

    Pasquet, Simon; Bouruet-Aubertot, Pascale; Reverdin, Gilles; Thurnherr, Andreas; St. Laurent, Louis

    2016-06-01

    The relevance of finescale parameterizations of the dissipation rate of turbulent kinetic energy is addressed using finescale and microstructure measurements collected in the Lucky Strike segment of the Mid-Atlantic Ridge (MAR). There, high-amplitude internal tides and a strongly sheared mean flow sustain a high level of dissipation rate and turbulent mixing. Two sets of parameterizations are considered: the first (Gregg, 1989; Kunze et al., 2006) were derived to estimate the dissipation rate of turbulent kinetic energy induced by internal wave breaking, while the second aims to estimate the dissipation induced by shear instability of a strongly sheared mean flow and is a function of the Richardson number (Kunze et al., 1990; Polzin, 1996). The latter parameterization has low skill in reproducing the observed dissipation rate when shear-unstable events are resolved, presumably because there is no scale separation between the duration of unstable events and the inverse growth rate of unstable billows. Instead, GM-based parameterizations were found to be relevant, although slight biases were observed. Part of these biases results from the small value of the upper vertical-wavenumber integration limit in the computation of shear variance in the Kunze et al. (2006) parameterization, which does not take into account internal wave signal at high vertical wavenumbers. We show that significant improvement is obtained when the upper integration limit is set using a signal-to-noise ratio criterion, and that the spatial structure of dissipation rates is reproduced with this parameterization.

  13. Field patterns: A new type of wave with infinitely degenerate band structure

    NASA Astrophysics Data System (ADS)

    Mattei, Ornella; Milton, Graeme W.

    2017-12-01

    Field pattern materials (FP-materials) are space-time composites with PT-symmetry in which the one-dimensional-spatial distribution of the constituents changes in time in such a special manner to give rise to a new type of waves, which we call field pattern waves (FP-waves) (MILTON G. W. and MATTEI O., Proc. R. Soc. A, 473 (2017) 20160819; MATTEI O. and MILTON G. W., New J. Phys., 19 (2017) 093022). Specifically, due to the special periodic space-time geometry of these materials, when an instantaneous disturbance propagates through the system, the branching of the characteristic lines at the space-time interfaces between phases does not lead to a chaotic cascade of disturbances but concentrates on an orderly pattern of disturbances: this is the field pattern. In this letter, by applying Bloch-Floquet theory, we show that the dispersion diagrams associated with these FP-materials are infinitely degenerate: associated with each point on the dispersion diagram is an infinite space of Bloch functions. Each generalized function is concentrated on a specific field pattern, each parameterized by a variable that we call the launch parameter. The dynamics separates into independent dynamics on the different field patterns, each with the same dispersion relation.

  14. The Grell-Freitas Convection Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    NASA Technical Reports Server (NTRS)

    Freitas, Saulo R.; Grell, Georg; Molod, Andrea; Thompson, Matthew A.

    2017-01-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, mid, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over the land. Here, we briefly introduce the recent developments, implementation, and preliminary results of this parameterization in the NASA GEOS modeling system.

  15. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved which satisfies both robust performance and decoupling performance. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  16. Assessment of the GECKO-A Modeling Tool and Simplified 3D Model Parameterizations for SOA Formation

    NASA Astrophysics Data System (ADS)

    Aumont, B.; Hodzic, A.; La, S.; Camredon, M.; Lannuque, V.; Lee-Taylor, J. M.; Madronich, S.

    2014-12-01

    Explicit chemical mechanisms aim to embody the current knowledge of the transformations occurring in the atmosphere during the oxidation of organic matter. These explicit mechanisms are therefore useful tools to explore the fate of organic matter during its tropospheric oxidation and to examine how these chemical processes shape the composition and properties of the gaseous and condensed phases. Furthermore, explicit mechanisms provide powerful benchmarks to design and assess simplified parameterizations to be included in 3D models. Nevertheless, the explicit mechanism describing the oxidation of hydrocarbons with backbones larger than a few carbon atoms involves millions of secondary organic compounds, far exceeding the size of chemical mechanisms that can be written manually. Data processing tools can, however, be designed to overcome these difficulties and automatically generate consistent and comprehensive chemical mechanisms on a systematic basis. The Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) has been developed for the automatic writing of explicit chemical schemes of organic species and their partitioning between the gas and condensed phases. GECKO-A can be viewed as an expert system that mimics the steps by which chemists might develop chemical schemes. GECKO-A generates chemical schemes according to a prescribed protocol assigning reaction pathways and kinetic data on the basis of experimental data and structure-activity relationships. In its current version, GECKO-A can generate the full atmospheric oxidation scheme for most linear, branched and cyclic precursors, including alkanes and alkenes up to C25. Assessments of the GECKO-A modeling tool based on chamber SOA observations will be presented. GECKO-A was recently used to design a parameterization for SOA formation based on a Volatility Basis Set (VBS) approach. First results will be presented.

  17. Effective degrees of freedom: a flawed metaphor

    PubMed Central

    Janson, Lucas; Fithian, William; Hastie, Trevor J.

    2015-01-01

    Summary To most applied statisticians, a fitting procedure’s degrees of freedom is synonymous with its model complexity, or its capacity for overfitting to data. In particular, it is often used to parameterize the bias-variance tradeoff in model selection. We argue that, on the contrary, model complexity and degrees of freedom may correspond very poorly. We exhibit and theoretically explore various fitting procedures for which degrees of freedom is not monotonic in the model complexity parameter, and can exceed the total dimension of the ambient space even in very simple settings. We show that the degrees of freedom for any non-convex projection method can be unbounded. PMID:26977114

  18. Homogeneous solutions of stationary Navier-Stokes equations with isolated singularities on the unit sphere. II. Classification of axisymmetric no-swirl solutions

    NASA Astrophysics Data System (ADS)

    Li, Li; Li, YanYan; Yan, Xukai

    2018-05-01

    We classify all (-1)-homogeneous axisymmetric no-swirl solutions of the incompressible stationary Navier-Stokes equations in three dimensions which are smooth on the unit sphere minus the south and north poles, parameterizing them as a four-dimensional surface with boundary in appropriate function spaces. Then we establish smoothness properties of the solution surface in the four parameters. The smoothness properties will be used in a subsequent paper where we study the existence of (-1)-homogeneous axisymmetric solutions with non-zero swirl on S² ∖ {S, N}, emanating from the four-dimensional solution surface.

  19. MISSE-7 MESA Miniaturized Electrostatic Analyzer - Ion Spectra Analysis Preliminary Results

    NASA Astrophysics Data System (ADS)

    Enloe, C. L.; Balthazor, R. L.; McHarg, M. G.; Clark, A. L.; Waite, D.; Wallerstein, A. J.; Wilson, K. A.

    2011-12-01

    The 7th Materials on the International Space Station Experiment (MISSE-7) was launched in November 2009 and retrieved on STS-134 in April 2011. One of the onboard experiments, the Miniaturized Electrostatic Analyzer (MESA), is a small, low-cost, low-size/weight/power ion and electron spectrometer that was pointed into the ram direction for the majority of its time onboard. Over 800 Mb of data were obtained by taking spectra every three minutes on orbit. The data have been analyzed with a novel "parameterizing the parameters" method suitable for on-orbit data analysis using low-cost microcontrollers. Preliminary results are shown.

  20. Production of Pions in pA-collisions

    NASA Technical Reports Server (NTRS)

    Moskalenko, I. V.; Mashnik, S. G.

    2003-01-01

    Accurate knowledge of the pion production cross section in pA-collisions is of interest for astrophysics, cosmic-ray (CR) physics, and space radiation studies. Meanwhile, pion production in pA-reactions is often accounted for by simple scaling of that for pp-collisions, which is not sufficient for many real applications. We evaluate the quality of existing parameterizations using the data and simulations with the Los Alamos version of the Quark-Gluon String Model code LAQGSM and the improved Cascade-Exciton Model code CEM2k. The LAQGSM and CEM2k models have been shown to reproduce nuclear reactions and hadronic data well in the range 0.01-800 GeV/nucleon.

  1. SeaWiFS Technical Report Series. Volume 42; Satellite Primary Productivity Data and Algorithm Development: A Science Plan for Mission to Planet Earth

    NASA Technical Reports Server (NTRS)

    Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor)

    1998-01-01

    Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE) are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group, of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.

  2. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  3. Black branes and black strings in the astrophysical and cosmological context

    NASA Astrophysics Data System (ADS)

    Akarsu, Özgür; Chopovsky, Alexey; Zhuk, Alexander

    2018-03-01

    We consider Kaluza-Klein models where internal spaces are compact flat or curved Einstein spaces. This background is perturbed by a compact gravitating body with a dust-like equation of state (EoS) in the external/our space and an arbitrary EoS parameter Ω in the internal space. Without imposing any restrictions on the form of the perturbed metric and the distribution of the perturbed energy densities, we perform a general analysis of the Einstein and conservation equations in the weak-field limit. All conclusions follow from this analysis. For example, we demonstrate that the perturbed model is static and that the perturbed metric preserves the block-diagonal form. In the particular case Ω = -1/2, the solution found corresponds to the weak-field limit of black strings/branes. Black strings/branes are compact gravitating objects which have the topology (four-dimensional Schwarzschild spacetime) × (d-dimensional internal space) with d ≥ 1. We present arguments in favour of these objects. First, they satisfy the gravitational tests for the parameterized post-Newtonian parameter γ at the same level of accuracy as General Relativity. Second, they are preferable from the thermodynamical point of view. Third, averaged over the Universe, they do not destroy the stabilization of the internal space. These are the astrophysical and cosmological aspects of black strings/branes.

  4. Effects of Land Surface Heterogeneity on Simulated Boundary-Layer Structures from the LES to the Mesoscale

    NASA Astrophysics Data System (ADS)

    Poll, Stefan; Shrestha, Prabhakar; Simmer, Clemens

    2017-04-01

    Land heterogeneity influences the atmospheric boundary layer (ABL) structure, including organized (secondary) circulations which feed back on land-atmosphere exchange fluxes. Especially the latter effects cannot be incorporated explicitly in regional and climate models due to their coarse computational grids, but must be parameterized. Current parameterizations lead, however, to uncertainties in modeled surface fluxes and boundary layer evolution, which feed back on cloud initiation and precipitation. This study analyzes the impact of different horizontal grid resolutions on the simulated boundary layer structures in terms of stability, height and induced secondary circulations. The ICON-LES (ICOsahedral Nonhydrostatic model in LES mode), developed by the MPI-M and the German weather service (DWD) within the framework of HD(CP)2, is used. ICON is dynamically downscaled through multiple scales of 20 km, 7 km, 2.8 km, 625 m, 312 m, and 156 m grid spacing for several days over Germany and parts of neighboring countries under different synoptic conditions. We examined the entropy spectrum of the land surface heterogeneity at these grid resolutions for several locations close to measurement sites, such as Lindenberg, Jülich, Cabauw and Melpitz, and studied its influence on the surface fluxes and the evolution of the boundary layer profiles.

  5. Constraining 3-PG with a new δ13C submodel: a test using the δ13C of tree rings.

    PubMed

    Wei, Liang; Marshall, John D; Link, Timothy E; Kavanagh, Kathleen L; DU, Enhao; Pangle, Robert E; Gag, Peter J; Ubierna, Nerea

    2014-01-01

    A semi-mechanistic forest growth model, 3-PG (Physiological Principles Predicting Growth), was extended to calculate δ13C in tree rings. The δ13C estimates were based on the model's existing description of carbon assimilation and canopy conductance. The model was tested in two ~80-year-old natural stands of Abies grandis (grand fir) in northern Idaho. We used as many independent measurements as possible to parameterize the model. Measured parameters included quantum yield, specific leaf area, soil water content and litterfall rate. Predictions were compared with measurements of transpiration by sap flux, stem biomass, tree diameter growth, leaf area index and δ13C. Sensitivity analysis showed that the model's predictions of δ13C were sensitive to key parameters controlling carbon assimilation and canopy conductance, which would have allowed it to fail had the model been parameterized or programmed incorrectly. Instead, the simulated δ13C of tree rings was no different from measurements (P > 0.05). The δ13C submodel provides a convenient means of constraining parameter space and avoiding model artefacts. This δ13C test may be applied to any forest growth model that includes realistic simulations of carbon assimilation and transpiration. © 2013 John Wiley & Sons Ltd.
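
    For context, δ13C submodels of this kind typically build on the standard Farquhar-type discrimination relation Δ = a + (b − a)·ci/ca, with ci/ca supplied by the model's coupled assimilation-conductance calculation. The sketch below shows only that generic relation, not the 3-PG implementation.

        def delta13c_ring(ci_over_ca, delta13c_air=-8.0, a=4.4, b=27.0):
            """Tree-ring delta13C (permil) from the standard C3 discrimination relation.

            a, b are the usual diffusion/carboxylation fractionations; ci/ca would come
            from the growth model's assimilation and canopy-conductance routines.
            """
            Delta = a + (b - a) * ci_over_ca
            return (delta13c_air - Delta) / (1.0 + Delta / 1000.0)

        print(delta13c_ring(0.7))   # about -27.7 permil for a typical C3 ci/ca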

  6. Gsflow-py: An integrated hydrologic model development tool

    NASA Astrophysics Data System (ADS)

    Gardner, M.; Niswonger, R. G.; Morton, C.; Henson, W.; Huntington, J. L.

    2017-12-01

    Integrated hydrologic modeling encompasses a vast number of processes and specifications, variable in time and space, and development of model datasets can be arduous. Model input construction techniques have not been formalized or made easily reproducible. Creating the input files for integrated hydrologic models (IHM) requires complex GIS processing of raster and vector datasets from various sources. Developing stream network topology that is consistent with the model resolution digital elevation model is important for robust simulation of surface water and groundwater exchanges. Distribution of meteorologic parameters over the model domain is difficult in complex terrain at the model resolution scale, but is necessary to drive realistic simulations. Historically, development of input data for IHM models has required extensive GIS and computer programming expertise which has restricted the use of IHMs to research groups with available financial, human, and technical resources. Here we present a series of Python scripts that provide a formalized technique for the parameterization and development of integrated hydrologic model inputs for GSFLOW. With some modifications, this process could be applied to any regular grid hydrologic model. This Python toolkit automates many of the necessary and laborious processes of parameterization, including stream network development and cascade routing, land coverages, and meteorological distribution over the model domain.
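
    One representative toolkit task is distributing station meteorology to the model grid. A minimal sketch of such a step, using inverse-distance weighting, is shown below; the function name and the handling of lapse rates, projections, and missing data are simplifications of what the actual scripts do.

        import numpy as np

        def idw_distribute(stn_xy, stn_vals, grid_x, grid_y, power=2.0):
            """Inverse-distance-weighted interpolation of station values onto a model grid."""
            gx, gy = np.meshgrid(grid_x, grid_y)
            d = np.hypot(gx[..., None] - stn_xy[:, 0], gy[..., None] - stn_xy[:, 1])
            w = 1.0 / np.maximum(d, 1e-6) ** power   # avoid division by zero at stations
            return (w * stn_vals).sum(axis=-1) / w.sum(axis=-1)

        stations = np.array([[2.0, 3.0], [8.0, 1.0], [5.0, 9.0]])   # station x, y (km)
        precip = np.array([12.0, 7.5, 20.1])                        # observed precip (mm)
        grid = idw_distribute(stations, precip, np.linspace(0, 10, 50), np.linspace(0, 10, 50))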

  7. The Role of Law-of-the-Wall and Roughness Scale in the Surface Stress Model for LES of the Rough-wall Boundary Layer

    NASA Astrophysics Data System (ADS)

    Brasseur, James; Paes, Paulo; Chamecki, Marcelo

    2017-11-01

    Large-eddy simulation (LES) of the high Reynolds number rough-wall boundary layer requires both a subfilter-scale model for the unresolved inertial term and a "surface stress model" (SSM) for the space-time local surface momentum flux. Standard SSMs assume proportionality between the local surface shear stress vector and the local resolved-scale velocity vector at the first grid level. Because the proportionality coefficient incorporates a surface roughness scale z0 within a functional form taken from the law-of-the-wall (LOTW), it is commonly stated that LOTW is "assumed," and therefore "forced" on the LES. We show that this is not the case; the LOTW form is the "drag law" used to relate friction velocity to mean resolved velocity at the first grid level, consistent with z0 as the height where the mean velocity vanishes. Whereas standard SSMs do not force LOTW on the prediction, we show that the parameterized roughness does not match the "true" z0 when LOTW is not predicted, or does not exist. By extrapolating mean velocity, we show a serious mismatch between the true z0 and the parameterized z0 in the presence of a spurious "overshoot" in the normalized mean velocity gradient. We shall discuss the source of the problem and its potential resolution.
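
    The drag law in question can be stated in a few lines: the local kinematic surface stress is τ_i = -C_d |U| u_i with C_d = [κ / ln(z1/z0)]², evaluated from the resolved velocity at the first grid level z1. The sketch below is the textbook form of this standard SSM, with illustrative numbers.

        import numpy as np

        KAPPA = 0.4   # von Karman constant

        def surface_stress(u1, v1, z1, z0):
            """Local kinematic surface stress from the LOTW-based drag law."""
            cd = (KAPPA / np.log(z1 / z0)) ** 2
            speed = np.hypot(u1, v1)
            return -cd * speed * u1, -cd * speed * v1

        tau_x, tau_y = surface_stress(u1=5.0, v1=1.0, z1=10.0, z0=0.1)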

  8. Optimal Synthesis of Compliant Mechanisms using Subdivision and Commercial FEA (DETC2004-57497)

    NASA Technical Reports Server (NTRS)

    Hull, Patrick V.; Canfield, Stephen

    2004-01-01

    The field of distributed-compliance mechanisms has seen significant work in developing suitable topology optimization tools for their design. These optimal design tools have grown out of the techniques of structural optimization. This paper will build on the previous work in topology optimization and compliant mechanism design by proposing an alternative design space parameterization through control points and adding another step to the process, that of subdivision. The control points allow a specific design to be represented as a solid model during the optimization process. The process of subdivision creates an additional number of control points that help smooth the surface (for example a C² continuous surface, depending on the method of subdivision chosen) creating a manufacturable design free of some traditional numerical instabilities. Note that these additional control points do not add to the number of design parameters. This alternative parameterization and description as a solid model effectively and completely separates the design variables from the analysis variables during the optimization procedure. The motivation behind this work is to create an automated design tool from task definition to functional prototype created on a CNC or rapid-prototype machine. This paper will describe the proposed compliant mechanism design process and will demonstrate the procedure on several examples common in the literature.

  9. PROBING THE EXPANSION HISTORY OF THE UNIVERSE BY MODEL-INDEPENDENT RECONSTRUCTION FROM SUPERNOVAE AND GAMMA-RAY BURST MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Chao-Jun; Li, Xin-Zhou, E-mail: fengcj@shnu.edu.cn, E-mail: kychz@shnu.edu.cn

    To probe the late evolution history of the universe, we adopt two kinds of optimal basis systems. One of them is constructed by performing principal component analysis, and the other is built by taking the multidimensional scaling approach. Cosmological observables such as the luminosity distance can be decomposed into these basis systems. These basis systems are optimized for different kinds of cosmological models that are based on different physical assumptions, even for a mixture model of them. Therefore, the so-called feature space that is projected from the basis systems is cosmological-model independent, and it provides a parameterization for studying and reconstructing the Hubble expansion rate from the supernova luminosity distance and even gamma-ray burst (GRB) data with self-calibration. The circular problem when using GRBs as cosmological candles is naturally eliminated in this procedure. By using the Levenberg-Marquardt technique and the Markov Chain Monte Carlo method, we perform an observational constraint on this kind of parameterization. The data we used include the "joint light-curve analysis" data set that consists of 740 Type Ia supernovae and 109 long GRBs with the well-known Amati relation.

  10. Raman lidar and sun photometer measurements of aerosols and water vapor during the ARM RCS experiment

    NASA Technical Reports Server (NTRS)

    Ferrare, R. A.; Whiteman, D. N.; Melfi, S. H.; Evans, K. D.; Holben, B. N.

    1995-01-01

    The first Atmospheric Radiation Measurement (ARM) Remote Cloud Study (RCS) Intensive Operations Period (IOP) was held during April 1994 at the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site near Lamont, Oklahoma. This experiment was conducted to evaluate and calibrate state-of-the-art, ground based remote sensing instruments and to use the data acquired by these instruments to validate retrieval algorithms developed under the ARM program. These activities are part of an overall plan to assess general circulation model (GCM) parameterization research. Since radiation processes are one of the key areas included in this parameterization research, measurements of water vapor and aerosols are required because of the important roles these atmospheric constituents play in radiative transfer. Two instruments were deployed during this IOP to measure water vapor and aerosols and study their relationship. The NASA/Goddard Space Flight Center (GSFC) Scanning Raman Lidar (SRL) acquired water vapor and aerosol profile data during 15 nights of operations. The lidar acquired vertical profiles as well as nearly horizontal profiles directed near an instrumented 60 meter tower. Aerosol optical thickness, phase function, size distribution, and integrated water vapor were derived from measurements with a multiband automatic sun and sky scanning radiometer deployed at this site.

  11. Impact of capturing rainfall scavenging intermittency using cloud superparameterization on simulated continental scale wildfire smoke transport

    NASA Astrophysics Data System (ADS)

    Pritchard, M. S.; Kooperman, G. J.; Zhao, Z.; Wang, M.; Russell, L. M.; Somerville, R. C.; Ghan, S. J.

    2011-12-01

    Evaluating the fidelity of new aerosol physics in climate models is confounded by uncertainties in source emissions, systematic error in cloud parameterizations, and inadequate sampling of long-range plume concentrations. To explore the degree to which cloud parameterizations distort aerosol processing and scavenging, the Pacific Northwest National Laboratory (PNNL) Aerosol-Enabled Multi-Scale Modeling Framework (AE-MMF), a superparameterized branch of the Community Atmosphere Model Version 5 (CAM5), is applied to represent the unusually active and well sampled North American wildfire season in 2004. In the AE-MMF approach, the evolution of double moment aerosols in the exterior global resolved scale is linked explicitly to convective statistics harvested from an interior cloud resolving scale. The model is configured in retroactive nudged mode to observationally constrain synoptic meteorology, and Arctic wildfire activity is prescribed at high space/time resolution using data from the Global Fire Emissions Database. Comparisons against standard CAM5 bracket the effect of superparameterization to isolate the role of capturing rainfall intermittency on the bulk characteristics of 2004 Arctic plume transport. Ground based lidar and in situ aircraft wildfire plume constraints from the International Consortium for Atmospheric Research on Transport and Transformation field campaign are used as a baseline for model evaluation.

  12. Root architecture simulation improves the inference from seedling root phenotyping towards mature root systems

    PubMed Central

    Zhao, Jiangsan; Rewald, Boris; Leitner, Daniel; Nagel, Kerstin A.; Nakhforoosh, Alireza

    2017-01-01

    Root phenotyping provides trait information for plant breeding. A shortcoming of high-throughput root phenotyping is its limitation to seedling plants and failure to make inferences on mature root systems. We suggest root system architecture (RSA) models to predict mature root traits and overcome the inference problem. Sixteen pea genotypes were phenotyped in (i) seedling (Petri dishes) and (ii) mature (sand-filled columns) root phenotyping platforms. The RSA model RootBox was parameterized with seedling traits to simulate the fully developed root systems. Measured and modelled root length, first-order lateral number, and root distribution were compared to determine key traits for model-based prediction. No direct relationship in root traits (tap and lateral length, interbranch distance) was evident between phenotyping systems. RootBox significantly improved the inference across phenotyping platforms. Seedling-plant tap and lateral root elongation rates and interbranch distance were sufficient model parameters to predict genotype ranking in total root length, with a Spearman rank correlation of 0.83. Parameterization including uneven lateral spacing via a scaling function substantially improved the prediction of architectures underlying the differently sized root systems. We conclude that RSA models can solve the inference problem of seedling root phenotyping. RSA models should be included in the phenotyping pipeline to provide reliable information on mature root systems to breeding research. PMID:28168270

  13. Modeling particle nucleation and growth over northern California during the 2010 CARES campaign

    NASA Astrophysics Data System (ADS)

    Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.

    2015-11-01

    Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the Weather Research and Forecasting regional model coupled with chemistry (WRF-Chem) using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4, while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated the total particle number concentration for particle diameters greater than 10 nm by up to a factor of 2.5. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapor parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based on only H2SO4. At the CARES urban ground site, peak nucleation rates are predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times as well as lower boundary-layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. We found that newly formed particles could explain up to 20-30 % of predicted cloud condensation nuclei at 0.5 % supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20 size bin simulation, while reducing the associated computational cost by ~ 36 %.

  14. Application of new parameterizations of gas transfer velocity and their impact on regional and global marine CO 2 budgets

    NASA Astrophysics Data System (ADS)

    Fangohr, Susanne; Woolf, David K.

    2007-06-01

    One of the dominant sources of uncertainty in the calculation of air-sea flux of carbon dioxide on a global scale originates from the various parameterizations of the gas transfer velocity, k, that are in use. Whilst it is undisputed that most of these parameterizations have shortcomings and neglect processes which influence air-sea gas exchange and do not scale with wind speed alone, there is no general agreement about their relative accuracy. The most widely used parameterizations are based on non-linear functions of wind speed and, to a lesser extent, on sea surface temperature and salinity. Processes such as surface film damping and whitecapping are known to have an effect on air-sea exchange. More recently published parameterizations use friction velocity, sea surface roughness, and significant wave height. These new parameters can account to some extent for processes such as film damping and whitecapping and could potentially explain the spread of wind-speed based transfer velocities published in the literature. We combine some of the principles of two recently published k parameterizations [Glover, D.M., Frew, N.M., McCue, S.J. and Bock, E.J., 2002. A multiyear time series of global gas transfer velocity from the TOPEX dual frequency, normalized radar backscatter algorithm. In: Donelan, M.A., Drennan, W.M., Saltzman, E.S., and Wanninkhof, R. (Eds.), Gas Transfer at Water Surfaces, Geophys. Monograph 127. AGU,Washington, DC, 325-331; Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. Tellus, 57B: 87-94] to calculate k as the sum of a linear function of total mean square slope of the sea surface and a wave breaking parameter. This separates contributions from direct and bubble-mediated gas transfer as suggested by Woolf [Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. Tellus, 57B: 87-94] and allows us to quantify contributions from these two processes independently. We then apply our parameterization to a monthly TOPEX altimeter gridded 1.5° × 1.5° data set and compare our results to transfer velocities calculated using the popular wind-based k parameterizations by Wanninkhof [Wanninkhof, R., 1992. Relationship between wind speed and gas exchange over the ocean. J. Geophys. Res., 97: 7373-7382.] and Wanninkhof and McGillis [Wanninkhof, R. and McGillis, W., 1999. A cubic relationship between air-sea CO2 exchange and wind speed. Geophys. Res. Lett., 26(13): 1889-1892]. We show that despite good agreement of the globally averaged transfer velocities, global and regional fluxes differ by up to 100%. These discrepancies are a result of different spatio-temporal distributions of the processes involved in the parameterizations of k, indicating the importance of wave field parameters and a need for further validation.
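
    The two-component structure advocated above separates a direct term, linear in the total mean square slope of the sea surface, from a bubble-mediated term scaling with wave breaking. A sketch of that decomposition follows; the coefficients and the whitecap proxy are placeholders, not the calibrated values of the cited parameterizations.

        def transfer_velocity(mss, whitecap_frac, sc=660.0, a=450.0, b=2450.0):
            """Gas transfer velocity k (cm/h) as k_direct + k_bubble.

            mss: total mean square slope (dimensionless); whitecap_frac: whitecap
            coverage fraction. Schmidt-number scaling is applied to the direct term
            only, since bubble-mediated transfer scales differently.
            """
            k_direct = a * mss * (sc / 660.0) ** -0.5
            k_bubble = b * whitecap_frac
            return k_direct + k_bubble

        print(transfer_velocity(mss=0.04, whitecap_frac=0.01))   # ~42.5 cm/h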

  15. Mixing parametrizations for ocean climate modelling

    NASA Astrophysics Data System (ADS)

    Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir

    2016-04-01

    An algorithm is presented for splitting the evolution equations for turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, an analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations, modelling the decadal climate variability of the Arctic and the Atlantic with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model) using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar in its physical formulation to contemporary differential turbulence models, while its algorithm has high computational efficiency. Parameterizations using the split turbulence model make it possible to obtain a more adequate structure of temperature and salinity at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step of the turbulence model leads to a better representation of ocean climate than the faster parameterization using the asymptotic behavior of the analytical solution, while the computational cost remains almost unchanged relative to the simple PP parameterization. The eddy-permitting circulation model is highly sensitive to the definition of mixing, which is associated with significant changes of the density fields in the upper baroclinic ocean layer over the whole considered area. For instance, using the turbulence parameterization instead of the PP algorithm increases the circulation velocity in the Gulf Stream and North Atlantic Current, and the subpolar cyclonic gyre in the North Atlantic and the Beaufort Gyre in the Arctic basin are reproduced more realistically. Treating the Prandtl number as a function of the Richardson number significantly increases the modelling quality. Use of the PP parametrization alone leads to realistic simulation of density and circulation but with violation of T,S-relationships; this error is largely avoided with the proposed parameterizations containing the split turbulence model. The research was supported by the Russian Foundation for Basic Research (grant № 16-05-00534) and the Council on the Russian Federation President Grants (grant № MK-3241.2015.5).

  16. Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations

    PubMed Central

    Nigh, Gordon

    2015-01-01

    Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and the western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for its good management, and this project was initiated to fill that need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and the modelling and application issues of three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations tested were indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criterion and the estimated variance. Model parameterization had more influence on the fit than model formulation, with the indicator variable method providing the best fit, followed by mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use, since the best-fitting methods have the most barriers to application in terms of data and software requirements. PMID:25853472
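
    For readers unfamiliar with the base functions, the sketch below fits a Chapman-Richards height-age curve, the first of the three formulations compared above, to synthetic data with SciPy; b1 is the asymptotic height, b2 the rate, and b3 the shape parameter. The data values are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      # Chapman-Richards height-age model: H(t) = b1 * (1 - exp(-b2*t))**b3

      def chapman_richards(age, b1, b2, b3):
          return b1 * (1.0 - np.exp(-b2 * age)) ** b3

      age = np.array([10, 20, 30, 50, 70, 100, 130], dtype=float)   # years
      height = np.array([2.1, 6.5, 11.0, 18.9, 24.2, 29.0, 31.5])   # m, synthetic

      popt, pcov = curve_fit(chapman_richards, age, height, p0=(35.0, 0.02, 1.5))
      print("b1, b2, b3 =", popt)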

  17. Constraints to Dark Energy Using PADE Parameterizations

    NASA Astrophysics Data System (ADS)

    Rezaei, M.; Malekjani, M.; Basilakos, S.; Mehrabi, A.; Mota, D. F.

    2017-07-01

    We put constraints on dark energy (DE) properties using the PADE parameterization, and compare it to the same constraints using the Chevallier-Polarski-Linder (CPL) and ΛCDM parameterizations, at both the background and the perturbation levels. The DE equation-of-state parameter of the models is derived following the mathematical treatment of the PADE expansion. Unlike the CPL parameterization, the PADE approximation provides different forms of the equation-of-state parameter that avoid divergence in the far future. Initially we perform a likelihood analysis in order to put constraints on the model parameters using solely background expansion data, and we find that all parameterizations are consistent with each other. Then, combining the expansion and the growth rate data, we test the viability of the PADE parameterizations and compare them with the CPL and ΛCDM models. Specifically, we find that the growth rate of the current PADE parameterizations is lower than that of the ΛCDM model at low redshifts, while the differences among the models are negligible at high redshifts. In this context, we provide for the first time a growth index of linear matter perturbations in PADE cosmologies. Considering that DE is homogeneous, we recover the well-known asymptotic value of the growth index, $\gamma_\infty = \frac{3(w_\infty - 1)}{6w_\infty - 5}$, while in the case of clustered DE we obtain $\gamma_\infty \simeq \frac{3w_\infty(3w_\infty - 5)}{(6w_\infty - 5)(3w_\infty - 1)}$. Finally, we generalize the growth index analysis to the case where γ is allowed to vary with redshift, and we find that the form of γ(z) in the PADE parameterization extends those of the CPL and ΛCDM cosmologies.
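
    The two asymptotic growth-index formulas quoted above are straightforward to evaluate; the sketch below simply transcribes them for illustration.

      # Asymptotic growth index for homogeneous and clustered dark energy,
      # as a function of the asymptotic equation-of-state parameter w_inf.

      def gamma_inf_homogeneous(w):
          return 3.0 * (w - 1.0) / (6.0 * w - 5.0)

      def gamma_inf_clustered(w):
          return 3.0 * w * (3.0 * w - 5.0) / ((6.0 * w - 5.0) * (3.0 * w - 1.0))

      w = -1.0  # the LCDM limit
      print(gamma_inf_homogeneous(w))  # 6/11 ~ 0.545, the familiar LCDM value
      print(gamma_inf_clustered(w))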

  18. Electronegativity equalization method: parameterization and validation for organic molecules using the Merz-Kollman-Singh charge distribution scheme.

    PubMed

    Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav

    2009-05-01

    The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which the EEM parameters A(i) and B(i) and the adjusting factor kappa are obtained, this approach can be used to calculate the average electronegativity and the charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used a methodology recently applied successfully to EEM parameterization for HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for the already parameterized elements C, H, N, O, and F, and we have also developed EEM parameters for S, Br, Cl, and Zn, which had not yet been parameterized for this level of theory and basis set. In the case of HF/6-31G* MK charges, we have developed EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, none of which had previously been parameterized for this level of theory and basis set. The obtained EEM parameters were verified by a previously developed validation procedure and used for charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges.
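
    The EEM working equations form a small linear system: electronegativity equalization requires chi_i = A_i + B_i*q_i + kappa * sum_{j != i} q_j / R_ij to take a common value chi_bar for all atoms, subject to total-charge conservation. The Python sketch below solves that system; the parameter values are hypothetical placeholders, not the MK-fitted parameters of this work.

      import numpy as np

      # Solve the EEM linear system for atomic charges q and the equalized
      # electronegativity chi_bar, given per-element parameters A, B, the
      # interatomic distance matrix R, and the adjusting factor kappa.

      def eem_charges(A, B, R, kappa, Q=0.0):
          n = len(A)
          M = np.zeros((n + 1, n + 1))
          rhs = np.zeros(n + 1)
          for i in range(n):
              M[i, i] = B[i]
              for j in range(n):
                  if j != i:
                      M[i, j] = kappa / R[i, j]
              M[i, n] = -1.0          # unknown chi_bar moved to the left side
              rhs[i] = -A[i]
          M[n, :n] = 1.0              # charge conservation: sum(q) = Q
          rhs[n] = Q
          sol = np.linalg.solve(M, rhs)
          return sol[:n], sol[n]

      # Toy diatomic with hypothetical parameters and a 1.1 Angstrom bond
      A = [8.5, 2.4]; B = [9.0, 13.0]
      R = np.array([[0.0, 1.1], [1.1, 0.0]])
      q, chi_bar = eem_charges(A, B, R, kappa=0.5)
      print(q, chi_bar)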

  19. Informing Carbon Dynamics in the Community Land Model with Observations from Across Timescales

    NASA Astrophysics Data System (ADS)

    Fox, A. M.; Hoar, T. J.

    2014-12-01

    Correct simulation of carbon dynamics in Earth System Models is required to accurately predict both short- and long-term land carbon-cycle climate and concentration feedbacks. As new model structures and parameterizations of increasing complexity are introduced, there is an ever-present need for data to inform these developments, either indirectly through benchmarking activities or directly through model-data fusion techniques. Here we briefly describe a very rich source of data that will come from the National Ecological Observatory Network (NEON), a continental-scale facility that will collect freely available biogeochemical and biophysical data from 60 sites representative of a full range of ecosystems across the USA over 30 years. Relevant data at each site include a full suite of micrometeorology measurements, profiles of CO2 and H2O vapor isotopes, soil temperature, moisture and CO2 flux, fine root images, and plot-based NPP, leaf area and litterfall estimates. This is accompanied by lidar- and hyperspectral-derived biomass, leaf area, and canopy chemistry at <1 m resolution over hundreds of km². Critically, these observations are well calibrated and highly standardized across sites, allowing comparisons, whilst plot and site selection has been designed to optimize representativeness and spatial scaling opportunities. To illustrate the potential utility of these data in constraining models, we show the range of Community Land Model (CLM) output at NEON site locations, and, in model space, look at a number of functional responses that characterize the model in space and time and could be tested with data. These observations can be used most directly through a data assimilation (DA) system, and we demonstrate how we have developed support for CLM within the Data Assimilation Research Testbed (DART), which uses ensemble techniques for state estimation. Using an observing system experiment, we investigate how infrequent observations of carbon stocks constrain model dynamics and how these observation types can be used with more frequently available flux and leaf area index observations. We demonstrate the use of the latter with real Ameriflux and MODIS data.
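
    As a flavor of the ensemble state estimation mentioned above, the sketch below implements a generic stochastic ensemble Kalman filter update for a single scalar observation. This textbook form is illustrative only; DART itself provides an ensemble adjustment variant.

      import numpy as np

      # Stochastic EnKF update: X is the state ensemble (n_state x n_ens),
      # y a scalar observation, H a linear observation operator (1 x n_state),
      # and r the observation error variance.

      def enkf_update(X, y, H, r, rng):
          n_state, n_ens = X.shape
          Xp = X - X.mean(axis=1, keepdims=True)     # ensemble perturbations
          S = H @ Xp                                 # obs-space perturbations
          P_yy = (S @ S.T) / (n_ens - 1) + r         # innovation variance
          P_xy = (Xp @ S.T) / (n_ens - 1)            # state-obs covariance
          K = P_xy / P_yy                            # Kalman gain (scalar obs)
          y_pert = y + rng.normal(0.0, np.sqrt(r), n_ens)  # perturbed obs
          return X + K @ (y_pert - H @ X)

      rng = np.random.default_rng(0)
      X = rng.normal(10.0, 2.0, size=(3, 40))   # toy 3-variable state, 40 members
      H = np.array([[1.0, 0.0, 0.0]])           # observe the first variable
      print(enkf_update(X, y=12.0, H=H, r=0.5, rng=rng).mean(axis=1))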

  20. Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM

    NASA Technical Reports Server (NTRS)

    Yao, Mao-Sung; Cheng, Ye

    2013-01-01

    The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report (AR5). Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or with only the local turbulence parameterization, these two quantities improve their vertical structures, but the vertical transport of water vapor is still weak in the planetary boundary layers (PBLs); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures in all latitudes due to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in computing nonlocal transports, turbulent length scale, and PBL height, and shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over oceans. This suggests that a cloud-producing scheme needs to be constructed in a framework that also takes the turbulence into consideration.

  1. Modelling heterogeneous ice nucleation on mineral dust and soot with parameterizations based on laboratory experiments

    NASA Astrophysics Data System (ADS)

    Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.

    2016-12-01

    Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ability of different aerosol species (e.g. desert dusts, soot, biological particles) to nucleate ice has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.
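
    The ice nucleation active site (INAS) density approach mentioned above translates directly into code: the number of nucleated crystals is the total aerosol surface area multiplied by n_s at the ambient temperature. In the sketch below the exponential form and its coefficients are illustrative placeholders, not the AIDA-fitted values for dust or soot.

      import numpy as np

      def inas_density(T_kelvin, a=0.8, b=7.0):
          """Illustrative immersion-freezing INAS density n_s (m^-2)."""
          return np.exp(a * (273.15 - T_kelvin) + b)

      def ice_crystal_number(n_aerosol, area_per_particle, T_kelvin):
          """Ice crystals per m^3, assuming one crystal per active site."""
          return n_aerosol * area_per_particle * inas_density(T_kelvin)

      # 1000 dust particles per cm^3 (1e9 m^-3), ~1 um^2 surface each, at -20 C
      print(ice_crystal_number(1e9, 1e-12, 253.15))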

  2. Evaluation of Aerosol-cloud Interaction in the GISS Model E Using ARM Observations

    NASA Technical Reports Server (NTRS)

    DeBoer, G.; Bauer, S. E.; Toto, T.; Menon, Surabi; Vogelmann, A. M.

    2013-01-01

    Observations from the US Department of Energy's Atmospheric Radiation Measurement (ARM) program are used to evaluate the ability of the NASA GISS ModelE global climate model to reproduce observed interactions between aerosols and clouds. Included in the evaluation are comparisons of basic meteorology and aerosol properties, droplet activation, effective radius parameterizations, and surface-based evaluations of aerosol-cloud interactions (ACI). Differences between the simulated and observed ACI are generally large, but these differences may result partially from the vertical distribution of aerosol in the model, rather than from the representation of the physical processes governing the interactions between aerosols and clouds. Compared to the current observations, ModelE often features elevated droplet concentrations for a given aerosol concentration, indicating that the activation parameterizations used may be too aggressive. Additionally, parameterizations for effective radius commonly used in models were tested using ARM observations, and there was no clear superior parameterization for the cases reviewed here. This lack of consensus is demonstrated to result in potentially large, statistically significant differences in surface radiative budgets, should one parameterization be chosen over another.

  3. Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2015-12-01

    Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this, turbulence and convection in the atmosphere have to be parameterized - i.e. equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently a variety of models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem, however, is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, that not only is in itself a multi-scale parameterization but is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.
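
    The EDMF decomposition sketched above has a simple one-line form for the subgrid vertical flux of a scalar phi: w'phi' = -K dphi/dz + M (phi_up - phi_mean). The snippet below evaluates it with illustrative numbers, showing how the mass-flux term can carry a counter-gradient flux that pure eddy diffusivity cannot.

      # EDMF subgrid flux of a scalar phi: local eddy diffusion plus nonlocal
      # plume (mass flux) transport. All values below are illustrative.

      def edmf_flux(K, dphi_dz, M, phi_up, phi_mean):
          ed_part = -K * dphi_dz                # downgradient diffusion
          mf_part = M * (phi_up - phi_mean)     # nonlocal plume transport
          return ed_part + mf_part

      # Weakly stable mean gradient, yet plumes still carry heat upward:
      # the net flux is positive (counter-gradient).
      print(edmf_flux(K=50.0, dphi_dz=1e-3, M=0.15, phi_up=301.0, phi_mean=300.5))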

  4. A Testbed for Model Development

    NASA Astrophysics Data System (ADS)

    Berry, J. A.; Van der Tol, C.; Kornfeld, A.

    2014-12-01

    Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use them to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to connect with current disciplinary research, which tends to focus on much more nuanced topics than can be included in the models. In our opinion and experience, this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar-induced chlorophyll fluorescence, OCS exchange, and stomatal conductance at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.

  5. Simulation of the Atmospheric Boundary Layer for Wind Energy Applications

    NASA Astrophysics Data System (ADS)

    Marjanovic, Nikola

    Energy production from wind is an increasingly important component of overall global power generation, and will likely continue to gain an even greater share of electricity production as world governments attempt to mitigate climate change and wind energy production costs decrease. Wind energy generation depends on wind speed, which is greatly influenced by local and synoptic environmental forcings. Synoptic forcing, such as a cold frontal passage, exists on a large spatial scale, while local forcing manifests itself on a much smaller scale and can result from topographic effects or land-surface heat fluxes. Synoptic forcing, if strong enough, may suppress the effects of the generally weaker local forcing. At the even smaller scale of a wind farm, upstream turbines generate wakes that decrease the wind speed and increase the atmospheric turbulence at the downwind turbines, thereby reducing power production and increasing fatigue loading that may damage turbine components. Simulation of atmospheric processes that span a considerable range of spatial and temporal scales is essential to improve wind energy forecasting, wind turbine siting, turbine maintenance scheduling, and wind turbine design. Mesoscale atmospheric models predict atmospheric conditions using observed data for a wide range of meteorological applications, across scales from thousands of kilometers to hundreds of meters. Mesoscale models include parameterizations for the major atmospheric physical processes that modulate wind speed and turbulence dynamics, such as cloud evolution and surface-atmosphere interactions. The Weather Research and Forecasting (WRF) model is used in this dissertation to investigate the effects of model parameters on wind energy forecasting. WRF is used for case study simulations at two West Coast North American wind farms, one with simple and one with complex terrain, during both synoptically and locally driven weather events. The model's performance with different grid nesting configurations, turbulence closures, and grid resolutions is evaluated by comparison to observational data. Improvement from the use of more computationally expensive high-resolution simulations is found only for the complex terrain simulation during the locally driven event. Physical parameters, such as soil moisture, have a large effect on locally forced events, and prognostic turbulence kinetic energy (TKE) schemes are found to perform better than non-local eddy viscosity turbulence closure schemes. Mesoscale models, however, do not resolve turbulence directly, which is important at finer grid resolutions capable of resolving wind turbine components and their interactions with atmospheric turbulence. Large-eddy simulation (LES) is a numerical approach that resolves the largest scales of turbulence directly by separating the large-scale, energetically important eddies from the smaller scales with the application of a spatial filter. LES allows higher-fidelity representation of the wind speed and turbulence intensity at the scale of a wind turbine, which parameterizations have difficulty representing. Use of high-resolution LES enables the implementation of more sophisticated wind turbine parameterizations to create a robust model for wind energy applications, using grid spacing small enough to resolve individual elements of a turbine such as its rotor blades or rotation area.
Generalized actuator disk (GAD) and line (GAL) parameterizations are integrated into WRF to complement its real-world weather modeling capabilities and better represent wind turbine airflow interactions, including wake effects. The GAD parameterization represents the wind turbine as a two-dimensional disk swept out by the rotation of the turbine blades. Forces on the atmosphere are computed along each blade and distributed over rotating, annular rings intersecting the disk. While typical LES resolution (10-20 m) is normally sufficient to resolve the GAD, the GAL parameterization requires significantly higher resolution (1-3 m), as it does not distribute the forces from the blades over annular elements but applies them along lines representing individual blades. In this dissertation, the GAL is implemented into WRF and evaluated against the GAD parameterization using data from two field campaigns that measured the inflow and near-wake regions of a single turbine. The data sets are chosen to allow validation under the weakly convective and weakly stable conditions characterizing most turbine operations. The parameterizations are evaluated with respect to their ability to represent wake wind speed, variance, and vorticity by comparing fine-resolution GAD and GAL simulations along with coarse-resolution GAD simulations. Coarse-resolution GAD simulations produce aggregated wake characteristics similar to both fine-resolution GAD and GAL simulations (saving on computational cost), while the GAL parameterization enables resolution of near-wake physics (such as vorticity shedding and wake expansion) for high-fidelity applications. (Abstract shortened by ProQuest.)

  6. MicroHH 1.0: a computational fluid dynamics code for direct numerical simulation and large-eddy simulation of atmospheric boundary layer flows

    NASA Astrophysics Data System (ADS)

    van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro

    2017-08-01

    This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is primarily made for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphical processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.

  7. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model uncertainty quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed, non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditional on the previous model state during the minimization, allowing complex error structures to be recovered. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and are shown to provide improved analyses and forecasts.
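
    The testbed named above, the two-scale Lorenz '96 system, couples K slow variables to J fast variables each. The sketch below evaluates its tendencies with standard parameter choices (F=10, h=1, b=10, c=10); the paper's exact configuration may differ.

      import numpy as np

      def l96_two_scale(X, Y, K=8, J=32, F=10.0, h=1.0, b=10.0, c=10.0):
          """Tendencies (dX/dt, dY/dt) of the two-scale Lorenz '96 model."""
          Ysum = Y.reshape(K, J).sum(axis=1)
          dX = ((np.roll(X, -1) - np.roll(X, 2)) * np.roll(X, 1)
                - X + F - (h * c / b) * Ysum)
          dY = (-c * b * (np.roll(Y, -2) - np.roll(Y, 1)) * np.roll(Y, -1)
                - c * Y + (h * c / b) * np.repeat(X, J))
          return dX, dY

      rng = np.random.default_rng(1)
      X, Y = rng.normal(0, 1, 8), rng.normal(0, 0.1, 8 * 32)
      dX, dY = l96_two_scale(X, Y)
      print(dX.shape, dY.shape)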

  8. A New Canopy Integration Factor

    NASA Astrophysics Data System (ADS)

    Badgley, G.; Anderegg, L. D. L.; Baker, I. T.; Berry, J. A.

    2017-12-01

    Ecosystem modelers have long debated how to best represent within-canopy heterogeneity. Can one big leaf represent the full range of canopy physiological responses? Or do you need two leaves - sun and shade - to get things right? Is it sufficient to treat the canopy as a diffuse medium? Or would it be better to explicitly represent separate canopy layers? These are open questions that have been the subject of an enormous amount of research and scrutiny. Yet regardless of how the canopy is represented, each model must grapple with correctly parameterizing its canopy in a way that properly translates leaf-level processes to the canopy and ecosystem scale. We present a new approach for integrating whole-canopy biochemistry by combining remote sensing with ecological theory. Using the Simple Biosphere model (SiB), we redefined how SiB scales photosynthetic processes from leaf to canopy as a function of satellite-derived measurements of solar-induced chlorophyll fluorescence (SIF). Across multiple long-term study sites, our approach improves the accuracy of daily modeled photosynthesis by as much as 25 percent. We share additional insights on how SIF might be more directly integrated into photosynthesis models, as well as ideas for harnessing SIF to more accurately parameterize canopy biochemical variables.

  9. Approaches for Subgrid Parameterization: Does Scaling Help?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-04-01

    Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, scaling laws have been slow to enter "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in the atmosphere and in the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of variables at subgrid scales in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies compactly by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as its mode-decomposition basis, but the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. The mode decomposition approach would also be the best framework for linking the traditional parameterizations with the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum; exploiting this knowledge in an operational parameterization, however, is a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation - a problem that is already hard enough. Looked at differently, a scaling law is a very concise way of characterizing many subgrid-scale variabilities in systems. We may even argue that a scaling law can provide almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called "closure" in the parameterization problem, and is known to be a tough problem. We should also realize that studies of scaling behavior tend to be statistical, in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes perfectly by a scaling law when only the first few leading modes are specified?
Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say, under an appropriate mode decomposition procedure. However, RNG is an analytical tool, and it is extremely hard to apply to real, complex geophysical systems. It appears that we still have a long way to go before we can exploit scaling laws to construct operational subgrid parameterizations in an effective manner.
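
    To make the mode-decomposition discussion concrete, the sketch below computes a POD/EOF basis from synthetic snapshots via the SVD and truncates to a low-dimensional representation; this is the generic construction, not any specific operational scheme.

      import numpy as np

      # POD/EOF by SVD: columns of U are spatial modes ordered by variance.
      rng = np.random.default_rng(0)
      n_space, n_snap = 200, 50
      x = np.linspace(0, 2 * np.pi, n_space)

      # Synthetic snapshots: two coherent structures plus noise
      data = (np.outer(np.sin(x), rng.normal(size=n_snap))
              + 0.3 * np.outer(np.sin(3 * x), rng.normal(size=n_snap))
              + 0.05 * rng.normal(size=(n_space, n_snap)))

      mean = data.mean(axis=1, keepdims=True)
      U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
      energy = s**2 / np.sum(s**2)
      print("variance in first two modes:", energy[:2].sum())

      r = 2                                       # truncation
      coeffs = U[:, :r].T @ (data - mean)         # modal amplitudes
      reconstruction = mean + U[:, :r] @ coeffs   # low-dimensional field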

  10. The Collaborative Seismic Earth Model: Generation 1

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner

    2018-05-01

    We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.

  11. Numerical modeling of space-time wave extremes using WAVEWATCH III

    NASA Astrophysics Data System (ADS)

    Barbariol, Francesco; Alves, Jose-Henrique G. M.; Benetazzo, Alvise; Bergamasco, Filippo; Bertotti, Luciana; Carniel, Sandro; Cavaleri, Luigi; Chao, Yung Y.; Chawla, Arun; Ricchi, Antonio; Sclavo, Mauro; Tolman, Hendrik

    2017-04-01

    A novel implementation of parameters estimating the space-time wave extremes within the spectral wave model WAVEWATCH III (WW3) is presented. The new output parameters, available in WW3 version 5.16, rely on the theoretical model of Fedele (J Phys Oceanogr 42(9):1601-1615, 2012), extended by Benetazzo et al. (J Phys Oceanogr 45(9):2261-2275, 2015) to estimate the maximum second-order nonlinear crest height over a given space-time region. In order to assess the wave height associated with the maximum crest height and the maximum wave height (generally different in a broad-band stormy sea state), the linear quasi-determinism theory of Boccotti (2000) is considered. The new WW3 implementation is tested by simulating sea states and space-time extremes over the Mediterranean Sea (forced by the wind fields produced by the COSMO-ME atmospheric model). Model simulations are compared to space-time wave maxima observed on March 10th, 2014, in the northern Adriatic Sea (Italy) by a stereo camera system installed on board the "Acqua Alta" oceanographic tower. Results show that modeled space-time extremes are in general agreement with the observations. Differences are mostly ascribed to the accuracy of the wind forcing and, to a lesser extent, to the approximations introduced in the space-time extremes parameterizations. Model estimates are expected to be even more accurate over areas larger than the mean wavelength (for instance, the model grid size).

  12. Simulation of Deep Convective Clouds with the Dynamic Reconstruction Turbulence Closure

    NASA Astrophysics Data System (ADS)

    Shi, X.; Chow, F. K.; Street, R. L.; Bryan, G. H.

    2017-12-01

    The terra incognita (TI), or gray zone, in simulations is a range of grid spacings comparable to the diameter of the most energetic eddies. Grid spacing in mesoscale simulations is much larger than these eddies, and turbulence is parameterized with one-dimensional vertical mixing. Large-eddy simulations (LES) have grid spacing much smaller than the energetic eddies and use three-dimensional models of turbulence. Studies of convective weather use convection-permitting resolutions, which lie in the TI. Neither mesoscale turbulence nor LES models are designed for the TI, so TI turbulence parameterization needs to be discussed. Here, the effects of sub-filter scale (SFS) closure schemes on the simulation of deep tropical convection are evaluated by comparing three closures: the Smagorinsky model, a Deardorff-type TKE model, and the dynamic reconstruction model (DRM), which partitions SFS turbulence into resolvable sub-filter scales (RSFS) and unresolved sub-grid scales (SGS). The RSFS are reconstructed, and the SGS are modeled with a dynamic eddy viscosity/diffusivity model. The RSFS stresses/fluxes allow backscatter of energy/variance via counter-gradient stresses/fluxes. In high-resolution (100 m) simulations of tropical convection, the use of these turbulence models did not lead to significant differences in cloud water/ice distribution, precipitation flux, or vertical fluxes of momentum and heat. When the model resolution is coarsened, the Smagorinsky and TKE models overestimate cloud ice and produce a large-amplitude downward heat flux in the middle troposphere (not found in the high-resolution simulations). This error is a result of unrealistically large eddy diffusivities: in the coarse-resolution simulations the eddy diffusivity of the DRM is of order 1, while that of the Smagorinsky and TKE models is of order 100. Splitting the eddy viscosity/diffusivity into vertical and horizontal components, using different length scales and strain-rate components, helps to reduce the errors but does not completely remedy the problem. In contrast, the coarse-resolution simulations using the DRM produce results that are more consistent with the high-resolution results, suggesting that the DRM is a more appropriate turbulence model for simulating convection in the TI.
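
    For reference, the first of the three closures compared above has a particularly compact form: the Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 |S|. The sketch below evaluates it for a 2D strain field; the constant Cs and the example shear are illustrative.

      import numpy as np

      def smagorinsky_viscosity(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
          """2D illustration; the 3D case sums over all strain components."""
          S11, S22 = dudx, dvdy
          S12 = 0.5 * (dudy + dvdx)
          strain_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
          return (cs * delta) ** 2 * strain_mag

      # Example: 100 m filter width and 1e-2 s^-1 shear -> nu_t in m^2/s
      print(smagorinsky_viscosity(0.0, 1e-2, 0.0, 0.0, delta=100.0))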

  13. Evaluation of cloud-resolving model simulations of midlatitude cirrus with ARM and A-train observations

    DOE PAGES

    Muhlbauer, A.; Ackerman, T. P.; Lawson, R. P.; ...

    2015-07-14

    Cirrus clouds are ubiquitous in the upper troposphere and still constitute one of the largest uncertainties in climate predictions. Our paper evaluates cloud-resolving model (CRM) and cloud system-resolving model (CSRM) simulations of a midlatitude cirrus case with comprehensive observations collected under the auspices of the Atmospheric Radiation Measurements (ARM) program and with spaceborne observations from the National Aeronautics and Space Administration A-train satellites. The CRM simulations are driven with periodic boundary conditions and ARM forcing data, whereas the CSRM simulations are driven by the ERA-Interim product. Vertical profiles of temperature, relative humidity, and wind speeds are reasonably well simulated by the CSRM and CRM, but there are remaining biases in the temperature, wind speeds, and relative humidity, which can be mitigated through nudging the model simulations toward the observed radiosonde profiles. Simulated vertical velocities are underestimated in all simulations except in the CRM simulations with grid spacings of 500 m or finer, which suggests that turbulent vertical air motions in cirrus clouds need to be parameterized in general circulation models and in CSRM simulations with horizontal grid spacings on the order of 1 km. The simulated ice water content and ice number concentrations agree with the observations in the CSRM but are underestimated in the CRM simulations. The underestimation of ice number concentrations is consistent with the overestimation of radar reflectivity in the CRM simulations and suggests that the model produces too many large ice particles, especially toward the cloud base. Simulated cloud profiles are rather insensitive to perturbations in the initial conditions or the dimensionality of the model domain, but the treatment of the forcing data has a considerable effect on the outcome of the model simulations. Despite considerable progress in observations and microphysical parameterizations, simulating the microphysical, macrophysical, and radiative properties of cirrus remains challenging. Comparing model simulations with observations from multiple instruments and observational platforms is important for revealing model deficiencies and for providing rigorous benchmarks. But there is still considerable need for reducing observational uncertainties and providing better observations, especially for relative humidity and for the size distribution and chemical composition of aerosols in the upper troposphere.

  14. Revisiting the use of hyperdiffusivities in numerical dynamo models

    NASA Astrophysics Data System (ADS)

    Fournier, A.; Aubert, J.

    2012-04-01

    The groundbreaking numerical dynamo models of Glatzmaier & Roberts (1995) and Kuang & Bloxham (1997) received some criticism due to their use of hyperdiffusivities, whereby small-scale processes artificially experience much stronger dissipation than large-scale processes. The stronger dissipation they chose was anisotropic, in that it was only effective in the horizontal direction, and was parameterized in spectral space using the following generic formula for any diffusive parameter $\nu$: $\nu(l) = \nu_0$ if $l \le l_0$, and $\nu(l) = \nu_0[1 + a(l - l_0)^n]$ if $l > l_0$, in which $l$ is the spherical harmonic degree, $\nu_0$ is a reference value, $l_0$ is the degree above which hyperdiffusivities start operating, and $a$ and $n$ are real numbers. Following the same choice as the studies mentioned above (which most notably had $l_0 = 0$), Grote & Busse (2000) showed in a fully nonlinear context that the usage of hyperdiffusivities could lead to substantially different dynamics and magnetic field generation mechanisms. Without questioning the physical relevance of this parameterization of subgrid-scale processes, we wish here to revisit the use of hyperdiffusivities (as defined mathematically above), on account of the observation that today's models are run with a truncation at much larger spherical harmonic degree than early models. Consequently, they do not require hyperdiffusivities to kick in at the largest scales ($l_0$ can be set to several tens). An exploration of those regions of parameter space less accessible to numerical models could therefore benefit from their use, provided they do not noticeably alter the largest scales of the dynamo (which are the ones expressing themselves in the record of the geomagnetic secular variation). We compare the statistics of a direct numerical simulation with the statistics of several hyperdiffusive simulations. In the prospect of exploring the parameter space and constructing statistics for their subsequent use in geomagnetic data assimilation practice, we conclude that a sensible use of hyperdiffusivities can lead to a much-wanted decrease in computational cost, while not altering the nature of the solution.
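
    The profile defined above is trivial to transcribe; the sketch below does exactly that, which also makes the role of each constant explicit.

      import numpy as np

      # nu(l) = nu0 for l <= l0, and nu0 * (1 + a*(l - l0)**n) for l > l0.
      def hyperdiffusivity(l, nu0, l0, a, n):
          ramp = np.maximum(np.asarray(l, dtype=float) - l0, 0.0)
          return nu0 * (1.0 + a * ramp ** n)

      # Example: diffusivity untouched up to degree 40, then ramping steeply
      degrees = np.arange(0, 100)
      nu = hyperdiffusivity(degrees, nu0=1.0, l0=40, a=0.075, n=3)
      print(nu[40], nu[60], nu[99])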

  15. Data error and highly parameterized groundwater models

    USGS Publications Warehouse

    Hill, M.C.

    2008-01-01

    Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.

  16. A scheme for parameterizing ice cloud water content in general circulation models

    NASA Technical Reports Server (NTRS)

    Heymsfield, Andrew J.; Donner, Leo J.

    1989-01-01

    A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.

  17. Illustration of microphysical processes in Amazonian deep convective clouds in the gamma phase space: introduction and potential applications

    NASA Astrophysics Data System (ADS)

    Cecchini, Micael A.; Machado, Luiz A. T.; Wendisch, Manfred; Costa, Anja; Krämer, Martina; Andreae, Meinrat O.; Afchine, Armin; Albrecht, Rachel I.; Artaxo, Paulo; Borrmann, Stephan; Fütterer, Daniel; Klimach, Thomas; Mahnke, Christoph; Martin, Scot T.; Minikin, Andreas; Molleker, Sergej; Pardo, Lianet H.; Pöhlker, Christopher; Pöhlker, Mira L.; Pöschl, Ulrich; Rosenfeld, Daniel; Weinzierl, Bernadett

    2017-12-01

    The behavior of tropical clouds remains a major open scientific question, resulting in poor representation by models. One challenge is to realistically reproduce cloud droplet size distributions (DSDs) and their evolution over time and space. Many applications, not limited to models, use the gamma function to represent DSDs. However, even though the statistical characteristics of the gamma parameters have been widely studied, there is almost no study dedicated to understanding the phase space of this function and the associated physics. This phase space can be defined by the three parameters that define the DSD intercept, shape, and curvature. Gamma phase space may provide a common framework for parameterizations and intercomparisons. Here, we introduce the phase space approach and its characteristics, focusing on warm-phase microphysical cloud properties and the transition to the mixed-phase layer. We show that trajectories in this phase space can represent DSD evolution and can be related to growth processes. Condensational and collisional growth may be interpreted as pseudo-forces that induce displacements in opposite directions within the phase space. The actually observed movements in the phase space are a result of the combination of such pseudo-forces. Additionally, aerosol effects can be evaluated given their significant impact on DSDs. The DSDs associated with liquid droplets that favor cloud glaciation can be delimited in the phase space, which can help models to adequately predict the transition to the mixed phase. We also consider possible ways to constrain the DSD in two-moment bulk microphysics schemes, in which the relative dispersion parameter of the DSD can play a significant role. Overall, the gamma phase space approach can be an invaluable tool for studying cloud microphysical evolution and can be readily applied in many scenarios that rely on gamma DSDs.
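
    Because so many applications rely on the gamma form of the DSD, its moments are worth having at hand: for N(D) = N0 * D**mu * exp(-lam*D), the k-th moment is M_k = N0 * Gamma(mu+k+1) / lam**(mu+k+1). The sketch below evaluates a few bulk quantities from it, with illustrative parameter values.

      import numpy as np
      from scipy.special import gamma as gamma_fn

      def gamma_dsd_moment(k, N0, mu, lam):
          """k-th moment of the gamma DSD (closed form)."""
          return N0 * gamma_fn(mu + k + 1.0) / lam ** (mu + k + 1.0)

      N0, mu, lam = 8e6, 2.0, 4e3             # illustrative values, D in metres
      M0 = gamma_dsd_moment(0, N0, mu, lam)   # total number concentration
      M3 = gamma_dsd_moment(3, N0, mu, lam)   # proportional to liquid water
      D_mean = gamma_dsd_moment(1, N0, mu, lam) / M0
      print(M0, M3, D_mean)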

  18. Characterizing a proton beam scanning system for Monte Carlo dose calculation in patients

    NASA Astrophysics Data System (ADS)

    Grassberger, C.; Lomax, Anthony; Paganetti, H.

    2015-01-01

    The presented work has two goals. First, to demonstrate the feasibility of accurately characterizing a proton radiation field at treatment head exit for Monte Carlo dose calculation of active scanning patient treatments. Second, to show that this characterization can be done based on measured depth dose curves and spot size alone, without consideration of the exact treatment head delivery system. This is demonstrated through calibration of a Monte Carlo code to the specific beam lines of two institutions, Massachusetts General Hospital (MGH) and Paul Scherrer Institute (PSI). Comparison of simulations modeling the full treatment head at MGH to ones employing a parameterized phase space of protons at treatment head exit reveals the adequacy of the method for patient simulations. The secondary particle production in the treatment head is typically below 0.2% of primary fluence, except for low-energy electrons (<0.6 MeV for 230 MeV protons), whose contribution to skin dose is negligible. However, there is significant difference between the two methods in the low-dose penumbra, making full treatment head simulations necessary to study out-of-field effects such as secondary cancer induction. To calibrate the Monte Carlo code to measurements in a water phantom, we use an analytical Bragg peak model to extract the range-dependent energy spread at the two institutions, as this quantity is usually not available through measurements. Comparison of the measured with the simulated depth dose curves demonstrates agreement within 0.5 mm over the entire energy range. Subsequently, we simulate three patient treatments with varying anatomical complexity (liver, head and neck, and lung) to give an example of how this approach can be employed to investigate site-specific discrepancies between treatment planning system and Monte Carlo simulations.

  20. Improving and Understanding Climate Models: Scale-Aware Parameterization of Cloud Water Inhomogeneity and Sensitivity of MJO Simulation to Physical Parameters in a Convection Scheme

    NASA Astrophysics Data System (ADS)

    Xie, Xin

    Microphysics and convection parameterizations are two key components of a climate model for simulating realistic climatology and variability of cloud distribution and the cycles of energy and water. When a model has varying grid size, or simulations have to be run at different resolutions, scale-aware parameterization is desirable so that model parameters do not have to be tuned to a particular grid size. The subgrid variability of cloud hydrometeors is known to impact microphysics processes in climate models and is found to depend strongly on spatial scale. A scale-aware liquid cloud subgrid variability parameterization is derived and implemented in the Community Earth System Model (CESM) in this study using long-term radar-based ground measurements from the Atmospheric Radiation Measurement (ARM) program. When used in the default CESM1 with the finite-volume dynamical core, where a constant liquid inhomogeneity parameter was assumed, the newly developed parameterization reduces the cloud inhomogeneity in high latitudes and increases it in low latitudes. This is due both to the smaller grid size at high latitudes and larger grid size at low latitudes in the longitude-latitude grid of CESM, and to variations in the stability of the atmosphere. Single column model and general circulation model (GCM) sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it in low latitudes. Current CESM1 simulations suffer from both the Pacific double-ITCZ precipitation bias and a weak Madden-Julian oscillation (MJO). Previous studies show that convective parameterization with multiple plumes may have the capability to alleviate such biases in a more uniform and physical way. A multiple-plume mass flux convective parameterization is used in the Community Atmosphere Model (CAM) to investigate the sensitivity of MJO simulations. We show that the MJO simulation is sensitive to the entrainment rate specification, and we find that shallow plumes can generate and sustain the MJO propagation in the model.

  1. Parameterization of plume chemistry into large-scale atmospheric models: Application to aircraft NOx emissions

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.

    2009-10-01

    A method is presented to parameterize, in large-scale models, the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on a representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important; it operates via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact of aircraft NOx emissions on atmospheric ozone are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the North Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization of transporting emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity, and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during plume dissipation. Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.
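
    One way to picture the fuel-tracer mechanism described above is as a two-box exchange: NOx held in plume form undergoes an effective first-order conversion representing the nonlinear in-plume chemistry, while a characteristic lifetime controls its release to the grid scale. The sketch below is a schematic of that idea only; the rate constants are placeholders, and the actual scheme also carries conversion products and an effective O3 reaction rate.

      import numpy as np

      def plume_step(nox_plume, nox_grid, dt, tau=3600.0, k_conv=1e-4):
          """One step: in-plume conversion, then dilution to the grid scale."""
          nox_plume *= np.exp(-k_conv * dt)                # effective in-plume loss
          diluted = nox_plume * (1.0 - np.exp(-dt / tau))  # fraction mixed out
          return nox_plume - diluted, nox_grid + diluted

      nox_plume, nox_grid = 1.0, 0.0     # arbitrary mixing-ratio units
      for _ in range(6):                 # one hour in 10-minute steps
          nox_plume, nox_grid = plume_step(nox_plume, nox_grid, dt=600.0)
      print(nox_plume, nox_grid)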

  2. An Evaluation of Lightning Flash Rate Parameterizations Based on Observations of Colorado Storms during DC3

    NASA Astrophysics Data System (ADS)

    Basarab, B.; Fuchs, B.; Rutledge, S. A.

    2013-12-01

    Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx, since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates flash rates predicted by existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can differ thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler-derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables from the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare them to observed flash rates. For the 6 June storm, a preliminary analysis of aircraft observations of storm inflow and outflow is presented in order to place flash rates (and other lightning statistics) in the context of storm chemistry. An approach to a possibly improved LNOx parameterization scheme using different lightning metrics, such as flash area, will be discussed.
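
    As an example of the empirical updraft-based approach mentioned above, the widely used Price and Rind (1992) continental relation predicts the total flash rate from the peak updraft speed as F = 5e-6 * w_max**4.54 flashes per minute; the scheme actually tested in the study may differ in form or coefficients.

      # Empirical flash rate from maximum vertical velocity (Price & Rind 1992
      # continental form); coefficients follow the commonly cited values.

      def flash_rate_from_wmax(w_max):
          """Total lightning flash rate (flashes/min) from peak updraft (m/s)."""
          return 5.0e-6 * w_max ** 4.54

      for w in (10.0, 20.0, 30.0, 40.0):
          print(f"w_max = {w:4.1f} m/s -> {flash_rate_from_wmax(w):7.2f} fl/min")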

  3. A new fractional snow-covered area parameterization for the Community Land Model and its effect on the surface energy balance

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Lawrence, D. M.

    2011-11-01

    One function of the Community Land Model (CLM4) is the determination of surface albedo in the Community Earth System Model (CESM1). Because the typical spatial scales of CESM1 simulations are large compared to the scales of variability of surface properties such as snow cover and vegetation, unresolved surface heterogeneity is parameterized. Fractional snow-covered area, or snow-covered fraction (SCF), within a CLM4 grid cell is parameterized as a function of grid cell mean snow depth and snow density. This parameterization is based on an analysis of monthly averaged SCF and snow depth that showed a seasonal shift in the snow depth-SCF relationship. In this paper, we show that this shift is an artifact of the monthly sampling and that the current parameterization does not reflect the relationship observed between snow depth and SCF at the daily time scale. We demonstrate that the snow depth analysis used in the original study exhibits a bias toward early melt when compared to satellite-observed SCF. This bias results in a tendency to overestimate SCF as a function of snow depth. Using a more consistent, higher spatial and temporal resolution snow depth analysis reveals a clear hysteresis between snow accumulation and melt seasons. Here, a new SCF parameterization based on snow water equivalent is developed to capture the observed seasonal snow depth-SCF evolution. The effects of the new SCF parameterization on the surface energy budget are described. In CLM4, surface energy fluxes are calculated assuming a uniform snow cover. To more realistically simulate environments having patchy snow cover, we modify the model by computing the surface fluxes separately for snow-free and snow-covered fractions of a grid cell. In this configuration, the form of the parameterized snow depth-SCF relationship is shown to greatly affect the surface energy budget. The direct exposure of the snow-free surfaces to the atmosphere leads to greater heat loss from the ground during autumn and greater heat gain during spring. The net effect is to reduce annual mean soil temperatures by up to 3°C in snow-affected regions.
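
    For context, the depth- and density-based SCF that this paper revises takes a tanh form (following Niu and Yang, 2007), as in the sketch below. The constants are illustrative defaults rather than the exact CLM4 values, and the revised scheme additionally uses separate accumulation and melt curves to capture the hysteresis described above.

      import numpy as np

      def snow_covered_fraction(snow_depth, snow_density,
                                z0=0.01, rho_new=100.0, m=1.6):
          """SCF in [0, 1] from grid-mean snow depth (m) and density (kg/m^3)."""
          scale = 2.5 * z0 * (snow_density / rho_new) ** m
          return np.tanh(snow_depth / scale)

      for depth in (0.01, 0.05, 0.2):   # metres
          print(depth, snow_covered_fraction(depth, snow_density=250.0))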

  4. A new fractional snow-covered area parameterization for the Community Land Model and its effect on the surface energy balance

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Lawrence, D. M.

    2012-11-01

    One function of the Community Land Model (CLM4) is the determination of surface albedo in the Community Earth System Model (CESM1). Because the typical spatial scales of CESM1 simulations are large compared to the scales of variability of surface properties such as snow cover and vegetation, unresolved surface heterogeneity is parameterized. Fractional snow-covered area, or snow-covered fraction (SCF), within a CLM4 grid cell is parameterized as a function of grid cell mean snow depth and snow density. This parameterization is based on an analysis of monthly averaged SCF and snow depth that showed a seasonal shift in the snow depth-SCF relationship. In this paper, we show that this shift is an artifact of the monthly sampling and that the current parameterization does not reflect the relationship observed between snow depth and SCF at the daily time scale. We demonstrate that the snow depth analysis used in the original study exhibits a bias toward early melt when compared to satellite-observed SCF. This bias results in a tendency to overestimate SCF as a function of snow depth. Using a more consistent, higher spatial and temporal resolution snow depth analysis reveals a clear hysteresis between snow accumulation and melt seasons. Here, a new SCF parameterization based on snow water equivalent is developed to capture the observed seasonal snow depth-SCF evolution. The effects of the new SCF parameterization on the surface energy budget are described. In CLM4, surface energy fluxes are calculated assuming a uniform snow cover. To more realistically simulate environments having patchy snow cover, we modify the model by computing the surface fluxes separately for snow-free and snow-covered fractions of a grid cell. In this configuration, the form of the parameterized snow depth-SCF relationship is shown to greatly affect the surface energy budget. The direct exposure of the snow-free surfaces to the atmosphere leads to greater heat loss from the ground during autumn and greater heat gain during spring. The net effect is to reduce annual mean soil temperatures by up to 3°C in snow-affected regions.

  5. Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach

    NASA Astrophysics Data System (ADS)

    Berloff, Pavel

    2018-07-01

    This work continues development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models and focuses on the classical double-gyre problem, in which the main dynamic eddy effects maintain the eastward jet extension of the western boundary currents and its adjacent recirculation zones via the eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood; in this paper we first study it and then propose and test a novel parameterization of it. We start by decomposing the reference eddy-resolving flow solution into large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies, and in the transient rectified eddy component, which consists of highly anisotropic ribbons of opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to formulate the key eddy parameterization hypothesis: in an eddy-permitting model, the at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution. Such amplification is a simple and novel eddy parameterization framework, implemented here in terms of local, deterministic flow roughening controlled by a single parameter. We test the parameterization skill in a hierarchy of non-eddy-resolving and eddy-permitting modifications of the original model and demonstrate that it can indeed be highly effective in restoring the eastward jet extension and its adjacent recirculation zones. The new deterministic parameterization framework not only combines remarkable simplicity with good performance but is also dynamically transparent; it therefore provides a powerful alternative to the common eddy diffusion and emerging stochastic parameterizations.
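    A minimal sketch of the two ingredients as we read them from the abstract: a large-scale/eddy decomposition by spatial filtering (not Reynolds averaging), and a local, deterministic roughening that amplifies the resolved eddy component by a single parameter. The Gaussian filter width and amplification factor are illustrative assumptions, not values from the paper.

    ```python
    # Sketch of spatial-filter decomposition plus single-parameter roughening.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def decompose(q, filter_sigma=4.0):
        """Split a 2D field into large-scale and eddy parts by spatial filtering."""
        large = gaussian_filter(q, sigma=filter_sigma)
        return large, q - large

    def roughen(q, amplification=1.5, filter_sigma=4.0):
        """Amplify the resolved eddy component to strengthen backscatter."""
        large, eddy = decompose(q, filter_sigma)
        return large + amplification * eddy

    q = np.random.randn(128, 128)   # stand-in for a potential vorticity field
    q_forced = roughen(q)           # same large-scale flow, amplified eddies
    ```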

  6. Effects of pre-existing ice crystals on cirrus clouds and comparison between different ice nucleation parameterizations with the Community Atmosphere Model (CAM5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai

    In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmosphere Model version 5.3 (CAM5.3), the effects of pre-existing ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of the cirrus cloud rather than in the whole area of the cirrus cloud. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The pre-existing ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL) are implemented in CAM5.3 for the comparison. In-cloud ice crystal number concentration, percentage contribution from heterogeneous ice nucleation to total ice crystal number, and pre-existing ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial times) in global annual mean column ice number concentration from the KL parameterization (3.24 × 10^6 m^-2) is less than that from the LP (8.46 × 10^6 m^-2) and BN (5.62 × 10^6 m^-2) parameterizations. As a result, the experiment using the KL parameterization predicts a much smaller anthropogenic aerosol long-wave indirect forcing (0.24 W m^-2) than that using the LP (0.46 W m^-2) and BN (0.39 W m^-2) parameterizations.

  7. Effects of pre-existing ice crystals on cirrus clouds and comparison between different ice nucleation parameterizations with the Community Atmosphere Model (CAM5)

    DOE PAGES

    Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai

    2015-02-11

    In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmosphere Model version 5.3 (CAM5.3), the effects of pre-existing ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of the cirrus cloud rather than in the whole area of the cirrus cloud. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The pre-existing ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL) are implemented in CAM5.3 for the comparison. In-cloud ice crystal number concentration, percentage contribution from heterogeneous ice nucleation to total ice crystal number, and pre-existing ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial times) in global annual mean column ice number concentration from the KL parameterization (3.24 × 10^6 m^-2) is less than that from the LP (8.46 × 10^6 m^-2) and BN (5.62 × 10^6 m^-2) parameterizations. As a result, the experiment using the KL parameterization predicts a much smaller anthropogenic aerosol long-wave indirect forcing (0.24 W m^-2) than that using the LP (0.46 W m^-2) and BN (0.39 W m^-2) parameterizations.

  8. Using Instrument Simulators and a Satellite Database to Evaluate Microphysical Assumptions in High-Resolution Simulations of Hurricane Rita

    NASA Astrophysics Data System (ADS)

    Hristova-Veleva, S. M.; Chao, Y.; Chau, A. H.; Haddad, Z. S.; Knosp, B.; Lambrigtsen, B.; Li, P.; Martin, J. M.; Poulsen, W. L.; Rodriguez, E.; Stiles, B. W.; Turk, J.; Vu, Q.

    2009-12-01

    Improving forecasting of hurricane intensity remains a significant challenge for the research and operational communities. Many factors determine a tropical cyclone’s intensity. Ultimately, though, intensity is dependent on the magnitude and distribution of the latent heating that accompanies the hydrometeor production during the convective process. Hence, the microphysical processes and their representation in hurricane models are of crucial importance for accurately simulating hurricane intensity and evolution. The accurate modeling of the microphysical processes becomes increasingly important when running high-resolution models that should properly reflect the convective processes in the hurricane eyewall. There are many microphysical parameterizations available today. However, evaluating their performance and selecting the most representative ones remains a challenge. Several field campaigns were focused on collecting in situ microphysical observations to help distinguish between different modeling approaches and improve on the most promising ones. However, these point measurements cannot adequately reflect the space and time correlations characteristic of the convective processes. An alternative approach to evaluating microphysical assumptions is to use multi-parameter remote sensing observations of the 3D storm structure and evolution. In doing so, we could compare modeled to retrieved geophysical parameters. The satellite retrievals, however, carry their own uncertainty. To increase the fidelity of the microphysical evaluation results, we can use instrument simulators to produce satellite observables from the model fields and compare to the observed. This presentation will illustrate how instrument simulators can be used to discriminate between different microphysical assumptions. We will compare and contrast the members of high-resolution ensemble WRF model simulations of Hurricane Rita (2005), each member reflecting different microphysical assumptions. We will use the geophysical model fields as input to instrument simulators to produce microwave brightness temperatures and radar reflectivity at the TRMM (TMI and PR) frequencies and polarizations. We will also simulate the surface backscattering cross-section at the QuikSCAT frequency, polarizations and viewing geometry. We will use satellite observations from TRMM and QuikSCAT to determine those parameterizations that yield a realistic forecast and those parameterizations that do not. To facilitate hurricane research, we have developed the JPL Tropical Cyclone Information System (TCIS), which includes a comprehensive set of multi-sensor observations relevant to large-scale and storm-scale processes in the atmosphere and the ocean. In this presentation, we will illustrate how the TCIS can be used for hurricane research. The work described here was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

  9. Testing the Hole-in-the-Pipe Model of nitric and nitrous oxide emissions from soils using the TRAGNET Database

    NASA Astrophysics Data System (ADS)

    Davidson, Eric A.; Verchot, Louis V.

    2000-12-01

    Because several soil properties and processes affect emissions of nitric oxide (NO) and nitrous oxide (N2O) from soils, it has been difficult to develop effective and robust algorithms to predict emissions of these gases in biogeochemical models. The conceptual "hole-in-the-pipe" (HIP) model has been used effectively to interpret results of numerous studies, but the ranges of climatic conditions and soil properties are often relatively narrow for each individual study. The Trace Gas Network (TRAGNET) database offers a unique opportunity to test the validity of one manifestation of the HIP model across a broad range of sites, including temperate and tropical climates, grasslands and forests, and native vegetation and agricultural crops. The logarithm of the sum of NO + N2O emissions was positively and significantly correlated with the logarithm of the sum of extractable soil NH4+ + NO3-. The logarithm of the ratio of NO:N2O emissions was negatively and significantly correlated with water-filled pore space (WFPS). These analyses confirm the applicability of the HIP model concept, that indices of soil N availability correlate with the sum of NO+N2O emissions, while soil water content is a strong and robust controller of the ratio of NO:N2O emissions. However, these parameterizations have only broad-brush accuracy because of unaccounted variation among studies in the soil depths where gas production occurs, where soil N and water are measured, and other factors. Although accurate predictions at individual sites may still require site-specific parameterization of these empirical functions, the parameterizations presented here, particularly the one for WFPS, may be appropriate for global biogeochemical modeling. Moreover, this integration of data sets demonstrates the broad ranging applicability of the HIP conceptual approach for understanding soil emissions of NO and N2O.
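    The two correlations at the heart of the HIP test can be reproduced in a few lines, as sketched below; the arrays are hypothetical stand-ins for TRAGNET-style site data, not values from the study.

    ```python
    # Sketch of the HIP-model regressions: log(NO + N2O) vs. log(soil N),
    # and log(NO:N2O) vs. water-filled pore space (WFPS). Data are invented.
    import numpy as np

    n_avail = np.array([2.0, 5.0, 12.0, 30.0, 55.0])    # NH4+ + NO3- (hypothetical)
    no_n2o_sum = np.array([0.4, 1.1, 2.0, 6.5, 9.0])    # NO + N2O flux (hypothetical)
    wfps = np.array([0.25, 0.40, 0.55, 0.70, 0.85])     # water-filled pore space
    no_to_n2o = np.array([8.0, 3.0, 1.2, 0.4, 0.1])     # NO:N2O emission ratio

    slope1, _ = np.polyfit(np.log10(n_avail), np.log10(no_n2o_sum), 1)
    slope2, _ = np.polyfit(wfps, np.log10(no_to_n2o), 1)
    print(f"log(NO+N2O) vs log(N): slope {slope1:+.2f}")  # positive per the HIP model
    print(f"log(NO:N2O) vs WFPS:   slope {slope2:+.2f}")  # negative per the HIP model
    ```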

  10. Geocenter motion estimated from GRACE orbits: The impact of F10.7 solar flux

    NASA Astrophysics Data System (ADS)

    Tseng, Tzu-Pang; Hwang, Cheinway; Sośnica, Krzysztof; Kuo, Chung-Yen; Liu, Ya-Chi; Yeh, Wen-Hao

    2017-06-01

    We assess the impact of orbit modeling on the origin offsets between GRACE kinematic and reduced-dynamic orbits. The origin of the kinematic orbit is the center of the IGS network (CN), whereas the origin of the reduced-dynamic orbit is assumed to be the center of mass of the Earth (CM). Theoretically, the origin offset between these two orbits is associated with the geocenter motion. However, the dynamic property of the reduced-dynamic orbit is highly dependent on the orbit parameterization. The impact of F10.7 solar flux on the geocenter motion is assessed by using different orbit parameterization setups in the reduced-dynamic method. We generate two types of reduced-dynamic orbits using 15 and 240 empirical parameters per day from 2005 to 2012. The empirical parameter used in the Bernese GNSS Software is called the piece-wise constant empirical acceleration (PCA); it mainly absorbs the non-gravitational forces, chiefly atmospheric drag and solar radiation pressure. The differences between the kinematic and reduced-dynamic orbits can serve as a measurement of the geocenter motion. The RMS value of the geocenter measurement is approximately 3.5 cm in the 15-PCA case and approximately 2 cm in the 240-PCA case. The correlation between the orbit difference and F10.7 is about 0.90 in the 15-PCA case and -0.10 to 0 in the 240-PCA case. This implies that the reduced-dynamic orbit modeled with 240 PCAs absorbs the F10.7 variation, which aliases into the 15-PCA orbit solution. The annual amplitudes of the geocenter motion are 3.1, 3.1 and 2.5 mm in the 15-PCA case, compared to 0.9, 2.0 and 1.3 mm in the 240-PCA case in the X, Y and Z components, respectively. The 15-PCA solution is thus closer to the geocenter motions derived from other space-geodetic techniques. The proposed method is limited by the parameterizations in the reduced-dynamic approach.
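    A minimal sketch of how an annual amplitude can be estimated from an orbit-difference series: a least-squares fit of a bias plus annual sine and cosine terms. The synthetic series below is illustrative, not GRACE data.

    ```python
    # Fit bias + annual harmonic to a geocenter-measurement time series (mm).
    import numpy as np

    t = np.arange(0, 8 * 365.25)                  # days, spanning 2005-2012
    omega = 2.0 * np.pi / 365.25                  # annual angular frequency
    z = 3.0 + 2.5 * np.sin(omega * t + 0.7) + np.random.randn(t.size)

    A = np.column_stack([np.ones_like(t), np.sin(omega * t), np.cos(omega * t)])
    bias, s, c = np.linalg.lstsq(A, z, rcond=None)[0]
    print(f"annual amplitude: {np.hypot(s, c):.2f} mm")   # recovers ~2.5 mm
    ```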

  11. Synthesizing 3D Surfaces from Parameterized Strip Charts

    NASA Technical Reports Server (NTRS)

    Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri

    2004-01-01

    We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods that allow rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows trends and events to be identified. Strip charts can be overlaid when multiple parameters need to be visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area than overlaid strip charts to illustrate the differences between the parameters, and it can rapidly be scanned by humans to gain insight. The third dimension must be a parallel or parameterized homogeneous resource in the target domain, defined using a finite, ordered, enumerated type, and not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark, and drilling down to the source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.
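    A minimal sketch of the construction with hypothetical data: one time series per member of an ordered, enumerated resource (e.g., a set of computers) is stacked into a 2D array and rendered as a 3D surface, with the enumerated resource supplying the third axis.

    ```python
    # Stack N parameterized strip charts into an array and plot as a surface.
    import numpy as np
    import matplotlib.pyplot as plt

    n_units, n_samples = 12, 200              # e.g. 12 computers, 200 time steps
    t = np.linspace(0.0, 10.0, n_samples)
    charts = np.array([np.sin(t + 0.3 * k) + 0.1 * k for k in range(n_units)])

    T, U = np.meshgrid(t, np.arange(n_units))  # time on x, unit index on y
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.plot_surface(T, U, charts, cmap="viridis")
    ax.set_xlabel("time"); ax.set_ylabel("unit index"); ax.set_zlabel("parameter")
    plt.show()
    ```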

  12. Comparison of Measured and WRF-LES Turbulence Statistics in a Real Convective Boundary Layer over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Rai, R. K.; Berg, L. K.; Kosovic, B.; Mirocha, J. D.; Pekour, M. S.; Shaw, W. J.

    2015-12-01

    Resolving the finest turbulent scales present in the lower atmosphere using numerical simulations helps in studying the processes that occur in the atmospheric boundary layer, such as the turbulent inflow conditions for a wind plant and the generation of wakes behind wind turbines. This work employs several nested domains in the WRF-LES framework to simulate conditions in a convectively driven, cloud-free boundary layer at an instrumented field site in complex terrain. The innermost LES domain (30 m spatial resolution) receives its boundary forcing from two coarser-resolution LES outer domains, which in turn receive boundary conditions from two WRF mesoscale domains. Wind and temperature records from sonic anemometers mounted at two vertical levels (30 m and 60 m) are compared with the LES results in terms of first and second statistical moments as well as power spectra and distributions of wind velocity. For the two most widely used boundary layer parameterizations (MYNN and YSU) tested in the WRF mesoscale domains, the MYNN scheme shows slightly better agreement with the observations for some quantities, such as time-averaged velocity and turbulent kinetic energy (TKE). However, LES runs driven by WRF mesoscale simulations using either parameterization have similar velocity spectra and distributions of velocity. For each component of the wind velocity, WRF-LES power spectra are found to be comparable to the spectra derived from the measured data (for the frequencies that are accurately represented by WRF-LES). Furthermore, the analysis of the LES results shows a noticeable variability of the mean and variance even over small horizontal distances that would be considered sub-grid scale in mesoscale simulations. This statistical variability in space and time can be utilized to further analyze turbulence quantities over a heterogeneous surface and to improve the turbulence parameterization in the mesoscale model.

  13. A hybrid Land Cover Dataset for Russia: a new methodology for merging statistics, remote sensing and in-situ information

    NASA Astrophysics Data System (ADS)

    Schepaschenko, D.; McCallum, I.; Shvidenko, A.; Kraxner, F.; Fritz, S.

    2009-04-01

    There is a critical need for accurate land cover information for resource assessment, biophysical modeling, greenhouse gas studies, and for estimating possible terrestrial responses and feedbacks to climate change. However, practically all existing land cover datasets have quite a high level of uncertainty and lack important details needed for relevant parameterization, e.g., data derived from different forest inventories. The objective of this study is to develop a methodology for creating a hybrid land cover dataset at a level that satisfies the requirements of a verified terrestrial biota full greenhouse gas account (Shvidenko et al., 2008) for large regions, i.e., Russia. Such requirements necessitate a detailed quantification of land classes (e.g., for forests: dominant species, age, growing stock, net primary production, etc.) with additional information on uncertainties of the major biometric and ecological parameters in the range of 10-20% and a confidence interval of around 0.9. The approach taken here allows the integration of different datasets to explore synergies, and in particular the merging and harmonization of land and forest inventories, ecological monitoring, remote sensing data and in-situ information. The following datasets have been integrated: Remote sensing: Global Land Cover 2000 (Fritz et al., 2003), Vegetation Continuous Fields (Hansen et al., 2002), Vegetation Fire (Sukhinin, 2007), Regional land cover (Schmullius et al., 2005); GIS: Soil 1:2.5 Mio (Dokuchaev Soil Science Institute, 1996), Administrative Regions 1:2.5 Mio, Vegetation 1:4 Mio, Bioclimatic Zones 1:4 Mio (Stolbovoi & McCallum, 2002), Forest Enterprises 1:2.5 Mio, Rivers/Lakes and Roads/Railways 1:1 Mio (IIASA's database); Inventories and statistics: State Land Account (FARSC RF, 2006), State Forest Account - SFA (FFS RF, 2003), Disturbances in forests (FFS RF, 2006). The resulting hybrid land cover dataset at 1-km resolution comprises the following classes: Forest (each grid cell links to the SFA database, which contains 86,613 records); Agriculture (5 classes, parameterized by 89 administrative units); Wetlands (8 classes, parameterized by 83 zone/region units); Open woodland; Burnt area; Shrub/grassland (50 classes, parameterized by 300 zone/region units); Water; Unproductive area. This study has demonstrated the ability to produce a highly detailed (both spatially and thematically) land cover dataset over Russia. Future efforts include further validation of the hybrid land cover dataset for Russia and its use for assessment of the terrestrial biota full greenhouse gas budget across Russia. The methodology proposed in this study could be applied at the global level. Results of such an undertaking would, however, be highly dependent upon the quality of the available ground data. The hybrid land cover dataset was implemented in a way that allows it to be regularly updated based on new ground data and remote sensing products (e.g., MODIS).

  14. Modeling particle nucleation and growth over northern California during the 2010 CARES campaign

    NASA Astrophysics Data System (ADS)

    Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.

    2015-07-01

    Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4, while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated the total particle number concentration for particle diameters greater than 10 nm by up to a factor of 2.5. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapor parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based only on H2SO4. At the CARES urban ground site, peak nucleation rates were predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times as well as lower boundary layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. Differences among the three simulations for the 40-100 nm particle diameter range are mostly associated with the timing of the peak total tendencies, which shift the morning increase and afternoon decrease in particle number concentration by up to two hours. We found that newly formed particles could explain up to 20-30 % of predicted cloud condensation nuclei at 0.5 % supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20-bin simulation, while reducing the associated computational cost by ∼ 36 %.
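    The abstract does not spell out the three schemes; the sketch below shows the empirical forms such nucleation parameterizations commonly take (activation-type and kinetic-type H2SO4 schemes, plus an H2SO4-organic form), with purely illustrative rate prefactors that are not values from the paper.

    ```python
    # Commonly used empirical nucleation-rate forms (illustrative constants).
    # Concentrations in molecules cm-3; rates J in cm-3 s-1.
    def j_activation(h2so4, a=2.0e-6):
        return a * h2so4                            # J ~ A * [H2SO4]

    def j_kinetic(h2so4, k=4.0e-13):
        return k * h2so4 ** 2                       # J ~ K * [H2SO4]^2

    def j_organic(h2so4, org, ks=2.0e-13, ko=1.0e-14):
        return ks * h2so4 ** 2 + ko * h2so4 * org   # adds low-volatility organics

    for name, j in [("activation", j_activation(1e7)),
                    ("kinetic", j_kinetic(1e7)),
                    ("H2SO4+org", j_organic(1e7, 5e7))]:
        print(f"{name:10s} J = {j:.2e} cm-3 s-1")
    ```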

  15. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    NASA Astrophysics Data System (ADS)

    Hall, Carlton Raden

    A major objective of remote sensing is the determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge of the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(λ) (m^-1), diffuse backscatter b(λ) (m^-1), beam attenuation α(λ) (m^-1), and beam-to-diffuse conversion c(λ) (m^-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled at Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical, and chemical measurements of leaf thickness, leaf area, leaf moisture, and pigment content were made. A new term, the leaf volume correction index (LVCI), was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle-adjusted leaf thickness Ltadj, LAI, and h (m). Its function is to translate leaf-level estimates of diffuse absorption and backscatter to the canopy scale, allowing the leaf optical properties to directly influence above-canopy estimates of reflectance. The model was successfully modified and parameterized to operate in canopy-scale and leaf-scale modes. Canopy-scale model simulations produced the best results. Simulations based on leaf-derived coefficients produced calculated above-canopy reflectance errors of 15% to 18%. A comprehensive sensitivity analysis indicated the most important parameters were the beam-to-diffuse conversion c(λ) (m^-1), diffuse absorption a(λ) (m^-1), diffuse backscatter b(λ) (m^-1), h (m), Q, and direct and diffuse irradiance. Sources of error include the estimation procedure for the direct-beam-to-diffuse conversion and attenuation coefficients and other field and laboratory measurement and analysis errors. Applications of the model include creation of synthetic reflectance data sets for remote sensing algorithm development, simulation of stress and drought effects on vegetation reflectance signatures, and the potential to estimate leaf moisture and chemical status.
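    For orientation, one common way to write a two-flow irradiance system using the four coefficients named above is sketched here; the sign conventions and source terms are assumptions, not equations taken from the dissertation.

    ```latex
    % Schematic two-flow system: direct beam E_b, downwelling diffuse E_d,
    % upwelling diffuse E_u, with z measured downward from the canopy top.
    \begin{align*}
      \frac{dE_{b}}{dz} &= -\alpha(\lambda)\, E_{b}, \\
      \frac{dE_{d}}{dz} &= -\big(a(\lambda)+b(\lambda)\big)\, E_{d}
                          + b(\lambda)\, E_{u} + c(\lambda)\, E_{b}, \\
      \frac{dE_{u}}{dz} &= \big(a(\lambda)+b(\lambda)\big)\, E_{u}
                          - b(\lambda)\, E_{d}.
    \end{align*}
    ```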

  16. The parameterization of the planetary boundary layer in the UCLA general circulation model - Formulation and results

    NASA Technical Reports Server (NTRS)

    Suarez, M. J.; Arakawa, A.; Randall, D. A.

    1983-01-01

    A planetary boundary layer (PBL) parameterization for general circulation models (GCMs) is presented. It uses a mixed-layer approach in which the PBL is assumed to be capped by discontinuities in the mean vertical profiles. Both clear and cloud-topped boundary layers are parameterized. Particular emphasis is placed on the formulation of the coupling between the PBL and both the free atmosphere and cumulus convection. For this purpose a modified sigma-coordinate is introduced in which the PBL top and the lower boundary are both coordinate surfaces. The use of a bulk PBL formulation with this coordinate is extensively discussed. Results are presented from a July simulation produced by the UCLA GCM. PBL-related variables are shown, to illustrate the various regimes the parameterization is capable of simulating.

  17. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    DOE PAGES

    Donahue, Aaron S.; Caldwell, Peter M.

    2018-02-02

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role on the model solution.
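    The noncommutativity of sequential splitting is easy to see in miniature: two toy "processes," each acting on the state left by the other, give different answers in the two orders. The operators below are illustrative, not E3SM parameterizations.

    ```python
    # Toy demonstration that sequentially split operators do not commute.
    def deep_convection(state):
        return state * 0.9        # toy: removes instability proportionally

    def cloud_macrophysics(state):
        return state - 0.5        # toy: fixed-amount adjustment

    x0 = 10.0
    a = cloud_macrophysics(deep_convection(x0))   # convection first -> 8.5
    b = deep_convection(cloud_macrophysics(x0))   # macrophysics first -> 8.55
    print(a, b, abs(a - b))                       # the two orders differ
    ```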

  18. Stochastic parameterization of shallow cumulus convection estimated from high-resolution model data

    NASA Astrophysics Data System (ADS)

    Dorrestijn, Jesse; Crommelin, Daan T.; Siebesma, A. Pier.; Jonker, Harm J. J.

    2013-02-01

    In this paper, we report on the development of a methodology for stochastic parameterization of convective transport by shallow cumulus convection in weather and climate models. We construct a parameterization based on Large-Eddy Simulation (LES) data. These simulations resolve the turbulent fluxes of heat and moisture and are based on a typical case of non-precipitating shallow cumulus convection above sea in the trade-wind region. Using clustering, we determine a finite number of turbulent flux pairs for heat and moisture that are representative for the pairs of flux profiles observed in these simulations. In the stochastic parameterization scheme proposed here, the convection scheme jumps randomly between these pre-computed pairs of turbulent flux profiles. The transition probabilities are estimated from the LES data, and they are conditioned on the resolved-scale state in the model column. Hence, the stochastic parameterization is formulated as a data-inferred conditional Markov chain (CMC), where each state of the Markov chain corresponds to a pair of turbulent heat and moisture fluxes. The CMC parameterization is designed to emulate, in a statistical sense, the convective behaviour observed in the LES data. The CMC is tested in single-column model (SCM) experiments. The SCM is able to reproduce the ensemble spread of the temperature and humidity that was observed in the LES data. Furthermore, there is a good similarity between time series of the fractions of the discretized fluxes produced by SCM and observed in LES.
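    A minimal sketch of a data-inferred Markov chain of this kind, with hypothetical data: transition probabilities are estimated from an LES state sequence by counting, and the scheme then jumps randomly between precomputed flux pairs. Conditioning on the resolved-scale column state is omitted here for brevity.

    ```python
    # Estimate a Markov transition matrix from a clustered LES state sequence,
    # then emulate the random jumps of the stochastic convection scheme.
    import numpy as np

    def estimate_transition_matrix(states, n_states):
        """Count state transitions and normalize each row to probabilities."""
        counts = np.full((n_states, n_states), 1e-12)   # tiny prior avoids 0/0
        for s, s_next in zip(states[:-1], states[1:]):
            counts[s, s_next] += 1.0
        return counts / counts.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    les_states = rng.integers(0, 4, size=5000)   # stand-in for clustered LES fluxes
    P = estimate_transition_matrix(les_states, n_states=4)

    state = 0
    for _ in range(10):                          # emulate the SCM's random jumps
        state = rng.choice(4, p=P[state])
        # ...apply the precomputed (heat, moisture) flux pair for `state`...
    ```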

  19. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Donahue, Aaron S.; Caldwell, Peter M.

    2018-02-01

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role on the model solution.

  20. Empirical parameterization of setup, swash, and runup

    USGS Publications Warehouse

    Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.

    2006-01-01

    Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a -17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
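    The abstract does not list the fitted coefficients; the function below reproduces the widely cited form of the resulting 2% exceedence runup formula as we recall it, and should be checked against the paper before use.

    ```python
    # Widely cited form of the Stockdon et al. (2006) runup parameterization
    # (reproduced from memory; verify coefficients against the paper).
    # Inputs: h0, deep-water significant wave height (m); l0, deep-water
    # wavelength (m); beta_f, foreshore beach slope.
    import math

    def runup_2pct(h0, l0, beta_f):
        setup = 0.35 * beta_f * math.sqrt(h0 * l0)                 # wave setup
        swash = math.sqrt(h0 * l0 * (0.563 * beta_f**2 + 0.004))   # incident + infragravity
        return 1.1 * (setup + swash / 2.0)

    # Example: H0 = 2 m, T = 10 s (L0 = g*T^2/(2*pi) ~ 156 m), slope 0.08
    print(f"R2% ~ {runup_2pct(2.0, 156.0, 0.08):.2f} m")
    ```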

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, Hannah C.; Houze, Robert A.

    To equitably compare the spatial pattern of ice microphysical processes produced by three microphysical parameterizations with each other, observations, and theory, simulations of tropical oceanic mesoscale convective systems (MCSs) in the Weather Research and Forecasting (WRF) model were forced to develop the same mesoscale circulations as observations by assimilating radial velocity data from a Doppler radar. The same general layering of microphysical processes was found in observations and simulations with deposition anywhere above the 0°C level, aggregation at and above the 0°C level, melting at and below the 0°C level, and riming near the 0°C level. Thus, this study is consistent with the layered ice microphysical pattern portrayed in previous conceptual models and indicated by dual-polarization radar data. Spatial variability of riming in the simulations suggests that riming in the midlevel inflow is related to convective-scale vertical velocity perturbations. Finally, this study sheds light on limitations of current generally available bulk microphysical parameterizations. In each parameterization, the layers in which aggregation and riming took place were generally too thick and the frequency of riming was generally too high compared to the observations and theory. Additionally, none of the parameterizations produced similar details in every microphysical spatial pattern. Discrepancies in the patterns of microphysical processes between parameterizations likely factor into creating substantial differences in model reflectivity patterns. It is concluded that improved parameterizations of ice-phase microphysics will be essential to obtain reliable, consistent model simulations of tropical oceanic MCSs.

  2. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donahue, Aaron S.; Caldwell, Peter M.

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role on the model solution.

  3. Straddling Interdisciplinary Seams: Working Safely in the Field, Living Dangerously With a Model

    NASA Astrophysics Data System (ADS)

    Light, B.; Roberts, A.

    2016-12-01

    Many excellent proposals for observational work have included language detailing how the proposers will appropriately archive their data and publish their results in peer-reviewed literature so that they may be readily available to the modeling community for parameterization development. While such division of labor may be both practical and inevitable, the assimilation of observational results and the development of observationally-based parameterizations of physical processes require care and feeding. Key questions include: (1) Is an existing parameterization accurate, consistent, and general? If not, it may be ripe for additional physics. (2) Do there exist functional working relationships between human modeler and human observationalist? If not, one or more may need to be initiated and cultivated. (3) If empirical observation and model development are a chicken/egg problem, how, given our lack of prescience and foreknowledge, can we better design observational science plans to meet the eventual demands of model parameterization? (4) Will the addition of new physics "break" the model? If so, then the addition may be imperative. In the context of these questions, we will make retrospective and forward-looking assessments of a now-decade-old numerical parameterization to treat the partitioning of solar energy at the Earth's surface where sea ice is present. While this so called "Delta-Eddington Albedo Parameterization" is currently employed in the widely-used Los Alamos Sea Ice Model (CICE) and appears to be standing the tests of accuracy, consistency, and generality, we will highlight some ideas for its ongoing development and improvement.

  4. Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations

    PubMed Central

    Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth

    2016-01-01

    Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account, and generalize on unseen data. Inference is performed with Markov Chain Monte Carlo, which uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be analytically computed, we use a Metropolis-Hastings-within-Gibbs framework, according to which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated with Metropolis-Hastings steps from appropriate candidate-generating densities. We further show that the corresponding Markov chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications. PMID:28649173
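    A minimal generic sketch of a Metropolis-Hastings-within-Gibbs loop of the kind described, with placeholder densities standing in for the dictionary-learning posteriors: variables with closed-form conditionals get Gibbs draws, and atom parameters with intractable conditionals get a random-walk Metropolis step.

    ```python
    # Generic Metropolis-within-Gibbs sketch with placeholder target densities.
    import numpy as np

    rng = np.random.default_rng(1)

    def log_conditional_theta(theta, z):
        return -0.5 * (theta - z) ** 2       # placeholder log conditional

    def gibbs_draw_z(theta):
        return rng.normal(theta, 1.0)        # placeholder closed-form conditional

    theta, z = 0.0, 0.0
    samples = []
    for _ in range(5000):
        z = gibbs_draw_z(theta)                           # Gibbs block
        prop = theta + 0.5 * rng.standard_normal()        # random-walk proposal
        log_accept = (log_conditional_theta(prop, z)
                      - log_conditional_theta(theta, z))
        if np.log(rng.uniform()) < log_accept:            # Metropolis-Hastings step
            theta = prop
        samples.append(theta)
    ```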

  5. Statistical properties of the normalized ice particle size distribution

    NASA Astrophysics Data System (ADS)

    Delanoë, Julien; Protat, Alain; Testud, Jacques; Bouniol, Dominique; Heymsfield, A. J.; Bansemer, A.; Brown, P. R. A.; Forbes, R. M.

    2005-05-01

    Testud et al. (2001) have recently developed a formalism, known as the "normalized particle size distribution (PSD)", which consists of scaling the diameter and concentration axes in such a way that the normalized PSDs are independent of water content and mean volume-weighted diameter. In this paper we investigate the statistical properties of the normalized PSD for the particular case of ice clouds, which are known to play a crucial role in the Earth's radiation balance. To do so, an extensive database of airborne in situ microphysical measurements has been constructed. A remarkable stability in shape of the normalized PSD is obtained. The impact of using a single analytical shape to represent all PSDs in the database is estimated through an error analysis on the instrumental (radar reflectivity and attenuation) and cloud (ice water content, effective radius, terminal fall velocity of ice crystals, visible extinction) properties. This resulted in a roughly unbiased estimate of the instrumental and cloud parameters, with small standard deviations ranging from 5 to 12%. This error is found to be roughly independent of the temperature range. This stability in shape and its single analytical approximation imply that two parameters are now sufficient to describe any normalized PSD in ice clouds: the intercept parameter N*0 and the mean volume-weighted diameter Dm. Statistical relationships (parameterizations) between N*0 and Dm have then been evaluated in order to further reduce the number of unknowns. It has been shown that a parameterization of N*0 and Dm by temperature cannot be envisaged to retrieve the cloud parameters. Nevertheless, Dm-T and mean-maximum-dimension-T parameterizations have been derived and compared to the parameterization of Kristjánsson et al. (2000) currently used to characterize particle size in climate models. The new parameterization generally produces larger particle sizes at any temperature than the Kristjánsson et al. (2000) parameterization. These new parameterizations are believed to better represent particle size at the global scale, owing to the better representativity of the in situ microphysical database used to derive them. We then evaluated the potential of a direct N*0-Dm relationship. While the model parameterized by temperature produces strong errors in the cloud parameters, the N*0-Dm model parameterized by radar reflectivity produces accurate cloud parameters (less than 3% bias and 16% standard deviation). This result implies that the cloud parameters can be estimated from the estimate of only one parameter of the normalized PSD (N*0 or Dm) and a radar reflectivity measurement.
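    For reference, the normalization is commonly written as follows; this is stated for orientation only, and the ice-phase specifics should be taken from Testud et al. (2001).

    ```latex
    % Commonly cited form of the normalized PSD, where M_3 is the third
    % moment of the distribution, M_3 = \int_0^\infty N(D)\,D^3\,dD.
    \begin{align*}
      N(D) &= N_0^{*}\, F\!\left(\frac{D}{D_m}\right), &
      D_m &= \frac{\int_0^\infty N(D)\,D^4\,dD}{\int_0^\infty N(D)\,D^3\,dD}, &
      N_0^{*} &= \frac{4^4}{\Gamma(4)}\,\frac{M_3}{D_m^{4}}.
    \end{align*}
    ```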

  6. Sensitivity of single column model simulations of Arctic springtime clouds to different cloud cover and mixed phase cloud parameterizations

    NASA Astrophysics Data System (ADS)

    Zhang, Junhua; Lohmann, Ulrike

    2003-08-01

    The single column model of the Canadian Centre for Climate Modelling and Analysis (CCCma) climate model is used to simulate Arctic spring cloud properties observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. The model is driven by European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data constrained by rawinsonde observations. Five cloud parameterizations, including three statistical and two explicit schemes, are compared, and the sensitivity to mixed phase cloud parameterizations is studied. Using the original mixed phase cloud parameterization of the model, the statistical cloud schemes produce more cloud cover, cloud water, and precipitation than the explicit schemes and in general agree better with observations. The mixed phase cloud parameterization from ECMWF decreases the initial saturation specific humidity threshold of cloud formation. This improves the simulated cloud cover in the explicit schemes and reduces the difference between the different cloud schemes. On the other hand, because the ECMWF mixed phase cloud scheme does not consider the Bergeron-Findeisen process, fewer ice crystals are formed. This leads to a higher liquid water path and less precipitation than what was observed.

  7. The multifacet graphically contracted function method. II. A general procedure for the parameterization of orthogonal matrices and its application to arc factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, Ron; Brozell, Scott R.; Gidofalvi, Gergely

    2014-08-14

    Practical algorithms are presented for the parameterization of orthogonal matrices Q ∈ R^{m×n} in terms of the minimal number of essential parameters (φ). Both square (n = m) and rectangular (n < m) situations are examined. Two separate kinds of parameterizations are considered: one in which the individual columns of Q are distinct, and the other in which only Span(Q) is significant. The latter is relevant to chemical applications such as the representation of the arc factors in the multifacet graphically contracted function method and the representation of orbital coefficients in SCF and DFT methods. The parameterizations are represented formally using products of elementary Householder reflector matrices. Standard mathematical libraries, such as LAPACK, may be used to perform the basic low-level factorization, reduction, and other algebraic operations. Some care must be taken with the choice of phase factors in order to ensure stability and continuity. The transformation of gradient arrays between the Q and (φ) parameterizations is also considered. Operation counts for all factorizations and transformations are determined. Numerical results are presented which demonstrate the robustness, stability, and accuracy of these algorithms.
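    A minimal numerical sketch of the idea, assuming the standard QR-style convention in which each reflector's leading entry is fixed: a rectangular Q with orthonormal columns is built from n Householder vectors carrying nm - n(n+1)/2 essential parameters in total (for the column-distinct case). The phase-factor choices discussed in the paper are not reproduced here.

    ```python
    # Build Q = H_1 H_2 ... H_n @ I[:, :n] from Householder reflector vectors.
    import numpy as np

    def q_from_householder(vs, m):
        """vs[k] holds the m-k-1 free entries of the k-th reflector vector."""
        n = len(vs)
        q = np.eye(m)[:, :n]
        for k in reversed(range(n)):
            v = np.zeros(m)
            v[k] = 1.0                 # fixed leading entry (absorbs one parameter)
            v[k + 1:] = vs[k]          # m - k - 1 free parameters per reflector
            q -= 2.0 * np.outer(v, v @ q) / (v @ v)   # q = (I - 2 v v^T / v^T v) q
        return q

    m, n = 5, 2
    rng = np.random.default_rng(2)
    vs = [rng.standard_normal(m - k - 1) for k in range(n)]
    q = q_from_householder(vs, m)
    print(np.allclose(q.T @ q, np.eye(n)))   # True: columns are orthonormal
    ```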

  8. Infrared radiation parameterizations for the minor CO2 bands and for several CFC bands in the window region

    NASA Technical Reports Server (NTRS)

    Kratz, David P.; Chou, Ming-Dah; Yan, Michael M.-H.

    1993-01-01

    Fast and accurate parameterizations have been developed for the transmission functions of the CO2 9.4- and 10.4-micron bands, as well as the CFC-11, CFC-12, and CFC-22 bands located in the 8-12-micron region. The parameterizations are based on line-by-line calculations of transmission functions for the CO2 bands and on high spectral resolution laboratory measurements of the absorption coefficients for the CFC bands. Also developed are the parameterizations for the H2O transmission functions for the corresponding spectral bands. Compared to the high-resolution calculations, fluxes at the tropopause computed with the parameterizations are accurate to within 10 percent when overlapping of gas absorptions within a band is taken into account. For individual gas absorption, the accuracy is of order 0-2 percent. The climatic effects of these trace gases have been studied using a zonally averaged multilayer energy balance model, which includes seasonal cycles and a simplified deep ocean. With the trace gas abundances taken to follow the Intergovernmental Panel on Climate Change Low Emissions 'B' scenario, the transient response of the surface temperature is simulated for the period 1900-2060.

  9. Effective Atomic Number, Mass Attenuation Coefficient Parameterization, and Implications for High-Energy X-Ray Cargo Inspection Systems

    NASA Astrophysics Data System (ADS)

    Langeveld, Willem G. J.

    The most widely used technology for the non-intrusive active inspection of cargo containers and trucks is x-ray radiography at high energies (4-9 MeV). Technologies such as dual-energy imaging, spectroscopy, and statistical waveform analysis can be used to estimate the effective atomic number (Zeff) of the cargo from the x-ray transmission data, because the mass attenuation coefficient depends on energy as well as atomic number Z. The estimated effective atomic number, Zeff, of the cargo then leads to improved detection capability of contraband and threats, including special nuclear materials (SNM) and shielding. In this context, the exact meaning of effective atomic number (for mixtures and compounds) is generally not well-defined. Physics-based parameterizations of the mass attenuation coefficient have been given in the past, but usually for a limited low-energy range. Definitions of Zeff have been based, in part, on such parameterizations. Here, we give an improved parameterization at low energies (20-1000 keV) which leads to a well-defined Zeff. We then extend this parameterization up to energies relevant for cargo inspection (10 MeV), and examine what happens to the Zeff definition at these higher energies.
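    One common power-law convention for the effective atomic number of a mixture, of the kind these parameterizations make precise, is sketched below; the exponent and electron-fraction weighting are illustrative conventions, not the paper's definition.

    ```python
    # Common power-law Zeff convention (illustrative exponent p = 2.94).
    def z_eff(electron_fractions, zs, p=2.94):
        """electron_fractions: fraction of electrons from each element; zs: Z values."""
        return sum(f * z ** p for f, z in zip(electron_fractions, zs)) ** (1.0 / p)

    # Water: electron fractions 0.2 (H) and 0.8 (O) give the familiar Zeff ~ 7.4.
    print(f"Zeff(water) ~ {z_eff([0.2, 0.8], [1, 8]):.2f}")
    ```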

  10. Parameterization of planetary wave breaking in the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Garcia, Rolando R.

    1991-01-01

    A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate. The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.

  11. Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Chou, Ming-Dah

    1994-01-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While computationally efficient, the scheme computes the clear-sky fluxes and cooling rates from the Earth's surface to 0.01 mb very accurately. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed, the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.

  12. Reduced order model of a blended wing body aircraft configuration

    NASA Astrophysics Data System (ADS)

    Stroscher, F.; Sika, Z.; Petersson, O.

    2013-12-01

    This paper describes the full development process of a numerical simulation model for the ACFA2020 (Active Control for Flexible 2020 Aircraft) blended wing body (BWB) configuration. Its requirements are the prediction of aeroelastic and flight dynamic response in the time domain, with relatively small model order. Further, the model had to be parameterized with regard to multiple fuel filling conditions, as well as flight conditions. Considerable effort was devoted by several project partners to high-order aerodynamic analysis for the subsonic and transonic regimes. The integration of the unsteady aerodynamic databases was one of the key issues in aeroelastic modeling.

  13. Mesoscale research activities with the LAMPS model

    NASA Technical Reports Server (NTRS)

    Kalb, M. W.

    1985-01-01

    Researchers achieved full implementation of the LAMPS mesoscale model on the Atmospheric Sciences Division computer and derived balanced and real wind initial states for three case studies: March 6, April 24, April 26, 1982. Numerical simulations were performed for three separate studies: (1) a satellite moisture data impact study using Vertical Atmospheric Sounder (VAS) precipitable water as a constraint on model initial state moisture analyses; (2) an evaluation of mesoscale model precipitation simulation accuracy with and without convective parameterization; and (3) the sensitivity of model precipitation to mesoscale detail of moisture and vertical motion in an initial state.

  14. Radar and microphysical characteristics of convective storms simulated from a numerical model using a new microphysical parameterization

    NASA Technical Reports Server (NTRS)

    Ferrier, Brad S.; Tao, Wei-Kuo; Simpson, Joanne

    1991-01-01

    The basic features of a new and improved bulk-microphysical parameterization capable of simulating the hydrometeor structure of convective systems in all types of large-scale environments (with minimal adjustment of coefficients) are studied. Reflectivities simulated from the model are compared with radar observations of an intense midlatitude convective system. Simulated reflectivities at 105 min, obtained with the novel four-class ice scheme and its parameterized rain distribution, are illustrated. Preliminary results indicate that this new ice scheme works efficiently in simulating midlatitude continental storms.

  15. Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2013-12-01

    In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic - even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.
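    A minimal sketch of the EDMF flux decomposition discussed above: the total turbulent flux of a scalar is an eddy-diffusivity (ED) part for local small-scale mixing plus a mass-flux (MF) part for coherent plumes. The profile values and the lognormal sampling of the mass flux are illustrative assumptions, not the scheme's actual closure.

```python
import numpy as np

def edmf_flux(phi, phi_updraft, z, K, M):
    """EDMF decomposition: w'phi' = -K dphi/dz + M (phi_u - phi_mean)."""
    dphi_dz = np.gradient(phi, z)
    return -K * dphi_dz + M * (phi_updraft - phi)

rng = np.random.default_rng(0)
z     = np.linspace(0.0, 1500.0, 16)     # heights (m)
phi   = 300.0 + 3e-3 * z                 # mean potential temperature (K)
phi_u = phi + 0.5                        # assumed plume excess (K)
K     = 50.0                             # eddy diffusivity (m2 s-1), assumed
# Stochastic mass flux (m s-1): one random draw per level, assumed lognormal
M = rng.lognormal(mean=np.log(0.03), sigma=0.5, size=z.size)
print(edmf_flux(phi, phi_u, z, K, M)[:4])
```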

  16. Quality by design: optimization of a freeze-drying cycle via design space in case of heterogeneous drying behavior and influence of the freezing protocol.

    PubMed

    Pisano, Roberto; Fissore, Davide; Barresi, Antonello A; Brayard, Philippe; Chouvenc, Pierre; Woinet, Bertrand

    2013-02-01

    This paper shows how to optimize the primary drying phase, for both product quality and drying time, of a parenteral formulation via design space. A non-steady state model, parameterized with experimentally determined heat and mass transfer coefficients, is used to define the design space when the heat transfer coefficient varies with the position of the vial in the array. The calculations recognize both equipment and product constraints, and also take into account model parameter uncertainty. Examples are given of cycles designed for the same formulation, but varying the freezing conditions and the freeze-dryer scale. These are then compared in terms of drying time. Furthermore, the impact of inter-vial variability on design space, and therefore on the optimized cycle, is addressed. In this regard, a simplified method is presented for the cycle design, which reduces the experimental effort required for the system qualification. The use of mathematical modeling is demonstrated to be very effective not only for cycle development, but also for solving problems of process transfer. This study showed that inter-vial variability remains significant when vials are loaded on plastic trays, and how inter-vial variability can be taken into account during process design.
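    To make the modeling step concrete, here is a minimal sketch of the steady-state heat/mass balance commonly used to map a primary-drying design space. The heat-transfer coefficient, product resistance, and sublimation enthalpy below are illustrative assumptions, not the paper's experimentally determined values.

```python
import numpy as np

DHS = 2.8e6    # sublimation enthalpy of ice (J/kg)
KV  = 20.0     # vial heat transfer coefficient (W m-2 K-1), assumed
RP  = 1.0e5    # product resistance to vapor flow (Pa m2 s kg-1), assumed

def p_ice(T):
    """Vapor pressure over ice (Pa); simple exponential fit, ~611 Pa at 273 K."""
    return 3.6e12 * np.exp(-6145.0 / T)

def product_temperature(T_shelf, P_chamber):
    """Bisection on K_v (T_shelf - T) = dHs (P_ice(T) - P_chamber) / R_p."""
    lo, hi = 200.0, T_shelf
    for _ in range(60):
        T = 0.5 * (lo + hi)
        flux = (p_ice(T) - P_chamber) / RP        # sublimation flux (kg m-2 s-1)
        if KV * (T_shelf - T) > DHS * flux:
            lo = T                                 # heat supply in excess: T rises
        else:
            hi = T
    return T

# Scanning shelf temperature and chamber pressure with this balance, against
# the equipment and product-temperature limits, traces out the design space.
print(product_temperature(T_shelf=263.0, P_chamber=10.0))
```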

  17. A measurement concept for hot-spot BRDFs from space

    NASA Technical Reports Server (NTRS)

    Gerstl, S.A.W.

    1996-01-01

    Several concepts for canopy hot-spot measurements from space have been investigated. The most promising involves active illumination and bistatic detection that would allow hot-spot angular distribution (BRDF) measurements from space in a search-light mode. The concept includes a pointable illumination source, such as a laser operating at an atmospheric window wavelength, coupled with a number of high spatial-resolution detectors that are clustered around the illumination source in space, receiving photons nearly coaxial with the retro-reflection direction. Microwave control and command among the satellite cluster would allow orienting the direction of the laser beam as well as the focusing detectors simultaneously so that the coupled system can function like a search light with almost unlimited pointing capabilities. The concept is called the Hot-Spot Search-Light (HSSL) satellite. A nominal satellite altitude of 600 km will allow hot-spot BRDF measurements out to about 18 degrees phase angle. The distributed detectors take radiometric measurements of the intensity wings of the hot-spot angular distribution without the need for complex imaging detectors. The system can be operated at night for increased signal-to-noise ratio. In this way, the hot-spot angular signatures can be quantified and parameterized in sufficient detail to extract the biophysical information content of plant architectures.

  18. A comprehensive parameterization of heterogeneous ice nucleation of dust surrogate: laboratory study with hematite particles and its application to atmospheric models

    NASA Astrophysics Data System (ADS)

    Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.

    2014-12-01

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is important to accurately simulate the ice nucleation processes in cirrus clouds. The ice nucleation active surface-site density (ns) of hematite particles, used as a proxy for atmospheric dust particles, was derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water subsaturated conditions. These conditions were achieved by continuously changing the temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the two other T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T lower than -60 °C revealed that higher RHice was necessary to maintain a constant ns, whereas T may have played a significant role in ice nucleation at T higher than -50 °C. We implemented the new hematite-derived ns parameterization, which agrees well with previous AIDA measurements of desert dust, into two conceptual cloud models to investigate the sensitivity of simulated cirrus cloud properties to the new parameterization in comparison to existing ice nucleation schemes. Our results show that the new AIDA-based parameterization leads to an order of magnitude higher ice crystal concentrations and to an inhibition of homogeneous nucleation in lower-temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties, such as cloud longevity and initiation, compared to previous parameterizations.
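    As a sketch of how an ns-based scheme is typically applied in a cloud model, the frozen fraction of particles with surface area A_p follows 1 - exp(-ns A_p). The fit coefficients in ns_hematite below are hypothetical placeholders, not the AIDA-derived fit.

```python
import numpy as np

def ns_hematite(T_celsius, a=-0.3, b=2.0):
    """Hypothetical ice-nucleation-active site density ns(T) in m^-2."""
    return 10.0 ** (a * T_celsius + b)

def ice_crystal_number(N_particles, A_p, T_celsius):
    """Number of ice crystals from N particles of surface area A_p (m^2)."""
    return N_particles * (1.0 - np.exp(-ns_hematite(T_celsius) * A_p))

# 100 cm^-3 of micron-scale particles (area ~3e-12 m^2) at -50 C
print(ice_crystal_number(100.0, 3e-12, -50.0))
```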

  19. A Physical Parameterization of Snow Albedo for Use in Climate Models.

    NASA Astrophysics Data System (ADS)

    Marshall, Susan Elaine

    The albedo of a natural snowcover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically-based parameterization which is accurate (within +/- 3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal cycle, fixed hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine. However, this parameterization offers a greater predictability for climate change experiments outside the range of current snow conditions because it is physically-based and not tuned to current empirical results.
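    A hedged sketch of the kind of interface such a subroutine exposes: visible, near-infrared, and broadband albedo as functions of grain size, solar zenith angle, and the chosen spectral split. Every coefficient below is invented for illustration and is not the SNOALB fit.

```python
import numpy as np

def snow_albedo(grain_radius_um, cos_zenith, split_um=0.7):
    """Return (visible, near-IR, broadband) snow albedo; coefficients assumed."""
    r = np.sqrt(grain_radius_um / 100.0)          # grain-size scaling, assumed
    alb_vis = np.clip(0.95 - 0.08 * r, 0.0, 1.0)  # visible: weak size dependence
    alb_nir = np.clip(0.75 - 0.25 * r, 0.0, 1.0)  # near-IR: strong size dependence
    zen_adj = 0.05 * (1.0 - cos_zenith)           # brighter at low sun, assumed
    frac_vis = 0.52 if split_um == 0.7 else 0.65  # visible fraction of solar flux
    alb_sol = frac_vis * alb_vis + (1.0 - frac_vis) * alb_nir + zen_adj
    return alb_vis, alb_nir, np.clip(alb_sol, 0.0, 1.0)

# Aged snow (larger grains) is darker, especially in the near-infrared
print(snow_albedo(grain_radius_um=200.0, cos_zenith=0.5))
```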

  20. Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Hälg, Roger A.; Baiocco, Giorgio; Lomax, Tony

    2016-08-01

    The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate not only the dose but also the energy spectra, in order to obtain quantities that could serve as a measure of biological effectiveness and to test current models and new approaches against epidemiological studies of cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was Monte Carlo simulated using GEANT. Based on the simulated neutron spectra for three different proton beam energies, a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with satisfying precision up to 85 cm away from the proton pencil beam when compared to the results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factors and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy.

  1. Parameterizing Coefficients of a POD-Based Dynamical System

    NASA Technical Reports Server (NTRS)

    Kalb, Virginia L.

    2010-01-01

    A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter-continuation software can be used on the parameterized dynamical system to derive a bifurcation diagram that accurately predicts the temporal flow behavior.
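    A minimal sketch of the general idea, assuming a two-mode Galerkin system near a Hopf bifurcation: the linear coefficients are interpolated as functions of Reynolds number from values identified at training conditions, while an energy-limiting nonlinearity closes the model. The specific numbers are illustrative, not the article's coefficients.

```python
import numpy as np
from scipy.integrate import solve_ivp

Re_train = np.array([100.0, 150.0, 200.0])   # training Reynolds numbers, assumed
growth   = np.array([-0.10, 0.02, 0.12])     # identified mode growth rates, assumed

def L_of_Re(Re):
    """Reynolds-number-parameterized linear operator of the 2-mode ROM."""
    sigma = np.interp(Re, Re_train, growth)  # interpolated coefficient
    omega = 1.0                              # oscillation frequency, assumed
    return np.array([[sigma, -omega], [omega, sigma]])

def rhs(t, a, Re):
    cubic = -np.dot(a, a) * a                # models energy transfer to damped modes
    return L_of_Re(Re) @ a + cubic

sol = solve_ivp(rhs, (0.0, 50.0), [0.01, 0.0], args=(180.0,))
print(np.linalg.norm(sol.y[:, -1]))          # limit-cycle amplitude past the Hopf point
```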

  2. Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE.

    PubMed

    Schneider, Uwe; Hälg, Roger A; Baiocco, Giorgio; Lomax, Tony

    2016-08-21

    The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate not only the dose but also the energy spectra, in order to obtain quantities that could serve as a measure of biological effectiveness and to test current models and new approaches against epidemiological studies of cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was Monte Carlo simulated using GEANT. Based on the simulated neutron spectra for three different proton beam energies, a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with satisfying precision up to 85 cm away from the proton pencil beam when compared to the results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factors and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy.

  3. The global geochemistry of bomb-produced tritium - General circulation model compared to available observations and traditional interpretations

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Broecker, Wallace S.; Jouzel, Jean; Suozzo, Robert J.; Russell, Gary L.; Rind, David

    1989-01-01

    Observational evidence suggests that of the tritium produced during nuclear bomb tests that has already reached the ocean, more than twice as much arrived through vapor impact as through precipitation. In the present study, the Goddard Institute for Space Studies 8 x 10 deg atmospheric general circulation model is used to simulate tritium transport from the upper atmosphere to the ocean. The simulation indicates that tritium delivery to the ocean via vapor impact is about equal to that via precipitation. The model result is relatively insensitive to several imposed changes in tritium source location, in model parameterizations, and in model resolution. Possible reasons for the discrepancy are explored.

  4. Registration of cortical surfaces using sulcal landmarks for group analysis of MEG data☆

    PubMed Central

    Joshi, Anand A.; Shattuck, David W.; Thompson, Paul M.; Leahy, Richard M.

    2010-01-01

    We present a method to register individual cortical surfaces to a surface-based brain atlas or canonical template using labeled sulcal curves as landmark constraints. To map one cortex smoothly onto another, we minimize a thin-plate spline energy defined on the surface by solving the associated partial differential equations (PDEs). By using covariant derivatives in solving these PDEs, we compute the bending energy with respect to the intrinsic geometry of the 3D surface rather than evaluating it in the flattened metric of the 2D parameter space. This covariant approach greatly reduces the confounding effects of the surface parameterization on the resulting registration. PMID:20824115
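    A hedged sketch of the covariant bending energy referred to above, in assumed index notation (phi is the deformation field, g the surface metric, nabla the covariant derivative on the cortical surface S):

```latex
\begin{equation}
  E_{\mathrm{tps}}(\phi) = \int_{S} g^{ik}\, g^{jl}\,
      (\nabla_{i}\nabla_{j}\phi)\,(\nabla_{k}\nabla_{l}\phi)\; dS .
\end{equation}
```

    Evaluating the energy with covariant derivatives of the intrinsic 3D geometry, rather than in the flattened 2D parameter space, is what reduces the dependence of the registration on the surface parameterization.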

  5. Optimization of Composite Structures with Curved Fiber Trajectories

    NASA Astrophysics Data System (ADS)

    Lemaire, Etienne; Zein, Samih; Bruyneel, Michael

    2014-06-01

    This paper studies the problem of optimizing composite shells manufactured using Automated Tape Layup (ATL) or Automated Fiber Placement (AFP) processes. The optimization procedure relies on a new approach to generate equidistant fiber trajectories based on the Fast Marching Method. Starting with a (possibly curved) reference fiber direction defined on a (possibly curved) meshed surface, the new method allows determining the fiber orientations resulting from a uniform-thickness layup. The design variables are the parameters defining the position and the shape of the reference curve, which results in very few design variables. Thanks to this efficient parameterization, maximum stiffness optimization numerical applications are proposed. The shape of the design space is discussed, regarding local and global optimal solutions.
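    A sketch of the equidistant-trajectory idea on a flat grid, assuming the scikit-fmm package is available: compute the Fast Marching distance from a curved reference fiber, then take evenly spaced level sets of that distance as the neighboring tape centerlines. On a curved meshed surface the same idea applies with a surface fast-marching solver.

```python
import numpy as np
import skfmm  # scikit-fmm, assumed available (pip install scikit-fmm)

nx, ny, dx = 200, 200, 0.01
x, y = np.meshgrid(np.linspace(0.0, 2.0, nx), np.linspace(0.0, 2.0, ny))

# Signed field that is zero on the curved reference fiber y = 1 + 0.2 sin(pi x)
phi = y - (1.0 + 0.2 * np.sin(np.pi * x))

dist = skfmm.distance(phi, dx=dx)   # FMM distance to the reference curve

tape_width = 0.1
levels = np.arange(-0.5, 0.5, tape_width)   # fiber passes at uniform spacing
# Each contour {dist == level} is one tape centerline; extract them with e.g.
# matplotlib's contour() or scikit-image's find_contours().
```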

  6. Microwave anisotropies in the light of the data from the COBE satellite

    NASA Technical Reports Server (NTRS)

    Dodelson, Scott; Jubas, Jay M.

    1993-01-01

    The recent measurement of anisotropies in the cosmic microwave background by the Cosmic Background Explorer (COBE) satellite and the recent South Pole experiment offer an excellent opportunity to probe cosmological theories. We test a class of theories in which the universe today is flat and matter dominated, and primordial perturbations are adiabatic, parameterized by a spectral index n. In this class of theories the predicted signal in the South Pole experiment depends on n, the Hubble constant, and the baryon density. For n = 1 a large region of this parameter space is ruled out, but there is still a window open which satisfies constraints from COBE, the South Pole experiment, and big bang nucleosynthesis.

  7. An Advanced User Interface Approach for Complex Parameter Study Process Specification in the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob; Yan, Jerry C. (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have now become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are now seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers great resource opportunity but at the expense of great difficulty of use. We present an approach to this problem which stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  8. Assessment of band gaps for alkaline-earth chalcogenides using improved Tran Blaha-modified Becke Johnson potential

    NASA Astrophysics Data System (ADS)

    Yedukondalu, N.; Kunduru, Lavanya; Roshan, S. C. Rakesh; Sainath, M.

    2018-04-01

    An assessment of band gaps for nine alkaline-earth chalcogenides, namely MX (M = Ca, Sr, Ba and X = S, Se, Te) compounds, is reported using the Tran Blaha-modified Becke Johnson (TB-mBJ) potential and its new parameterization. From the computed electronic band structures at the equilibrium lattice constants, these materials are found to be indirect band gap semiconductors at ambient conditions. The calculated band gaps are improved using TB-mBJ and its new parameterization when compared to the local density approximation (LDA) and Becke Johnson potentials. We also observe that the new TB-mBJ parameterization for semiconductors with gaps below 7 eV reproduces the experimental trends very well for these small-band-gap semiconducting alkaline-earth chalcogenides. The calculated band profiles look similar for all MX compounds (band structures are shown for BaS as a representative case) using the LDA and the new parameterization of the TB-mBJ potential.

  9. Cloud-radiation interactions and their parameterization in climate models

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models, held 18-20 October 1993 in Camp Springs, Maryland, USA. It was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth's surface, and to refine the process models which are used to develop advanced cloud parameterizations.

  10. Parameterized reduced-order models using hyper-dual numbers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
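    A minimal hyper-dual number sketch, after the idea in the report: seeding x + h1*e1 + h2*e2 and propagating it through arithmetic yields exact first and second derivatives with no step-size cancellation error. The class below covers only addition and multiplication; a full implementation would overload the remaining operations.

```python
class HyperDual:
    """Hyper-dual number f0 + f1*e1 + f2*e2 + f12*e1e2 (e1^2 = e2^2 = 0)."""

    def __init__(self, f0, f1=0.0, f2=0.0, f12=0.0):
        self.f0, self.f1, self.f2, self.f12 = f0, f1, f2, f12

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f0 + o.f0, self.f1 + o.f1,
                         self.f2 + o.f2, self.f12 + o.f12)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f0 * o.f0,
                         self.f0 * o.f1 + self.f1 * o.f0,
                         self.f0 * o.f2 + self.f2 * o.f0,
                         self.f0 * o.f12 + self.f1 * o.f2
                         + self.f2 * o.f1 + self.f12 * o.f0)
    __rmul__ = __mul__

x = HyperDual(3.0, 1.0, 1.0, 0.0)   # seed both derivative directions at x = 3
y = x * x * x                        # f(x) = x^3
print(y.f0, y.f1, y.f12)             # 27.0, f'(3) = 27.0, f''(3) = 18.0 (exact)
```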

  11. Effect of a sheared flow on iceberg motion and melting

    NASA Astrophysics Data System (ADS)

    FitzMaurice, A.; Straneo, F.; Cenedese, C.; Andres, M.

    2016-12-01

    Icebergs account for approximately half the freshwater flux into the ocean from the Greenland and Antarctic ice sheets and play a major role in the distribution of meltwater into the ocean. Global climate models distribute this freshwater by parameterizing iceberg motion and melt, but these parameterizations are presently informed by limited observations. Here we present a record of speed and draft for 90 icebergs from Sermilik Fjord, southeastern Greenland, collected in conjunction with wind and ocean velocity data over an 8 month period. It is shown that icebergs subject to strongly sheared flows predominantly move with the vertical average of the ocean currents. If, as typical in iceberg parameterizations, only the surface ocean velocity is taken into account, iceberg speed and basal melt may have errors in excess of 60%. These results emphasize the need for parameterizations to consider ocean properties over the entire iceberg draft.
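    A sketch of the paper's central point: an iceberg in a sheared current moves with the depth average of the flow over its draft, not with the surface value. The velocity profile below is an invented example of a surface-intensified shear.

```python
import numpy as np

def depth_averaged_velocity(u, z, draft):
    """Average current u(z) over the iceberg draft (uniform grid, z positive down)."""
    mask = z <= draft
    return u[mask].mean()   # arithmetic mean equals depth average on a uniform grid

z = np.linspace(0.0, 300.0, 301)   # depth (m)
u = 0.6 * np.exp(-z / 50.0)        # assumed surface-intensified current (m/s)

draft = 150.0
print(u[0], depth_averaged_velocity(u, z, draft))
# Using only the surface value (0.6 m/s) overstates the drift of this
# deep-drafted iceberg by roughly a factor of three.
```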

  12. A review of recent research on improvement of physical parameterizations in the GLA GCM

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Walker, G. K.

    1990-01-01

    A systematic assessment of the effect of a series of improvements in physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB GCM simulations with the earlier slab soil hydrology GCM (SSH-GCM) simulations. In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. The perpetual July simulations with and without interactive soil moisture show that 30- to 40-day oscillations may be a natural mode of the simulated earth-atmosphere system.

  13. The Impact of Parameterized Convection on Climatological Precipitation in Atmospheric Global Climate Models

    NASA Astrophysics Data System (ADS)

    Maher, Penelope; Vallis, Geoffrey K.; Sherwood, Steven C.; Webb, Mark J.; Sansom, Philip G.

    2018-04-01

    Convective parameterizations are widely believed to be essential for realistic simulations of the atmosphere. However, their deficiencies also result in model biases. The role of convection schemes in modern atmospheric models is examined using Selected Process On/Off Klima Intercomparison Experiment simulations without parameterized convection and forced with observed sea surface temperatures. Convection schemes are not required for reasonable climatological precipitation. However, they are essential for reasonable daily precipitation and constraining extreme daily precipitation that otherwise develops. Systematic effects on lapse rate and humidity are likewise modest compared with the intermodel spread. Without parameterized convection Kelvin waves are more realistic. An unexpectedly large moist Southern Hemisphere storm track bias is identified. This storm track bias persists without convection schemes, as does the double Intertropical Convergence Zone and excessive ocean precipitation biases. This suggests that model biases originate from processes other than convection or that convection schemes are missing key processes.

  14. Final Technical Report for "High-resolution global modeling of the effects of subgrid-scale clouds and turbulence on precipitating cloud systems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent

    2016-11-25

    The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). A MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization (“CLUBB”) within a MMF model. This involved interfacing CLUBB’s clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations. The chief benefit of the project is to provide a MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.

  15. Coordinated Parameterization Development and Large-Eddy Simulation for Marine and Arctic Cloud-Topped Boundary Layers

    NASA Technical Reports Server (NTRS)

    Bretherton, Christopher S.

    2002-01-01

    The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two and three dimensional eddy resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type, and thickness as functions of large scale conditions that are predicted by global climate models. The principal achievements of the project were as follows: (1) Development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) Comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall the forecast model did predict most of the major precipitation events and synoptic variability observed over the year of observation of the SHEBA ice camp.

  16. A note on: "A Gaussian-product stochastic Gent-McWilliams parameterization"

    NASA Astrophysics Data System (ADS)

    Jansen, Malte F.

    2017-02-01

    This note builds on a recent article by Grooms (2016), which introduces a new stochastic parameterization for eddy buoyancy fluxes. The closure proposed by Grooms accounts for the fact that eddy fluxes arise as the product of two approximately Gaussian variables, which in turn leads to a distinctly non-Gaussian distribution. The directionality of the stochastic eddy fluxes, however, remains somewhat ad-hoc and depends on the reference frame of the chosen coordinate system. This note presents a modification of the approach proposed by Grooms, which eliminates this shortcoming. Eddy fluxes are computed based on a stochastic mixing length model, which leads to a frame invariant formulation. As in the original closure proposed by Grooms, eddy fluxes are proportional to the product of two Gaussian variables, and the parameterization reduces to the Gent and McWilliams parameterization for the mean buoyancy fluxes.
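    A hedged sketch of the mixing-length construction described above, in assumed notation: with Gaussian stochastic velocity and length scales v' and l', the stochastic eddy buoyancy flux and its mean are

```latex
\begin{equation}
  \mathbf{F}' = -\, v'\,\ell'\, \nabla \bar{b}, \qquad
  \mathbb{E}[\mathbf{F}'] = -\,\kappa\, \nabla \bar{b}, \quad
  \kappa = \mathbb{E}[v'\,\ell'] .
\end{equation}
```

    The product of the two Gaussians gives the non-Gaussian flux distribution, and tying the flux direction to the mean buoyancy gradient rather than to coordinate axes is what makes the formulation frame invariant while recovering a Gent-McWilliams-type closure in the mean.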

  17. Final Technical Report for "Reducing tropical precipitation biases in CESM"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent

    In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we have created a climate model that contains a unified cloud parameterization (“CLUBB”) and a unified microphysics parameterization (“MG2”). In this model, all cloud types --- including marine stratocumulus, shallow cumulus, and deep cumulus --- are represented with a single equation set. This model improves the representation of convection in the Tropics. The model has been compared with ARM observations. The chief benefit of the project is to provide a climate model that is based on a more theoretically rigorous formulation.

  18. Atmospheric solar heating rate in the water vapor bands

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah

    1986-01-01

    The total absorption of solar radiation by water vapor in clear atmospheres is parameterized as a simple function of the scaled water vapor amount. For applications to cloudy and hazy atmospheres, the flux-weighted k-distribution functions are computed for individual absorption bands and for the total near-infrared region. The parameterization is based upon monochromatic calculations and follows essentially the scaling approximation of Chou and Arking, but the effect of temperature variation with height is taken into account in order to enhance the accuracy. Furthermore, the spectral range is extended to cover the two weak bands centered at 0.72 and 0.82 micron. Comparisons with monochromatic calculations show that the atmospheric heating rate and the surface radiation can be accurately computed from the parameterization. Comparisons are also made with other parameterizations. It is found that the absorption of solar radiation can be computed reasonably well using the Goody band model and the Curtis-Godson approximation.
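    A sketch of the flux-weighted k-distribution idea described above: band absorption for a scaled absorber amount is a weighted sum of gray (exponential) terms. The weights, absorption coefficients, and scaling exponents are invented placeholders, not the published fit.

```python
import numpy as np

f = np.array([0.45, 0.35, 0.20])   # k-distribution probability weights (sum to 1)
k = np.array([0.02, 0.50, 9.00])   # absorption coefficients (cm^2 g^-1), assumed

def scaled_amount(w, p, T, p_ref=300.0, T_ref=240.0):
    """Pressure/temperature scaling of absorber amount (assumed functional form)."""
    return w * (p / p_ref) * np.sqrt(T_ref / T)

def band_absorption(w_scaled):
    """Total band absorption = 1 - sum_i f_i exp(-k_i w*)."""
    return 1.0 - np.sum(f * np.exp(-k * w_scaled))

print(band_absorption(scaled_amount(w=1.0, p=500.0, T=260.0)))
```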

  19. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    NASA Astrophysics Data System (ADS)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.

  20. The "Grey Zone" cold air outbreak global model intercomparison: A cross evaluation using large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Tomassini, Lorenzo; Field, Paul R.; Honnert, Rachel; Malardel, Sylvie; McTaggart-Cowan, Ron; Saitou, Kei; Noda, Akira T.; Seifert, Axel

    2017-03-01

    A stratocumulus-to-cumulus transition as observed in a cold air outbreak over the North Atlantic Ocean is compared in global climate and numerical weather prediction models and a large-eddy simulation model as part of the Working Group on Numerical Experimentation "Grey Zone" project. The focus of the project is to investigate to what degree current convection and boundary layer parameterizations behave in a scale-adaptive manner in situations where the model resolution approaches the scale of convection. Global model simulations were performed at a wide range of resolutions, with convective parameterizations turned on and off. The models successfully simulate the transition between the observed boundary layer structures, from a well-mixed stratocumulus to a deeper, partly decoupled cumulus boundary layer. There are indications that surface fluxes are generally underestimated. The amounts of cloud liquid water and cloud ice, and likely precipitation, are under-predicted, suggesting deficiencies in the strength of vertical mixing in shear-dominated boundary layers. Regulation by precipitation and mixed-phase cloud microphysical processes also plays an important role in this case. With convection parameterizations switched on, the profiles of atmospheric liquid water and cloud ice are essentially resolution-insensitive. This, however, does not imply that convection parameterizations are scale-aware. Even at the highest resolutions considered here, simulations with convective parameterizations do not converge toward the results of convection-off experiments. Convection and boundary layer parameterizations strongly interact, suggesting the need for a unified treatment of convective and turbulent mixing when addressing scale-adaptivity.

  1. Leaf chlorophyll constraint on model simulated gross primary productivity in agricultural systems

    NASA Astrophysics Data System (ADS)

    Houborg, Rasmus; McCabe, Matthew F.; Cescatti, Alessandro; Gitelson, Anatoly A.

    2015-12-01

    Leaf chlorophyll content (Chll) may serve as an observational proxy for the maximum rate of carboxylation (Vmax), which describes leaf photosynthetic capacity and represents the single most important control on modeled leaf photosynthesis within most Terrestrial Biosphere Models (TBMs). The parameterization of Vmax is associated with great uncertainty as it can vary significantly between plants and in response to changes in leaf nitrogen (N) availability, plant phenology and environmental conditions. Houborg et al. (2013) outlined a semi-mechanistic relationship between Vmax25 (Vmax normalized to 25 °C) and Chll based on inter-linkages between Vmax25, Rubisco enzyme kinetics, N and Chll. Here, these relationships are parameterized for a wider range of important agricultural crops and embedded within the leaf photosynthesis-conductance scheme of the Community Land Model (CLM), bypassing the questionable use of temporally invariant and broadly defined plant functional type (PFT) specific Vmax25 values. In this study, the new Chll constrained version of CLM is refined with an updated parameterization scheme for specific application to soybean and maize. The benefit of using in-situ measured and satellite retrieved Chll for constraining model simulations of Gross Primary Productivity (GPP) is evaluated over fields in central Nebraska, U.S.A., between 2001 and 2005. Landsat-based Chll time-series records derived from the Regularized Canopy Reflectance model (REGFLEC) are used as forcing to the CLM. Validation of simulated GPP against 15 site-years of flux tower observations demonstrates the utility of Chll as a model constraint, with the coefficient of efficiency increasing from 0.91 to 0.94 and from 0.87 to 0.91 for maize and soybean, respectively. Model performance particularly improves during the late reproductive and senescence stages, where the largest temporal variations in Chll (averaging 35-55 μg cm-2 for maize and 20-35 μg cm-2 for soybean) are observed. While prolonged periods of vegetation stress did not occur over the studied fields, given the usefulness of Chll as an indicator of plant health, enhanced GPP predictability should be expected in fields exposed to longer periods of moisture and nutrient stress. While the results support the use of Chll as an observational proxy for Vmax25, future work needs to be directed towards improving the Chll retrieval accuracy from space observations and developing consistent and physically realistic modeling schemes that can be parameterized with acceptable accuracy over spatial and temporal domains.
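    A hedged sketch of the general idea: replace a fixed plant-functional-type Vmax25 with one diagnosed from leaf chlorophyll, then apply a temperature response. The linear mapping and Q10 response below are placeholders, not the semi-mechanistic relationship of Houborg et al. (2013).

```python
def vmax25_from_chl(chl_ug_cm2, slope=1.3, intercept=10.0):
    """Hypothetical mapping from leaf chlorophyll to Vmax25 (umol m-2 s-1)."""
    return intercept + slope * chl_ug_cm2

def vmax_at_T(vmax25, T_leaf_c, q10=2.0):
    """Simple Q10 temperature adjustment of Vmax25 (assumed response)."""
    return vmax25 * q10 ** ((T_leaf_c - 25.0) / 10.0)

for chl in (20.0, 35.0, 55.0):   # range of Chll reported for soybean and maize
    print(chl, vmax_at_T(vmax25_from_chl(chl), T_leaf_c=28.0))
```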

  2. Evaluation of Planetary Boundary Layer Scheme Sensitivities for the Purpose of Parameter Estimation

    EPA Science Inventory

    Meteorological model errors caused by imperfect parameterizations generally cannot be overcome simply by optimizing initial and boundary conditions. However, advanced data assimilation methods are capable of extracting significant information about parameterization behavior from ...

  3. Parameterization guidelines and considerations for hydrologic models

    Treesearch

     R. W. Malone; G. Yagow; C. Baffaut; M.W. Gitau; Z. Qi; Devendra Amatya; P.B. Parajuli; J.V. Bonta; T.R. Green

    2015-01-01

     Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) are important and difficult tasks. An exponential...

  4. Parameterized cross sections for Coulomb dissociation in heavy-ion collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Cucinotta, F. A.; Townsend, L. W.; Badavi, F. F.

    1988-01-01

    Simple parameterizations of Coulomb dissociation cross sections for use in heavy-ion transport calculations are presented and compared to available experimental dissociation data. The agreement between calculation and experiment is satisfactory considering the simplicity of the calculations.

  5. Radiative flux and forcing parameterization error in aerosol-free clear skies.

    PubMed

    Pincus, Robert; Mlawer, Eli J; Oreopoulos, Lazaros; Ackerman, Andrew S; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A; Cady-Pereira, Karen E; Cole, Jason N S; Dufresne, Jean-Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M

    2015-07-16

    Radiation parameterizations in GCMs are more accurate than their predecessors. Errors in estimates of 4×CO2 forcing are large, especially for solar radiation. Errors depend on atmospheric state, so the global mean error is unknown.

  6. On the usage of classical nucleation theory in predicting the impact of bacteria on weather and climate

    NASA Astrophysics Data System (ADS)

    Sahyoun, Maher; Woetmann Nielsen, Niels; Havskov Sørensen, Jens; Finster, Kai; Bay Gosewinkel Karlson, Ulrich; Šantl-Temkiv, Tina; Smith Korsholm, Ulrik

    2014-05-01

    Bacteria, e.g. Pseudomonas syringae, have previously been found efficient in nucleating ice heterogeneously at temperatures close to -2°C in laboratory tests. Therefore, ice nucleation active (INA) bacteria may be involved in the formation of precipitation in mixed phase clouds, and could potentially influence weather and climate. Investigations into the impact of INA bacteria on climate have shown that emissions were too low to significantly impact the climate (Hoose et al., 2010). The goal of this study is to clarify why only a marginal climate impact was found when INA bacteria were considered, by investigating the usability of an ice nucleation rate parameterization based on classical nucleation theory (CNT). For this purpose, two parameterizations of heterogeneous ice nucleation were compared. Both parameterizations were implemented and tested in a 1-d version of the operational weather model (HIRLAM) (Lynch et al., 2000; Unden et al., 2002) in two different meteorological cases. The first parameterization is based on CNT and denoted CH08 (Chen et al., 2008). This parameterization is a function of temperature and the size of the IN. The second parameterization, denoted HAR13, was derived from nucleation measurements of SnomaxTM (Hartmann et al., 2013). It is a function of temperature and the number of protein complexes on the outer membranes of the cell. The fraction of cloud droplets containing each type of IN, expressed as a percentage of the cloud droplet population, was varied, and the sensitivity of cloud ice production in each parameterization was compared. In this study, HAR13 produces more cloud ice and precipitation than CH08 when the bacteria fraction increases. In CH08, increasing the bacteria fraction leads to a decrease of the cloud ice mixing ratio. The ice production using HAR13 was found to be more sensitive to changes in the bacterial fraction than that using CH08, which showed no comparable sensitivity. As a result, this may explain the marginal impact of IN bacteria in climate models when CH08 was used. The number of cell fragments containing proteins appears to be a more important parameter to consider than the size of the cell when parameterizing the heterogeneous freezing of bacteria.
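    To see why the two formulations can respond so differently to the bacterial fraction, the hedged sketch below contrasts a CNT-style rate that scales with cell surface area against a per-protein-complex ("singular") description. Every coefficient, including the hypothetical J(T) and active-complex fraction p(T), is invented for illustration and does not reproduce CH08 or HAR13.

```python
import numpy as np

def frozen_fraction_cnt(T, r_cell=0.5e-6, t=60.0):
    """CNT-like: 1 - exp(-J(T) A t) with a hypothetical surface rate J(T)."""
    J = 1e4 * np.exp(-1.2 * (T + 30.0))       # nucleation rate per m^2 s, assumed
    A = 4.0 * np.pi * r_cell**2               # cell surface area (m^2)
    return 1.0 - np.exp(-J * A * t)

def frozen_fraction_singular(T, n_complexes=100.0):
    """Per-protein-complex description: each complex is active with prob p(T)."""
    p = np.clip(0.05 * (-2.0 - T), 0.0, 1.0)  # active fraction, assumed
    return 1.0 - np.exp(-n_complexes * p)

for T in (-4.0, -8.0):
    print(T, frozen_fraction_cnt(T), frozen_fraction_singular(T))
# With these (invented) numbers the surface-area-based rate stays negligible
# at warm mixed-phase temperatures, while the protein-complex description
# freezes readily -- the qualitative contrast discussed in the abstract.
```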

  7. Unsupervised image matching based on manifold alignment.

    PubMed

    Pei, Yuru; Huang, Fengchun; Shi, Fuhao; Zha, Hongbin

    2012-08-01

    This paper addresses the problem of automatic matching between two image sets with similar intrinsic structures and different appearances, especially when there is no prior correspondence. An unsupervised manifold alignment framework is proposed to establish correspondence between data sets by a mapping function in the mutual embedding space. We introduce a local similarity metric based on parameterized distance curves to represent the connection of one point with the rest of the manifold. A small set of valid feature pairs can be found without manual interactions by matching the distance curve of one manifold with the curve cluster of the other manifold. To avoid potential confusion in image matching, we propose an extended affine transformation to solve the nonrigid alignment in the embedding space. The comparatively tight alignments and the structure preservation can be obtained simultaneously. The point pairs with the minimum distance after alignment are viewed as the matchings. We apply manifold alignment to image set matching problems. The correspondence between image sets of different poses, illuminations, and identities can be established effectively by our approach.

  8. Fractal profit landscape of the stock market.

    PubMed

    Grönlund, Andreas; Yi, Il Gu; Kim, Beom Jun

    2012-01-01

    We investigate the structure of the profit landscape obtained from the most basic, fluctuation-based trading strategy applied to daily stock price data. The strategy is parameterized by only two variables, p and q. Stocks are sold and bought if the log return is bigger than p and less than -q, respectively. Repetition of this simple strategy for a long time gives the profit defined in the underlying two-dimensional parameter space of p and q. It is revealed that the local maxima in the profit landscape are spread in the form of a fractal structure. The fractal structure implies that successful strategies are neither localized to any region of the profit landscape nor spaced evenly throughout it, which makes the optimization notoriously hard and hypersensitive to partial or limited information. The concrete implication of this property is demonstrated by showing that optimization of one stock for future values or other stocks renders worse profit than a strategy that ignores fluctuations, i.e., a long-term buy-and-hold strategy.
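    A sketch of the two-parameter strategy described above: sell when the daily log return exceeds p, buy when it falls below -q, and record the resulting profit on a (p, q) grid. The price series here is synthetic random-walk data, not the paper's stock data.

```python
import numpy as np

rng = np.random.default_rng(1)
log_ret = rng.normal(0.0005, 0.02, size=2500)   # ~10 years of daily log returns
price = 100.0 * np.exp(np.cumsum(log_ret))

def profit(p, q):
    """Final wealth of the (p, q) strategy starting from one unit of cash."""
    cash, shares = 1.0, 0.0
    for r, s in zip(log_ret, price):
        if r < -q and cash > 0:        # buy signal: log return below -q
            shares, cash = cash / s, 0.0
        elif r > p and shares > 0:     # sell signal: log return above p
            cash, shares = shares * s, 0.0
    return cash + shares * price[-1]

grid = [(p, q, profit(p, q))
        for p in np.arange(0.01, 0.06, 0.01)
        for q in np.arange(0.01, 0.06, 0.01)]
print(max(grid, key=lambda t: t[2]))
# The landscape of these profit values over (p, q) is what the paper
# finds to have fractally scattered local maxima.
```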

  9. Preferences for tap water attributes within couples: An exploration of alternative mixed logit parameterizations

    NASA Astrophysics Data System (ADS)

    Scarpa, Riccardo; Thiene, Mara; Hensher, David A.

    2012-01-01

    Preferences for attributes of complex goods may differ substantially among members of households. Some of these goods, such as tap water, are jointly supplied at the household level. This issue of jointness poses a series of theoretical and empirical challenges to economists engaged in empirical nonmarket valuation studies. While a series of results have already been obtained in the literature, the issue of how to empirically measure these differences, and how sensitive the results are to choice of model specification from the same data, is yet to be clearly understood. In this paper we use data from a widely employed form of stated preference survey for multiattribute goods, namely choice experiments. The salient feature of the data collection is that the same choice experiment was applied to both partners of established couples. The analysis focuses on models that simultaneously handle scale as well as preference heterogeneity in marginal rates of substitution (MRS), thereby isolating true differences between members of couples in their MRS, by removing interpersonal variation in scale. The models employed are different parameterizations of the mixed logit model, including the willingness to pay (WTP)-space model and the generalized multinomial logit model. We find that in this sample there is some evidence of significant statistical differences in values between women and men, but these are of small magnitude and only apply to a few attributes.

  10. Non-rigid Reconstruction of Casting Process with Temperature Feature

    NASA Astrophysics Data System (ADS)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Ying; Wang, Lu

    2017-09-01

    Off-line reconstruction of rigid scenes has made great progress in the past decade. However, on-line reconstruction of non-rigid scenes is still a very challenging task. The casting process is a non-rigid reconstruction problem: it is a highly dynamic molding process lacking geometric features. In order to reconstruct the casting process robustly, an on-line fusion strategy is proposed for dynamic reconstruction of the casting process. Firstly, the geometric and flow features of the casting are parameterized in the manner of a TSDF (truncated signed distance field), a volumetric representation; the parameterized casting guarantees real-time tracking and optimal deformation of the casting process. Secondly, the data structure of the volume grid is extended to hold a temperature value, and a temperature interpolation function is built to generate the temperature of each voxel. This data structure allows for dynamic tracking of the casting temperature during the deformation stages. Then, sparse RGB features are extracted from the casting scene to search for correspondences between the geometric representation and the depth constraint. The extracted color data guarantee robust tracking of the flow motion of the casting. Finally, the optimal deformation of the target space is transformed into a nonlinear regularized variational optimization problem. This optimization step achieves smooth and optimal deformation of the casting process. The experimental results show that the proposed method can reconstruct the casting process robustly and reduce drift in non-rigid reconstruction of the casting.
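    A sketch of the extended voxel data structure described above: each grid cell stores a truncated signed distance, a fusion weight, and a temperature, so the same volume supports both geometry fusion and thermal tracking. The field names, truncation distance, and weighted-average fusion rule are assumptions.

```python
import numpy as np

voxel_dtype = np.dtype([
    ("tsdf",   np.float32),   # truncated signed distance to the surface
    ("weight", np.float32),   # cumulative fusion weight
    ("temp",   np.float32),   # temperature of the casting (K)
])

grid = np.zeros((64, 64, 64), dtype=voxel_dtype)
grid["tsdf"] = 1.0            # initialize all voxels as empty space

def fuse(grid, idx, sdf, temp, w_new=1.0, trunc=0.05):
    """Weighted running average of TSDF and temperature at voxel idx."""
    v = grid[idx]
    d = np.clip(sdf / trunc, -1.0, 1.0)   # truncate the signed distance
    w = v["weight"] + w_new
    grid[idx]["tsdf"] = (v["tsdf"] * v["weight"] + d * w_new) / w
    grid[idx]["temp"] = (v["temp"] * v["weight"] + temp * w_new) / w
    grid[idx]["weight"] = w

fuse(grid, (32, 32, 32), sdf=0.01, temp=950.0)
print(grid[32, 32, 32])
```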

  11. Root architecture simulation improves the inference from seedling root phenotyping towards mature root systems.

    PubMed

    Zhao, Jiangsan; Bodner, Gernot; Rewald, Boris; Leitner, Daniel; Nagel, Kerstin A; Nakhforoosh, Alireza

    2017-02-01

    Root phenotyping provides trait information for plant breeding. A shortcoming of high-throughput root phenotyping is the limitation to seedling plants and failure to make inferences on mature root systems. We suggest root system architecture (RSA) models to predict mature root traits and overcome the inference problem. Sixteen pea genotypes were phenotyped in (i) seedling (Petri dishes) and (ii) mature (sand-filled columns) root phenotyping platforms. The RSA model RootBox was parameterized with seedling traits to simulate the fully developed root systems. Measured and modelled root length, first-order lateral number, and root distribution were compared to determine key traits for model-based prediction. No direct relationship in root traits (tap, lateral length, interbranch distance) was evident between phenotyping systems. RootBox significantly improved the inference over phenotyping platforms. Seedling plant tap and lateral root elongation rates and interbranch distance were sufficient model parameters to predict genotype ranking in total root length with an RSpearman of 0.83. Parameterization including uneven lateral spacing via a scaling function substantially improved the prediction of architectures underlying the differently sized root systems. We conclude that RSA models can solve the inference problem of seedling root phenotyping. RSA models should be included in the phenotyping pipeline to provide reliable information on mature root systems to breeding research. © The Author 2017. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  12. Historical and projected carbon balance of mature black spruce ecosystems across North America: The role of carbon-nitrogen interactions

    USGS Publications Warehouse

    Clein, Joy S.; McGuire, A.D.; Zhang, X.; Kicklighter, D.W.; Melillo, J.M.; Wofsy, S.C.; Jarvis, P.G.; Massheder, J.M.

    2002-01-01

    The role of carbon (C) and nitrogen (N) interactions in the sequestration of atmospheric CO2 in black spruce ecosystems across North America was evaluated with the Terrestrial Ecosystem Model (TEM) by applying parameterizations of the model in which C-N dynamics were either coupled or uncoupled. First, the performance of the parameterizations, which were developed for the dynamics of black spruce ecosystems at the Bonanza Creek Long-Term Ecological Research site in Alaska, was evaluated by simulating C dynamics at eddy correlation tower sites in the Boreal Ecosystem Atmosphere Study (BOREAS) for black spruce ecosystems in the northern study area (northern site) and the southern study area (southern site) with local climate data. We compared simulated monthly growing season (May to September) estimates of gross primary production (GPP), total ecosystem respiration (RESP), and net ecosystem production (NEP) from 1994 to 1997 to available field-based estimates at both sites. At the northern site, monthly growing season estimates of GPP and RESP for the coupled and uncoupled simulations were highly correlated with the field-based estimates (coupled: R2 = 0.77, 0.88 for GPP and RESP; uncoupled: R2 = 0.67, 0.92 for GPP and RESP). Although the simulated seasonal pattern of NEP generally matched the field-based data, the correlations between field-based and simulated monthly growing season NEP were lower (R2 = 0.40, 0.00 for coupled and uncoupled simulations, respectively) in comparison to the correlations for GPP and RESP. The annual NEP simulated by the coupled parameterization fell within the uncertainty of field-based estimates in two of three years, whereas annual NEP simulated by the uncoupled parameterization fell within the field-based uncertainty in only one of three years. At the southern site, simulated NEP generally matched field-based NEP estimates, and the correlation between monthly growing season field-based and simulated NEP (R2 = 0.36, 0.20 for coupled and uncoupled simulations, respectively) was similar to the correlations at the northern site. To evaluate the role of N dynamics in the C balance of black spruce ecosystems across North America, we simulated historical and projected C dynamics from 1900 to 2100 with a global-based climatology at 0.5° resolution (latitude × longitude) with both the coupled and uncoupled parameterizations of TEM. From the analyses at the northern site, several consistent patterns emerge. There was greater inter-annual variability in net primary production (NPP) simulated by the uncoupled parameterization than by the coupled parameterization, which led to substantial differences in the inter-annual variability of NEP between the parameterizations. The divergence between NPP and heterotrophic respiration was greater in the uncoupled simulation, resulting in more C sequestration during the projected period. These responses were the result of fundamentally different responses of the coupled and uncoupled parameterizations to changes in CO2 and climate. Across North American black spruce ecosystems, the range of simulated decadal changes in C storage was substantially greater for the uncoupled parameterization than for the coupled parameterization. Analysis of the spatial variability in the decadal responses of C dynamics revealed that the C fluxes simulated by the coupled and uncoupled parameterizations have different sensitivities to climate, and that the climate sensitivities of the fluxes change over the temporal scope of the simulations. The results of this study suggest that uncertainties can be reduced through (1) factorial studies focused on elucidating the role of C and N interactions in the response of mature black spruce ecosystems to manipulations of atmospheric CO2 and climate, (2) establishment of a network of continuous, long-term measurements of C dynamics across the range of mature black spruce ecosystems in North America, and (3) ancillary measurements …
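
    The R2 values quoted above are squared Pearson correlations between field-based and simulated monthly growing-season fluxes. A minimal sketch of that comparison, with synthetic stand-in numbers rather than the BOREAS tower data:

        import numpy as np

        # Squared Pearson correlation between field-based and simulated
        # monthly growing-season NEP, as reported in the abstract. The
        # numbers below are synthetic stand-ins, not the BOREAS estimates.
        field_nep     = np.array([ 5., 20., 35., 25., 10.])  # g C m-2 month-1
        coupled_nep   = np.array([ 8., 18., 30., 27., 12.])
        uncoupled_nep = np.array([20., 10., 40.,  5., 25.])

        def r_squared(obs, sim):
            return np.corrcoef(obs, sim)[0, 1] ** 2

        print("coupled   R2:", round(r_squared(field_nep, coupled_nep), 2))
        print("uncoupled R2:", round(r_squared(field_nep, uncoupled_nep), 2))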

  13. Why different gas flux velocity parameterizations result in so similar flux results in the North Atlantic?

    NASA Astrophysics Data System (ADS)

    Piskozub, Jacek; Wróbel, Iwona

    2016-04-01

    The North Atlantic is a crucial region for both ocean circulation and the carbon cycle. Most of the ocean's deep water is formed in this basin, making it a large CO2 sink, and the region, close to the major oceanographic centres, has been well covered with cruises. This is why we have performed a study of the dependence of net CO2 flux upon the choice of gas transfer velocity (k) parameterization for this very region: the North Atlantic including the European Arctic Seas. The study has been part of the ESA-funded OceanFlux GHG Evolution project and, at the same time, of a PhD thesis (of I.W.) funded by the Centre of Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). Early results were presented last year at EGU 2015 as PICO presentation EGU2015-11206-1. We have used FluxEngine, a tool created within an earlier ESA-funded project (OceanFlux Greenhouse Gases), to calculate the North Atlantic and global fluxes with different gas transfer velocity formulas. During the processing of the data, we noticed that the North Atlantic results for different k formulas are more similar (in the sense of relative error) than the global ones. This was true both for parameterizations using the same power of wind speed and when comparing wind-squared and wind-cubed parameterizations. This result was interesting because North Atlantic winds are stronger than the global average. Was the similarity of the flux results caused by the fact that the parameterizations were tuned to the North Atlantic area, where many of the early cruises measuring CO2 fugacities were performed? A closer look at the parameterizations and their history showed that not all of them were based on North Atlantic data: some were tuned to the Southern Ocean, with even stronger winds, while others were based on global budgets of 14C. However, we have found two reasons, not reported before in the literature, for North Atlantic fluxes being more similar than global ones across gas transfer velocity parameterizations. The first is the fact that most of the k functions intersect close to 9 m/s, the typical North Atlantic wind speed. The squared and cubed functions need to intersect in order to have similar global averages; the higher values of the cubic functions at strong winds are offset by the higher values of the squared ones at weak winds. The wind speed at the intersection has to be higher than the global average wind speed because discrepancies between parameterizations increase with wind speed. The North Atlantic region seems, by chance, to have just the right average wind speed to make all the parameterizations yield similar annual fluxes. The second reason for smaller inter-parameterization discrepancies in the North Atlantic than in many other ocean basins is that the North Atlantic CO2 flux is downward in every month. In many regions of the world, the direction of the flux changes between winter and summer, with wind speeds much stronger in the cold season; we show, using the actual formulas, that in such a case the differences between the parameterizations partly cancel out, which is not the case when the flux never changes direction. Together, the two mechanisms accidentally make the North Atlantic an area where the choice of k parameterization causes very small uncertainty in annual fluxes. On the other hand, this makes North Atlantic data not very useful for choosing the parameterization that most closely represents real fluxes.
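
    The intersection argument can be made concrete with a short script. The quadratic and cubic coefficients below are in the style of widely used parameterizations, not necessarily the ones compared in the study (with these values the curves cross near 11 m/s rather than the 9 m/s quoted above), and the monthly winds and ΔpCO2 are invented to mimic a North-Atlantic-like site where the flux is downward all year:

        import numpy as np

        # Illustrative quadratic and cubic gas transfer velocities k(U)
        # (cm/hr for U in m/s). Coefficients follow the style of widely
        # used parameterizations; they are not the study's formulas.
        k_quad  = lambda u: 0.31   * u**2
        k_cubic = lambda u: 0.0283 * u**3

        print(f"curves intersect at U = {0.31 / 0.0283:.1f} m/s")

        # Invented monthly mean winds and a fixed (always negative)
        # air-sea pCO2 difference: the flux is downward in every month,
        # so no seasonal cancellation occurs.
        u_na  = np.array([12, 11, 11, 10, 9, 8, 8, 8, 9, 10, 11, 12.])
        dpco2 = -30.0                                  # uatm, into the ocean

        flux_q = np.mean(k_quad(u_na))  * dpco2       # arbitrary flux units
        flux_c = np.mean(k_cubic(u_na)) * dpco2
        print(f"quadratic: {flux_q:7.1f}   cubic: {flux_c:7.1f}   "
              f"relative difference: {abs(flux_q - flux_c) / abs(flux_q):.1%}")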

  14. Silhouette-Slice Theorems.

    DTIC Science & Technology

    1986-09-01

    Only a garbled fragment of this report's text is available. It concerns the need to define "canonical" parameterizations of surfaces, and an expression (A4.24) relating a slice of a surface, oriented along a given vector, to properties of its silhouette.

  15. Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model

    EPA Science Inventory

    This presentation describes year 1 field measurements of N2O fluxes and crop yields, which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.

  16. Parameterization of Transport and Period Matrices with X-Y Coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courant, E. D.

    A parameterization of 4×4 matrices describing linear beam transport systems has been obtained by Edwards and Teng. Here we extend their formalism to include dispersive effects, and give prescriptions for incorporating it in the program SYNCH.
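
    The starting point of such a parameterization is a symplectic 4×4 one-turn matrix. Below is a minimal sketch of the first steps one can script: checking symplecticity (M^T S M = S) and extracting the two transverse tunes from the eigenvalue phases. The full Edwards-Teng construction additionally yields the 2×2 coupling block, which is omitted here.

        import numpy as np

        # Check symplecticity of a coupled 4x4 one-turn matrix and extract
        # the two transverse tunes from its eigenvalue phases. This covers
        # only the first steps of an Edwards-Teng-style analysis.
        S = np.array([[ 0, 1,  0, 0],
                      [-1, 0,  0, 0],
                      [ 0, 0,  0, 1],
                      [ 0, 0, -1, 0]], float)

        def rot2(mu):                        # 2x2 rotation (one-turn block)
            c, s = np.cos(mu), np.sin(mu)
            return np.array([[c, s], [-s, c]])

        # Build a weakly coupled matrix by conjugating a block-diagonal
        # matrix with a symplectic rotation mixing the x and y planes.
        U = np.zeros((4, 4))
        U[:2, :2] = rot2(2*np.pi*0.28)       # horizontal tune 0.28
        U[2:, 2:] = rot2(2*np.pi*0.31)       # vertical tune 0.31
        R = np.kron(rot2(0.1), np.eye(2))    # symplectic coupling rotation
        M = R @ U @ np.linalg.inv(R)

        assert np.allclose(M.T @ S @ M, S)   # symplecticity: M^T S M = S
        phases = np.angle(np.linalg.eigvals(M))
        tunes = sorted(set(np.round(np.abs(phases) / (2*np.pi), 4)))
        print("fractional tunes:", tunes)    # ~0.28 and 0.31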

  17. Accuracy of parameterized proton range models; A comparison

    NASA Astrophysics Data System (ADS)

    Pettersen, H. E. S.; Chaar, M.; Meric, I.; Odland, O. H.; Sølie, J. R.; Röhrich, D.

    2018-03-01

    An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four parameterization models for proton range in water: two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables. In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy-loss curve is best reproduced with the differentiated Bragg-Kleeman equation.
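
    The two families of models compared can be sketched in a few lines: a Bragg-Kleeman power law R = αE^p against a cubic-spline interpolation of a range-energy table. The constants and table values below are approximate textbook numbers for protons in water, not the study's data.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Bragg-Kleeman power law R = alpha * E**p versus cubic-spline
        # interpolation of a range-energy table for protons in water.
        # Constants and table values are approximate, for illustration.
        alpha, p = 0.0022, 1.77                    # R in cm, E in MeV
        bragg_kleeman = lambda E: alpha * E**p

        # Small range-energy table (MeV -> cm in water, approximate).
        E_tab = np.array([ 10.,  50., 100., 150., 200., 250.])
        R_tab = np.array([0.12, 2.22, 7.72, 15.8, 26.0, 37.9])
        spline = CubicSpline(E_tab, R_tab)

        for E in (70., 120., 180.):
            print(f"E = {E:5.1f} MeV: "
                  f"Bragg-Kleeman {bragg_kleeman(E):6.2f} cm, "
                  f"spline {float(spline(E)):6.2f} cm")

        # The energy-loss curve follows from the derivative: dE/dx = 1/(dR/dE).
        dRdE = spline.derivative()
        print("stopping power at 100 MeV ~",
              round(1.0 / float(dRdE(100.)), 2), "MeV/cm")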

  18. R-parametrization and its role in classification of linear multivariable feedback systems

    NASA Technical Reports Server (NTRS)

    Chen, Robert T. N.

    1988-01-01

    A classification of all the compensators that stabilize a given general plant in a linear, time-invariant, multi-input, multi-output feedback system is developed. This classification, along with the associated necessary and sufficient conditions for stability of the feedback system, is achieved through the introduction of a new parameterization, referred to as R-Parameterization, which is a dual of the familiar Q-Parameterization. The classification is made according to the stability of the compensators and the plant themselves, and the necessary and sufficient conditions are based on the stability of Q and R.
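
    For a stable plant, the familiar Q-Parameterization that R-Parameterization dualizes takes a particularly simple form: every stabilizing compensator is C = Q/(1 - PQ) for some stable Q, and the closed loop is affine in Q. A minimal numeric sketch with illustrative transfer functions (not from the paper):

        import numpy as np

        # Q-parameterization for a stable SISO plant: every stabilizing
        # compensator is C = Q/(1 - P*Q) with Q stable, and the closed-loop
        # map r -> y reduces to P*Q. Transfer functions are illustrative.
        P_num, P_den = np.poly1d([1.0]), np.poly1d([1.0, 1.0])  # P = 1/(s+1)
        Q_num, Q_den = np.poly1d([2.0]), np.poly1d([1.0, 2.0])  # Q = 2/(s+2)

        # C = Q/(1 - P*Q) in coprime polynomial form:
        C_num = Q_num * P_den
        C_den = P_den * Q_den - P_num * Q_num

        # Closed-loop characteristic polynomial of the negative-feedback
        # loop, 1 + P*C = 0:
        char_poly = P_den * C_den + P_num * C_num
        print("closed-loop poles:", np.roots(char_poly.coeffs))
        # All roots sit at the (stable) poles of P and Q, so the loop is
        # stable for *any* stable Q -- the essence of the parameterization.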

  19. Implementation of a generalized actuator line model for wind turbine parameterization in the Weather Research and Forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marjanovic, Nikola; Mirocha, Jeffrey D.; Kosović, Branko

    A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near-wake physics, including vorticity shedding and wake expansion.
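
    The actuator-disk concept that the GAL refines can be sketched compactly: the rotor is represented by a total thrust F = 0.5 * rho * C_T * U^2 * A, spread over nearby grid points with a Gaussian smearing kernel. The sketch below is a one-dimensional illustration with invented values, not the WRF implementation.

        import numpy as np

        # Minimal actuator-disk body force, the model that the generalized
        # actuator line (GAL) refines: the total rotor thrust is spread
        # over nearby grid points with a Gaussian kernel. Values are
        # illustrative, not taken from the WRF implementation.
        rho, C_T, D, U_inf = 1.225, 0.8, 100.0, 9.0   # SI units
        A = np.pi * (D / 2)**2
        thrust = 0.5 * rho * C_T * U_inf**2 * A       # total thrust [N]

        dx = 10.0                                     # grid spacing [m]
        x = np.arange(-100.0, 100.0 + dx, dx)         # streamwise coordinate
        eps = 2.0 * dx                                # smearing width
        kernel = np.exp(-(x / eps)**2)
        kernel /= kernel.sum() * dx                   # discrete normalisation

        force_per_m = -thrust * kernel                # body force opposing flow
        print("applied force:", round(force_per_m.sum() * dx, 1),
              "N; -thrust =", round(-thrust, 1), "N")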

  20. Parameterization of Cloud Optical Properties for a Mixture of Ice Particles for use in Atmospheric Models

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Based on single-scattering optical properties pre-computed using an improved geometric optics method, the bulk mass absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the mean effective particle size of a mixture of ice habits. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. Compared to the parameterization for a single habit of hexagonal columns, the solar heating of clouds computed with the parameterization for a mixture of habits is smaller due to a smaller co-single-scattering albedo, whereas the net downward fluxes at the TOA and the surface are larger due to a larger asymmetry factor. The maximum difference in the cloud heating rate is approximately 0.2°C per day, which occurs in clouds with an optical thickness greater than 3 and a solar zenith angle less than 45 degrees. The flux difference is less than 10 W per square meter for optical thicknesses ranging from 0.6 to 10 and the entire range of solar zenith angles. The maximum flux difference is approximately 3%, which occurs around an optical thickness of 1 and at high solar zenith angles.
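
    Parameterizations of this kind typically express the bulk optical properties as low-order polynomial or rational functions of the mean effective particle size D_e. The sketch below shows only the functional form, with placeholder coefficients rather than the values fitted in the paper.

        # Typical functional forms for bulk ice-cloud optical properties as
        # functions of effective particle size De (micrometres). All
        # coefficients are placeholders, not the paper's fitted values.
        def extinction(IWC, De, a=(-6.7e-3, 3.33)):
            """Bulk extinction ~ IWC*(a0 + a1/De); arbitrary units here."""
            return IWC * (a[0] + a[1] / De)

        def single_scattering_albedo(De, b=(0.99999, -1.0e-4)):
            return b[0] + b[1] * De        # linear in De (placeholder fit)

        def asymmetry_factor(De, c=(0.74, 1.0e-3)):
            return c[0] + c[1] * De        # larger particles -> larger g

        for De in (20.0, 50.0, 100.0):     # micrometres
            print(f"De={De:5.1f} um: ext={extinction(0.01, De):8.5f},"
                  f" ssa={single_scattering_albedo(De):.4f},"
                  f" g={asymmetry_factor(De):.3f}")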
