Sample records for exploring alternative parameterizations

  1. Exploring Alternate Parameterizations for Snowfall with Validation from Satellite and Terrestrial Radars

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.

    2009-01-01

    Increases in computational resources have allowed operational forecast centers to pursue experimental, high resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. Aircraft, surface and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) model. Widespread snowfall developed across the region on January 22, 2007, forced by the passing of a midlatitude cyclone, and was observed by the dual-polarimetric, C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present. Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
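
    For reference, the "inverse-exponential equations prescribed within the model" refer to the standard single-moment bulk form, in which the snow size distribution is exponential with a fixed intercept and the slope is diagnosed from the prognostic mixing ratio. A minimal sketch of that form in generic bulk-scheme notation (not the specific constants of the Goddard scheme) is:

```latex
% Exponential size distribution with fixed intercept N_0 and snow density rho_s;
% the slope lambda follows from integrating the particle mass over the distribution.
N(D) = N_0 \, e^{-\lambda D}, \qquad
\lambda = \left( \frac{\pi \rho_s N_0}{\rho_a q_s} \right)^{1/4}
```

    Here q_s is the snow mixing ratio and ρ_a the air density. The study's point is that treating N_0 and ρ_s as fixed constants disagrees with the aircraft-observed profiles, which motivates parameterizing them in terms of quantities such as column-integrated vapor excess.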

  2. Exploring Alternative Parameterizations for Snowfall with Validation from Satellite and Terrestrial Radars

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.

    2009-01-01

    Increases in computational resources have allowed operational forecast centers to pursue experimental, high resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. The combination of reliable cloud microphysics and radar reflectivity may constrain radiative transfer models used in satellite simulators during future missions, including EarthCARE and the NASA Global Precipitation Measurement mission. Aircraft, surface and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) model. Widespread snowfall developed across the region on January 22, 2007, forced by the passing of a midlatitude cyclone, and was observed by the dual-polarimetric, C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present. Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.

  3. Parameterization of ALMANAC crop simulation model for non-irrigated dry bean in semi-arid temperate areas in Mexico

    USDA-ARS's Scientific Manuscript database

    Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...

  4. Approximate Stokes Drift Profiles in Deep Water

    NASA Astrophysics Data System (ADS)

    Breivik, Øyvind; Janssen, Peter A. E. M.; Bidlot, Jean-Raymond

    2014-09-01

    A deep-water approximation to the Stokes drift velocity profile is explored as an alternative to the monochromatic profile. The alternative profile investigated relies on the same two quantities required for the monochromatic profile, viz the Stokes transport and the surface Stokes drift velocity. Comparisons with parametric spectra and profiles under wave spectra from the ERA-Interim reanalysis and buoy observations reveal much better agreement than the monochromatic profile even for complex sea states. That the profile gives a closer match and a more correct shear has implications for ocean circulation models since the Coriolis-Stokes force depends on the magnitude and direction of the Stokes drift profile and Langmuir turbulence parameterizations depend sensitively on the shear of the profile. The alternative profile comes at no added numerical cost compared to the monochromatic profile.
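
    Both profiles discussed above are constructed from the same two inputs, the surface Stokes drift v_0 and the Stokes transport V. For the monochromatic (single-wavenumber) baseline these two quantities fix the profile completely; a sketch in generic notation (the alternative profile in the paper keeps the same inputs but a different vertical shape) is:

```latex
% Monochromatic deep-water Stokes drift profile: an exponential decay whose
% wavenumber k is chosen so that the vertical integral equals the transport V.
u_S(z) = v_0 \, e^{2 k z}, \qquad
\int_{-\infty}^{0} u_S(z)\, dz = \frac{v_0}{2k} = V
\;\;\Rightarrow\;\; k = \frac{v_0}{2V}
```

    The comparisons in the abstract amount to asking how well this exponential, or the proposed alternative, reproduces the shear of profiles computed from full wave spectra.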

  5. Diagnosing the impact of alternative calibration strategies on coupled hydrologic models

    NASA Astrophysics Data System (ADS)

    Smith, T. J.; Perera, C.; Corrigan, C.

    2017-12-01

    Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and of society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models are imperative. While extensive attention has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity and variability of parameterizations and their impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness and fidelity.

  6. Age of high redshift objects—a litmus test for the dark energy models

    NASA Astrophysics Data System (ADS)

    Jain, Deepak; Dev, Abha

    2006-02-01

    The discovery of the quasar APM 08279+5255 at z=3.91, whose age is 2-3 Gyr, has once again led to an “age crisis”. The noticeable fact about this object is that it cannot be accommodated in a universe with Ω_m=0.27, the currently accepted value of the matter density parameter, and w=const. In this work, we explore the concordance of various dark energy parameterizations (w(z) models) with the age estimates of old high redshift objects. It is alarming to note that the quasar cannot be accommodated in any dark energy model even for Ω_m=0.23, which corresponds to a 1σ deviation below the best fit value provided by WMAP. There is a need to look for alternative cosmologies or some other dark energy parameterizations which allow the existence of such high redshift objects.
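
    The "litmus test" works by comparing the estimated age of the object with the age of the universe at its redshift, which depends on the assumed w(z). A sketch of the relevant integrals for a flat universe follows; the CPL form is shown only as one common example of a w(z) parameterization, since the paper's specific models are not listed in the abstract.

```latex
t(z) = \int_{z}^{\infty} \frac{dz'}{(1+z')\, H(z')}, \qquad
H^2(z) = H_0^2 \left[ \Omega_m (1+z)^3
  + (1-\Omega_m)\, \exp\!\left( 3 \int_0^z \frac{1+w(z')}{1+z'}\, dz' \right) \right],
\qquad
w(z) = w_0 + w_a \frac{z}{1+z} \;\; \text{(e.g., CPL)}
```

    A parameterization is in tension with the data when t(z=3.91) falls below the 2-3 Gyr age estimated for the quasar.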

  7. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for carbon cycle studies

    USGS Publications Warehouse

    He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be representative of the performance of species-level simulations, while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
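
    A minimal sketch of the cover-weighted aggregation described above, in which species-level parameter estimates are rolled up to PFT and biome level; the species names, parameter values, and percent covers are illustrative placeholders, not the actual TEM calibration data.

```python
# Hypothetical sketch: species-level parameter estimates are combined into
# PFT- and biome-level values using percent cover as weights.
species_params = {          # a calibrated rate parameter, arbitrary units
    "black spruce": 8.2, "white spruce": 9.1,
    "paper birch": 14.5, "aspen": 13.8,
}
percent_cover = {"black spruce": 45.0, "white spruce": 20.0,
                 "paper birch": 20.0, "aspen": 15.0}
pft_of = {"black spruce": "needleleaf", "white spruce": "needleleaf",
          "paper birch": "broadleaf", "aspen": "broadleaf"}

def cover_weighted(names):
    """Percent-cover-weighted mean of the parameter over the given species."""
    total = sum(percent_cover[n] for n in names)
    return sum(species_params[n] * percent_cover[n] for n in names) / total

pft_params = {pft: cover_weighted([n for n in species_params if pft_of[n] == pft])
              for pft in sorted(set(pft_of.values()))}
biome_param = cover_weighted(list(species_params))  # dominated by needleleaf cover
print(pft_params, biome_param)
```

    With cover dominated by needleleaf species, the biome-level value ends up close to the needleleaf PFT value, which is exactly the loss of broadleaf information the authors describe.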

  8. Intercomparison of land-surface parameterizations launched

    NASA Astrophysics Data System (ADS)

    Henderson-Sellers, A.; Dickinson, R. E.

    One of the crucial tasks for climatic and hydrological scientists over the next several years will be validating land surface process parameterizations used in climate models. There is not, necessarily, a unique set of parameters to be used. Different scientists will want to attempt to capture processes through various methods [for example, Avissar and Verstraete, 1990]. Validation of some aspects of the available (and proposed) schemes' performance is clearly required. It would also be valuable to compare the behavior of the existing schemes [for example, Dickinson et al., 1991; Henderson-Sellers, 1992a]. The WMO-CAS Working Group on Numerical Experimentation (WGNE) and the Science Panel of the GEWEX Continental-Scale International Project (GCIP) [for example, Chahine, 1992] have agreed to launch the joint WGNE/GCIP Project for Intercomparison of Land-Surface Parameterization Schemes (PILPS). The principal goal of this project is to achieve greater understanding of the capabilities and potential applications of existing and new land-surface schemes in atmospheric models. It is not anticipated that a single “best” scheme will emerge. Rather, the aim is to explore alternative models in ways compatible with their authors' or exploiters' goals and to increase understanding of the characteristics of these models in the scientific community.

  9. Climate and the equilibrium state of land surface hydrology parameterizations

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation for such climatic conditions serves as a measure of the land surface parameterization state under a given forcing. The equilibrium value of this variable for alternate parameterizations of land surface hydrology is determined as a function of climate, and the sensitivity of the surface to shifts and changes in climatic forcing is estimated.
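
    The equilibrium idea can be made concrete with a schematic bucket-style balance; this is an illustration of the concept, not the specific GCM land-surface schemes being compared. The equilibrium saturation s* is the value at which climatic moisture supply and loss balance:

```latex
P \;=\; E_p\, \beta(s^{*}) \;+\; Q(s^{*})
```

    Here P is precipitation, E_p potential evaporation, β(s) the scheme's evaporation efficiency, and Q(s) its runoff/drainage function. Different choices of β and Q yield different equilibrium saturations for the same forcing, which is the sensitivity the paper quantifies.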

  10. Modelling and parameterizing the influence of tides on ice-shelf melt rates

    NASA Astrophysics Data System (ADS)

    Jourdain, N.; Molines, J. M.; Le Sommer, J.; Mathiot, P.; de Lavergne, C.; Gurvan, M.; Durand, G.

    2017-12-01

    Significant Antarctic ice sheet thinning is observed in several sectors of Antarctica, in particular in the Amundsen Sea sector, where warm circumpolar deep waters affect basal melting. The latter has the potential to trigger marine ice sheet instabilities, with an associated potential for rapid sea level rise. It is therefore crucial to simulate and understand the processes associated with ice-shelf melt rates. In particular, the absence of a representation of tides in ocean models remains a caveat of numerous ocean hindcasts and climate projections. In the Amundsen Sea, tides are relatively weak and the melt-induced circulation is stronger than the tidal circulation. Using a regional 1/12° ocean model of the Amundsen Sea, we nonetheless find that tides can increase melt rates by up to 36% in some ice-shelf cavities. Among the processes that can possibly affect melt rates, the most important is an increased exchange at the ice/ocean interface resulting from the presence of strong tidal currents along the ice drafts. Approximately a third of this effect is compensated by a decrease in thermal forcing along the ice draft, which is related to enhanced vertical mixing in the ocean interior in the presence of tides. Parameterizing the effect of tides is an alternative to representing explicit tides in an ocean model, and has the advantage of not requiring any filtering of ocean model outputs. We therefore explore different ways to parameterize the effects of tides on ice shelf melt. First, we compare several methods to impose tidal velocities along the ice draft. We show that obtaining a realistic spatial distribution of tidal velocities is important, and that it can be deduced from the barotropic velocities of a tide model. Then, we explore several aspects of parameterized tidal mixing to reproduce the tide-induced decrease in thermal forcing along the ice drafts.
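
    One common way tidal currents enter basal-melt formulations, consistent with the "increased exchange at the ice/ocean interface" described above, is through the friction velocity in a velocity-dependent exchange coefficient. The coefficients below are generic placeholders rather than the values used in this study:

```latex
m \;\propto\; \gamma_T\, (T_w - T_f), \qquad
\gamma_T \;\propto\; u_*, \qquad
u_* = \sqrt{ C_d \left( \langle u \rangle^2 + u_{tide}^2 \right) }
```

    Imposing a tidal velocity u_tide along the ice draft therefore raises the exchange velocity γ_T, and hence the melt rate m, even when the mean circulation ⟨u⟩ is unchanged; the partial compensation noted in the abstract enters through a reduced thermal forcing (T_w - T_f).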

  11. Multiresponse modeling of variably saturated flow and isotope tracer transport for a hillslope experiment at the Landscape Evolution Observatory

    NASA Astrophysics Data System (ADS)

    Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter

    2016-10-01

    This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.
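
    For context, the governing equations solved by such a model are, in generic form (notation simplified relative to the paper's three-dimensional finite element formulation):

```latex
% Richards equation (variably saturated flow) and advection-dispersion equation
% (tracer transport); h is pressure head, theta water content, c concentration.
\frac{\partial \theta(h)}{\partial t}
  = \nabla \cdot \left[ K(h)\, \nabla (h + z) \right] + q,
\qquad
\frac{\partial (\theta c)}{\partial t}
  = \nabla \cdot \left( \theta \mathbf{D}\, \nabla c \right)
  - \nabla \cdot \left( \mathbf{v}\, c \right)
```

    Here v is the Darcy flux and D the dispersion tensor; the soil hydraulic functions θ(h) and K(h) carry the parameters whose complexity the study incrementally increases when moving from integrated to distributed responses.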

  12. The sensitivity of WRF daily summertime simulations over West Africa to alternative parameterizations. Part 2: Precipitation.

    PubMed

    Noble, Erik; Druyan, Leonard M; Fulakeza, Matthew

    2016-01-01

    This paper evaluates the performance of the Weather Research and Forecasting (WRF) model as a regional-atmospheric model over West Africa. It tests WRF sensitivity to 64 configurations of alternative parameterizations in a series of 104 twelve-day September simulations during eleven consecutive years, 2000-2010. The 64 configurations combine WRF parameterizations of cumulus convection, radiation, surface-hydrology, and PBL. Simulated daily and total precipitation results are validated against Global Precipitation Climatology Project (GPCP) and Tropical Rainfall Measuring Mission (TRMM) data. Particular attention is given to westward-propagating precipitation maxima associated with African Easterly Waves (AEWs). A wide range of daily precipitation validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve time-longitude correlations (against GPCP) between 0.35 and 0.42 and spatiotemporal variability amplitudes only slightly higher than observed estimates. A parallel simulation by the benchmark Regional Model-v.3 achieves a higher correlation (0.52) and realistic spatiotemporal variability amplitudes. The largest favorable impact on WRF precipitation validation is achieved by selecting the Grell-Devenyi convection scheme, resulting in higher correlations against observations than using the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact. Validation statistics for optimized WRF configurations simulating the parallel period during 2000-2010 are more favorable for 2005, 2006, and 2008 than for other years. The selection of some of the same WRF configurations as high scorers in both circulation and precipitation validations supports the notion that simulations of West African daily precipitation benefit from skillful simulations of associated AEW vorticity centers and that simulations of AEWs would benefit from skillful simulations of convective precipitation.
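
    As an illustration of the kind of score quoted above, a time-longitude (Hovmöller) correlation can be computed by averaging daily precipitation over a latitude band and correlating the resulting (time, longitude) fields. The sketch below assumes daily fields on a common grid; the variable names and the latitude band are placeholders, not taken from the paper.

```python
import numpy as np

# Hedged sketch of a time-longitude (Hovmoller) validation score.
# `wrf_precip` and `gpcp_precip` are assumed daily fields with dimensions
# (time, lat, lon) on a common grid.
def hovmoller(precip, lats, lat_min=5.0, lat_max=15.0):
    """Average over a latitude band to get a (time, lon) array."""
    band = (lats >= lat_min) & (lats <= lat_max)
    return precip[:, band, :].mean(axis=1)

def validation_scores(wrf_precip, gpcp_precip, lats):
    sim = hovmoller(wrf_precip, lats)
    obs = hovmoller(gpcp_precip, lats)
    corr = np.corrcoef(sim.ravel(), obs.ravel())[0, 1]  # time-longitude correlation
    amp_ratio = sim.std() / obs.std()                   # spatiotemporal variability amplitude
    return corr, amp_ratio
```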

  13. The sensitivity of WRF daily summertime simulations over West Africa to alternative parameterizations. Part 2: Precipitation

    PubMed Central

    Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew

    2018-01-01

    This paper evaluates the performance of the Weather Research and Forecasting (WRF) model as a regional-atmospheric model over West Africa. It tests WRF sensitivity to 64 configurations of alternative parameterizations in a series of 104 twelve-day September simulations during eleven consecutive years, 2000–2010. The 64 configurations combine WRF parameterizations of cumulus convection, radiation, surface-hydrology, and PBL. Simulated daily and total precipitation results are validated against Global Precipitation Climatology Project (GPCP) and Tropical Rainfall Measuring Mission (TRMM) data. Particular attention is given to westward-propagating precipitation maxima associated with African Easterly Waves (AEWs). A wide range of daily precipitation validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve time-longitude correlations (against GPCP) between 0.35 and 0.42 and spatiotemporal variability amplitudes only slightly higher than observed estimates. A parallel simulation by the benchmark Regional Model-v.3 achieves a higher correlation (0.52) and realistic spatiotemporal variability amplitudes. The largest favorable impact on WRF precipitation validation is achieved by selecting the Grell-Devenyi convection scheme, resulting in higher correlations against observations than using the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact. Validation statistics for optimized WRF configurations simulating the parallel period during 2000–2010 are more favorable for 2005, 2006, and 2008 than for other years. The selection of some of the same WRF configurations as high scorers in both circulation and precipitation validations supports the notion that simulations of West African daily precipitation benefit from skillful simulations of associated AEW vorticity centers and that simulations of AEWs would benefit from skillful simulations of convective precipitation. PMID:29563651

  14. The Sensitivity of WRF Daily Summertime Simulations over West Africa to Alternative Parameterizations. Part 1: African Wave Circulation

    NASA Technical Reports Server (NTRS)

    Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew

    2014-01-01

    The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional-atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of simulations in September. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. The 64 configurations combine WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in the second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year, while a parallel benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations, but less realistic spatiotemporal variability. The largest favorable impact on WRF-vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, resulting in higher correlations against reanalysis than simulations using the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact, although WRF configurations incorporating one surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.

  15. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...

    2015-06-30

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to the number of subcolumns.
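
    A minimal sketch of the Monte Carlo cloud-microphysics interface described above; the Gaussian subgrid PDF and the microphysics stub are placeholders, not the actual CLUBB/CAM implementation.

```python
import numpy as np

# Subgrid variability is sampled into subcolumns, each subcolumn is fed to a
# microphysics routine, and the tendencies are averaged back to the grid mean.
rng = np.random.default_rng(0)

def microphysics_tendencies(T, qv, ql, qi):
    """Stand-in for a real microphysics scheme; returns illustrative tendencies."""
    autoconv = 1.0e-3 * np.maximum(ql - 2.0e-4, 0.0)   # crude autoconversion threshold
    return {"dql_dt": -autoconv, "dprecip_dt": autoconv}

def grid_mean_tendencies(mean_state, stddev_state, n_subcolumns=10):
    samples = {k: rng.normal(mean_state[k], stddev_state[k], n_subcolumns)
               for k in ("T", "qv", "ql", "qi")}
    samples["ql"] = np.maximum(samples["ql"], 0.0)     # keep condensate non-negative
    tends = microphysics_tendencies(**samples)
    return {k: v.mean() for k, v in tends.items()}     # average subcolumns -> grid mean

print(grid_mean_tendencies({"T": 270.0, "qv": 3e-3, "ql": 3e-4, "qi": 1e-5},
                           {"T": 0.5, "qv": 2e-4, "ql": 1e-4, "qi": 5e-6}))
```

    The sensitivity study mentioned at the end of the abstract corresponds to varying n_subcolumns and measuring both the noise in the grid-mean tendencies and the added computational cost.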

  16. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; ...

    2015-12-01

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. In conclusion, the new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to the number of subcolumns.

  17. Development and Exploration of a Regional Stormwater BMP Performance Database to Parameterize an Integrated Decision Support Tool (i-DST)

    NASA Astrophysics Data System (ADS)

    Bell, C.; Li, Y.; Lopez, E.; Hogue, T. S.

    2017-12-01

    Decision support tools that quantitatively estimate the cost and performance of infrastructure alternatives are valuable for urban planners. Such a tool is needed to aid in planning stormwater projects to meet diverse goals such as the regulation of stormwater runoff and its pollutants, minimization of economic costs, and maximization of environmental and social benefits in the communities served by the infrastructure. This work gives a brief overview of an integrated decision support tool, called i-DST, that is currently being developed to serve this need. This presentation focuses on the development of a default database for the i-DST that parameterizes water quality treatment efficiency of stormwater best management practices (BMPs) by region. Parameterizing the i-DST by region will allow the tool to perform accurate simulations in all parts of the United States. A national dataset of BMP performance is analyzed to determine which of a series of candidate regionalizations explains the most variance in the national dataset. The data used in the regionalization analysis comes from the International Stormwater BMP Database and data gleaned from an ongoing systematic review of peer-reviewed and gray literature. In addition to identifying a regionalization scheme for water quality performance parameters in the i-DST, our review process will also provide example methods and protocols for systematic reviews in the field of Earth Science.

  18. The Grell-Freitas Convection Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    NASA Technical Reports Server (NTRS)

    Freitas, Saulo R.; Grell, Georg; Molod, Andrea; Thompson, Matthew A.

    2017-01-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, mid, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over the land. Here, we briefly introduce the recent developments, implementation, and preliminary results of this parameterization in the NASA GEOS modeling system.

  19. Statistical models of global Langmuir mixing

    NASA Astrophysics Data System (ADS)

    Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean

    2017-05-01

    The effects of Langmuir mixing on the surface ocean mixing may be parameterized by applying an enhancement factor which depends on wave, wind, and ocean state to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on the empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
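
    For reference, the enhancement factor enters by multiplying the KPP turbulent velocity scale. One published form of such a factor, shown here only to illustrate the dependence on the turbulent Langmuir number (not necessarily the exact expression used in this paper), is:

```latex
w_s \;\rightarrow\; \mathcal{E}\, w_s, \qquad
\mathcal{E} = \sqrt{1 + 0.080\, \mathrm{La}_t^{-4}}, \qquad
\mathrm{La}_t = \sqrt{u_* / u_{S0}}
```

    Here u_* is the water-side friction velocity and u_{S0} the surface Stokes drift. The two alternatives tested in the paper replace the wave-model-derived inputs to such a factor with either a monthly climatology or an empirical-spectrum approximation.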

  20. FY 2010 Fourth Quarter Report: Evaluation of the Dependency of Drizzle Formation on Aerosol Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, W; McGraw, R; Liu, Y

    Metric for Quarter 4: Report results of the implementation of the composite parameterization in a single-column model (SCM) to explore the dependency of drizzle formation on aerosol properties. To better represent VOCALS conditions during a test flight, the Liu-Daum-McGraw (LDM) drizzle parameterization is implemented in the high-resolution Weather Research and Forecasting (WRF) model, as well as in the single-column Community Atmosphere Model (CAM), to explore this dependency.

  1. Exploring the potential of machine learning to break deadlock in convection parameterization

    NASA Astrophysics Data System (ADS)

    Pritchard, M. S.; Gentine, P.

    2017-12-01

    We explore the potential of modern machine learning tools (via TensorFlow) to replace the parameterization of deep convection in climate models. Our strategy begins by generating a large (~1 Tb) training dataset from time-step level (30-min) output harvested from a one-year integration of a zonally symmetric, uniform-SST aquaplanet configuration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables to afford 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of the desired outputs of SP, i.e. CRM-mean convective heating and moistening profiles. Sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation is discussed, as well as results from pilot tests of the neural network operating inline within SPCAM as a replacement for the (super)parameterization of convection.
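
    A hedged sketch of the kind of neural-network emulator described above: a dense network mapping a column's thermodynamic state to CRM-mean heating and moistening profiles. The layer sizes, the 30-level packing, and the choice of input variables are assumptions for illustration, not the SPCAM configuration.

```python
import tensorflow as tf

# Dense emulator: column state in, convective heating/moistening profiles out.
n_levels = 30
n_inputs = 2 * n_levels + 2   # e.g., temperature and humidity profiles plus two scalars
n_outputs = 2 * n_levels      # heating and moistening tendency profiles

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(n_inputs,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(n_outputs),   # linear output for the tendency profiles
])
model.compile(optimizer="adam", loss="mse")

# Training would use the harvested SPCAM input/output pairs, for example:
# model.fit(x_train, y_train, batch_size=1024, epochs=10, validation_split=0.1)
```

    The "inline" pilot tests mentioned in the abstract correspond to calling such a trained network in place of the embedded CRMs at every time step of the host model.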

  2. Spatio-temporal Eigenvector Filtering: Application on Bioenergy Crop Impacts

    NASA Astrophysics Data System (ADS)

    Wang, M.; Kamarianakis, Y.; Georgescu, M.

    2017-12-01

    A suite of 10-year ensemble-based simulations was conducted to investigate the hydroclimatic impacts due to large-scale deployment of perennial bioenergy crops across the continental United States. Given the large size of the simulated dataset (about 60Tb), traditional hierarchical spatio-temporal statistical modelling cannot be implemented for the evaluation of physics parameterizations and biofuel impacts. In this work, we propose a filtering algorithm that takes into account the spatio-temporal autocorrelation structure of the data while avoiding spatial confounding. This method is used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations and observational datasets. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.

  3. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well understood and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally-representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  4. Alternate methodologies to experimentally investigate shock initiation properties of explosives

    NASA Astrophysics Data System (ADS)

    Svingala, Forrest R.; Lee, Richard J.; Sutherland, Gerrit T.; Benjamin, Richard; Boyle, Vincent; Sickels, William; Thompson, Ronnie; Samuels, Phillip J.; Wrobel, Erik; Cornell, Rodger

    2017-01-01

    Reactive flow models are desired for new explosive formulations early in the development stage. Traditionally, these models are parameterized by carefully-controlled 1-D shock experiments, including gas-gun testing with embedded gauges and wedge testing with explosive plane wave lenses (PWL). These experiments are easy to interpret due to their 1-D nature, but are expensive to perform and cannot be performed at all explosive test facilities. This work investigates alternative methods to probe shock-initiation behavior of new explosives using widely-available pentolite gap test donors and simple time-of-arrival type diagnostics. These experiments can be performed at a low cost at most explosives testing facilities. This allows experimental data to parameterize reactive flow models to be collected much earlier in the development of an explosive formulation. However, the fundamentally 2-D nature of these tests may increase the modeling burden in parameterizing these models and reduce general applicability. Several variations of the so-called modified gap test were investigated and evaluated for suitability as an alternative to established 1-D gas gun and PWL techniques. At least partial agreement with 1-D test methods was observed for the explosives tested, and future work is planned to scope the applicability and limitations of these experimental techniques.

  5. Approximate Stokes Drift Profiles and their use in Ocean Modelling

    NASA Astrophysics Data System (ADS)

    Breivik, Oyvind; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.; Mogensen, Kristian

    2016-04-01

    Deep-water approximations to the Stokes drift velocity profile are explored as alternatives to the monochromatic profile. The alternative profiles investigated rely on the same two quantities required for the monochromatic profile, viz the Stokes transport and the surface Stokes drift velocity. Comparisons against parametric spectra and profiles under wave spectra from the ERA-Interim reanalysis and buoy observations reveal much better agreement than the monochromatic profile even for complex sea states. That the profiles give a closer match and a more correct shear has implications for ocean circulation models since the Coriolis-Stokes force depends on the magnitude and direction of the Stokes drift profile and Langmuir turbulence parameterizations depend sensitively on the shear of the profile. Of the two Stokes drift profiles explored here, the profile based on the Phillips spectrum is by far the best. In particular, the shear near the surface is almost identical to that influenced by the f^-5 tail of spectral wave models. The NEMO general circulation ocean model was recently extended to incorporate the Stokes-Coriolis force along with two other wave-related effects. The ECMWF coupled atmosphere-wave-ocean ensemble forecast system now includes these wave effects in the ocean model component (NEMO).

  6. Exploring JLA supernova data with improved flux-averaging technique

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Wen, Sixiang; Li, Miao

    2017-03-01

    In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the criterion of figure of merit (FoM) and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) The best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to a specific DE parameterization. (2) Flux-averaging JLA samples at z_cut ≥ 0.4 will yield tighter DE constraints than the case without using FA. (3) Using FA can significantly reduce the redshift-evolution of β. (4) The best FA recipe favors a larger fractional matter density Ω_m. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
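
    A generic sketch of the flux-averaging step described above: SNe Ia above the redshift cut are grouped into bins of width Δz, their fluxes are averaged, and each bin is replaced by one effective data point. This is a simplified illustration of the FA idea, not the authors' exact pipeline, which also propagates the full covariance of the JLA sample.

```python
import numpy as np

# Simplified flux-averaging: low-z SNe are kept as-is; high-z SNe are binned,
# their relative fluxes averaged, and each bin replaced by one effective point.
def flux_average(z, mu, z_cut=0.6, dz=0.06):
    keep = z < z_cut
    z_out, mu_out = list(z[keep]), list(mu[keep])
    flux = 10.0 ** (-0.4 * mu)                     # relative flux from distance modulus
    edges = np.arange(z_cut, z.max() + dz, dz)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (z >= lo) & (z < hi)
        if in_bin.any():
            z_out.append(z[in_bin].mean())
            mu_out.append(-2.5 * np.log10(flux[in_bin].mean()))  # back to a modulus
    return np.array(z_out), np.array(mu_out)
```

    Averaging fluxes rather than moduli is what suppresses the lensing-like and other systematic scatter at high redshift, at the cost of some statistical information, which is why the choice of (z_cut, Δz) is optimized against the FoM.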

  7. Ice-nucleating particle emissions from photochemically aged diesel and biodiesel exhaust

    NASA Astrophysics Data System (ADS)

    Schill, G. P.; Jathar, S. H.; Kodros, J. K.; Levin, E. J. T.; Galang, A. M.; Friedman, B.; Link, M. F.; Farmer, D. K.; Pierce, J. R.; Kreidenweis, S. M.; DeMott, P. J.

    2016-05-01

    Immersion-mode ice-nucleating particle (INP) concentrations from an off-road diesel engine were measured using a continuous-flow diffusion chamber at -30°C. Both petrodiesel and biodiesel were utilized, and the exhaust was aged up to 1.5 photochemically equivalent days using an oxidative flow reactor. We found that aged and unaged diesel exhaust of both fuels is not likely to contribute to atmospheric INP concentrations at mixed-phase cloud conditions. To explore this further, a new limit-of-detection parameterization for ice nucleation on diesel exhaust was developed. Using a global-chemical transport model, potential black carbon INP (INPBC) concentrations were determined using a current literature INPBC parameterization and the limit-of-detection parameterization. Model outputs indicate that the current literature parameterization likely overemphasizes INPBC concentrations, especially in the Northern Hemisphere. These results highlight the need to integrate new INPBC parameterizations into global climate models as generalized INPBC parameterizations are not valid for diesel exhaust.

  8. Ensemble superparameterization versus stochastic parameterization: A comparison of model uncertainty representation in tropical weather prediction

    NASA Astrophysics Data System (ADS)

    Subramanian, Aneesh C.; Palmer, Tim N.

    2017-06-01

    Stochastic schemes to represent model uncertainty in the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system has helped improve its probabilistic forecast skill over the past decade by both improving its reliability and reducing the ensemble mean error. The largest uncertainties in the model arise from the model physics parameterizations. In the tropics, the parameterization of moist convection presents a major challenge for the accurate prediction of weather and climate. Superparameterization is a promising alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud-resolving model (CRM) embedded within a global climate model (GCM). In this paper, we compare the impact of initial random perturbations in embedded CRMs, within the ECMWF ensemble prediction system, with stochastically perturbed physical tendency (SPPT) scheme as a way to represent model uncertainty in medium-range tropical weather forecasts. We especially focus on forecasts of tropical convection and dynamics during MJO events in October-November 2011. These are well-studied events for MJO dynamics as they were also heavily observed during the DYNAMO field campaign. We show that a multiscale ensemble modeling approach helps improve forecasts of certain aspects of tropical convection during the MJO events, while it also tends to deteriorate certain large-scale dynamic fields with respect to stochastically perturbed physical tendencies approach that is used operationally at ECMWF.Plain Language SummaryProbabilistic weather forecasts, especially for tropical weather, is still a significant challenge for global weather forecasting systems. Expressing uncertainty along with weather forecasts is important for informed decision making. Hence, we explore the use of a relatively new approach in using super-parameterization, where a cloud resolving model is embedded within a global model, in probabilistic tropical weather forecasts at medium range. We show that this approach helps improve modeling uncertainty in forecasts of certain features such as precipitation magnitude and location better, but forecasts of tropical winds are not necessarily improved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUOS.A34C2665B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUOS.A34C2665B"><span>Approximate Stokes Drift Profiles and their use in Ocean Modelling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Breivik, O.; Biblot, J.; Janssen, P. A. E. M.</p> <p>2016-02-01</p> <p>Deep-water approximations to the Stokes drift velocity profile are explored as alternatives to the monochromatic profile. The alternative profiles investigated rely on the same two quantities required for the monochromatic profile, viz the Stokes transport and the surface Stokes drift velocity. Comparisons with parametric spectra and profiles under wave spectra from the ERA-Interim reanalysis and buoy observations reveal much better agreement than the monochromatic profile even for complex sea states. 
That the profiles give a closer match and a more correct shear has implications for ocean circulation models since the Coriolis-Stokes force depends on the magnitude and direction of the Stokes drift profile and Langmuir turbulence parameterizations depend sensitively on the shear of the profile. The NEMO general circulation ocean model was recently extended to incorporate the Stokes-Coriolis force along with two other wave-related effects. I will show some results from the coupled atmosphere-wave-ocean ensemble forecast system of ECMWF where these wave effects are now included in the ocean model component.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19...67S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19...67S"><span>Exploring Several Methods of Groundwater Model Selection</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar</p> <p>2017-04-01</p> <p>Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work is to explore several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted by using the following four approaches: (1) Rank the models using their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) Calculate model probability using GLUE method, (3) Evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) Evaluate model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and fuzzy technique for order performance, which is to identify the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to other methods, as they consider not only the fit between observed and simulated data and the number of parameter, but also uncertainty in model parameters. Considering these factors can prevent from occurring over-complexity and over-parameterization, when selecting the appropriate groundwater flow models. 
These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22679968-exploring-jla-supernova-data-improved-flux-averaging-technique','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22679968-exploring-jla-supernova-data-improved-flux-averaging-technique"><span>Exploring JLA supernova data with improved flux-averaging technique</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Wang, Shuang; Wen, Sixiang; Li, Miao, E-mail: wangshuang@mail.sysu.edu.cn, E-mail: wensx@mail2.sysu.edu.cn, E-mail: limiao9@mail.sysu.edu.cn</p> <p>2017-03-01</p> <p>In this work, we explore the cosmological consequences of the ''Joint Light-curve Analysis'' (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the criterion of figure of Merit (FoM) and considering six dark energy (DE) parameterizations, we search the best FA recipe that gives the tightest DE constraints in the ( z {sub cut}, Δ z ) plane, where z {sub cut} and Δ z are redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, wemore » discuss the impacts of varying z {sub cut} and varying Δ z , revisit the evolution of SN color luminosity parameter β, and study the effects of adopting different FA recipe on parameter estimation. We find that: (1) The best FA recipe is ( z {sub cut} = 0.6, Δ z =0.06), which is insensitive to a specific DE parameterization. (2) Flux-averaging JLA samples at z {sub cut} ≥ 0.4 will yield tighter DE constraints than the case without using FA. (3) Using FA can significantly reduce the redshift-evolution of β. (4) The best FA recipe favors a larger fractional matter density Ω {sub m} . In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give the tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.« less</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011LNCS.6543..344K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011LNCS.6543..344K"><span>Alternative Parameterizations for Cluster Editing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Komusiewicz, Christian; Uhlmann, Johannes</p> <p></p> <p>Given an undirected graph G and a nonnegative integer k, the NP-hard Cluster Editing problem asks whether G can be transformed into a disjoint union of cliques by applying at most k edge modifications. In the field of parameterized algorithmics, Cluster Editing has almost exclusively been studied parameterized by the solution size k. Contrastingly, in many real-world instances it can be observed that the parameter k is not really small. This observation motivates our investigation of parameterizations of Cluster Editing different from the solution size k. Our results are as follows. 
Cluster Editing is fixed-parameter tractable with respect to the parameter "size of a minimum cluster vertex deletion set of G", a typically much smaller parameter than k. Cluster Editing remains NP-hard on graphs with maximum degree six. A restricted but practically relevant version of Cluster Editing is fixed-parameter tractable with respect to the combined parameter "number of clusters in the target graph" and "maximum number of modified edges incident to any vertex in G". Many of our results also transfer to the NP-hard Cluster Deletion problem, where only edge deletions are allowed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010LNCS.6478..123F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010LNCS.6478..123F"><span>Parameterizing by the Number of Numbers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.</p> <p></p> <p>The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19820015373','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19820015373"><span>Alternatives for jet engine control</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sain, M. K.</p> <p>1981-01-01</p> <p>Research centered on basic topics in the modeling and feedback control of nonlinear dynamical systems is reported. 
Of special interest were the following topics: (1) the role of series descriptions, especially insofar as they relate to questions of scheduling, in the control of gas turbine engines; (2) the use of algebraic tensor theory as a technique for parameterizing such descriptions; (3) the relationship between tensor methodology and other parts of the nonlinear literature; (4) the improvement of interactive methods for parameter selection within a tensor viewpoint; and (5) study of feedback gain representation as a counterpart to these modeling and parameterization ideas.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/9822371','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/9822371"><span>High-resolution protein design with backbone freedom.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Harbury, P B; Plecs, J J; Tidor, B; Alber, T; Kim, P S</p> <p>1998-11-20</p> <p>Recent advances in computational techniques have allowed the design of precise side-chain packing in proteins with predetermined, naturally occurring backbone structures. Because these methods do not model protein main-chain flexibility, they lack the breadth to explore novel backbone conformations. Here the de novo design of a family of alpha-helical bundle proteins with a right-handed superhelical twist is described. In the design, the overall protein fold was specified by hydrophobic-polar residue patterning, whereas the bundle oligomerization state, detailed main-chain conformation, and interior side-chain rotamers were engineered by computational enumerations of packing in alternate backbone structures. Main-chain flexibility was incorporated through an algebraic parameterization of the backbone. The designed peptides form alpha-helical dimers, trimers, and tetramers in accord with the design goals. The crystal structure of the tetramer matches the designed structure in atomic detail.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3729351','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3729351"><span>Simple liquid models with corrected dielectric constants</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Fennell, Christopher J.; Li, Libo; Dill, Ken A.</p> <p>2012-01-01</p> <p>Molecular simulations often use explicit-solvent models. Sometimes explicit-solvent models can give inaccurate values for basic liquid properties, such as the density, heat capacity, and permittivity, as well as inaccurate values for molecular transfer free energies. Such errors have motivated the development of more complex solvents, such as polarizable models. We describe an alternative here. We give new fixed-charge models of solvents for molecular simulations – water, carbon tetrachloride, chloroform and dichloromethane. Normally, such solvent models are parameterized to agree with experimental values of the neat liquid density and enthalpy of vaporization. Here, in addition to those properties, our parameters are chosen to give the correct dielectric constant. 
  37. Preferences for tap water attributes within couples: An exploration of alternative mixed logit parameterizations

    NASA Astrophysics Data System (ADS)

    Scarpa, Riccardo; Thiene, Mara; Hensher, David A.

    2012-01-01

    Preferences for attributes of complex goods may differ substantially among members of households. Some of these goods, such as tap water, are jointly supplied at the household level. This issue of jointness poses a series of theoretical and empirical challenges to economists engaged in empirical nonmarket valuation studies. While a series of results have already been obtained in the literature, the issue of how to empirically measure these differences, and how sensitive the results are to choice of model specification from the same data, is yet to be clearly understood. In this paper we use data from a widely employed form of stated preference survey for multiattribute goods, namely choice experiments. The salient feature of the data collection is that the same choice experiment was applied to both partners of established couples. The analysis focuses on models that simultaneously handle scale as well as preference heterogeneity in marginal rates of substitution (MRS), thereby isolating true differences between members of couples in their MRS, by removing interpersonal variation in scale. The models employed are different parameterizations of the mixed logit model, including the willingness to pay (WTP)-space model and the generalized multinomial logit model. We find that in this sample there is some evidence of significant statistical differences in values between women and men, but these are of small magnitude and only apply to a few attributes.
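    The sketch below illustrates the general structure of a WTP-space mixed logit of the kind compared in the record above: utility is scale times (WTP-weighted attributes minus price), the WTP vector is random across respondents, and choice probabilities are simulated by averaging logit probabilities over draws. The attributes, prices, and distributions are invented for illustration, not the study's specification.

        # Hedged sketch of a willingness-to-pay (WTP)-space mixed logit:
        # U_j = scale * (w . x_j - price_j) + e_j, with the WTP vector w random
        # across respondents; choice probabilities are simulated by averaging
        # logit probabilities over draws of w.
        import numpy as np

        rng = np.random.default_rng(1)

        def simulated_choice_probs(X, price, wtp_mean, wtp_sd, scale=1.0, n_draws=2000):
            """X: (n_alts, n_attrs); returns simulated probability of each alternative."""
            draws = rng.normal(wtp_mean, wtp_sd, size=(n_draws, len(wtp_mean)))
            v = scale * (draws @ X.T - price)          # (n_draws, n_alts)
            v -= v.max(axis=1, keepdims=True)          # numerical stability
            p = np.exp(v)
            p /= p.sum(axis=1, keepdims=True)
            return p.mean(axis=0)

        X = np.array([[1.0, 0.0],   # alt 1: taste improvement only
                      [0.0, 1.0],   # alt 2: supply-safety improvement only
                      [0.0, 0.0]])  # status quo
        price = np.array([40.0, 60.0, 0.0])
        print(simulated_choice_probs(X, price, wtp_mean=[50.0, 80.0], wtp_sd=[20.0, 30.0]))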
  38. Framework for parameterizing and validating APEX to support NTT

    USDA-ARS's Scientific Manuscript database

    The Agricultural Policy Environmental eXtender (APEX) is the process model underlying the Nutrient Tracking Tool (NTT). The NTT provides users, primarily farmers and crop consultants, with the opportunity to compare the effects of two scenarios, practice combinations, or other alternative conditions...

  39. Land surface hydrology parameterization for atmospheric general circulation models including subgrid scale spatial variability

    NASA Technical Reports Server (NTRS)

    Entekhabi, D.; Eagleson, P. S.

    1989-01-01

    Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
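    The general idea behind assumed-PDF subgrid parameterizations of this kind can be illustrated with a toy calculation: the grid-mean of a nonlinear pointwise flux is its expectation over a subgrid probability density, which generally differs from the flux evaluated at the grid-mean state. The threshold-type runoff function and Beta-distributed saturation below are illustrative choices, not the forms derived in the paper.

        # Hedged illustration of subgrid variability via an assumed PDF:
        # grid-mean flux = E[f(s)] over a PDF of subgrid saturation s,
        # which differs from f(E[s]) when f is nonlinear.
        import numpy as np
        from scipy import stats, integrate

        def pointwise_runoff(s, s_crit=0.6):
            """Toy threshold-type runoff fraction as a function of saturation s in [0, 1]."""
            return np.where(s > s_crit, (s - s_crit) / (1.0 - s_crit), 0.0)

        def grid_mean_runoff(mean_s, var_s, s_crit=0.6):
            # Moment-matched Beta PDF for subgrid saturation.
            k = mean_s * (1.0 - mean_s) / var_s - 1.0
            a, b = mean_s * k, (1.0 - mean_s) * k
            pdf = stats.beta(a, b).pdf
            val, _ = integrate.quad(lambda s: pointwise_runoff(s, s_crit) * pdf(s), 0.0, 1.0)
            return val

        print("flux at grid-mean state:          ", pointwise_runoff(0.5))
        print("grid-mean flux with subgrid PDF:  ", grid_mean_runoff(0.5, 0.03))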
  40. Studies into the nature of cosmic acceleration: Dark energy or a modification to gravity on cosmological scales

    NASA Astrophysics Data System (ADS)

    Dossett, Jason Nicholas

    Since its discovery more than a decade ago, the problem of cosmic acceleration has become one of the largest in cosmology and physics as a whole. An unknown dark energy component of the universe is often invoked to explain this observation. Mathematically, this works because inserting a cosmic fluid with a negative equation of state into Einstein's equations provides an accelerated expansion. There are, however, alternative explanations for the observed cosmic acceleration. Perhaps the most promising of the alternatives is that, on the very largest cosmological scales, general relativity needs to be extended or a new, modified gravity theory must be used. Indeed, many modified gravity models are not only able to replicate the observed accelerated expansion without dark energy, but are also more compatible with a unified theory of physics. Thus it is the goal of this dissertation to develop and study robust tests that will be able to distinguish between these alternative theories of gravity and the need for a dark energy component of the universe. We will study multiple approaches using the growth history of large-scale structure in the universe as a way to accomplish this task. These approaches include studying what is known as the growth index parameter, a parameter that describes the logarithmic growth rate of structure in the universe, which describes the rate of formation of clusters and superclusters of galaxies over the entire age of the universe. We will explore the effectiveness of this parameter to distinguish between general relativity and modifications to gravity physics given realistic expectations of results from future experiments. Next, we will explore the modified growth formalism wherein deviations from the growth expected in general relativity are parameterized via changes to the growth equations, i.e. the perturbed Einstein's equations. We will also explore the impact of spatial curvature on these tests. Finally, we will study how dark energy with some unusual properties will affect the conclusiveness of these tests.
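    The growth-index parameterization mentioned above is compact enough to sketch directly: the linear growth rate is approximated as f(a) = Omega_m(a)^gamma, with gamma near 0.55 for general relativity and larger values (around 0.68 is often quoted for DGP-type braneworld gravity). The snippet assumes a flat LCDM background purely for illustration.

        # Hedged sketch of the growth-index parameterization f ~ Omega_m(a)**gamma.
        import numpy as np

        def omega_m(a, om0=0.3):
            return om0 * a**-3 / (om0 * a**-3 + (1.0 - om0))

        def growth_rate(a, gamma, om0=0.3):
            return omega_m(a, om0) ** gamma

        a = np.linspace(0.2, 1.0, 5)        # scale factor (z = 4 ... 0)
        for gamma, label in [(0.55, "general relativity"), (0.68, "DGP-like modified gravity")]:
            print(label, np.round(growth_rate(a, gamma), 3))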
  41. On the Development of Parameterized Linear Analytical Longitudinal Airship Models

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Johnson, Joseph R.; Bayard, David S.; Elfes, Alberto; Quadrelli, Marco B.

    2008-01-01

    In order to explore Titan, a moon of Saturn, airships must be able to traverse the atmosphere autonomously. To achieve this, an accurate model and accurate control of the vehicle must be developed so that it is understood how the airship will react to specific sets of control inputs. This paper explains how longitudinal aircraft stability derivatives can be used with airship parameters to create a linear model of the airship solely by combining geometric and aerodynamic airship data. This method does not require system identification of the vehicle. All of the required data can be derived from computational fluid dynamics and wind tunnel testing. This alternate method of developing dynamic airship models will reduce time and cost. Results are compared to other stable airship dynamic models to validate the methods. Future work will address a lateral airship model using the same methods.

  42. Generalized ocean color inversion model for retrieving marine inherent optical properties.

    PubMed

    Werdell, P Jeremy; Franz, Bryan A; Bailey, Sean W; Feldman, Gene C; Boss, Emmanuel; Brando, Vittorio E; Dowell, Mark; Hirata, Takafumi; Lavender, Samantha J; Lee, ZhongPing; Loisel, Hubert; Maritorena, Stéphane; Mélin, Fréderic; Moore, Timothy S; Smyth, Timothy J; Antoine, David; Devred, Emmanuel; d'Andon, Odile Hembise Fanton; Mangin, Antoine

    2013-04-01

    Ocean color measured from satellites provides daily, global estimates of marine inherent optical properties (IOPs). Semi-analytical algorithms (SAAs) provide one mechanism for inverting the color of the water observed by the satellite into IOPs. While numerous SAAs exist, most are similarly constructed and few are appropriately parameterized for all water masses for all seasons. To initiate community-wide discussion of these limitations, NASA organized two workshops that deconstructed SAAs to identify similarities and uniqueness and to progress toward consensus on a unified SAA. This effort resulted in the development of the generalized IOP (GIOP) model software that allows for the construction of different SAAs at runtime by selection from an assortment of model parameterizations. As such, GIOP permits isolation and evaluation of specific modeling assumptions, construction of SAAs, development of regionally tuned SAAs, and execution of ensemble inversion modeling. Working groups associated with the workshops proposed a preliminary default configuration for GIOP (GIOP-DC), with alternative model parameterizations and features defined for subsequent evaluation. In this paper, we: (1) describe the theoretical basis of GIOP; (2) present GIOP-DC and verify its comparable performance to other popular SAAs using both in situ and synthetic data sets; and, (3) quantify the sensitivities of their output to their parameterization. We use the latter to develop a hierarchical sensitivity of SAAs to various model parameterizations, to identify components of SAAs that merit focus in future research, and to provide material for discussion on algorithm uncertainties and future ensemble applications.
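    A hedged sketch of the generic SAA structure that GIOP generalizes: subsurface reflectance is modeled as a quadratic in u = bb/(a + bb), absorption and backscatter are decomposed into pure-water terms plus parameterized constituent spectra, and the constituent magnitudes are retrieved by least squares. The spectral shapes and water coefficients below are illustrative stand-ins, not GIOP's default tables; the quadratic coefficients are commonly quoted values.

        # Hedged sketch of a semi-analytical ocean-color inversion:
        # rrs(lambda) = g0*u + g1*u^2 with u = bb/(a + bb); retrieve the
        # constituent magnitudes by nonlinear least squares.
        import numpy as np
        from scipy.optimize import least_squares

        wl    = np.array([412., 443., 490., 510., 555.])
        a_w   = np.array([0.0045, 0.0070, 0.0150, 0.0325, 0.0596])  # placeholder water absorption
        bb_w  = np.array([0.0033, 0.0024, 0.0016, 0.0013, 0.0009])  # placeholder water backscatter
        a_ph  = np.array([0.90, 1.00, 0.70, 0.55, 0.25])            # normalized phytoplankton shape
        a_dg  = np.exp(-0.018 * (wl - 443.0))                       # CDOM/detritus exponential shape
        bb_p  = (443.0 / wl) ** 1.0                                 # particle backscatter power law
        g0, g1 = 0.0949, 0.0794

        def rrs_model(m):
            m_ph, m_dg, m_bp = m
            a  = a_w + m_ph * a_ph + m_dg * a_dg
            bb = bb_w + m_bp * bb_p
            u = bb / (a + bb)
            return g0 * u + g1 * u**2

        truth = np.array([0.02, 0.01, 0.002])
        rrs_obs = rrs_model(truth)
        fit = least_squares(lambda m: rrs_model(m) - rrs_obs,
                            x0=[0.01, 0.01, 0.001], bounds=(0.0, np.inf))
        print("retrieved constituent magnitudes:", np.round(fit.x, 4))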
  43. Generalized Ocean Color Inversion Model for Retrieving Marine Inherent Optical Properties

    NASA Technical Reports Server (NTRS)

    Werdell, P. Jeremy; Franz, Bryan A.; Bailey, Sean W.; Feldman, Gene C.; Boss, Emmanuel; Brando, Vittorio E.; Dowell, Mark; Hirata, Takafumi; Lavender, Samantha J.; Lee, ZhongPing

    2013-01-01

    Ocean color measured from satellites provides daily, global estimates of marine inherent optical properties (IOPs). Semi-analytical algorithms (SAAs) provide one mechanism for inverting the color of the water observed by the satellite into IOPs. While numerous SAAs exist, most are similarly constructed and few are appropriately parameterized for all water masses for all seasons. To initiate community-wide discussion of these limitations, NASA organized two workshops that deconstructed SAAs to identify similarities and uniqueness and to progress toward consensus on a unified SAA. This effort resulted in the development of the generalized IOP (GIOP) model software that allows for the construction of different SAAs at runtime by selection from an assortment of model parameterizations. As such, GIOP permits isolation and evaluation of specific modeling assumptions, construction of SAAs, development of regionally tuned SAAs, and execution of ensemble inversion modeling. Working groups associated with the workshops proposed a preliminary default configuration for GIOP (GIOP-DC), with alternative model parameterizations and features defined for subsequent evaluation. In this paper, we: (1) describe the theoretical basis of GIOP; (2) present GIOP-DC and verify its comparable performance to other popular SAAs using both in situ and synthetic data sets; and, (3) quantify the sensitivities of their output to their parameterization. We use the latter to develop a hierarchical sensitivity of SAAs to various model parameterizations, to identify components of SAAs that merit focus in future research, and to provide material for discussion on algorithm uncertainties and future ensemble applications.
  44. Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach

    NASA Astrophysics Data System (ADS)

    Berloff, Pavel

    2018-07-01

    This work continues development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models and focuses on the classical double-gyre problem, in which the main dynamic eddy effects maintain the eastward jet extension of the western boundary currents and its adjacent recirculation zones via the eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood, and in this paper we first study it and then propose and test its novel parameterization. We start by decomposing the reference eddy-resolving flow solution into the large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies, and in the transient rectified eddy component, which consists of highly anisotropic ribbons of the opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and being responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to formulating the key eddy parameterization hypothesis: in an eddy-permitting model at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution. Such amplification is a simple and novel eddy parameterization framework implemented here in terms of local, deterministic flow roughening controlled by a single parameter. We test the parameterization skills in a hierarchy of non-eddy-resolving and eddy-permitting modifications of the original model and demonstrate that it can indeed be highly efficient for restoring the eastward jet extension and its adjacent recirculation zones. The new deterministic parameterization framework not only combines remarkable simplicity with good performance but also is dynamically transparent, therefore, it provides a powerful alternative to the common eddy diffusion and emerging stochastic parameterizations.
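    The first step described above, splitting a snapshot into large-scale and eddy components with a spatial low-pass filter rather than a Reynolds average, is easy to illustrate. The Gaussian filter and the synthetic jet-plus-noise field below stand in for the filter and the eddy-resolving solution used in the study.

        # Hedged illustration of scale decomposition by spatial filtering:
        # large scale = low-pass filtered field, eddies = residual.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(2)
        x = np.linspace(0, 2 * np.pi, 256)
        X, Y = np.meshgrid(x, x)
        field = np.sin(X) * np.cos(Y) + 0.3 * rng.standard_normal((256, 256))  # jet + "eddies"

        large_scale = gaussian_filter(field, sigma=16, mode="wrap")
        eddy = field - large_scale

        print("variance of full field:      ", field.var().round(4))
        print("variance of large-scale part:", large_scale.var().round(4))
        print("variance of eddy part:       ", eddy.var().round(4))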
  45. A soil moisture accounting-procedure with a Richards' equation-based soil texture-dependent parameterization

    USDA-ARS's Scientific Manuscript database

    Given a time series of potential evapotranspiration and rainfall data, there are at least two approaches for estimating vertical percolation rates. One approach involves solving Richards' equation (RE) with a plant uptake model. An alternative approach involves applying a simple soil moisture accoun...

  46. Attitude Estimation or Quaternion Estimation?

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2003-01-01

    The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
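    The first class of estimators described above can be sketched in a few lines: keep a reference quaternion and estimate a small three-component attitude deviation, then fold the deviation back in multiplicatively through the small-angle error quaternion. The Hamilton (scalar-first) convention is assumed here, and composing the correction on the left versus the right is a convention choice that differs between formulations; this is an illustration, not the paper's filter.

        # Hedged sketch of a multiplicative attitude-deviation update.
        import numpy as np

        def quat_mult(q, p):
            w1, x1, y1, z1 = q
            w2, x2, y2, z2 = p
            return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                             w1*x2 + x1*w2 + y1*z2 - z1*y2,
                             w1*y2 - x1*z2 + y1*w2 + z1*x2,
                             w1*z2 + x1*y2 - y1*x2 + z1*w2])

        def apply_deviation(q_ref, dtheta):
            dq = np.concatenate(([1.0], 0.5 * np.asarray(dtheta)))  # small-angle error quaternion
            q = quat_mult(dq, q_ref)        # left composition (one common choice)
            return q / np.linalg.norm(q)    # renormalize after the small-angle step

        q_ref = np.array([1.0, 0.0, 0.0, 0.0])
        print(apply_deviation(q_ref, dtheta=[0.01, -0.02, 0.005]))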
  47. Vertical structure of mean cross-shore currents across a barred surf zone

    USGS Publications Warehouse

    Haines, John W.; Sallenger, Asbury H.

    1994-01-01

    Mean cross-shore currents observed across a barred surf zone are compared to model predictions. The model is based on a simplified momentum balance with a turbulent boundary layer at the bed. Turbulent exchange is parameterized by an eddy viscosity formulation, with the eddy viscosity Aυ independent of time and the vertical coordinate. Mean currents result from gradients due to wave breaking and shoaling, and the presence of a mean setup of the free surface. Descriptions of the wave field are provided by the wave transformation model of Thornton and Guza [1983]. The wave transformation model adequately reproduces the observed wave heights across the surf zone. The mean current model successfully reproduces the observed cross-shore flows. Both observations and predictions show predominantly offshore flow with onshore flow restricted to a relatively thin surface layer. Successful application of the mean flow model requires an eddy viscosity which varies horizontally across the surf zone. Attempts are made to parameterize this variation with some success. The data does not discriminate between alternative parameterizations proposed. The overall variability in eddy viscosity suggested by the model fitting should be resolvable by field measurements of the turbulent stresses. Consistent shortcomings of the parameterizations, and the overall modeling effort, suggest avenues for further development and data collection.

  48. Improving Parameterization of Entrainment Rate for Shallow Convection with Aircraft Measurements and Large-Eddy Simulation

    DOE PAGES

    Lu, Chunsong; Liu, Yangang; Zhang, Guang J.; ...

    2016-02-01

    This work examines the relationships of entrainment rate to vertical velocity, buoyancy, and turbulent dissipation rate by applying stepwise principal component regression to observational data from shallow cumulus clouds collected during the Routine AAF [Atmospheric Radiation Measurement (ARM) Aerial Facility] Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) field campaign over the ARM Southern Great Plains (SGP) site near Lamont, Oklahoma. The cumulus clouds during the RACORO campaign simulated using a large eddy simulation (LES) model are also examined with the same approach. The analysis shows that a combination of multiple variables can better represent entrainment rate in both the observations and LES than any single-variable fitting. Three commonly used parameterizations are also tested on the individual cloud scale. A new parameterization is therefore presented that relates entrainment rate to vertical velocity, buoyancy and dissipation rate; the effects of treating clouds as ensembles and humid shells surrounding cumulus clouds on the new parameterization are discussed. Physical mechanisms underlying the relationships of entrainment rate to vertical velocity, buoyancy and dissipation rate are also explored.
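    A toy version of the statistical step described above is sketched below: entrainment rate is regressed on standardized vertical velocity, buoyancy, and dissipation rate through a principal-component regression. The data are synthetic; the point is the multi-variable fit, not the campaign's actual coefficients.

        # Hedged toy principal-component regression of entrainment rate on
        # vertical velocity (w), buoyancy (B) and dissipation rate (eps).
        import numpy as np

        rng = np.random.default_rng(3)
        n = 500
        w   = rng.gamma(2.0, 1.0, n)
        B   = rng.normal(0.01, 0.005, n)
        eps = rng.gamma(2.0, 5e-4, n)
        lam = 0.5 / w + 20.0 * B + 50.0 * eps + rng.normal(0, 0.02, n)  # synthetic "entrainment rate"

        X = np.column_stack([w, B, eps])
        Xs = (X - X.mean(0)) / X.std(0)         # standardize predictors
        U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
        pcs = U * s                             # principal-component scores
        design = np.column_stack([np.ones(n), pcs])
        coef, *_ = np.linalg.lstsq(design, lam, rcond=None)
        pred = design @ coef
        r2 = 1 - ((lam - pred)**2).sum() / ((lam - lam.mean())**2).sum()
        print("R^2 of three-variable PC regression:", round(r2, 3))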
  49. Thermospheric dynamics during November 21-22, 1981 - Dynamics Explorer measurements and thermospheric general circulation model predictions

    NASA Technical Reports Server (NTRS)

    Roble, R. G.; Killeen, T. L.; Spencer, N. W.; Heelis, R. A.; Reiff, P. H.

    1988-01-01

    Time-dependent aurora and magnetospheric convection parameterizations have been derived from solar wind and aurora particle data for November 21-22, 1981, and are used to drive the auroral and magnetospheric convection models that are embedded in the National Center for Atmospheric Research thermospheric general circulation model (TGCM). Neutral wind speeds and transition boundaries between the midlatitude solar-driven circulation and the high-latitude magnetospheric convection-driven circulation are examined on an orbit-by-orbit basis. The results show that TGCM-calculated winds and reversal boundary locations are in generally good agreement with Dynamics Explorer 2 measurements for the orbits studied. This suggests that, at least for this particular period of relatively moderate geomagnetic activity, the TGCM parameterizations on the eveningside of the auroral oval and polar cap are adequate.
  50. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    NASA Astrophysics Data System (ADS)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail

    2011-01-01

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
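    The spherical (Cholesky-based) construction referenced above can be sketched directly: a set of angles fills a lower-triangular factor whose rows have unit norm, so the product with its transpose is symmetric, positive semi-definite, and has a unit diagonal by construction. The angle values below are arbitrary; the paper's "cSigma" rule for choosing them is not reproduced here.

        # Hedged sketch of a spherical parameterization of a correlation matrix.
        import numpy as np

        def correlation_from_angles(theta):
            """theta: dict {(i, j): angle} for 0 <= j < i < n, angles in (0, pi)."""
            n = max(i for i, _ in theta) + 1
            L = np.zeros((n, n))
            L[0, 0] = 1.0
            for i in range(1, n):
                remaining = 1.0
                for j in range(i):
                    L[i, j] = np.sqrt(remaining) * np.cos(theta[(i, j)])
                    remaining *= np.sin(theta[(i, j)]) ** 2
                L[i, i] = np.sqrt(remaining)
            return L @ L.T

        angles = {(1, 0): 0.6, (2, 0): 1.1, (2, 1): 0.4}   # e.g. 3 hydrometeor species
        C = correlation_from_angles(angles)
        print(np.round(C, 3))
        print("unit diagonal:", np.allclose(np.diag(C), 1.0))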
  51. Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-07-01

    Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing the subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing as it deals with a distribution of variables in subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as their mode-decomposition basis. However, the methodology can easily be generalized into any decomposition basis. Among those, wavelet is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally-constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that the subgrid parameterization may be re-interpreted as a type of mesh-refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
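    The mode-decomposition approach reviewed above is easy to demonstrate with a proper orthogonal decomposition (empirical orthogonal functions) computed by SVD: snapshots are stacked into a matrix, the SVD supplies an ordered orthogonal basis, and a low-dimensional description keeps only the leading modes. The snapshot data below are synthetic.

        # Hedged sketch of proper orthogonal decomposition via the SVD.
        import numpy as np

        rng = np.random.default_rng(4)
        x = np.linspace(0, 1, 200)
        t = np.linspace(0, 10, 300)
        # Two coherent structures plus noise, sampled as (time, space) snapshots.
        snapshots = (np.outer(np.sin(2 * t), np.sin(np.pi * x))
                     + 0.3 * np.outer(np.cos(5 * t), np.sin(3 * np.pi * x))
                     + 0.05 * rng.standard_normal((t.size, x.size)))

        mean = snapshots.mean(axis=0)
        U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        energy = (s**2) / (s**2).sum()
        print("variance captured by first 2 POD modes:", round(energy[:2].sum(), 3))

        r = 2                                   # truncated low-dimensional representation
        reduced = (U[:, :r] * s[:r]) @ Vt[:r] + mean
        err = np.linalg.norm(snapshots - reduced) / np.linalg.norm(snapshots)
        print("relative reconstruction error:", round(err, 3))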
  52. Parameterizing the Transport Pathways for Cell Invasion in Complex Scaffold Architectures

    PubMed Central

    Ashworth, Jennifer C.; Mehr, Marco; Buxton, Paul G.; Best, Serena M.

    2016-01-01

    Interconnecting pathways through porous tissue engineering scaffolds play a vital role in determining nutrient supply, cell invasion, and tissue ingrowth. However, the global use of the term “interconnectivity” often fails to describe the transport characteristics of these pathways, giving no clear indication of their potential to support tissue synthesis. This article uses new experimental data to provide a critical analysis of reported methods for the description of scaffold transport pathways, ranging from qualitative image analysis to thorough structural parameterization using X-ray Micro-Computed Tomography. In the collagen scaffolds tested in this study, it was found that the proportion of pore space perceived to be accessible dramatically changed depending on the chosen method of analysis. Measurements of % interconnectivity as defined in this manner varied as a function of direction and connection size, and also showed a dependence on measurement length scale. As an alternative, a method for transport pathway parameterization was investigated, using percolation theory to calculate the diameter of the largest sphere that can travel to infinite distance through a scaffold in a specified direction. As proof of principle, this approach was used to investigate the invasion behavior of primary fibroblasts in response to independent changes in pore wall alignment and pore space accessibility, parameterized using the percolation diameter. The result was that both properties played a distinct role in determining fibroblast invasion efficiency. This example therefore demonstrates the potential of the percolation diameter as a method of transport pathway parameterization, to provide key structural criteria for application-based scaffold design. PMID:26888449

  53. The Circumstellar Disk HD 169142: Gas, Dust, and Planets Acting in Concert?

    NASA Astrophysics Data System (ADS)

    Pohl, A.; Benisty, M.; Pinilla, P.; Ginski, C.; de Boer, J.; Avenhaus, H.; Henning, Th.; Zurlo, A.; Boccaletti, A.; Augereau, J.-C.; Birnstiel, T.; Dominik, C.; Facchini, S.; Fedele, D.; Janson, M.; Keppler, M.; Kral, Q.; Langlois, M.; Ligi, R.; Maire, A.-L.; Ménard, F.; Meyer, M.; Pinte, C.; Quanz, S. P.; Sauvage, J.-F.; Sezestre, É.; Stolker, T.; Szulágyi, J.; van Boekel, R.; van der Plas, G.; Villenave, M.; Baruffolo, A.; Baudoz, P.; Le Mignant, D.; Maurel, D.; Ramos, J.; Weber, L.

    2017-11-01

    HD 169142 is an excellent target for investigating signs of planet-disk interaction due to previous evidence of gap structures. We perform J-band (˜1.2 μm) polarized intensity imaging of HD 169142 with VLT/SPHERE. We observe polarized scattered light down to 0.″16 (˜19 au) and find an inner gap with a significantly reduced scattered-light flux. We confirm the previously detected double-ring structure peaking at 0.″18 (˜21 au) and 0.″56 (˜66 au) and marginally detect a faint third gap at 0.″70-0.″73 (˜82-85 au). We explore dust evolution models in a disk perturbed by two giant planets, as well as models with a parameterized dust size distribution. The dust evolution model is able to reproduce the ring locations and gap widths in polarized intensity but fails to reproduce their depths. However, it gives a good match with the ALMA dust continuum image at 1.3 mm. Models with a parameterized dust size distribution better reproduce the gap depth in scattered light, suggesting that dust filtration at the outer edges of the gaps is less effective. The pileup of millimeter grains in a dust trap and the continuous distribution of small grains throughout the gap likely require more efficient dust fragmentation and dust diffusion in the dust trap. Alternatively, turbulence or charging effects might lead to a reservoir of small grains at the surface layer that is not affected by the dust growth and fragmentation cycle dominating the dense disk midplane. The exploration of models shows that extracting planet properties such as mass from observed gap profiles is highly degenerate. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO program 095.C-0273.

  54. The cloud-phase feedback in the Super-parameterized Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Burt, M. A.; Randall, D. A.

    2016-12-01

    Recent comparisons of observations and climate model simulations by I. Tan and colleagues have suggested that the Wegener-Bergeron-Findeisen (WBF) process tends to be too active in climate models, making too much cloud ice, and resulting in an exaggerated negative cloud-phase feedback on climate change. We explore the WBF process and its effect on shortwave cloud forcing in present-day and future climate simulations with the Community Earth System Model, and its super-parameterized counterpart. Results show that SP-CESM has much less cloud ice and a weaker cloud-phase feedback than CESM.
  55. The Grell-Freitas Convective Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    NASA Astrophysics Data System (ADS)

    Freitas, S.; Grell, G. A.; Molod, A.

    2017-12-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Scale dependence for deep convection is implemented either by using the method described by Arakawa et al (2011), or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, congestus, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over land. Also, a beta-pdf is now employed to represent the normalized mass flux profile. This opens up an additional avenue to apply stochasticism in the scheme.

  56. Parameterized post-Newtonian cosmology

    NASA Astrophysics Data System (ADS)

    Sanghai, Viraj A. A.; Clifton, Timothy

    2017-03-01

    Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).
  57. Comparative study of transient hydraulic tomography with varying parameterizations and zonations: Laboratory sandbox investigation

    NASA Astrophysics Data System (ADS)

    Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.

    2017-11-01

    Transient hydraulic tomography (THT) is a robust method of aquifer characterization to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly-parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when head data used to constrain the inversion is limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information are calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
  58. A Comparison of Perturbed Initial Conditions and Multiphysics Ensembles in a Severe Weather Episode in Spain

    NASA Technical Reports Server (NTRS)

    Tapiador, Francisco; Tao, Wei-Kuo; Angelis, Carlos F.; Martinez, Miguel A.; Cecilia Marcos; Antonio Rodriguez; Hou, Arthur; Jong Shi, Jain

    2012-01-01

    Ensembles of numerical model forecasts are of interest to operational early warning forecasters as the spread of the ensemble provides an indication of the uncertainty of the alerts, and the mean value is deemed to outperform the forecasts of the individual models. This paper explores two ensembles on a severe weather episode in Spain, aiming to ascertain the relative usefulness of each one. One ensemble uses sensible choices of physical parameterizations (precipitation microphysics, land surface physics, and cumulus physics) while the other follows a perturbed initial conditions approach. The results show that, depending on the parameterizations, large differences can be expected in terms of storm location, spatial structure of the precipitation field, and rain intensity. It is also found that the spread of the perturbed initial conditions ensemble is smaller than the dispersion due to physical parameterizations. This confirms that in severe weather situations operational forecasts should address moist physics deficiencies to realize the full benefits of the ensemble approach, in addition to optimizing initial conditions. The results also provide insights into differences in simulations arising from ensembles of weather models using several combinations of different physical parameterizations.
  59. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-01-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory works. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the resolution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small. The model also reproduces fairly well the polar ozone variability, with notably the formation of "ozone holes" in the southern hemisphere with amplitudes and seasonal evolutions that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone contents inside the polar vortex of the southern hemisphere over longer periods in spring time. It is concluded that for the study of climatic scenarios or the assimilation of ozone data, the present parameterization gives an interesting alternative to the introduction of detailed and computationally costly chemical schemes into general circulation models.
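    The overall shape of a linear ozone scheme of this type can be sketched as a first-order expansion of the photochemical tendency about a climatology in local ozone, temperature, and overhead ozone column, with an extra heterogeneous-loss term weighted by a cold-tracer fraction. The coefficient values and climatological state below are placeholders, not the published coefficient tables.

        # Hedged schematic of a linearized ozone photochemistry tendency.
        def ozone_tendency(r, T, S, cold_tracer=0.0,
                           A1=1.0e-9,      # net production at the climatological state
                           A2=-2.0e-7,     # relaxation toward climatological ozone (1/s)
                           A3=-1.0e-9,     # sensitivity to temperature anomaly
                           A4=-5.0e-13,    # sensitivity to overhead-column anomaly
                           r_clim=5.0e-6, T_clim=210.0, S_clim=8.0e22,
                           k_het=2.0e-6):  # heterogeneous loss rate when activated (1/s)
            dr_dt = (A1
                     + A2 * (r - r_clim)
                     + A3 * (T - T_clim)
                     + A4 * (S - S_clim))
            dr_dt -= k_het * cold_tracer * r   # PSC-driven destruction via the cold tracer
            return dr_dt

        # Example: anomalously cold vortex air with the cold tracer fully set.
        print(ozone_tendency(r=4.5e-6, T=195.0, S=7.5e22, cold_tracer=1.0))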
  60. Using Epidemiological Principles to Explain Fungicide Resistance Management Tactics: Why do Mixtures Outperform Alternations?

    PubMed

    Elderfield, James A D; Lopez-Ruiz, Francisco J; van den Bosch, Frank; Cunniffe, Nik J

    2018-07-01

    Whether fungicide resistance management is optimized by spraying chemicals with different modes of action as a mixture (i.e., simultaneously) or in alternation (i.e., sequentially) has been studied by experimenters and modelers for decades. However, results have been inconclusive. We use previously parameterized and validated mathematical models of wheat Septoria leaf blotch and grapevine powdery mildew to test which tactic provides better resistance management, using the total yield before resistance causes disease control to become economically ineffective ("lifetime yield") to measure effectiveness. We focus on tactics involving the combination of a low-risk and a high-risk fungicide, and the case in which resistance to the high-risk chemical is complete (i.e., in which there is no partial resistance). Lifetime yield is then optimized by spraying as much low-risk fungicide as is permitted, combined with slightly more high-risk fungicide than needed for acceptable initial disease control, applying these fungicides as a mixture. That mixture rather than alternation gives better performance is invariant to model parameterization and structure, as well as the pathosystem in question. However, if comparison focuses on other metrics, e.g., lifetime yield at full label dose, either mixture or alternation can be optimal. Our work shows how epidemiological principles can explain the evolution of fungicide resistance, and also highlights a theoretical framework to address the question of whether mixture or alternation provides better resistance management. It also demonstrates that precisely how spray tactics are compared must be given careful consideration. Copyright © 2018 The Author(s). This is an open access article distributed under the CC BY 4.0 International license.

  61. Modeling apple snail population dynamics on the Everglades landscape

    USGS Publications Warehouse

    Darby, Phil; DeAngelis, Donald L.; Romañach, Stephanie; Suir, Kevin J.; Bridevaux, Joshua L.

    2015-01-01

    Comparisons of model output to empirical data indicate the need for more data to better understand, and eventually parameterize, several aspects of snail ecology in support of EverSnail. A primary value of EverSnail is its capacity to describe the relative response of snail abundance to alternative hydrologic scenarios considered for Everglades water management and restoration.
  62. Dynamic allostery of protein alpha helical coiled-coils

    PubMed Central

    Hawkins, Rhoda J; McLeish, Tom C.B

    2005-01-01

    Alpha helical coiled-coils appear in many important allosteric proteins such as the dynein molecular motor and bacteria chemotaxis transmembrane receptors. As a mechanism for transmitting the information of ligand binding to a distant site across an allosteric protein, an alternative to conformational change in the mean static structure is an induced change in the pattern of the internal dynamics of the protein. We explore how ligand binding may change the intramolecular vibrational free energy of a coiled-coil, using parameterized coarse-grained models, treating the case of dynein in detail. The models predict that coupling of slide, bend and twist modes of the coiled-coil transmits an allosteric free energy of ∼2kBT, consistent with experimental results. A further prediction is a quantitative increase in the effective stiffness of the coiled-coil without any change in inherent flexibility of the individual helices. The model provides a possible and experimentally testable mechanism for transmission of information through the alpha helical coiled-coil of dynein. PMID:16849225

  63. Orion Orbit Control Design and Analysis

    NASA Technical Reports Server (NTRS)

    Jackson, Mark; Gonzalez, Rodolfo; Sims, Christopher

    2007-01-01

    The analysis of candidate thruster configurations for the Crew Exploration Vehicle (CEV) is presented. Six candidate configurations were considered for the prime contractor baseline design. The analysis included analytical assessments of control authority, control precision, efficiency and robustness, as well as simulation assessments of control performance. The principles used in the analytic assessments of controllability, robustness and fuel performance are covered and results provided for the configurations assessed. Simulation analysis was conducted using a pulse width modulated, 6 DOF reaction system control law with a simplex-based thruster selection algorithm. Control laws were automatically derived from hardware configuration parameters including thruster locations, directions, magnitude and specific impulse, as well as vehicle mass properties. This parameterized controller allowed rapid assessment of multiple candidate layouts. Simulation results are presented for final phase rendezvous and docking, as well as low lunar orbit attitude hold. Finally, on-going analysis to consider alternate Service Module designs and to assess the pilot-ability of the baseline design are discussed to provide a status of orbit control design work to date.
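    In the spirit of the simplex-based thruster selection mentioned above, the sketch below poses control allocation as a linear program: choose nonnegative thruster duty cycles that reproduce a commanded force/torque while minimizing total propellant use. The four-thruster planar geometry and cost weights are invented purely for illustration, not the CEV layout.

        # Hedged sketch of thruster allocation as a linear program.
        import numpy as np
        from scipy.optimize import linprog

        # Columns of B: [Fx, Fy, torque] produced by each thruster at full duty cycle.
        B = np.array([[ 1.0, -1.0,  0.0,  0.0],
                      [ 0.0,  0.0,  1.0, -1.0],
                      [ 0.5,  0.5, -0.5, -0.5]])
        fuel_per_cycle = np.array([1.0, 1.0, 1.0, 1.0])   # cost coefficients
        wrench_cmd = np.array([0.3, 0.2, 0.1])            # commanded [Fx, Fy, torque]

        res = linprog(c=fuel_per_cycle, A_eq=B, b_eq=wrench_cmd,
                      bounds=[(0.0, 1.0)] * 4, method="highs")
        print("feasible:", res.success, "duty cycles:", np.round(res.x, 3))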
    To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic rainfall hyetograph with a Gaussian shape described by the parameters rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis of the failure probability estimation, together with a hydrodynamic simulation model that determines the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations, and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, with no critical loss of modelling accuracy, FORM is a viable alternative to traditional long-term simulations of urban drainage systems.
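    The three-parameter Gaussian hyetograph described in the preceding entry is straightforward to reproduce. A minimal sketch follows; the function and parameter names are illustrative and are not taken from the paper, which also calibrates probability distributions for the three parameters:

        import numpy as np

        def gaussian_hyetograph(depth_mm, duration_min, peak_mm_per_min, dt_min=1.0):
            """Synthetic rainstorm hyetograph with a Gaussian shape.

            depth_mm        total storm depth (area under the intensity curve)
            duration_min    storm duration; the curve is centred at duration/2
            peak_mm_per_min peak intensity, which fixes the spread of the Gaussian
            """
            t = np.arange(0.0, duration_min + dt_min, dt_min)
            # Spread chosen so the Gaussian integrates to the requested depth
            # while attaining the requested peak intensity at the centre.
            sigma = depth_mm / (peak_mm_per_min * np.sqrt(2.0 * np.pi))
            intensity = peak_mm_per_min * np.exp(-0.5 * ((t - duration_min / 2.0) / sigma) ** 2)
            return t, intensity

        # Example storm: 20 mm over 2 hours with a 0.5 mm/min peak
        t, i = gaussian_hyetograph(depth_mm=20.0, duration_min=120.0, peak_mm_per_min=0.5)

    Each synthetic storm generated this way can then be fed to the hydrodynamic model to evaluate whether the chosen parameter set produces surcharge, flooding or overflow.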
  65. A multiple hypotheses uncertainty analysis in hydrological modelling: about model structure, landscape parameterization, and numerical integration

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2016-04-01

    Until today, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict streamflow dynamics of rivers. This is primarily the result of a lack of a unified theory in catchment hydrology due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations. First, to estimate the impact of model structural uncertainty by employing several alternative representations for each simulated process. Second, to explore the influence of landscape discretization and parameterization from multiple datasets and user decisions. Third, to employ several numerical solvers for the integration of the governing ordinary differential equations to study the effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.

  66. A General Framework for Thermodynamically Consistent Parameterization and Efficient Sampling of Enzymatic Reactions

    PubMed Central

    Saa, Pedro; Nielsen, Lars K.

    2015-01-01

    Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. Particularly, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The former integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > -2 kJ/mol), a transition region (-2 > ΔGr > -20 kJ/mol) and a constant elasticity region (ΔGr < -20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only the kinetic behaviour of these enzymes, but it also provided insights about the particular features underpinning the observed kinetics. Overall, this framework will enable systematic parameterization and sampling of enzymatic reactions. PMID:25874556
  67. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen

    2011-08-16

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
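    The Pinheiro-Bates construction underlying the entry above can be sketched in a few lines: any choice of angles yields a Cholesky factor whose rows have unit norm, so the resulting matrix is automatically a mutually consistent correlation matrix. The sketch below shows only that generic construction; the paper's cosine row-wise formula for choosing the angles is not reproduced here:

        import numpy as np

        def correlation_from_angles(theta):
            """Build a valid correlation matrix from angles via its Cholesky factor
            (the spherical parameterization of Pinheiro and Bates, 1996).

            theta is an (n x n) array of angles in (0, pi); only the strictly lower
            triangle is used. Every row of the factor L has unit norm, so C = L L^T
            is symmetric positive semi-definite with unit diagonal by construction.
            """
            n = theta.shape[0]
            L = np.zeros((n, n))
            L[0, 0] = 1.0
            for i in range(1, n):
                prod = 1.0
                for j in range(i):
                    L[i, j] = np.cos(theta[i, j]) * prod
                    prod *= np.sin(theta[i, j])
                L[i, i] = prod
            return L @ L.T

        # Example: correlations among three hydrometeor species from chosen angles
        theta = np.zeros((3, 3))
        theta[1, 0], theta[2, 0], theta[2, 1] = 0.4, 0.6, 1.2
        C = correlation_from_angles(theta)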
  68. mrpy: Renormalized generalized gamma distribution for HMF and galaxy ensemble properties comparisons

    NASA Astrophysics Data System (ADS)

    Murray, Steven G.; Robotham, Aaron S. G.; Power, Chris

    2018-02-01

    mrpy calculates the MRP parameterization of the Halo Mass Function. It calculates basic statistics of the truncated generalized gamma distribution (TGGD) with the TGGD class, including mean, mode, variance, skewness, pdf, and cdf. It generates MRP quantities with the MRP class, such as differential number counts and cumulative number counts, and offers various methods for generating normalizations. It can generate the MRP-based halo mass function as a function of physical parameters via the mrp_b13 function, and fit MRP parameters to data in the form of arbitrary curves and in the form of a sample of variates with the SimFit class. mrpy also calculates analytic Hessians and Jacobians at any point, and gives the user access to alternative parameterizations of the same form via the reparameterize module.

  69. Evaluation of Multiple Mechanistic Hypotheses of Leaf Photosynthesis and Stomatal Conductance against Diurnal and Seasonal Data from Two Contrasting Panamanian Tropical Forests

    NASA Astrophysics Data System (ADS)

    Serbin, S.; Walker, A. P.; Wu, J.; Ely, K.; Rogers, A.; Wolfe, B.

    2017-12-01

    Tropical forests play a key role in regulating the global carbon (C), water, and energy cycles and stores, and influence climate through the exchanges of mass and energy with the atmosphere. However, projected changes in temperature and precipitation patterns are expected to impact the tropics and the strength of the tropical C sink, likely resulting in significant climate feedbacks. Moreover, the impact of stronger, longer, and more extensive droughts is not well understood. Critical for the accurate modeling of the tropical C and water cycle in Earth System Models (ESMs) is the representation of the coupled photosynthetic and stomatal conductance processes and how these processes are impacted by environmental and other drivers. The parameterization and representation of these processes are therefore an important consideration for ESM projections. We use a novel model framework, the Multi-Assumption Architecture and Testbed (MAAT), together with the open-source bioinformatics toolbox, the Predictive Ecosystem Analyzer (PEcAn), to explore the impact of multiple mechanistic hypotheses of coupled photosynthesis and stomatal conductance as well as the additional uncertainty related to model parameterization. Our goal was to better understand how model choice and parameterization influence diurnal and seasonal modeling of leaf-level photosynthesis and stomatal conductance. We focused on the 2016 ENSO period; starting in February, monthly measurements of diurnal photosynthesis and conductance were made on 7-9 dominant species at the two Smithsonian canopy crane sites. This benchmark dataset was used to test different representations of stomatal conductance and photosynthetic parameterizations with the MAAT model, running within PEcAn. The MAAT model allows for the easy selection of competing hypotheses to test different photosynthetic modeling approaches, while PEcAn provides the ability to explore the uncertainties introduced through parameterization. We found that the choice of stomatal conductance scheme can play a large role in model-data mismatch, and that observational constraints can be used to reduce simulated model spread but can also result in large model disagreements with measurements.
    These results will be used to help inform the modeling of photosynthesis in tropical systems for the larger ESM community.

  70. Joshua Vermaas | NREL

    Science.gov Websites

    Staff-page snippet: uses molecular dynamics simulations to explore biological interfaces, such as those found at the cell membrane or in lignocellulosic biomass. Areas of expertise: molecular dynamics; compound parameterization.

  71. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-05-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2-D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio; the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results from the two versions show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small, of the order of 10%. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in spring time.
    It is concluded that for the study of climate scenarios or the assimilation of ozone data, the present parameterization gives a valuable alternative to the introduction of detailed and computationally costly chemical schemes into general circulation models.
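    For orientation, linear ozone photochemistry schemes of this family are usually written as a relaxation of the ozone mixing ratio towards climatology. The form below is a generic sketch from the literature, not the exact coefficient set used in the paper (notation varies between versions of the scheme):

        \frac{\partial r}{\partial t} \;=\; c_0
          \; + \; c_1\,(r - \bar r)
          \; + \; c_2\,(T - \bar T)
          \; + \; c_3\,(\Sigma_{\mathrm{O}_3} - \bar\Sigma_{\mathrm{O}_3})

    Here r is the ozone mixing ratio, T the temperature, Sigma_O3 the overhead ozone column, overbars denote 2-D (latitude-pressure) climatological values, and the c_i are precomputed coefficients; the cold-tracer version adds a heterogeneous destruction term that is activated at polar stratospheric cloud temperatures.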
  72. Contrasting Causatives: A Minimalist Approach

    ERIC Educational Resources Information Center

    Tubino Blanco, Mercedes

    2010-01-01

    This dissertation explores the mechanisms behind the linguistic expression of causation in English, Hiaki (Uto-Aztecan) and Spanish. Pylkkanen's (2002, 2008) analysis of causatives as dependent on the parameterization of the functional head v_CAUSE is chosen as a point of departure. The studies conducted in this dissertation confirm…

  73. The Essential Complexity of Auditory Receptive Fields

    PubMed Central

    Thorson, Ivar L.; Liénard, Jean; David, Stephen V.

    2015-01-01

    Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of > 1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture lead to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short-term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models. PMID:26683490
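    The factorization idea in the preceding entry can be illustrated with the simplest case, a rank-1 (fully separable) STRF formed as the outer product of one spectral and one temporal filter. The filter shapes below are arbitrary placeholders, not the filters estimated in the study:

        import numpy as np

        rng = np.random.default_rng(0)

        n_freq, n_lag = 30, 20            # spectral channels, time lags
        # Placeholder spectral tuning curve and biphasic temporal filter
        spectral = np.exp(-0.5 * ((np.arange(n_freq) - 12) / 3.0) ** 2)
        temporal = np.exp(-np.arange(n_lag) / 5.0) - 0.5 * np.exp(-np.arange(n_lag) / 10.0)

        # Rank-1 (separable) STRF: n_freq + n_lag = 50 parameters instead of
        # n_freq * n_lag = 600 for the full FIR STRF.
        strf = np.outer(spectral, temporal)

        # Predicted response to a spectrogram S (freq x time): sliding dot product in time
        S = rng.standard_normal((n_freq, 500))
        resp = np.array([np.sum(strf * S[:, t - n_lag:t]) for t in range(n_lag, S.shape[1])])

    Factorizations with a few spectral and temporal components generalize this idea while keeping the parameter count far below the full FIR filter.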
  74. Evaluation of WRF physical parameterizations against ARM/ASR Observations in the post-cold-frontal region to improve low-level clouds representation in CAM5

    NASA Astrophysics Data System (ADS)

    Lamraoui, F.; Booth, J. F.; Naud, C. M.

    2017-12-01

    The representation of subgrid-scale processes of low-level marine clouds located in the post-cold-frontal region poses a serious challenge for climate models. More precisely, the boundary layer parameterizations are predominantly designed for individual regimes that evolve gradually over time and do not accommodate a cold-front passage that can rapidly modify the boundary layer. Also, the microphysics schemes respond differently to the quick development simulated by the boundary layer schemes, especially under unstable conditions. To improve the understanding of cloud physics in the post-cold-frontal region, the present study focuses on exploring the relationship between cloud properties, the local processes and large-scale conditions. In order to address these questions, we explore the WRF sensitivity to the interaction between various combinations of the boundary layer and microphysics parameterizations, including the Community Atmospheric Model version 5 (CAM5) physical package, in a perturbed physics ensemble. Then, we evaluate these simulations against ground-based ARM observations over the Azores. The WRF-based simulations demonstrate particular sensitivities of the marine cold front passage and the associated post-cold-frontal clouds to the domain size, the resolution and the physical parameterizations. First, it is found that in multiple different case studies the model cannot generate the cold front passage when the domain size is larger than 3000 km2. Instead, the modeled cold front stalls, which shows the importance of properly capturing the synoptic-scale conditions. The simulation reveals a persistent delay in capturing the cold front passage and also an underestimated duration of the post-cold-frontal conditions. Analysis of the perturbed physics ensemble shows that changing the microphysics scheme leads to larger differences in the modeled clouds than changing the boundary layer scheme. The in-cloud heating tendencies are analyzed to explain this sensitivity.

  75. Large-Scale CTRW Analysis of Push-Pull Tracer Tests and Other Transport in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Hansen, S. K.; Berkowitz, B.

    2014-12-01

    Recently, we developed an alternative CTRW formulation which uses a "latching" upscaling scheme to rigorously map continuous or fine-scale stochastic solute motion onto discrete transitions on an arbitrarily coarse lattice (with spacing potentially on the meter scale or more). This approach enables model simplification, among many other things. Under advection, for example, we see that many relevant anomalous transport problems may be mapped into 1D, with latching to a sequence of successive, uniformly spaced planes. In this formulation (which we term RP-CTRW), the spatial transition vector may generally be made deterministic, with CTRW waiting time distributions encapsulating all the stochastic behavior. We demonstrate the excellent performance of this technique alongside Pareto-distributed waiting times in explaining experiments across a variety of scales using only two degrees of freedom. An interesting new application of the RP-CTRW technique is the analysis of radial (push-pull) tracer tests. Given modern computational power, random walk simulations are a natural fit for the inverse problem of inferring subsurface parameters from push-pull test data, and we propose them as an alternative to the classical type curve approach. In particular, we explore the visibility of heterogeneity through non-Fickian behavior in push-pull tests, and illustrate the ability of a radial RP-CTRW technique to encapsulate this behavior using a sparse parameterization which has predictive value.
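    A minimal sketch of this kind of walk, deterministic unit hops between uniformly spaced planes with heavy-tailed (Pareto) waiting times, is given below. The parameter values are illustrative and are not taken from the experiments analyzed in the abstract:

        import numpy as np

        rng = np.random.default_rng(1)

        def plane_arrival_times(n_planes, n_particles, beta, t0):
            """First-passage times for particles hopping across uniformly spaced planes.

            Each hop covers exactly one plane spacing (deterministic transition), and
            each hop waits a Pareto-distributed time with tail exponent beta and scale
            t0, so the arrival-time distribution at the last plane is the n_planes-fold
            convolution of the waiting-time density (anomalous when beta < 2).
            """
            # Pareto waiting times by inverse-CDF sampling: t = t0 * U**(-1/beta)
            u = rng.random((n_particles, n_planes))
            waits = t0 * u ** (-1.0 / beta)
            return waits.sum(axis=1)

        arrivals = plane_arrival_times(n_planes=50, n_particles=10_000, beta=1.5, t0=1.0)

    The two free quantities in this sketch, the tail exponent and the time scale, mirror the sparse two-parameter description referred to in the abstract.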
  76. Thermodynamic properties for applications in chemical industry via classical force fields

    PubMed

    Guevara-Carrion, Gabriela; Hasse, Hans; Vrabec, Jadran

    2012-01-01

    Thermodynamic properties of fluids are of key importance for the chemical industry. Presently, the fluid property models used in process design and optimization are mostly equations of state or GE models, which are parameterized using experimental data. Molecular modeling and simulation based on classical force fields is a promising alternative route, which in many cases reasonably complements the well-established methods. This chapter gives an introduction to the state of the art in this field regarding molecular models, simulation methods, and tools. Attention is given to the way modeling and simulation on the scale of molecular force fields interact with other scales, which is mainly by parameter inheritance. Parameters for molecular force fields are determined both bottom-up from quantum chemistry and top-down from experimental data. Commonly used functional forms for describing the intra- and intermolecular interactions are presented. Several approaches for ab initio to empirical force field parameterization are discussed. Some transferable force field families, which are frequently used in chemical engineering applications, are described. Furthermore, some examples of force fields that were parameterized for specific molecules are given. Molecular dynamics and Monte Carlo methods for the calculation of transport properties and vapor-liquid equilibria are introduced. Two case studies are presented. First, using liquid ammonia as an example, the capabilities of semi-empirical force fields, parameterized on the basis of quantum chemical information and experimental data, are discussed with respect to thermodynamic properties that are relevant for the chemical industry. Second, the ability of molecular simulation methods to accurately describe vapor-liquid equilibrium properties of binary mixtures containing CO2 is shown.

  77. Radiatively driven stratosphere-troposphere interactions near the tops of tropical cloud clusters

    NASA Technical Reports Server (NTRS)

    Churchill, Dean D.; Houze, Robert A., Jr.

    1990-01-01

    Results are presented of two numerical simulations of the mechanism involved in the dehydration of air, using the model of Churchill (1988) and Churchill and Houze (1990), which combines the water and ice physics parameterizations and IR and solar-radiation parameterization with a convective adjustment scheme in a kinematic, nondynamic framework. One simulation, a cirrus cloud simulation, was to test the Danielsen (1982) hypothesis of a dehydration mechanism for the stratosphere; the other was to simulate the mesoscale updraft in order to test an alternative mechanism for 'freeze-drying' the air. The results show that the physical processes simulated in the mesoscale updraft differ from those in the thin-cirrus simulation. While in the thin-cirrus case eddy fluxes occur in response to IR radiative destabilization, and hence no net transfer occurs between troposphere and stratosphere, the mesoscale updraft case has net upward mass transport into the lower stratosphere.

  78. Local Minima Free Parameterized Appearance Models

    PubMed Central

    Nguyen, Minh Hoai; De la Torre, Fernando

    2010-01-01

    Parameterized Appearance Models (PAMs) (e.g. Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternate approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly optimizing that the local minima occur at, and only at, the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs.
    Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches. PMID:21804750

  79. Parameterization of the 3-PG model for Pinus elliottii stands using alternative methods to estimate fertility rating, biomass partitioning and canopy closure

    Treesearch

    Carlos A. Gonzalez-Benecke; Eric J. Jokela; Wendell P. Cropper; Rosvel Bracho; Daniel J. Leduc

    2014-01-01

    The forest simulation model, 3-PG, has been widely applied as a useful tool for predicting growth of forest species in many countries. The model has the capability to estimate the effects of management, climate and site characteristics on many stand attributes using easily available data. Currently, there is an increasing interest in estimating biomass and assessing…

  80. Optimal Synthesis of Compliant Mechanisms using Subdivision and Commercial FEA (DETC2004-57497)

    NASA Technical Reports Server (NTRS)

    Hull, Patrick V.; Canfield, Stephen

    2004-01-01

    The field of distributed-compliance mechanisms has seen significant work in developing suitable topology optimization tools for their design. These optimal design tools have grown out of the techniques of structural optimization. This paper builds on the previous work in topology optimization and compliant mechanism design by proposing an alternative design space parameterization through control points and adding another step to the process, that of subdivision. The control points allow a specific design to be represented as a solid model during the optimization process. The process of subdivision creates an additional number of control points that help smooth the surface (for example a C^2 continuous surface, depending on the method of subdivision chosen), creating a manufacturable design free of some traditional numerical instabilities. Note that these additional control points do not add to the number of design parameters. This alternative parameterization and description as a solid model effectively and completely separates the design variables from the analysis variables during the optimization procedure. The motivation behind this work is to create an automated design tool from task definition to functional prototype created on a CNC or rapid-prototype machine.
    This paper describes the proposed compliant mechanism design process and demonstrates the procedure on several examples common in the literature.

  81. Sensitivity of Rainfall-runoff Model Parametrization and Performance to Potential Evaporation Inputs

    NASA Astrophysics Data System (ADS)

    Jayathilake, D. I.; Smith, T. J.

    2017-12-01

    Many watersheds of interest are confronted with insufficient data and poor process understanding. Therefore, understanding the relative importance of input data types and the impact of different input qualities on model performance, parameterization, and fidelity is critically important to improving hydrologic models. In this paper, the changes in model parameterization and performance are explored with respect to four different potential evapotranspiration (PET) products of varying quality. For each PET product, two widely used conceptual rainfall-runoff models are calibrated with multiple objective functions to a sample of 20 basins included in the MOPEX data set and analyzed to understand how model behavior varied. Model results are further analyzed by classifying catchments as energy- or water-limited using the Budyko framework. The results demonstrated that model fit was largely unaffected by the quality of the PET inputs. However, model parameterizations were clearly sensitive to PET inputs, as their production parameters adjusted to counterbalance input errors.
    Despite this, changes in model robustness were not observed for either model across the four PET products, although robustness was affected by model structure.
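    The Budyko-style classification used in the entry above reduces to the long-term aridity index PET/P. A minimal sketch follows; the variable names and the example values are illustrative, not data from the MOPEX basins:

        def classify_budyko(mean_annual_pet_mm, mean_annual_precip_mm):
            """Classify a catchment with the Budyko aridity index PET / P.

            Index < 1: energy-limited (more water is available than the atmosphere
            can evaporate); index > 1: water-limited.
            """
            aridity = mean_annual_pet_mm / mean_annual_precip_mm
            label = "energy-limited" if aridity < 1.0 else "water-limited"
            return label, aridity

        label, phi = classify_budyko(mean_annual_pet_mm=700.0, mean_annual_precip_mm=1100.0)
        # -> ("energy-limited", 0.64 approximately)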
  82. Research into the influence of spatial variability and scale on the parameterization of hydrological processes

    NASA Technical Reports Server (NTRS)

    Wood, Eric F.

    1993-01-01

    The objectives of the research were as follows: (1) Extend the Representative Elementary Area (REA) concept, first proposed and developed in Wood et al. (1988), to the water balance fluxes of the interstorm period (redistribution, evapotranspiration and baseflow) necessary for the analysis of long-term water balance processes. (2) Derive spatially averaged water balance model equations for spatially variable soil, topography and vegetation, over a range of climates. This is a necessary step in our goal to derive consistent hydrologic results up to GCM grid scales necessary for global climate modeling. (3) Apply the above macroscale water balance equations with remotely sensed data and begin to explore the feasibility of parameterizing the water balance constitutive equations at GCM grid scale.

  83. Parametric Analysis of the feasibility of low-temperature geothermal heat recovery in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Tomac, I.; Caulk, R.

    2016-12-01

    The current study explored the feasibility of heat recovery through the installation of heat exchangers in abandoned oil and gas wells. Finite element methods (FEM) were employed to determine the effects of various site-specific parameters on production fluid temperature. Specifically, the study parameterized depth of well, subsurface temperature gradient, sedimentary rock conductivity, and flow rate. Results show that greater well depth is associated with greater heat flow, with the greatest returns occurring between depths of 1.5 km and 7 km. Beyond 7 km, the rate of return decreases due to a non-linear increase of heat flow combined with a continued linear increase of pumping cost. One cause for the drop of heat flow was the loss of heat as the fluid travels from depth to the surface. Further analyses demonstrated the benefit of an alternative heat exchanger configuration characterized by thermally insulated sections of the upward heat exchanger. These simulations predict production fluid temperature gains between 5 and 10 °C, which may be suitable for geothermal heat pump applications.

  84. Sensitivity of Tropical Cyclones to Parameterized Convection in the NASA GEOS5 Model

    NASA Technical Reports Server (NTRS)

    Lim, Young-Kwon; Schubert, Siegfried D.; Reale, Oreste; Lee, Myong-In; Molod, Andrea M.; Suarez, Max J.

    2014-01-01

    The sensitivity of tropical cyclones (TCs) to changes in parameterized convection is investigated to improve the simulation of TCs in the North Atlantic. Specifically, the impact of reducing the influence of the Relaxed Arakawa-Schubert (RAS) scheme-based parameterized convection is explored using the Goddard Earth Observing System version 5 (GEOS5) model at 0.25° horizontal resolution. The years 2005 and 2006, characterized by very active and inactive hurricane seasons, respectively, are selected for simulation. A reduction in parameterized deep convection results in an increase in TC activity (e.g., TC number and longer life cycle) to more realistic levels compared to the baseline control configuration. The vertical and horizontal structure of the strongest simulated hurricane shows the maximum lower-level (850-950 hPa) wind speed greater than 60 m/s and the minimum sea level pressure reaching 940 mb, corresponding to a category 4 hurricane, a category never achieved by the control configuration. The radius of the maximum wind of 50 km, the location of the warm core exceeding 10 °C, and the horizontal compactness of the hurricane center are all quite realistic, without negatively affecting the atmospheric mean state. This study reveals that an increase in the threshold of minimum entrainment suppresses parameterized deep convection by entraining more dry air into the typical plume. This leads to cooling and drying in the mid- to upper-troposphere, along with positive latent heat flux and moistening in the lower troposphere. The resulting increase in conditional instability provides an environment that is more conducive to TC vortex development and upward moisture flux convergence by dynamically resolved moist convection, thereby increasing TC activity.

  85. Assessing model uncertainty using hexavalent chromium and ...

    EPA Pesticide Factsheets

    Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment.
    This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective of this analysis is to characterize model uncertainty by evaluating the variance in estimates across several epidemiologic analyses. Methods: This analysis compared 7 publications analyzing two different chromate production sites in Ohio and Maryland. The Ohio cohort consisted of 482 workers employed from 1940-72, while the Maryland site employed 2,357 workers from 1950-74. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates such as age and race were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability in estimates across and within model forms. A total of 7 similarly parameterized analyses were considered across model forms, and 23 analyses with alternative parameterizations were considered within model form (14 Cox; 9 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients for 7 similar analyses ranged from 2.47

  86. Shortwave flux at the surface of the Atlantic Ocean: in-situ measurements, satellite data and parametrization

    NASA Astrophysics Data System (ADS)

    Sinitsyn, Alexey

    2017-04-01

    Shortwave radiation is one of the key air-sea flux components, playing an important role in the ocean heat balance. The most accurate way of obtaining estimates of shortwave fluxes is field measurements at various locations around the globe. However, these data are very sparse. Different satellite missions and re-analyses provide alternative sources of shortwave radiation data; however, they carry their own uncertainties and need to be validated. An alternative way to produce long-term time series of shortwave radiation is to apply bulk parameterizations of shortwave radiation to Voluntary Observing Ship (VOS) cloud data or to cloud measurements from CM-SAF. In our work, we compare three sources of shortwave flux estimates. In-situ measurements were obtained during 12 research cruises (320 days of measurements) in different regions of the Atlantic Ocean from 2004 to 2014. Shortwave radiation was measured by the Kipp & Zonen net radiometer CNR-1. Standard meteorological observations were also carried out during the cruises. Satellite data were the hourly and daily time series of incoming shortwave radiation with spatial resolution 0.05 x 0.05 degree (METEOSAT MSG coverage: Europe, Africa, Atlantic Ocean), obtained by the MVIRI/SEVIRI instrument from METEOSAT. SEVIRI cloud properties were taken from the CLAAS-2 data record from CM-SAF. The shortwave flux parameterizations used consisted of three different schemes, based on either total cloud cover alone or on total and low cloud cover together.
    The incoming shortwave radiation retrieved by satellite had a positive bias of 3 W m-2 and an RMS of 69 W m-2 compared to the in-situ measurements. For different okta categories, the bias was from 1 to 5 W m-2 and the RMS from 41 to 71 W m-2. The incoming shortwave radiation computed by bulk parameterization indicated a bias of -10 W m-2 to 60 W m-2, depending on the scheme and the region of the Atlantic Ocean. The results of the comparison suggest that satellite data are an excellent basis for testing bulk parameterizations of incoming shortwave radiation. Among the bulk parameterizations, the IORAS/SAIL scheme is the least biased algorithm for computing shortwave radiation from cloud observations.

  87. Two Approaches to Calibration in Metrology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark

    2014-04-01

    Inferring mathematical relationships with quantified uncertainty from measurement data is common to computational science and metrology. Sufficient knowledge of measurement process noise enables Bayesian inference. Otherwise, an alternative approach is required, here termed compartmentalized inference, because collection of uncertain data and model inference occur independently. Bayesian parameterized model inference is compared to a Bayesian-compatible compartmentalized approach for ISO-GUM compliant calibration problems in renewable energy metrology. In either approach, model evidence can help reduce model discrepancy.

  88. Alternative Fuels Data Center: Johnson Space Center Explores Alternative Fuel Vehicles

    Science.gov Websites

  89. Exploring Biases of Atmospheric Retrievals in Simulated JWST Transmission Spectra of Hot Jupiters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocchetto, M.; Waldmann, I. P.; Tinetti, G.

    2016-12-10

    With a scheduled launch in 2018 October, the James Webb Space Telescope (JWST) is expected to revolutionize the field of atmospheric characterization of exoplanets.
    The broad wavelength coverage and high sensitivity of its instruments will allow us to extract far more information from exoplanet spectra than what has been possible with current observations. In this paper, we investigate whether current retrieval methods will still be valid in the era of JWST, exploring common approximations used when retrieving transmission spectra of hot Jupiters. To assess biases, we use 1D photochemical models to simulate typical hot-Jupiter cloud-free atmospheres and generate synthetic observations for a range of carbon-to-oxygen ratios. Then, we retrieve these spectra using TauREx, a Bayesian retrieval tool, with two methodologies: one assuming an isothermal atmosphere, and one assuming a parameterized temperature profile. Both methods assume constant-with-altitude abundances. We found that the isothermal approximation biases the retrieved parameters considerably, overestimating the abundances by about one order of magnitude. The retrieved abundances using the parameterized profile are usually within 1 sigma of the true state, and we found the retrieved uncertainties to be generally larger compared to the isothermal approximation. Interestingly, we found that by using the parameterized temperature profile we could place tight constraints on the temperature structure. This opens the possibility of characterizing the temperature profile of the terminator region of hot Jupiters. Lastly, we found that assuming a constant-with-altitude mixing ratio profile is a good approximation for most of the atmospheres under study.

  90. Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Moroz, I.; Palmer, T.

    2015-12-01

    It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved sub-grid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium-Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme.
    The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
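    For reference, the SPPT approach mentioned above multiplies the net parameterized tendency by a stochastic factor. A generic form of the idea is sketched below (operational implementations also taper the perturbation near the surface and in the stratosphere, and details differ between model versions):

        \mathbf{X}_{\mathrm{pert}} \;=\; \Bigl(1 + \mu\, r(\lambda,\phi,t)\Bigr)\,\sum_{i} \mathbf{X}_{\mathrm{param},\,i}

    Here the X_param,i are the tendencies produced by the individual physics parameterizations, r is a spatially and temporally correlated random field bounded by plus or minus one, and mu is an optional tapering factor; the perturbed parameter schemes in the entry above instead move the uncertainty into the convection scheme's own parameters.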
  91. The effects of changing land cover on streamflow simulation in Puerto Rico

    Treesearch

    A.E. Van Beusekom; L.E. Hay; R.J. Viger; W.A. Gould; J.A. Collazo; A. Henareh Khalyani

    2014-01-01

    This study quantitatively explores whether land cover changes have a substantive impact on simulated streamflow within the tropical island setting of Puerto Rico. The Precipitation Runoff Modeling System (PRMS) was used to compare streamflow simulations based on five static parameterizations of land cover with those based on dynamically varying parameters derived from…

  92. Stochastic Least-Squares Petrov-Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov-Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted l2-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted l2-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
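    Schematically, the weighted residual minimization described in the entry above can be written as follows; the notation is generic and is not taken verbatim from the paper:

        \tilde{u}(\xi) \;=\; \sum_{k=1}^{K} u_k\,\psi_k(\xi), \qquad
        \{u_k\} \;=\; \arg\min_{\{u_k\}} \; \mathbb{E}_{\xi}\!\left[ \bigl\| W(\xi)\,\bigl(A(\xi)\,\tilde{u}(\xi) - b(\xi)\bigr) \bigr\|_2^2 \right]

    Here A(xi) u(xi) = b(xi) is the parameterized linear system, the functions psi_k span the chosen finite-dimensional subspace, and the weighting operator W(xi) selects which weighted norm (or goal-oriented seminorm) of the residual is minimized.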
The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11-amino-acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.

Prognostic residual mean flow in an ocean general circulation model and its relation to prognostic Eulerian mean flow

DOE Office of Scientific and Technical Information (OSTI.GOV)

Saenz, Juan A.; Chen, Qingshan; Ringler, Todd

Recent work has shown that taking the thickness-weighted average (TWA) of the Boussinesq equations in buoyancy coordinates results in exact equations governing the prognostic residual mean flow where eddy-mean flow interactions appear in the horizontal momentum equations as the divergence of the Eliassen-Palm flux tensor (EPFT). It has been proposed that, given the mathematical tractability of the TWA equations, the physical interpretation of the EPFT, and its relation to potential vorticity fluxes, the TWA is an appropriate framework for modeling ocean circulation with parameterized eddies. The authors test the feasibility of this proposition and investigate the connections between the TWA framework and the conventional framework used in models, where Eulerian mean flow prognostic variables are solved for. Using the TWA framework as a starting point, this study explores the well-known connections between vertical transfer of horizontal momentum by eddy form drag and eddy overturning by the bolus velocity, used by Greatbatch and Lamb and Gent and McWilliams to parameterize eddies. After implementing the TWA framework in an ocean general circulation model, we verify our analysis by comparing the flows in an idealized Southern Ocean configuration simulated using the TWA and conventional frameworks with the same mesoscale eddy parameterization.

Exploring Stratocumulus Cloud-Top Entrainment Processes and Parameterizations by Using Doppler Cloud Radar Observations

DOE Office of Scientific and Technical Information (OSTI.GOV)

Albrecht, Bruce; Fang, Ming; Ghate, Virendra

2016-02-01

Observations from an upward-pointing Doppler cloud radar are used to examine cloud-top entrainment processes and parameterizations in a non-precipitating continental stratocumulus cloud deck maintained by time varying surface buoyancy fluxes and cloud-top radiative cooling.
Radar and ancillary observations of unbroken, non-precipitating stratocumulus clouds were made at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site near Lamont, Oklahoma, for a 14-hour period starting 0900 Central Standard Time on 25 March 2005. The vertical velocity variance and energy dissipation rate (EDR) terms in a parameterized turbulence kinetic energy (TKE) budget of the entrainment zone are estimated using the radar vertical velocity and the radar spectrum width observations from the upward-pointing millimeter cloud radar (MMCR) operating at the SGP site. Hourly averages of the vertical velocity variance term in the TKE entrainment formulation correlate strongly (r=0.72) with the dissipation rate term in the entrainment zone. However, the ratio of the variance term to the dissipation decreases at night due to decoupling of the boundary layer. When the night-time decoupling is accounted for, the correlation between the variance and the EDR term increases (r=0.92). To obtain bulk coefficients for the entrainment parameterizations derived from the TKE budget, independent estimates of entrainment were obtained from an inversion height budget using ARM SGP observations of the local time derivative and the horizontal advection of the cloud-top height. The large-scale vertical velocity at the inversion needed for this budget was taken from ECMWF reanalysis. This budget gives a mean entrainment rate for the observing period of 0.76±0.15 cm/s. This mean value is applied to the TKE budget parameterizations to obtain the bulk coefficients needed in these parameterizations. These bulk coefficients are compared with those from previous studies and are used in the parameterizations to give hourly estimates of the entrainment rates from the radar-derived vertical velocity variance and dissipation rates. Hourly entrainment rates were also estimated from a convective velocity (w*) parameterization that depends on the local surface buoyancy fluxes and the calculated radiative flux divergence, using a bulk coefficient obtained from the mean inversion height budget. The hourly rates from the cloud turbulence estimates and the w* parameterization, which is independent of the radar observations, are compared with the hourly entrainment rates from the inversion height budget. All show rough agreement with each other and capture the entrainment variability associated with substantial changes in the surface flux and radiative divergence at cloud top. Major uncertainties in the hourly estimates from the height budget and w* are discussed. The results indicate a strong potential for making entrainment rate estimates directly from the radar vertical velocity variance and the EDR measurements, a technique that has distinct advantages over other methods for estimating entrainment rates. Calculations based on the EDR alone can provide high temporal resolution (for averaging intervals as small as 10 minutes) of the entrainment processes and do not require an estimate of the boundary layer depth, which can be difficult to define when the boundary layer is decoupled.
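The abstract above does not spell out the closure itself, so the following one-line function is only a generic convective-velocity entrainment closure of the kind commonly used in the boundary-layer literature; the functional form and the bulk coefficient value are assumptions, not results from this study:

```python
def entrainment_rate(w_star, delta_b, z_i, A=0.2):
    """Toy convective entrainment closure, w_e = A * w*^3 / (z_i * delta_b).

    w_star:  convective velocity scale (m/s)
    delta_b: buoyancy jump across the inversion (m/s^2)
    z_i:     boundary-layer (inversion) height (m)
    A:       bulk coefficient, an arbitrary placeholder value here
    """
    return A * w_star**3 / (z_i * delta_b)

# Example: w* = 1 m/s, delta_b = 0.02 m/s^2, z_i = 1000 m
w_e = entrainment_rate(1.0, 0.02, 1000.0)   # ~0.01 m/s, i.e. about 1 cm/s
```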
Modeling social transmission dynamics of unhealthy behaviors for evaluating prevention and treatment interventions on childhood obesity.

PubMed

Frerichs, Leah M; Araz, Ozgur M; Huang, Terry T-K

2013-01-01

Research evidence indicates that obesity has spread through social networks, but lever points for interventions based on overlapping networks are not well studied. The objective of our research was to construct and parameterize a system dynamics model of the social transmission of behaviors through adult and youth influence in order to explore hypotheses and identify plausible lever points for future childhood obesity intervention research. Our objectives were: (1) to assess the sensitivity of childhood overweight and obesity prevalence to peer and adult social transmission rates, and (2) to test the effect of combinations of prevention and treatment interventions on the prevalence of childhood overweight and obesity. To address the first objective, we conducted two-way sensitivity analyses of adult-to-child and child-to-child social transmission in relation to childhood overweight and obesity prevalence. For the second objective, alternative combinations of prevention and treatment interventions were tested by varying model parameters of social transmission and weight loss behavior rates. Our results indicated child overweight and obesity prevalence might be slightly more sensitive to the same relative change in the adult-to-child compared to the child-to-child social transmission rate. In our simulations, alternatives with treatment alone, compared to prevention alone, reduced the prevalence of childhood overweight and obesity more after 10 years (1.2-1.8% and 0.2-1.0% greater reduction when targeted at children and adults respectively). Also, as the impact of adult interventions on children was increased, the rank of six alternatives that included adults became better (i.e., resulting in lower 10-year childhood overweight and obesity prevalence) than alternatives that only involved children. The findings imply that social transmission dynamics should be considered when designing both prevention and treatment intervention approaches. Finally, targeting adults may be more efficient, and research should strengthen and expand adult-focused interventions that have a high residual impact on children.
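The structure of the system dynamics model is not given in the abstract; purely as a hypothetical illustration of what adult-to-child and child-to-child transmission terms can look like, here is a toy difference-equation sketch (all parameter names and values are invented):

```python
def simulate(years=10, beta_cc=0.04, beta_ac=0.02, recovery=0.03,
             child_ow=0.17, adult_ow=0.35):
    """Toy stock-and-flow model: healthy-weight children become overweight at a
    rate driven by child-to-child and adult-to-child social transmission, and
    overweight children recover at a fixed treatment/behavior-change rate."""
    for _ in range(years):
        new_cases = (beta_cc * child_ow + beta_ac * adult_ow) * (1.0 - child_ow)
        child_ow += new_cases - recovery * child_ow
    return child_ow

print(round(simulate(), 3))  # childhood overweight prevalence after 10 simulated years
```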
Modeling Social Transmission Dynamics of Unhealthy Behaviors for Evaluating Prevention and Treatment Interventions on Childhood Obesity

PubMed Central

Frerichs, Leah M.; Araz, Ozgur M.; Huang, Terry T.-K.

2013-01-01

Research evidence indicates that obesity has spread through social networks, but lever points for interventions based on overlapping networks are not well studied. The objective of our research was to construct and parameterize a system dynamics model of the social transmission of behaviors through adult and youth influence in order to explore hypotheses and identify plausible lever points for future childhood obesity intervention research. Our objectives were: (1) to assess the sensitivity of childhood overweight and obesity prevalence to peer and adult social transmission rates, and (2) to test the effect of combinations of prevention and treatment interventions on the prevalence of childhood overweight and obesity. To address the first objective, we conducted two-way sensitivity analyses of adult-to-child and child-to-child social transmission in relation to childhood overweight and obesity prevalence. For the second objective, alternative combinations of prevention and treatment interventions were tested by varying model parameters of social transmission and weight loss behavior rates. Our results indicated child overweight and obesity prevalence might be slightly more sensitive to the same relative change in the adult-to-child compared to the child-to-child social transmission rate. In our simulations, alternatives with treatment alone, compared to prevention alone, reduced the prevalence of childhood overweight and obesity more after 10 years (1.2–1.8% and 0.2–1.0% greater reduction when targeted at children and adults respectively). Also, as the impact of adult interventions on children was increased, the rank of six alternatives that included adults became better (i.e., resulting in lower 10-year childhood overweight and obesity prevalence) than alternatives that only involved children. The findings imply that social transmission dynamics should be considered when designing both prevention and treatment intervention approaches. Finally, targeting adults may be more efficient, and research should strengthen and expand adult-focused interventions that have a high residual impact on children.
PMID:24358234

Assessing uncertainty in published risk estimates using ...

EPA Pesticide Factsheets

Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective is to characterize model uncertainty by evaluating estimates across published epidemiologic studies of the same cohort. Methods: This analysis was based on 5 studies analyzing a cohort of 2,357 workers employed from 1950-74 in a chromate production plant in Maryland. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates such as age and race were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability within and between model forms. A total of 5 similarly parameterized analyses were considered across model form, and 16 analyses with alternative parameterizations were considered within model form (10 Cox; 6 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients (betas) for 5 similar analyses ranged from 2.47 to 4.33 (mean=2.97, σ²=0.63). Within the 10 Cox models, coefficients ranged from 2.53 to 4.42 (mean=3.29, σ²=0.

Shortcomings with Tree-Structured Edge Encodings for Neural Networks

NASA Technical Reports Server (NTRS)

Hornby, Gregory S.

2004-01-01

In evolutionary algorithms a common method for encoding neural networks is to use a tree-structured assembly procedure for constructing them. Since node operators have difficulties in specifying edge weights and these operators are execution-order dependent, an alternative is to use edge operators. Here we identify three problems with edge operators: in the initialization phase most randomly created genotypes produce an incorrect number of inputs and outputs; variation operators can easily change the number of input/output (I/O) units; and units have a connectivity bias based on their order of creation.
Instead of creating I/O nodes as part of the construction process, we propose using parameterized operators to connect to preexisting I/O units. Results from experiments show that these parameterized operators greatly improve the probability of creating and maintaining networks with the correct number of I/O units, remove the connectivity bias with I/O units and produce better controllers for a goal-scoring task.

Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

DOE Office of Scientific and Technical Information (OSTI.GOV)

Gu, Renliang; Dogandžić, Aleksandar

2015-03-31

We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov's proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
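The sketch below mirrors only the block structure of the algorithm described above, applied to a simplified linear surrogate; the soft-threshold and nonnegativity projection stand in for the Nesterov proximal-gradient and active-set steps, and none of the operators correspond to the actual polychromatic CT model:

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def alternating_reconstruction(A, B, y, n_iter=50, step=1e-2, lam=1e-3):
    """Block coordinate descent on a toy model y ~ A @ density + B @ coeffs,
    with a sparsity-promoting penalty on the density map (proxy for wavelet
    sparsity) and nonnegative spectrum coefficients."""
    density = np.zeros(A.shape[1])
    coeffs = np.zeros(B.shape[1])
    for _ in range(n_iter):
        # gradient step on the density map, then sparsify
        r = A @ density + B @ coeffs - y
        density = soft_threshold(density - step * (A.T @ r), lam)
        # gradient step on the coefficients, projected onto the nonnegative orthant
        r = A @ density + B @ coeffs - y
        coeffs = np.maximum(coeffs - step * (B.T @ r), 0.0)
    return density, coeffs
```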
Patch-based image reconstruction for PET using prior-image derived dictionaries

NASA Astrophysics Data System (ADS)

Tahaei, Marzieh S.; Reader, Andrew J.

2016-09-01

In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject's MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
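For readers unfamiliar with MLEM under a re-parameterized image, a hedged sketch of the coefficient update follows; the dictionary V here is a generic nonnegative basis, not the MR-derived patch dictionary of the paper, and the system matrix is assumed elementwise nonnegative as usual in emission tomography:

```python
import numpy as np

def mlem_coefficients(A, V, y, n_iter=30):
    """MLEM for a re-parameterized image x = V @ c with nonnegative coefficients c.

    A: (m, n) system matrix, V: (n, k) basis vectors, y: (m,) measured counts.
    The multiplicative update preserves nonnegativity of the coefficients."""
    c = np.ones(V.shape[1])
    sens = V.T @ (A.T @ np.ones(len(y)))            # sensitivity (normalization) term
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ (V @ c), 1e-12)  # measured / expected counts
        c *= (V.T @ (A.T @ ratio)) / np.maximum(sens, 1e-12)
    return V @ c                                    # reconstructed image
```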
A Bayesian Approach for Measurements of Stray Neutrons at Proton Therapy Facilities: Quantifying Neutron Dose Uncertainty.

PubMed

Dommert, M; Reginatto, M; Zboril, M; Fiedler, F; Helmbrecht, S; Enghardt, W; Lutz, B

2017-11-28

Bonner sphere measurements are typically analyzed using unfolding codes. It is well known that it is difficult to get reliable estimates of uncertainties for standard unfolding procedures. An alternative approach is to analyze the data using Bayesian parameter estimation. This method provides reliable estimates of the uncertainties of neutron spectra leading to rigorous estimates of uncertainties of the dose. We extend previous Bayesian approaches and apply the method to stray neutrons in proton therapy environments by introducing a new parameterized model which describes the main features of the expected neutron spectra. The parameterization is based on information that is available from measurements and detailed Monte Carlo simulations. The validity of this approach has been validated with results of an experiment using Bonner spheres carried out at the experimental hall of the OncoRay proton therapy facility in Dresden.

Impact of Stochastic Parameterization Schemes on Coupled and Uncoupled Climate Simulations with the Community Earth System Model

NASA Astrophysics Data System (ADS)

Christensen, H. M.; Berner, J.; Coleman, D.; Palmer, T.

2015-12-01

Stochastic parameterizations have been used for more than a decade in atmospheric models to represent the variability of unresolved sub-grid processes. They have a beneficial effect on the spread and mean state of medium- and extended-range forecasts (Buizza et al. 1999, Palmer et al. 2009). There is also increasing evidence that stochastic parameterization of unresolved processes could be beneficial for the climate of an atmospheric model through noise-enhanced variability, noise-induced drift (Berner et al. 2008), and by enabling the climate simulator to explore other flow regimes (Christensen et al. 2015; Dawson and Palmer 2015). We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. The SPPT scheme accounts for uncertainty in the CAM physical parameterization schemes, including the convection scheme, by perturbing the parametrised temperature, moisture and wind tendencies with a multiplicative noise term. SPPT results in a large improvement in the variability of the CAM4 modeled climate. In particular, SPPT results in a significant improvement to the representation of the El Niño-Southern Oscillation in CAM4, improving the power spectrum, as well as both the inter- and intra-annual variability of tropical Pacific sea surface temperatures. References: Berner, J., Doblas-Reyes, F. J., Palmer, T. N., Shutts, G. J., & Weisheimer, A., 2008. Phil. Trans. R. Soc A, 366, 2559-2577. Buizza, R., Miller, M. and Palmer, T. N., 1999. Q.J.R. Meteorol. Soc., 125, 2887-2908. Christensen, H. M., I. M. Moroz & T. N. Palmer, 2015. Clim. Dynam., doi: 10.1007/s00382-014-2239-9. Dawson, A. and T. N. Palmer, 2015. Clim. Dynam., doi: 10.1007/s00382-014-2238-x. Palmer, T.N., R. Buizza, F. Doblas-Reyes, et al., 2009. ECMWF technical memorandum 598.
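A minimal sketch of the multiplicative-noise idea behind SPPT follows; the scalar AR(1) pattern is a stand-in for the spatially correlated spectral patterns used operationally, and the numerical values are placeholders:

```python
import numpy as np

def sppt_perturb(tendencies, r):
    """Perturb parameterized tendencies multiplicatively: X' = (1 + r) * X.
    The same perturbation r is applied to the temperature, moisture and wind
    tendencies so that their mutual consistency is preserved."""
    return {name: (1.0 + r) * tend for name, tend in tendencies.items()}

def ar1_update(r, phi=0.95, sigma=0.3, rng=np.random.default_rng()):
    """Evolve the perturbation between timesteps with an AR(1) process
    (a scalar stand-in for the correlated 2-D pattern used in practice)."""
    return phi * r + sigma * np.sqrt(1.0 - phi**2) * rng.normal()
```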
Intercomparison Project on Parameterizations of Large-Scale Dynamics for Simulations of Tropical Convection

NASA Astrophysics Data System (ADS)

Sobel, A. H.; Wang, S.; Bellon, G.; Sessions, S. L.; Woolnough, S.

2013-12-01

Parameterizations of large-scale dynamics have been developed in the past decade for studying the interaction between tropical convection and large-scale dynamics, based on our physical understanding of the tropical atmosphere. A principal advantage of these methods is that they offer a pathway to attack the key question of what controls large-scale variations of tropical deep convection. These methods have been used with both single column models (SCMs) and cloud-resolving models (CRMs) to study the interaction of deep convection with several kinds of environmental forcings. While much has been learned from these efforts, different groups' efforts are somewhat hard to compare. Different models, different versions of the large-scale parameterization methods, and experimental designs that differ in other ways are used. It is not obvious which choices are consequential to the scientific conclusions drawn and which are not. The methods have matured to the point that there is value in an intercomparison project. In this context, the Global Atmospheric Systems Study - Weak Temperature Gradient (GASS-WTG) project was proposed at the Pan-GASS meeting in September 2012. The weak temperature gradient approximation is one method to parameterize large-scale dynamics, and is used in the project name for historical reasons and simplicity, but another method, the damped gravity wave (DGW) method, will also be used in the project. The goal of the GASS-WTG project is to develop community understanding of the parameterization methods currently in use. Their strengths, weaknesses, and functionality in models with different physics and numerics will be explored in detail, and their utility to improve our understanding of tropical weather and climate phenomena will be further evaluated. This presentation will introduce the intercomparison project, including background, goals, and an overview of the proposed experimental design. Interested groups will be invited to join (it will not be too late), and preliminary results will be presented.
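For orientation, the weak temperature gradient approximation mentioned above is often implemented in a relaxed diagnostic form along the following lines (the exact formulation adopted by the intercomparison may differ):

$$ w_{\mathrm{WTG}}(z)\,\frac{\partial \theta_{\mathrm{ref}}}{\partial z} \;=\; \frac{\theta(z) - \theta_{\mathrm{ref}}(z)}{\tau} $$

That is, the large-scale vertical velocity is diagnosed so that adiabatic cooling removes the simulated potential-temperature departure from a reference profile over a short timescale τ, and this velocity then supplies the large-scale advective forcing applied to the SCM or CRM.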
Summary and Findings of the ARL Dynamic Failure Forum

DTIC Science & Technology

2016-09-29

short beam shear, quasi-static indentation, depth of penetration, and V50 limit velocity. Experimental technique suggestions for improvement included ... art in experimental, theoretical, and computational studies of dynamic failure. The forum also focused on identifying technologies and approaches ... Army-specific problems. Experimental exploration of material behavior and an improved ability to parameterize material models is essential to improving

Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

NASA Astrophysics Data System (ADS)

O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

2015-12-01

Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

Topology of the Relative Motion: Circular and Eccentric Reference Orbit Cases

NASA Technical Reports Server (NTRS)

Fontdecaba i Baig, Jordi; Metris, Gilles; Exertier, Pierre

2007-01-01

This paper deals with the topology of the relative trajectories in flight formations. The purpose is to study the different types of relative trajectories, their degrees of freedom, and to give an adapted parameterization. The paper also investigates local circular motions. Even if these exist only when the reference orbit is circular, we extrapolate initial conditions to the eccentric reference orbit case. This alternative approach is complementary to traditional approaches in terms of Cartesian coordinates or differences of orbital elements.

Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems

NASA Technical Reports Server (NTRS)

Pandya, Mohagna J.; Baysal, Oktay

1997-01-01

A gradient-based shape optimization methodology that is intended for practical three-dimensional aerodynamic applications has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency.
A flexible wing geometry model that is based on surface parameterization and planform schedules is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG), and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in the computational time, the results indicate that shape optimization problems, which require large numbers of grid points, can be resolved with a gradient-based approach.

Calculation of phonon dispersion relation using new correlation functional

NASA Astrophysics Data System (ADS)

Jitropas, Ukrit; Hsu, Chung-Hao

2017-06-01

To extend the use of the Local Density Approximation (LDA), a new analytical correlation functional is introduced. Correlation energy is an essential ingredient within density functional theory and is used to determine the ground state energy and other properties, including the phonon dispersion relation. Except for the high and low density limits, the general expression of the correlation energy is unknown, so an approximation approach is required. The accuracy of the modelled system depends on the quality of the correlation energy approximation. Typical correlation functionals used in LDA, such as Vosko-Wilk-Nusair (VWN) and Perdew-Wang (PW), were obtained by parameterizing the near-exact quantum Monte Carlo data of Ceperley and Alder. These functionals are presented in a complex form and are inconvenient to implement. Alternatively, the recently published Chachiyo correlation functional provides results comparable to those much more complicated functionals. In addition, it offers more predictive power, being based on a first-principles approach rather than fitted functionals. Nevertheless, the performance of the Chachiyo formula for calculating the phonon dispersion relation (a key to the thermal properties of materials) has not been tested yet. Here, the implementation of the new correlation functional to calculate the phonon dispersion relation is initiated. Its accuracy and validity will be explored.
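The Chachiyo functional referred to above is simple enough to quote; the sketch below uses the spin-unpolarized constants as they are commonly reported in the literature, and the values should be checked against the original publication before use:

```python
import numpy as np

A = (np.log(2.0) - 1.0) / (2.0 * np.pi**2)   # ~ -0.01554535 hartree
B = 20.4562557                                # commonly quoted paramagnetic fit constant

def chachiyo_correlation(rs):
    """Correlation energy per electron (hartree) of the uniform electron gas,
    epsilon_c(rs) = A * ln(1 + B/rs + B/rs**2), for Wigner-Seitz radius rs."""
    return A * np.log(1.0 + B / rs + B / rs**2)

print(chachiyo_correlation(1.0))   # correlation energy per electron at rs = 1
```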
Evaluation of the site effect with Heuristic Methods

NASA Astrophysics Data System (ADS)

Torres, N. N.; Ortiz-Aleman, C.

2017-12-01

The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can lead to significant contributions to seismic hazard assessment, in order to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion approach which allows separating source and path effects. The generalized inversion (Field and Jacob, 1995) represents one of the alternative methods to estimate the local seismic response, which involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to increase the range of explored solutions in a nonlinear search, as compared to other conventional linear methods. Using the VEOX Network velocity records collected from August 2007 to March 2009, source, path and site parameters corresponding to the amplitude spectra of the S wave of the velocity records are estimated. We can establish that the inverted parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of the fit between observed and calculated spectra, but also when compared to previous work from several authors.

Orders of Magnitude Extension of the Effective Dynamic Range of TDC-Based TOFMS Data Through Maximum Likelihood Estimation

NASA Astrophysics Data System (ADS)

Ipsen, Andreas; Ebbels, Timothy M. D.

2014-10-01

In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time, tasks that may be best addressed through engineering efforts.
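For background (this is the textbook single-ion-counting saturation correction, not the full likelihood model developed by the authors), TDC saturation is often handled by inverting the per-extraction detection probability:

```python
import numpy as np

def corrected_rate(counts, n_extractions):
    """Estimate the mean number of ions per extraction from TDC counts.

    A TDC registers at most one ion per extraction per bin, so the observed
    fraction k/N underestimates the true Poisson mean.  Inverting
    P(at least one ion) = 1 - exp(-lambda) gives the standard correction."""
    frac = np.asarray(counts, dtype=float) / n_extractions
    return -np.log1p(-np.clip(frac, 0.0, 1.0 - 1e-12))

print(corrected_rate(900, 1000))   # ~2.30 ions per extraction, not 0.9
```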
Combining state-and-transition simulations and species distribution models to anticipate the effects of climate change

USGS Publications Warehouse

Miller, Brian W.; Frid, Leonardo; Chang, Tony; Piekielek, N. B.; Hansen, Andrew J.; Morisette, Jeffrey T.

2015-01-01

State-and-transition simulation models (STSMs) are known for their ability to explore the combined effects of multiple disturbances, ecological dynamics, and management actions on vegetation. However, integrating the additional impacts of climate change into STSMs remains a challenge. We address this challenge by combining an STSM with species distribution modeling (SDM). SDMs estimate the probability of occurrence of a given species based on observed presence and absence locations as well as environmental and climatic covariates. Thus, in order to account for changes in habitat suitability due to climate change, we used SDM to generate continuous surfaces of species occurrence probabilities. These data were imported into ST-Sim, an STSM platform, where they dictated the probability of each cell transitioning between alternate potential vegetation types at each time step. The STSM was parameterized to capture additional processes of vegetation growth and disturbance that are relevant to a keystone species in the Greater Yellowstone Ecosystem: whitebark pine (Pinus albicaulis). We compared historical model runs against historical observations of whitebark pine and a key disturbance agent (mountain pine beetle, Dendroctonus ponderosae), and then projected the simulation into the future. Using this combination of correlative and stochastic simulation models, we were able to reproduce historical observations and identify key data gaps. Results indicated that SDMs and STSMs are complementary tools, and combining them is an effective way to account for the anticipated impacts of climate change, biotic interactions, and disturbances, while also allowing for the exploration of management options.

High-resolution RCMs as pioneers for future GCMs

NASA Astrophysics Data System (ADS)

Schar, C.; Ban, N.; Arteaga, A.; Charpilloz, C.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Leutwyler, D.; Lüthi, D.; Piaget, N.; Ruedisuehli, S.; Schlemmer, L.; Schulthess, T. C.; Wernli, H.

2017-12-01

Currently large efforts are underway to refine the horizontal resolution of global and regional climate models to O(1 km), with the intent to represent convective clouds explicitly rather than using semi-empirical parameterizations.
This refinement will move the governing equations closer to first principles and is expected to reduce the uncertainties of climate models. High resolution is particularly attractive in order to better represent critical cloud feedback processes (e.g. related to global climate sensitivity and extratropical summer convection) and extreme events (such as heavy precipitation events, floods, and hurricanes). The presentation will be illustrated using decade-long simulations at 2 km horizontal grid spacing, some of these covering the European continent on a computational mesh with 1536x1536x60 grid points. To accomplish such simulations, use is made of emerging heterogeneous supercomputing architectures, using a version of the COSMO limited-area weather and climate model that is able to run entirely on GPUs. Results show that kilometer-scale resolution dramatically improves the simulation of precipitation in terms of the diurnal cycle and short-term extremes. The modeling framework is used to address changes of precipitation scaling with climate change. It is argued that already today, modern supercomputers would in principle enable global atmospheric convection-resolving climate simulations, provided appropriately refactored codes were available, and provided solutions were found to cope with the rapidly growing output volume. A discussion will be provided of key challenges affecting the design of future high-resolution climate models. It is suggested that km-scale RCMs should be exploited to pioneer this terrain, at a time when GCMs are not yet available at such resolutions. Areas of interest include the development of new parameterization schemes adequate for km-scale resolution, the exploration of new validation methodologies and data sets, the assessment of regional-scale climate feedback processes, and the development of alternative output analysis methodologies.

Prognostic residual mean flow in an ocean general circulation model and its relation to prognostic Eulerian mean flow

DOE PAGES

Saenz, Juan A.; Chen, Qingshan; Ringler, Todd

2015-05-19

Recent work has shown that taking the thickness-weighted average (TWA) of the Boussinesq equations in buoyancy coordinates results in exact equations governing the prognostic residual mean flow where eddy-mean flow interactions appear in the horizontal momentum equations as the divergence of the Eliassen-Palm flux tensor (EPFT). It has been proposed that, given the mathematical tractability of the TWA equations, the physical interpretation of the EPFT, and its relation to potential vorticity fluxes, the TWA is an appropriate framework for modeling ocean circulation with parameterized eddies. The authors test the feasibility of this proposition and investigate the connections between the TWA framework and the conventional framework used in models, where Eulerian mean flow prognostic variables are solved for.
Using the TWA framework as a starting point, this study explores the well-known connections between vertical transfer of horizontal momentum by eddy form drag and eddy overturning by the bolus velocity, used by Greatbatch and Lamb and Gent and McWilliams to parameterize eddies. After implementing the TWA framework in an ocean general circulation model, we verify our analysis by comparing the flows in an idealized Southern Ocean configuration simulated using the TWA and conventional frameworks with the same mesoscale eddy parameterization.

Meta-analysis of diagnostic accuracy studies accounting for disease prevalence: alternative parameterizations and model selection.

PubMed

Chu, Haitao; Nie, Lei; Cole, Stephen R; Poole, Charles

2009-08-15

In a meta-analysis of diagnostic accuracy studies, the sensitivities and specificities of a diagnostic test may depend on the disease prevalence since the severity and definition of disease may differ from study to study due to the design and the population considered. In this paper, we extend the bivariate nonlinear random effects model on sensitivities and specificities to jointly model the disease prevalence, sensitivities and specificities using trivariate nonlinear random-effects models. Furthermore, as an alternative parameterization, we also propose jointly modeling the test prevalence and the predictive values, which reflect the clinical utility of a diagnostic test. These models allow investigators to study the complex relationship among the disease prevalence, sensitivities and specificities; or among test prevalence and the predictive values, which can reveal hidden information about test performance. We illustrate the proposed two approaches by reanalyzing the data from a meta-analysis of radiological evaluation of lymph node metastases in patients with cervical cancer and a simulation study. The latter illustrates the importance of carefully choosing an appropriate normality assumption for the disease prevalence, sensitivities and specificities, or the test prevalence and the predictive values. In practice, it is recommended to use model selection techniques to identify a best-fitting model for making statistical inference. In summary, the proposed trivariate random effects models are novel and can be very useful in practice for meta-analysis of diagnostic accuracy studies.
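As a hedged sketch of the kind of trivariate random-effects structure being described (the link functions and estimation details in the paper may differ): with $N_i$ subjects in study $i$, of whom $n_i$ are diseased,

$$
n_i \sim \mathrm{Binomial}(N_i,\ \pi_i), \qquad
\mathrm{TP}_i \sim \mathrm{Binomial}(n_i,\ \mathrm{Se}_i), \qquad
\mathrm{TN}_i \sim \mathrm{Binomial}(N_i - n_i,\ \mathrm{Sp}_i),
$$

$$
\bigl(\operatorname{logit}\pi_i,\ \operatorname{logit}\mathrm{Se}_i,\ \operatorname{logit}\mathrm{Sp}_i\bigr)^{\mathsf T} \sim \mathcal N(\boldsymbol\mu,\ \boldsymbol\Sigma),
$$

so the 3x3 covariance matrix Σ captures how sensitivity and specificity co-vary with disease prevalence across studies; the alternative parameterization replaces (π, Se, Sp) with the test prevalence and the predictive values.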
Explaining the convector effect in canopy turbulence by means of large-eddy simulation

DOE PAGES

Banerjee, Tirtha; De Roo, Frederik; Mauder, Matthias

2017-06-20

Semi-arid forests are found to sustain a massive sensible heat flux in spite of having a low surface-to-air temperature difference by lowering the aerodynamic resistance to heat transfer (r_H), a property called the canopy convector effect (CCE). In this work large-eddy simulations are used to demonstrate that the CCE appears more generally in canopy turbulence. It is indeed a generic feature of canopy turbulence: r_H of a canopy is found to reduce with increasing unstable stratification, which effectively increases the aerodynamic roughness for the same physical roughness of the canopy. This relation offers a sufficient condition to construct a general description of the CCE. In addition, we review existing parameterizations for r_H from the evapotranspiration literature and test to what extent they are able to capture the CCE, thereby exploring the possibility of an improved parameterization.
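For context, the aerodynamic resistance to heat transfer appearing above is conventionally defined through the bulk sensible-heat-flux relation (a standard definition, not specific to this study):

$$ H \;=\; \rho\, c_p\,\frac{T_s - T_a}{r_H} \qquad\Longleftrightarrow\qquad r_H \;=\; \frac{\rho\, c_p\,(T_s - T_a)}{H}, $$

so a canopy that sustains a large H with only a small surface-air temperature difference must have a small r_H, which is exactly the convector effect.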
Clustering Tree-structured Data on Manifold

PubMed Central

Lu, Na; Miao, Hongyu

2016-01-01

Tree-structured data usually contain both topological and geometrical information, and are necessarily considered on manifold instead of Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called the Topology-Attribute matrix (T-A matrix), so the data clustering task can be conducted on matrix manifold. We incorporate the structure constraints embedded in data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts like the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy.

PMID:26660696

The global aerosol-climate model ECHAM-HAM, version 2: sensitivity to improvements in process representations

NASA Astrophysics Data System (ADS)

Zhang, K.; O'Donnell, D.; Kazil, J.; Stier, P.; Kinne, S.; Lohmann, U.; Ferrachat, S.; Croft, B.; Quaas, J.; Wan, H.; Rast, S.; Feichter, J.

2012-03-01

This paper introduces and evaluates the second version of the global aerosol-climate model ECHAM-HAM. Major changes have been brought into the model, including new parameterizations for aerosol nucleation and water uptake, an explicit treatment of secondary organic aerosols, modified emission calculations for sea salt and mineral dust, the coupling of aerosol microphysics to a two-moment stratiform cloud microphysics scheme, and alternative wet scavenging parameterizations. These revisions extend the model's capability to represent details of the aerosol lifecycle and its interaction with climate. Sensitivity experiments are carried out to analyse the effects of these improvements in the process representation on the simulated aerosol properties and global distribution. The new parameterizations that have the largest impact on the global mean aerosol optical depth and radiative effects turn out to be the water uptake scheme and cloud microphysics. The former leads to a significant decrease of aerosol water contents in the lower troposphere, and consequently smaller optical depth; the latter results in higher aerosol loading and longer lifetime due to weaker in-cloud scavenging. The combined effects of the new/updated parameterizations are demonstrated by comparing the new model results with those from the earlier version, and against observations. Model simulations are evaluated in terms of aerosol number concentrations against measurements collected from twenty field campaigns as well as from fixed measurement sites, and in terms of optical properties against the AERONET measurements. Results indicate a general improvement with respect to the earlier version. The aerosol size distribution and spatial-temporal variance simulated by HAM2 are in better agreement with the observations. Biases in the earlier model version in aerosol optical depth and in the Ångström parameter have been reduced. The paper also points out the remaining model deficiencies that need to be addressed in the future.

Strategy for long-term 3D cloud-resolving simulations over the ARM SGP site and preliminary results

NASA Astrophysics Data System (ADS)

Lin, W.; Liu, Y.; Song, H.; Endo, S.

2011-12-01

Parametric representations of cloud/precipitation processes continue to be needed in climate simulations with increasingly high spatial resolution or with emerging adaptive mesh frameworks, and it is only becoming more critical that such parameterizations be scale-aware.
Continuous cloud measurements at DOE's ARM sites have provided a strong observational basis for novel cloud parameterization research at various scales. Despite significant progress in our observational ability, there are important cloud-scale physical and dynamical quantities that are either not currently observable or insufficiently sampled. To complement the long-term ARM measurements, we have explored an optimal strategy to carry out long-term 3-D cloud-resolving simulations over the ARM SGP site using the Weather Research and Forecasting (WRF) model with multi-domain nesting. The factors that are considered to have important influences on the simulated cloud fields include domain size, spatial resolution, model top, forcing data set, model physics and the growth of model errors. Hydrometeor advection, which may play a significant role in hydrological processes within the observational domain but is often lacking, and the limitations due to the constraint of domain-wide uniform forcing in conventional cloud-system-resolving model simulations, are at least partly accounted for in our approach. Conventional and probabilistic verification approaches are employed first for selected cases to optimize the model's capability of faithfully reproducing the observed mean and statistical distributions of cloud-scale quantities. This then forms the basis of our setup for long-term cloud-resolving simulations over the ARM SGP site. The model results will facilitate parameterization research, as well as understanding and dissecting parameterization deficiencies in climate models.

Probabilistic inversion of electrical resistivity data from bench-scale experiments: On model parameterization for CO2 sequestration monitoring

NASA Astrophysics Data System (ADS)

Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.
    2013-12-01

    Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.
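The petrophysical link in this kind of inversion is typically Archie's law, which maps water saturation to bulk resistivity. The sketch below is illustrative only and is not the authors' MT-DREAM(ZS) setup: it forward-models resistivity from saturation and runs a few plain Metropolis steps on a single saturation parameter; porosity, brine resistivity, and the Archie exponents are assumed placeholder values.

```python
import numpy as np

def archie_resistivity(s_w, phi=0.35, rho_w=0.5, m=1.8, n=2.0):
    """Bulk resistivity (ohm m) via Archie's law: rho = rho_w * phi**(-m) * S_w**(-n).
    Parameter values are placeholders, not the bench-scale experiment's."""
    return rho_w * phi**(-m) * s_w**(-n)

def log_likelihood(s_w, rho_obs, sigma=5.0):
    """Gaussian misfit; implicit uniform prior on S_w in (0.05, 1]."""
    if not (0.05 < s_w <= 1.0):
        return -np.inf
    return -0.5 * ((rho_obs - archie_resistivity(s_w)) / sigma) ** 2

rng = np.random.default_rng(0)
rho_obs = archie_resistivity(0.6) + rng.normal(0.0, 5.0)   # one synthetic datum

# Plain Metropolis sampling of the posterior of a single saturation value
s_current, samples = 0.9, []
for _ in range(5000):
    s_prop = s_current + rng.normal(0.0, 0.05)
    if np.log(rng.uniform()) < log_likelihood(s_prop, rho_obs) - log_likelihood(s_current, rho_obs):
        s_current = s_prop
    samples.append(s_current)

print("posterior mean S_w:", np.mean(samples[1000:]))
```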
122. A Simple Model Framework to Explore the Deeply Uncertain, Local Sea Level Response to Climate Change. A Case Study on New Orleans, Louisiana

    NASA Astrophysics Data System (ADS)

    Bakker, Alexander; Louchard, Domitille; Keller, Klaus

    2016-04-01

    Sea-level rise threatens many coastal areas around the world. The integrated assessment of potential adaptation and mitigation strategies requires a sound understanding of the upper tails and the major drivers of the uncertainties. Global warming causes sea level to rise, primarily due to thermal expansion of the oceans and mass loss of the major ice sheets, smaller ice caps and glaciers. These components show distinctly different responses to temperature changes with respect to response time, threshold behavior, and local fingerprints. Projections of these different components are deeply uncertain. Projected uncertainty ranges strongly depend on (necessary) pragmatic choices and assumptions, e.g. on the applied climate scenarios, on which processes to include and how to parameterize them, and on the error structure of the observations. Competing assumptions are very hard to weigh objectively. Hence, uncertainties of the sea-level response are hard to capture in a single distribution function. The deep uncertainty can be better understood by making the key assumptions explicit. Here we demonstrate this approach using a relatively simple model framework. We present a mechanistically motivated, but simple, model framework that is intended to efficiently explore the deeply uncertain sea-level response to anthropogenic climate change. The model consists of 'building blocks' that represent the major components of the sea-level response and its uncertainties, including threshold behavior. The framework's simplicity enables the simulation of large ensembles, allowing for an efficient exploration of parameter uncertainty and for the simulation of multiple combined adaptation and mitigation strategies. The model framework can skilfully reproduce earlier major sea level assessments, but due to its modular setup it can also be easily utilized to explore high-end scenarios and the effect of competing assumptions and parameterizations.

123. Pion, Kaon, Proton and Antiproton Production in Proton-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.

    2008-01-01

    Inclusive pion, kaon, proton, and antiproton production from proton-proton collisions is studied at a variety of proton energies. Various available parameterizations of Lorentz-invariant differential cross sections as a function of transverse momentum and rapidity are compared with experimental data. The Badhwar and Alper parameterizations are moderately satisfactory for charged pion production. The Badhwar parameterization provides the best fit for charged kaon production. For proton production, the Alper parameterization is best, and for antiproton production the Carey parameterization works best. However, no parameterization is able to fully account for all the data.

124. Concurrent optimization of material spatial distribution and material anisotropy repartition for two-dimensional structures

    NASA Astrophysics Data System (ADS)

    Ranaivomiarana, Narindra; Irisarri, François-Xavier; Bettebghor, Dimitri; Desmorat, Boris

    2018-04-01

    An optimization methodology to find concurrently the material spatial distribution and the material anisotropy repartition is proposed for orthotropic, linear and elastic two-dimensional membrane structures. The shape of the structure is parameterized by a density variable that determines the presence or absence of material. The polar method is used to parameterize a general orthotropic material by its elasticity tensor invariants by change of frame. A global structural stiffness maximization problem, written as a compliance minimization problem, is treated, and a volume constraint is applied. The compliance minimization can be recast as a double minimization of complementary energy. An extension of the alternate directions algorithm is proposed to solve the double minimization problem.
    The algorithm iterates between local minimizations in each element of the structure and global minimizations. Thanks to the polar method, the local minimizations are solved explicitly, providing analytical solutions. The global minimizations are performed with finite element calculations. The method is shown to be straightforward and efficient. Concurrent optimization of the density and anisotropy distribution of a cantilever beam and a bridge is presented.

125. Topology Synthesis of Structures Using Parameter Relaxation and Geometric Refinement

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.

    2007-01-01

    Typically, structural topology optimization problems undergo relaxation of certain design parameters to allow the existence of intermediate-variable optimum topologies. Relaxation permits the use of a variety of gradient-based search techniques and has been shown to guarantee the existence of optimal solutions and eliminate mesh dependencies. This Technical Publication (TP) demonstrates the application of relaxation to a control point discretization of the design workspace for the structural topology optimization process. The control point parameterization with subdivision has been offered as an alternative to the traditional method of a discretized finite element design domain. The principle of relaxation demonstrates the increased utility of the control point parameterization. One of the significant results of the relaxation process offered in this TP is that direct manufacturability of the optimized design will be maintained without the need for designer intervention or translation. In addition, it is shown that relaxation of certain parameters may extend the range of problems that can be addressed; e.g., in permitting limited out-of-plane motion to be included in a path generation problem.

126. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method.
    The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
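The BMA combination described above has a compact form: the averaged estimate is the weighted sum of the per-parameterization estimates, and the total variance splits into within- and between-parameterization terms. A minimal numerical sketch follows; the weights and per-method estimates are hypothetical, not the ABP values.

```python
import numpy as np

# Posterior model weights (e.g., from NLSE-based posterior probabilities) and
# per-parameterization estimates of log-hydraulic-conductivity at one location.
# All numbers are hypothetical.
w = np.array([0.5, 0.3, 0.2])        # model weights, sum to 1
mu = np.array([-4.1, -3.8, -4.4])     # conditional mean from each GP method
var = np.array([0.20, 0.35, 0.15])    # conditional variance from each method

mu_bma = np.sum(w * mu)                          # BMA (model-averaged) mean
within = np.sum(w * var)                         # within-parameterization variance
between = np.sum(w * (mu - mu_bma) ** 2)         # between-parameterization variance
total_var = within + between

print(f"BMA mean={mu_bma:.3f}, within={within:.3f}, between={between:.3f}, total={total_var:.3f}")
```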
127. Ecosystem biogeochemistry model parameterization: Do more flux data result in a better model in predicting carbon flux?

    DOE PAGES

    Zhu, Qing; Zhuang, Qianlai

    2015-12-21

    Reliability of terrestrial ecosystem models highly depends on the quantity and quality of the data that have been used to calibrate the models. Nowadays, in situ observations of carbon fluxes are abundant. However, the knowledge of how much data (data length) and which subset of the time series data (data period) should be used to effectively calibrate the model is still lacking. This study uses the AmeriFlux carbon flux data to parameterize the Terrestrial Ecosystem Model (TEM) with an adjoint-based data assimilation technique for various ecosystem types. Parameterization experiments are thus conducted to explore the impact of both data length and data period on the uncertainty reduction of the posterior model parameters and the quantification of site and regional carbon dynamics. We find that: the model is better constrained when it uses two-year data compared to using one-year data. Further, two-year data is sufficient in calibrating TEM's carbon dynamics, since using three-year data could only marginally improve the model performance at our study sites; the model is better constrained with data that have a higher "climate variability" than with data having a lower one. The climate variability is used to measure the overall possibility of the ecosystem to experience all climatic conditions including drought and extreme air temperatures and radiation; the U.S. regional simulations indicate that the effect of calibration data length on carbon dynamics is amplified at regional and temporal scales, leading to large discrepancies among different parameterization experiments, especially in July and August. Our findings are conditioned on the specific model we used and the calibration sites we selected. The optimal calibration data length may not be suitable for other models. However, this study demonstrates that there may exist a threshold for calibration data length and simply using more data would not guarantee a better model parameterization and prediction. More importantly, climate variability might be an effective indicator of information within the data, which could help data selection for model parameterization. As a result, we believe our findings will benefit the ecosystem modeling community in using multiple-year data to improve model predictability.
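The data-length question can be posed in toy form: calibrate a one-parameter flux model on one versus two years of synthetic daily data and compare the recovered parameter. This is only a schematic illustration of the experimental design, not the TEM adjoint-based assimilation; the driver, parameter, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(730)                                   # two years of daily data
par_true = 0.02                                         # hypothetical "light-use efficiency"
ppfd = 20 + 15 * np.sin(2 * np.pi * days / 365.25)      # synthetic radiation driver
flux_obs = par_true * ppfd + rng.normal(0.0, 0.05, days.size)

def calibrate(n_days):
    """Least-squares estimate of the single parameter using the first n_days of data."""
    x, y = ppfd[:n_days], flux_obs[:n_days]
    return np.sum(x * y) / np.sum(x * x)

for n in (365, 730):
    print(f"{n} days -> estimated parameter {calibrate(n):.4f} (truth {par_true})")
```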
128. Ecosystem biogeochemistry model parameterization: Do more flux data result in a better model in predicting carbon flux?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Qing; Zhuang, Qianlai

    Reliability of terrestrial ecosystem models highly depends on the quantity and quality of the data that have been used to calibrate the models. Nowadays, in situ observations of carbon fluxes are abundant. However, the knowledge of how much data (data length) and which subset of the time series data (data period) should be used to effectively calibrate the model is still lacking. This study uses the AmeriFlux carbon flux data to parameterize the Terrestrial Ecosystem Model (TEM) with an adjoint-based data assimilation technique for various ecosystem types. Parameterization experiments are thus conducted to explore the impact of both data length and data period on the uncertainty reduction of the posterior model parameters and the quantification of site and regional carbon dynamics. We find that: the model is better constrained when it uses two-year data compared to using one-year data. Further, two-year data is sufficient in calibrating TEM's carbon dynamics, since using three-year data could only marginally improve the model performance at our study sites; the model is better constrained with data that have a higher "climate variability" than with data having a lower one. The climate variability is used to measure the overall possibility of the ecosystem to experience all climatic conditions including drought and extreme air temperatures and radiation; the U.S. regional simulations indicate that the effect of calibration data length on carbon dynamics is amplified at regional and temporal scales, leading to large discrepancies among different parameterization experiments, especially in July and August. Our findings are conditioned on the specific model we used and the calibration sites we selected. The optimal calibration data length may not be suitable for other models. However, this study demonstrates that there may exist a threshold for calibration data length and simply using more data would not guarantee a better model parameterization and prediction. More importantly, climate variability might be an effective indicator of information within the data, which could help data selection for model parameterization. As a result, we believe our findings will benefit the ecosystem modeling community in using multiple-year data to improve model predictability.

129. A ubiquitous ice size bias in simulations of tropical deep convection

    NASA Astrophysics Data System (ADS)

    Stanford, McKenna W.; Varble, Adam; Zipser, Ed; Strapp, J. Walter; Leroy, Delphine; Schwarzenboeck, Alfons; Potts, Rodney; Protat, Alain

    2017-08-01

    The High Altitude Ice Crystals - High Ice Water Content (HAIC-HIWC) joint field campaign produced aircraft retrievals of total condensed water content (TWC), hydrometeor particle size distributions (PSDs), and vertical velocity (w) in high ice water content regions of mature and decaying tropical mesoscale convective systems (MCSs). The resulting dataset is used here to explore causes of the commonly documented high bias in radar reflectivity within cloud-resolving simulations of deep convection. This bias has been linked to overly strong simulated convective updrafts lofting excessive condensate mass but is also modulated by parameterizations of hydrometeor size distributions, single-particle properties, species separation, and microphysical processes. Observations are compared with three Weather Research and Forecasting model simulations of an observed MCS using different microphysics parameterizations while controlling for w, TWC, and temperature. Two popular bulk microphysics schemes (Thompson and Morrison) and one bin microphysics scheme (fast spectral bin microphysics) are compared. For temperatures between -10 and -40 °C and TWC > 1 g m-3, all microphysics schemes produce median mass diameters (MMDs) that are generally larger than observed, and the precipitating ice species that controls this size bias varies by scheme, temperature, and w. Despite a much greater number of samples, all simulations fail to reproduce observed high-TWC conditions ( > 2 g m-3) between -20 and -40 °C in which only a small fraction of condensate mass is found in relatively large particle sizes greater than 1 mm in diameter. Although more mass is distributed to large particle sizes relative to those observed across all schemes when controlling for temperature, w, and TWC, differences with observations are significantly variable between the schemes tested. As a result, this bias is hypothesized to partly result from errors in parameterized hydrometeor PSD and single-particle properties, but because it is present in all schemes, it may also partly result from errors in parameterized microphysical processes present in all schemes.
    Because of these ubiquitous ice size biases, the frequently used microphysical parameterizations evaluated in this study inherently produce a high bias in convective reflectivity for a wide range of temperatures, vertical velocities, and TWCs.
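The median mass diameter discussed above can be computed directly from a size distribution once a mass-dimension relation is assumed. The snippet below does this numerically for a generic inverse-exponential PSD N(D) = N0 exp(-λD) with a power-law mass relation m = a D^b; the coefficients are illustrative placeholders, not the values used by any of the three schemes compared in the paper.

```python
import numpy as np

def median_mass_diameter(n0, lam, a=0.0121, b=1.9, d_max=0.05, n_bins=4000):
    """MMD (m) of an exponential PSD N(D) = n0*exp(-lam*D) with mass m(D) = a*D**b.
    a, b are illustrative placeholders in SI units."""
    d = np.linspace(1e-6, d_max, n_bins)
    mass_density = a * d**b * n0 * np.exp(-lam * d)   # mass per unit size interval
    cum = np.cumsum(mass_density)
    cum /= cum[-1]                                    # cumulative mass fraction
    return d[np.searchsorted(cum, 0.5)]               # diameter at 50% of the mass

# Example: intercept 1e7 m^-4, slope 2000 m^-1
print("MMD (mm):", 1e3 * median_mass_diameter(n0=1e7, lam=2000.0))
```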
130. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    NASA Astrophysics Data System (ADS)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

    An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields, so that the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (at daily and monthly scales) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially conflictive subregions where precipitation is scarcely captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very similar results for either temperature or precipitation, and no configuration seems to outperform the others, both for the whole region and for every season. Nevertheless, some marked differences between areas within the domain appear when analyzing certain physics options, particularly for precipitation. Some of the physics options, such as radiation, have little impact on model performance with respect to precipitation, and results do not vary when the scheme is modified. On the other hand, cumulus and boundary layer parameterizations are responsible for most of the differences obtained between configurations. Acknowledgements: The Spanish Ministry of Science and Innovation, with additional support from the European Community Funds (FEDER), project CGL2007-61151/CLI, and the Regional Government of Andalusia project P06-RNM-01622, have financed this study. The "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), Universidad de Granada, has provided the computing time. Key words: MM5 mesoscale model, parameterization schemes, temperature and precipitation, South of Spain.

131. Co-producing simulation models to inform resource management: a case study from southwest South Dakota

    USGS Publications Warehouse

    Miller, Brian W.; Symstad, Amy J.; Frid, Leonardo; Fisichelli, Nicholas A.; Schuurman, Gregor W.

    2017-01-01

    Simulation models can represent complexities of the real world and serve as virtual laboratories for asking "what if…?" questions about how systems might respond to different scenarios. However, simulation models have limited relevance to real-world applications when designed without input from people who could use the simulated scenarios to inform their decisions. Here, we report on a state-and-transition simulation model of vegetation dynamics that was coupled to a scenario planning process and co-produced by researchers, resource managers, local subject-matter experts, and climate change adaptation specialists to explore potential effects of climate scenarios and management alternatives on key resources in southwest South Dakota. Input from management partners and local experts was critical for representing key vegetation types, bison and cattle grazing, exotic plants, fire, and the effects of climate change and management on rangeland productivity and composition, given the paucity of published data on many of these topics. By simulating multiple land management jurisdictions, climate scenarios, and management alternatives, the model highlighted important tradeoffs between grazer density and vegetation composition, as well as between the short- and long-term costs of invasive species management. It also pointed to impactful uncertainties related to the effects of fire and grazing on vegetation. More broadly, a scenario-based approach to model co-production bracketed the uncertainty associated with climate change and ensured that the most important (and impactful) uncertainties related to resource management were addressed. This cooperative study demonstrates six opportunities for scientists to engage users throughout the modeling process to improve model utility and relevance: (1) identifying focal dynamics and variables, (2) developing conceptual model(s), (3) parameterizing the simulation, (4) identifying relevant climate scenarios and management alternatives, (5) evaluating and refining the simulation, and (6) interpreting the results.
    We also reflect on lessons learned and offer several recommendations for future co-production efforts, with the aim of advancing the pursuit of usable science.

132. Assessing the performance of wave breaking parameterizations in shallow waters in spectral wave models

    NASA Astrophysics Data System (ADS)

    Lin, Shangfei; Sheng, Jinyu

    2017-12-01

    Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations were developed for parameterizing the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly-used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are the representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations have reasonable performance in parameterizing depth-induced wave breaking in shallow waters, but with their own limitations and drawbacks. The widely-used parameterization suggested by Battjes and Janssen (1978, BJ78) has a drawback of underpredicting the SWHs in locally-generated wave conditions and overpredicting in remotely-generated wave conditions over flat bottoms. The drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15). But SA15 had relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization with a dependence of the breaker index on the normalized water depth in deep waters similar to SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2%, in comparison with the three best-performing existing parameterizations, whose average scatter indices lie between 9.2% and 13.6%.
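In the Battjes and Janssen (1978) formulation referred to above, the fraction of breaking waves Q_b is defined implicitly by (1 − Q_b)/ln Q_b = −(H_rms/H_max)² with H_max = γ·d. A small root-finding sketch is given below; the values of γ, H_rms, and depth are illustrative, not from the paper's 882-case dataset.

```python
import numpy as np
from scipy.optimize import brentq

def breaking_fraction(h_rms, depth, gamma=0.73):
    """Fraction of breaking waves Q_b from the implicit Battjes-Janssen (1978)
    relation (1 - Q_b)/ln(Q_b) = -(H_rms/H_max)^2, with H_max = gamma*depth."""
    h_max = gamma * depth
    beta2 = (h_rms / h_max) ** 2
    if beta2 >= 1.0:          # saturated surf zone: essentially all waves breaking
        return 1.0
    if beta2 < 0.002:         # Q_b is vanishingly small in this regime
        return 0.0
    f = lambda qb: (1.0 - qb) / np.log(qb) + beta2
    return brentq(f, 1e-300, 1.0 - 1e-12)

for d in (5.0, 2.0, 1.2):
    print(f"depth {d} m -> Q_b = {breaking_fraction(h_rms=0.8, depth=d):.3f}")
```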
133. Impact of physical parameterizations on idealized tropical cyclones in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Reed, K. A.; Jablonowski, C.

    2011-02-01

    This paper explores the impact of the physical parameterization suite on the evolution of an idealized tropical cyclone within the National Center for Atmospheric Research's (NCAR) Community Atmosphere Model (CAM). CAM versions 3.1 and 4 are used to study the development of an initially weak vortex in an idealized environment over a 10-day simulation period within an aqua-planet setup. The main distinction between CAM 3.1 and CAM 4 lies within the physical parameterization of deep convection. CAM 4 now includes a dilute plume Convective Available Potential Energy (CAPE) calculation and Convective Momentum Transport (CMT). The finite-volume dynamical core with 26 vertical levels in aqua-planet mode is used at horizontal grid spacings of 1.0°, 0.5° and 0.25°. It is revealed that CAM 4 produces stronger and larger tropical cyclones by day 10 at all resolutions, with a much earlier onset of intensification when compared to CAM 3.1. At the highest resolution CAM 4 also exhibits changes in the storm's vertical structure, such as an increased outward slope of the wind contours with height, when compared to CAM 3.1. An investigation concludes that the new dilute CAPE calculation in CAM 4 is largely responsible for the changes observed in the development, strength and structure of the tropical cyclone.

134. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information

    PubMed

    Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E

    2011-01-01

    Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or they assume that treatment has a separate effect on every transition. An alternative is to fit a series of models that assume that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we develop a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models are reduced by up to eight- or ninefold compared to the unstructured models, and the expected loss attaching to decision uncertainty by factors of several hundreds. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
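The EVPI for an averaged model can be approximated by simple Monte Carlo over the model indicator and within-model parameters: EVPI = E_θ[max_d NB(d, θ)] − max_d E_θ[NB(d, θ)]. The sketch below uses two hypothetical treatment strategies and two candidate model structures with made-up net-benefit distributions; it is not the WinBUGS asthma analysis from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sim = 100_000

# Two candidate model structures with posterior model probabilities (hypothetical).
model_prob = np.array([0.7, 0.3])
model_idx = rng.choice(2, size=n_sim, p=model_prob)

# Incremental net benefit of treatment B vs A under each structure (hypothetical).
inb = np.where(model_idx == 0,
               rng.normal(500.0, 1500.0, n_sim),    # structure 1
               rng.normal(-200.0, 800.0, n_sim))    # structure 2

nb = np.column_stack([np.zeros(n_sim), inb])        # net benefit of A (baseline 0) and B

evpi = np.mean(np.max(nb, axis=1)) - np.max(np.mean(nb, axis=0))
print(f"model-averaged EVPI per patient: {evpi:.1f}")
```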
135. The response of the SSM/I to the marine environment. Part 2: A parameterization of the effect of the sea surface slope distribution on emission and reflection

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.; Katsaros, Kristina B.

    1994-01-01

    Based on a geometric optics model and the assumption of an isotropic Gaussian surface slope distribution, the component of ocean surface microwave emissivity variation due to large-scale surface roughness is parameterized for the frequencies and approximate viewing angle of the Special Sensor Microwave/Imager. Independent geophysical variables in the parameterization are the effective (microwave frequency dependent) slope variance and the sea surface temperature. Using the same physical model, the change in the effective zenith angle of reflected sky radiation arising from large-scale roughness is also parameterized. Independent geophysical variables in this parameterization are the effective slope variance and the atmospheric optical depth at the frequency in question. Both of the above model-based parameterizations are intended for use in conjunction with empirical parameterizations relating effective slope variance and foam coverage to near-surface wind speed. These empirical parameterizations are the subject of a separate paper.

136. Impact and Crashworthiness Characteristics of Venera Type Landers for Future Venus Missions

    NASA Technical Reports Server (NTRS)

    Schroeder, Kevin; Bayandor, Javid; Samareh, Jamshid

    2016-01-01

    This paper presents an in-depth investigation of the structural design of the Venera 9-14 landers. A complete reverse engineering of the Venera lander was required. The lander was broken down into its fundamental components and analyzed, providing insights into the hidden features of the design. A trade study was performed to find the sensitivity of the lander's overall mass to the variation of several key parameters. For the lander's legs, the location, length, configuration, and number are all parameterized. The size of the impact ring, the radius of the drag plate, and other design features are also parameterized, and all of these features were correlated to the change of mass of the lander. A multi-fidelity design tool for further investigation of the parameterized lander was developed. As a design was passed down from one level to the next, the fidelity, complexity, accuracy, and run time of the model increased. The low-fidelity model was a highly nonlinear analytical model developed to rapidly predict the mass of each design.
    The medium- and high-fidelity models utilized an explicit finite element framework to investigate the performance of various landers upon impact with the surface under a range of landing conditions. This methodology allowed a large variety of designs to be investigated by the analytical model, which identified designs with the optimum structural mass to payload ratio. As promising designs emerged, investigations in the higher-fidelity models focused on establishing their reliability and crashworthiness. The developed design tool efficiently modelled and tested the best concepts for any scenario based on critical Venusian mission requirements and constraints. Through this program, the strengths and weaknesses inherent in the Venera-type landers were thoroughly investigated. Key features identified for the design of robust landers will be used as foundations for the development of the next generation of landers for future exploration missions to Venus.

137. The applicability of the viscous α-parameterization of gravitational instability in circumstellar disks

    NASA Astrophysics Data System (ADS)

    Vorobyov, E. I.

    2010-01-01

    We study numerically the applicability of the effective-viscosity approach for simulating the effect of gravitational instability (GI) in disks of young stellar objects with different disk-to-star mass ratios ξ. We adopt two α-parameterizations for the effective viscosity based on Lin and Pringle [Lin, D.N.C., Pringle, J.E., 1990. ApJ 358, 515] and Kratter et al. [Kratter, K.M., Matzner, Ch.D., Krumholz, M.R., 2008. ApJ 681, 375] and compare the resultant disk structure, disk and stellar masses, and mass accretion rates with those obtained directly from numerical simulations of self-gravitating disks around low-mass (M∗ ∼ 1.0 M⊙) protostars. We find that the effective viscosity can, in principle, simulate the effect of GI in stellar systems with ξ ≲ 0.2-0.3, thus corroborating a similar conclusion by Lodato and Rice [Lodato, G., Rice, W.K.M., 2004. MNRAS 351, 630] that was based on a different α-parameterization. In particular, the Kratter et al. α-parameterization has proven superior to that of Lin and Pringle, because the success of the latter depends crucially on the proper choice of the α-parameter. However, the α-parameterization generally fails in stellar systems with ξ ≳ 0.3, particularly in the Class 0 and I phases of stellar evolution, yielding too small stellar masses and too large disk-to-star mass ratios. In addition, the time-averaged mass accretion rates onto the star are underestimated in the early disk evolution and greatly overestimated in the late evolution. The failure of the α-parameterization in the case of large ξ is caused by the growing strength of low-order spiral modes in massive disks. Only in the late Class II phase, when the magnitude of spiral modes diminishes and the mode-to-mode interaction ensues, may the effective viscosity be used to simulate the effect of GI in stellar systems with ξ ≳ 0.3.
    A simple modification of the effective viscosity that takes into account disk fragmentation can somewhat improve the performance of α-models in the case of large ξ and even approximately reproduce the mass accretion burst phenomenon, the latter being a signature of the early gravitationally unstable stage of stellar evolution [Vorobyov, E.I., Basu, S., 2006. ApJ 650, 956]. However, further numerical experiments are needed to explore this issue.

138. Ground Motion Prediction Equations Empowered by Stress Drop Measurement

    NASA Astrophysics Data System (ADS)

    Miyake, H.; Oth, A.

    2015-12-01

    Significant variation of stress drop is a crucial issue for ground motion prediction equations and probabilistic seismic hazard assessment, since only a few ground motion prediction equations take stress drop into account. In addition to average and sigma studies of stress drop and ground motion prediction equations (e.g., Cotton et al., 2013; Baltay and Hanks, 2014), we explore the 1-to-1 relationship for each earthquake between stress drop and the between-event residual of a ground motion prediction equation. We used the stress drop dataset of Oth (2013) for Japanese crustal earthquakes, ranging from 0.1 to 100 MPa, and the K-NET/KiK-net ground motion dataset evaluated against several ground motion prediction equations with volcanic front treatment. Between-event residuals for ground accelerations and velocities are generally coincident with stress drop, as investigated with the seismic intensity measures of Oth et al. (2015). Moreover, we found faster attenuation of ground accelerations and velocities for large stress drop events over a similar fault distance range and focal depth. This may suggest an alternative parameterization in which stress drop controls the distance attenuation rate in ground motion prediction equations. We also investigate the 1-to-1 relationship and sigma for regional/national-scale stress drop variation and current national-scale ground motion equations.

139. Machine learning landscapes and predictions for patient outcomes

    NASA Astrophysics Data System (ADS)

    Das, Ritankar; Wales, David J.

    2017-07-01

    The theory and computational tools developed to interpret and explore energy landscapes in molecular science are applied to the landscapes defined by local minima for neural networks. These machine learning landscapes correspond to fits of training data, where the inputs are vital signs and laboratory measurements for a database of patients, and the objective is to predict a clinical outcome. In this contribution, we test the predictions obtained by fitting to single measurements, and then to combinations of between 2 and 10 different patient medical data items.
    The effect of including measurements over different time intervals from the 48 h period in question is analysed, and the most recent values are found to be the most important. We also compare results obtained for neural networks as a function of the number of hidden nodes, and for different values of a regularization parameter. The predictions are compared with an alternative convex fitting function, and a strong correlation is observed. The dependence of these results on the patients randomly selected for training and testing decreases systematically with the size of the database available. The machine learning landscapes defined by neural network fits in this investigation have single-funnel character, which probably explains why it is relatively straightforward to obtain the global minimum solution, or a fit that behaves similarly to this optimal parameterization.
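The kind of fit explored in that study, a small feed-forward network whose local minima define the machine learning landscape, can be reproduced in outline with scikit-learn by scanning the number of hidden nodes and the L2 regularization parameter. The data here are synthetic stand-ins for the patient measurements, not the clinical database used by the authors.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 10))                        # stand-ins for vital signs / lab values
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 1000) > 0).astype(int)  # binary outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for hidden in (2, 5, 10):                              # number of hidden nodes
    for alpha in (1e-4, 1e-1):                         # L2 regularization strength
        clf = MLPClassifier(hidden_layer_sizes=(hidden,), alpha=alpha,
                            max_iter=2000, random_state=0).fit(X_tr, y_tr)
        print(f"hidden={hidden:2d} alpha={alpha:g} test accuracy={clf.score(X_te, y_te):.3f}")
```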
140. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; ...

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
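The core of the assumed-PDF approach is that subgrid variability is described by a joint PDF and then sampled rather than resolved. A deliberately stripped-down illustration: assume the subgrid total-water distribution in a grid box is Gaussian, draw Monte Carlo samples, and diagnose cloud fraction and mean condensate from those samples. This omits the multivariate (w, θ_l, q_t) structure and the prognostic microphysics coupling of the actual scheme; all input values are invented.

```python
import numpy as np

def diagnose_cloud(qt_mean, qt_std, q_sat, n_samples=100_000, seed=0):
    """Sample an assumed (here Gaussian) subgrid total-water PDF and diagnose
    cloud fraction and grid-mean condensate; a single-variate toy version of
    the assumed-PDF method."""
    rng = np.random.default_rng(seed)
    qt = rng.normal(qt_mean, qt_std, n_samples)      # subgrid samples of total water
    qc = np.maximum(qt - q_sat, 0.0)                 # condensate where saturated
    return (qc > 0).mean(), qc.mean()

cf, qc_bar = diagnose_cloud(qt_mean=9.0e-3, qt_std=1.0e-3, q_sat=10.0e-3)
print(f"cloud fraction = {cf:.2f}, grid-mean condensate = {qc_bar*1e3:.3f} g/kg")
```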
141. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Höft, J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

142. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Hoft, Jan

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

143. A decision tool for selecting trench cap designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paige, G.B.; Stone, J.J.; Lane, L.J.

    1995-12-31

    A computer-based prototype decision support system (PDSS) is being developed to assist the risk manager in selecting an appropriate trench cap design for waste disposal sites. The selection of the "best" design among feasible alternatives requires consideration of multiple and often conflicting objectives. The methodology used in the selection process consists of: selecting and parameterizing decision variables using data, simulation models, or expert opinion; selecting feasible trench cap design alternatives; and ordering the decision variables and ranking the design alternatives. The decision model is based on multi-objective decision theory and uses a unique approach to order the decision variables and rank the design alternatives. Trench cap designs are evaluated based on federal regulations, hydrologic performance, cover stability and cost.
    Four trench cap designs, which were monitored for a four-year period at Hill Air Force Base in Utah, are used to demonstrate the application of the PDSS and evaluate the results of the decision model. The results of the PDSS, using both data and simulations, illustrate the relative advantages of each of the cap designs and which cap is the "best" alternative for a given set of criteria and a particular importance order of those decision criteria.

144. Results of data base management system parameterized performance testing related to GSFC scientific applications

    NASA Technical Reports Server (NTRS)

    Carchedi, C. H.; Gough, T. L.; Huston, H. A.

    1983-01-01

    The results of a variety of tests designed to demonstrate and evaluate the performance of several commercially available data base management system (DBMS) products compatible with the Digital Equipment Corporation VAX 11/780 computer system are summarized. The tests were performed on the INGRES, ORACLE, and SEED DBMS products employing applications that were similar to scientific applications under development by NASA. The objectives of this testing included determining the strengths and weaknesses of the candidate systems, the performance trade-offs of various design alternatives, and the impact of some installation and environmental (computer related) influences.

145. Multi-scale Modeling of Arctic Clouds

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud-system-resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and to explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small- and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations.
    Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

146. Convective Dynamics and Disequilibrium Chemistry in the Atmospheres of Giant Planets and Brown Dwarfs

    NASA Astrophysics Data System (ADS)

    Bordwell, Baylee; Brown, Benjamin P.; Oishi, Jeffrey S.

    2018-02-01

    Disequilibrium chemical processes significantly affect the spectra of substellar objects. To study these effects, dynamical disequilibrium has been parameterized using the quench and eddy diffusion approximations, but little work has been done to explore how these approximations perform under realistic planetary conditions in different dynamical regimes. As a first step toward addressing this problem, we study the localized, small-scale convective dynamics of planetary atmospheres by direct numerical simulation of fully compressible hydrodynamics with reactive tracers using the Dedalus code. Using polytropically stratified, plane-parallel atmospheres in 2D and 3D, we explore the quenching behavior of different abstract chemical species as a function of the dynamical conditions of the atmosphere as parameterized by the Rayleigh number. We find that in both 2D and 3D, chemical species quench deeper than would be predicted based on simple mixing-length arguments. Instead, it is necessary to employ length scales based on the chemical equilibrium profile of the reacting species in order to predict quench points and perform chemical kinetics modeling in 1D. Based on the results of our simulations, we provide a new length scale, derived from the chemical scale height, that can be used to perform these calculations. This length scale is simple to calculate from known chemical data and makes reasonable predictions for our dynamical simulations.
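In the standard quench/eddy-diffusion picture that the paper re-examines, a species is quenched at the level where the mixing timescale τ_mix = L²/K_zz first exceeds the chemical timescale τ_chem. The schematic comparison below uses an invented chemical-timescale profile and a constant K_zz purely for illustration; the sensitivity to the assumed mixing length L is exactly the point the paper addresses.

```python
import numpy as np

# Illustrative vertical grid (pressure, bar) and profiles -- not from the paper.
p = np.logspace(2, -3, 200)                  # 100 bar at depth down to 1e-3 bar aloft
t_chem = 1.0e3 * p ** -2.0                   # chemical timescale (s), slows with height
H = 2.0e5                                    # assumed scale height (cm)
k_zz = 1.0e8                                 # assumed eddy diffusion coefficient (cm^2/s)

def quench_pressure(length_scale):
    """Pressure (bar) where tau_chem first exceeds tau_mix = L^2/Kzz, scanning upward."""
    tau_mix = length_scale ** 2 / k_zz
    slower_than_mixing = t_chem > tau_mix    # True where chemistry is slower than mixing
    idx = np.argmax(slower_than_mixing)      # first such level, starting from depth
    return p[idx]

for frac in (1.0, 0.5, 0.1):                 # mixing length as a fraction of H
    print(f"L = {frac:.1f} H -> quench level ~ {quench_pressure(frac * H):.3g} bar")
```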
A model to inform management actions as a response to chytridiomycosis-associated decline

USGS Publications Warehouse

Converse, Sarah J.; Bailey, Larissa L.; Mosher, Brittany A.; Funk, W. Chris; Gerber, Brian D.; Muths, Erin L.

2017-01-01

Decision-analytic models provide forecasts of how systems of interest will respond to management. These models can be parameterized using empirical data, but sometimes require information elicited from experts. When evaluating the effects of disease in species translocation programs, expert judgment is likely to play a role because complete empirical information will rarely be available. We illustrate development of a decision-analytic model built to inform decision-making regarding translocations and other management actions for the boreal toad (Anaxyrus boreas boreas), a species with declines linked to chytridiomycosis caused by Batrachochytrium dendrobatidis (Bd). Using the model, we explored the management implications of major uncertainties in this system, including whether there is a genetic basis for resistance to pathogenic infection by Bd, how translocation can best be implemented, and the effectiveness of efforts to reduce the spread of Bd. Our modeling exercise suggested that while selection for resistance to pathogenic infection by Bd could increase numbers of sites occupied by toads, and translocations could increase the rate of toad recovery, efforts to reduce the spread of Bd may have little effect. We emphasize the need to continue developing and parameterizing models necessary to assess management actions for combating chytridiomycosis-associated declines.

A Multi-Case Study of Professional Ethics in Alternative Education: Exploring Perspectives of Alternative School Administrators

ERIC Educational Resources Information Center

Duke, Richard T., IV

2017-01-01

This qualitative case study explored perspectives of alternative school leaders regarding professional ethics and standards. The study researched two components of alternative school leadership: effective alternative school characteristics based on professional standards and making decisions around the best interests of students. This study…

Understanding the Impact of Ground Water Treatment and Evapotranspiration Parameterizations in the NCEP Climate Forecast System (CFS) on Warm Season Predictions

NASA Astrophysics Data System (ADS)

Ek, M. B.; Yang, R.

2016-12-01

Skillful short-term weather forecasts, which rely heavily on quality atmospheric initial conditions, have a fundamental limit of about two weeks owing to the chaotic nature of the atmosphere. Useful forecasts at sub-seasonal to seasonal time scales, on the other hand, require a well-simulated large-scale atmospheric response to slowly varying lower boundary forcings from both the ocean and the land surface. The critical importance of the ocean has been recognized, and ocean indices are used in a variety of climate applications. In contrast, the impact of land surface anomalies, especially soil moisture and the associated evaporation, has proven notably difficult to demonstrate. The Noah Land Surface Model (LSM) is the land component of NCEP CFS version 2 (CFSv2) used for seasonal predictions. The Noah LSM originates from the Oregon State University (OSU) LSM. The evaporation control in the Noah LSM is based on the Penman-Monteith equation, which takes into account solar radiation, relative humidity, air temperature, and soil moisture effects. The Noah LSM is configured with four soil layers with a fixed depth of 2 meters and free drainage at the bottom soil layer. This treatment assumes that the soil water table depth is well within the specified range, and it also potentially misrepresents soil moisture memory effects at seasonal time scales. To overcome this limitation, an unconfined aquifer is attached to the bottom of the soil column to allow the water table to move freely up and down. In addition, in conjunction with the water table, an alternative Ball-Berry photosynthesis-based evaporation parameterization is examined to evaluate the impact of using a different evaporation control methodology. Focusing on the intense summer droughts of 2011 and 2012 in the central US, seasonal ensemble forecast experiments with early May initial conditions are carried out for the two years using an enhanced version of CFSv2, in which the atmospheric component of CFSv2 is coupled to the Noah Multiple-Parameterization (Noah-MP) land model. Noah-MP has different options for ground water and evaporation control parameterizations. The differences will be presented and results will be discussed.
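For orientation, the evaporation control discussed in the preceding entry is built on the Penman-Monteith combination equation. The sketch below writes out the standard textbook form of the latent heat flux; it is a generic illustration, not the Noah LSM source code, and all inputs (net radiation, ground heat flux, resistances, and the example numbers) are assumed values.

    # Generic Penman-Monteith latent heat flux (illustrative, not the Noah LSM code).
    # LE = (Delta*(Rn - G) + rho_a*cp*(e_s - e_a)/r_a) / (Delta + gamma*(1 + r_s/r_a))

    def penman_monteith_le(rn, g, delta, vpd, ra, rs,
                           rho_a=1.2, cp=1004.0, gamma=66.0):
        """Latent heat flux [W m-2].

        rn, g : net radiation and ground heat flux [W m-2]
        delta : slope of the saturation vapour pressure curve [Pa K-1]
        vpd   : vapour pressure deficit e_s - e_a [Pa]
        ra    : aerodynamic resistance [s m-1]
        rs    : bulk surface (canopy/soil) resistance [s m-1]
        gamma : psychrometric constant [Pa K-1]
        """
        numerator = delta * (rn - g) + rho_a * cp * vpd / ra
        denominator = delta + gamma * (1.0 + rs / ra)
        return numerator / denominator

    # Example: a well-watered surface around midday (values are guesses).
    print(penman_monteith_le(rn=500.0, g=50.0, delta=145.0, vpd=1200.0, ra=50.0, rs=70.0))

In these terms, the soil-moisture control enters broadly through the surface resistance, and the Ball-Berry alternative mentioned above ties that resistance to photosynthesis rather than prescribing it directly.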
Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

NASA Astrophysics Data System (ADS)

Yun, Yuxing; Penner, Joyce E.

2012-04-01

A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process, resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux and the net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with the contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions

NASA Astrophysics Data System (ADS)

Nelson, K.; Mechem, D. B.

2014-12-01

Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes that include stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement and test this parameterization in a regional forecast model (NRL COAMPS). Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus both on the relative performance of the three parameterizations and on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
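As a point of reference for the KK2000 scheme named in the preceding entry, warm-rain process rates in that family of parameterizations are power laws in cloud water and droplet number. The snippet below sketches the commonly quoted autoconversion form; the coefficient and exponents are reproduced as they are usually cited from Khairoutdinov and Kogan (2000) and should be checked against the original paper, and the variable names are mine.

    # Sketch of a Khairoutdinov-Kogan (2000) style autoconversion power law
    # (cloud water -> rain water). Coefficient and exponents as commonly quoted;
    # verify against the original paper before use in a model.

    def kk2000_autoconversion(qc, nc):
        """Autoconversion rate dqr/dt [kg kg-1 s-1].

        qc : cloud water mixing ratio [kg kg-1]
        nc : cloud droplet number concentration [cm-3]
        """
        return 1350.0 * qc**2.47 * nc**(-1.79)

    # Example: 0.5 g/kg of cloud water and 100 drops per cm^3.
    print(kk2000_autoconversion(qc=0.5e-3, nc=100.0))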
Data-driven in computational plasticity

NASA Astrophysics Data System (ADS)

Ibáñez, R.; Abisset-Chavanne, E.; Cueto, E.; Chinesta, F.

2018-05-01

Computational mechanics has taken on enormous importance in industry. On one hand, numerical simulations can be seen as a tool that allows industry to perform fewer experiments, reducing costs. On the other hand, the physical processes to be simulated are becoming more complex, requiring new constitutive relationships to capture their behavior. Therefore, when a new material is to be characterized, an open question remains: which constitutive equation should be calibrated. In the present work, model order reduction techniques are exploited to identify the plastic behavior of a material, opening an alternative route to traditional calibration methods. The main objective is to provide a plastic yield function such that the mismatch between experiments and simulations is minimized. Once the experimental results and the parameterization of the plastic yield function are provided, finding the optimal plastic yield function can be seen either as a traditional optimization problem or as an interpolation problem. It is important to highlight that the dimensionality of the problem equals the number of dimensions in the parameterization of the yield function. Thus, the use of sparse interpolation techniques seems almost compulsory.

Impacts of an offshore wind farm on the lower marine atmosphere

NASA Astrophysics Data System (ADS)

Volker, P. J.; Huang, H.; Capps, S. B.; Badger, J.; Hahmann, A. N.; Hall, A. D.

2013-12-01

Due to a continuing increase in energy demand and heightened environmental consciousness, the State of California is seeking out more environmentally friendly energy resources. Strong and persistent winds along California's coast can be harnessed effectively by current wind turbine technology, providing a promising source of alternative energy. Using an advanced wind farm parameterization implemented in the Weather Research & Forecast model, we investigate the potential impacts of a large offshore wind farm on the lower marine atmosphere. Located offshore of the Sonoma Coast in northern California, this theoretical wind farm includes 200 7-megawatt, 125 m hub-height wind turbines, which are able to provide a total of 1.4 GW of power for use in neighboring cities. The wind turbine model (i.e., the Explicit Wake Parameterization originally developed at the Danish Technical University) acts as a source of drag in which the sub-grid-scale velocity deficit expansion is explicitly described. A swath of hub-height velocity deficits and temperature and moisture anomalies extends more than 100 km downstream of the wind farm location. The presence of the large modern wind farm also creates flow distortion upstream, in conjunction with enhanced vertical momentum and scalar transport.

Hindcasting the Madden-Julian Oscillation With a New Parameterization of Surface Heat Fluxes

PubMed Central

Wang, Jingfeng; Lin, Wenshi

2017-01-01

The recently developed maximum entropy production (MEP) model, an alternative parameterization of surface heat fluxes, is incorporated into the Weather Research and Forecasting (WRF) model. A pair of WRF cloud-resolving experiments (5 km grids) using the bulk transfer model (WRF default) and the MEP model of surface heat fluxes are performed to hindcast the October Madden-Julian oscillation (MJO) event observed during the 2011 Dynamics of the MJO (DYNAMO) field campaign. The simulated surface latent and sensible heat fluxes in the MEP and bulk transfer model runs are in general consistent with in situ observations from two research vessels. Compared to the bulk transfer model, the convection envelope is strengthened in the MEP run and shows a more coherent propagation over the Maritime Continent. The simulated precipitable water in the MEP run is in closer agreement with the observations. Precipitation in the MEP run is enhanced during the active phase of the MJO, with significantly reduced regional dry and wet biases. Large-scale ocean evaporation is stronger in the MEP run, leading to stronger boundary layer moistening to the east of the convection center, which facilitates the eastward propagation of the MJO. PMID: 29399269

Dissecting the accountability of parameterized and parameter-free single-hybrid and double-hybrid functionals for photophysical properties of TADF-based OLEDs

NASA Astrophysics Data System (ADS)

Alipour, Mojtaba; Karimi, Niloofar

2017-06-01

Organic light emitting diodes (OLEDs) based on thermally activated delayed fluorescence (TADF) emitters are an attractive category of materials that have witnessed a booming development in recent years. In the present contribution, we scrutinize the accountability of parameterized and parameter-free single-hybrid (SH) and double-hybrid (DH) functionals through two formalisms, full time-dependent density functional theory (TD-DFT) and the Tamm-Dancoff approximation (TDA), for the estimation of photophysical properties such as absorption energy, emission energy, zero-zero transition energy, and singlet-triplet energy splitting of TADF molecules. According to our detailed analyses of the performance of SHs based on TD-DFT and TDA, the TDA-based parameter-free SH functionals, PBE0 and TPSS0, with one-third exact-like exchange, turned out to be the best performers among functionals from various rungs at reproducing the experimental data of the benchmark set. Such affordable SH approximations can thus be employed to predict and design TADF molecules with low singlet-triplet energy gaps for OLED applications. From another perspective, considering that both nonlocal exchange and correlation are essential for a more reliable description of large charge-transfer excited states, the applicability of functionals incorporating these terms, namely parameterized and parameter-free DHs, has also been evaluated. Examining the role of exact-like exchange, perturbative-like correlation, solvent effects, and other related factors, we find that the parameterized functionals B2π-PLYP and B2GP-PLYP and the parameter-free models PBE-CIDH and PBE-QIDH perform respectably with respect to the others. Lastly, besides recommending reliable computational protocols for this purpose, we hope this study can pave the way toward further development of other SHs and DHs for theoretical explorations in OLED technology.

Simulating carbon dioxide exchange rates of deciduous tree species: evidence for a general pattern in biochemical changes and water stress response.

PubMed

Reynolds, Robert F; Bauerle, William L; Wang, Ying

2009-09-01

Deciduous trees have a seasonal carbon dioxide exchange pattern that is attributed to changes in leaf biochemical properties. However, it is not known whether the pattern in leaf biochemical properties - maximum Rubisco carboxylation (Vcmax) and electron transport (Jmax) - differs between species. This study explored whether a general pattern of changes in Vcmax, Jmax, and a standardized soil moisture response accounted for carbon dioxide exchange of deciduous trees throughout the growing season. The model MAESTRA was used to examine Vcmax and Jmax of leaves of five deciduous trees, Acer rubrum 'Summer Red', Betula nigra, Quercus nuttallii, Quercus phellos and Paulownia elongata, and their response to soil moisture. MAESTRA was parameterized using data from in situ measurements on organs. Linking the changes in biochemical properties of leaves to the whole tree, MAESTRA integrated the general pattern in Vcmax and Jmax from leaf gas exchange parameters with a standardized soil moisture response to describe carbon dioxide exchange throughout the growing season. The model estimates were tested against measurements made on the five species under both irrigated and water-stressed conditions. Measurements and modelling demonstrate that the seasonal pattern of biochemical activity in leaves and the soil moisture response can be parameterized with straightforward general relationships. Over the course of the season, differences in carbon exchange between measured and modelled values were within 6-12% under well-watered conditions and 2-25% under water stress conditions. Hence, a generalized seasonal pattern in the leaf-level physiological change of Vcmax and Jmax, and a standardized response to soil moisture, was sufficient to parameterize carbon dioxide exchange for large-scale evaluations. Simplification in parameterization of the seasonal pattern of leaf biochemical activity and soil moisture response of deciduous forest species is demonstrated. This allows reliable modelling of carbon exchange for deciduous trees, thus circumventing the need for extensive gas exchange experiments on different species.

Adaptive Aft Signature Shaping of a Low-Boom Supersonic Aircraft Using Off-Body Pressures

NASA Technical Reports Server (NTRS)

Ordaz, Irian; Li, Wu

2012-01-01

The design and optimization of a low-boom supersonic aircraft using state-of-the-art off-body aerodynamics and sonic boom analysis has long been a challenging problem. The focus of this paper is to demonstrate an effective geometry parameterization scheme and a numerical optimization approach for the aft shaping of a low-boom supersonic aircraft using off-body pressure calculations. A gradient-based numerical optimization algorithm that models the objective and constraints as response surface equations is used to drive the aft ground signature toward a ramp shape. The design objective is the minimization of the variation between the ground signature and the target signature, subject to several geometric and signature constraints. The target signature is computed by using a least-squares regression of the aft portion of the ground signature. The parameterization and the deformation of the geometry are performed with a NASA in-house shaping tool. The optimization algorithm uses the shaping tool to drive the geometric deformation of a horizontal tail with a parameterization scheme that consists of seven camber design variables and an additional design variable that describes the spanwise location of the midspan section. The demonstration cases show that numerical optimization using state-of-the-art off-body aerodynamic calculations is not only feasible and repeatable but also allows the exploration of complex design spaces for which a knowledge-based design method becomes less effective.

Final Technical Report for Department of Energy Award DE-SC0010857, "Towards parameterization of root-rock hydrologic interactions in the Earth System Model"

DOE Office of Scientific and Technical Information (OSTI.GOV)

Fung, Inez

2016-12-05

The goal of the project is to understand how plants survive droughts, as the decimation of transpiration could shift the surface energy balance from latent heat to sensible heat, leading to warming of the lower atmosphere and amplification of drought. The hypothesis we investigated is that there is a store of moisture in the weathered bedrock below the organic soil mantle, so that plants with roots deep enough to access this moisture reservoir could survive drought. We developed a new stochastic parameterization of hydraulic conductivity that introduces heterogeneity into the traditional formulation and captures the preferential flow through the weathered bedrock (Vrettas and Fung, 2015). With the new parameterization in the Richards equation, we succeeded in reproducing the fluctuations of the water table at seven well locations over six years. We also carried out a series of model experiments that explore how subsurface properties impact evapotranspiration (ET) in a Mediterranean climate, where a significant portion of ET is observed to take place in the dry and sunny summer when precipitation is insufficient to meet the demand. Our results show that hydraulic redistribution is important for sustaining ET in the dry seasons when the vertical gradient in water potential is large. The results highlight the importance of lithology, species composition and root function for ET, especially under dry conditions.

Solar radiation, cloudiness and longwave radiation over low-latitude glaciers: implications for mass-balance modelling

NASA Astrophysics Data System (ADS)

Mölg, Thomas; Cullen, Nicolas J.; Kaser, Georg

Broadband radiation schemes (parameterizations) are commonly used tools in glacier mass-balance modelling, but their performance at high altitude in the tropics has not been evaluated in detail. Here we take advantage of a high-quality 2-year record of global radiation (G) and incoming longwave radiation (L↓) measured on Kersten Glacier, Kilimanjaro, East Africa, at 5873 m a.s.l., to optimize parameterizations of G and L↓. We show that the two radiation terms can be related by an effective cloud-cover fraction neff, so G or L↓ can be modelled based on neff derived from measured L↓ or G, respectively. At neff = 1, G is reduced to 35% of clear-sky G, and L↓ increases by 45-65% (depending on altitude) relative to clear-sky L↓. Validation against a 1-year dataset of G and L↓ obtained at 4850 m on Glaciar Artesonraju, Peruvian Andes, yields a satisfactory performance of the radiation scheme. Whether this performance is acceptable for mass-balance studies of tropical glaciers is explored by applying the data from Glaciar Artesonraju to a physically based mass-balance model, which requires, among others, G and L↓ as forcing variables. Uncertainties in modelled mass balance introduced by the radiation parameterizations do not exceed those that can be caused by errors in the radiation measurements. Hence, this paper provides a tool for inclusion in spatially distributed mass-balance modelling of tropical glaciers and/or for extension of radiation data when only G or L↓ is measured.
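The coupling between G and L↓ described in the preceding entry can be illustrated with a small sketch. The code below is not the published scheme: it simply interpolates linearly between the clear-sky and overcast limits quoted in the abstract (G reduced to 35% of its clear-sky value at neff = 1, L↓ enhanced by roughly 50%), and the clear-sky inputs and function names are assumptions.

    # Illustrative-only coupling between global radiation G and incoming
    # longwave L_down via an effective cloud cover, using the overcast limits
    # quoted in the abstract (not the published parameterization itself).

    G_OVERCAST_FACTOR = 0.35      # G at n_eff = 1, as a fraction of clear-sky G
    LDOWN_OVERCAST_FACTOR = 1.50  # L_down at n_eff = 1, mid-range of the quoted 45-65%

    def neff_from_longwave(l_down, l_down_clear):
        """Effective cloud cover diagnosed from measured incoming longwave."""
        n = (l_down / l_down_clear - 1.0) / (LDOWN_OVERCAST_FACTOR - 1.0)
        return min(max(n, 0.0), 1.0)

    def global_radiation(n_eff, g_clear):
        """Global radiation estimated from effective cloud cover (linear blend)."""
        return g_clear * (1.0 - n_eff * (1.0 - G_OVERCAST_FACTOR))

    # Example: measured L_down of 290 W m-2 against a clear-sky value of 230 W m-2.
    n = neff_from_longwave(290.0, 230.0)
    print(n, global_radiation(n, g_clear=1000.0))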
The Effect of an Increased Convective Entrainment Rate on Indian Monsoon Biases in the Met Office Unified Model

NASA Astrophysics Data System (ADS)

Bush, Stephanie; Turner, Andrew; Woolnough, Steve; Martin, Gill

2013-04-01

Global circulation models (GCMs) are a key tool for understanding and predicting monsoon rainfall, now and under future climate change. However, many GCMs show significant, systematic biases in their simulation of monsoon rainfall and dynamics that spin up over very short time scales and persist in the climate mean state. We describe several of these biases as simulated in the Met Office Unified Model and show that they are sensitive to changes in the convective parameterization's entrainment rate. To improve our understanding of the biases and inform efforts to improve convective parameterizations, we explore the reasons for this sensitivity. We show the results of experiments in which we increase the entrainment rate in regions of especially large bias: the western equatorial Indian Ocean, the western north Pacific, and India itself. We use the results to determine whether improvements in biases are due to the local increase in entrainment or are the remote response to the entrainment increase elsewhere in the GCM. We find that feedbacks usually strengthen the local response, but the local response leads to a different mean-state change in different regions. We also show results from experiments that demonstrate the spin-up of the local response, which we use to further understand the response in complex regions such as the western North Pacific. Our work demonstrates that local application of parameterization changes is a powerful tool for understanding their global impact.

Dynamic Biological Functioning Important for Simulating and Stabilizing Ocean Biogeochemistry

NASA Astrophysics Data System (ADS)

Buchanan, P. J.; Matear, R. J.; Chase, Z.; Phipps, S. J.; Bindoff, N. L.

2018-04-01

The biogeochemistry of the ocean exerts a strong influence on the climate by modulating atmospheric greenhouse gases. In turn, ocean biogeochemistry depends on numerous physical and biological processes that change over space and time. Accurately simulating these processes is fundamental for accurately simulating the ocean's role within the climate. However, our simulation of these processes is often simplistic, despite a growing understanding of the underlying biological dynamics. Here we explore how new parameterizations of biological processes affect simulated biogeochemical properties in a global ocean model. We combine 6 different physical realizations with 6 different biogeochemical parameterizations (36 unique ocean states). The biogeochemical parameterizations, all previously published, aim to more accurately represent the response of ocean biology to changing physical conditions. We make three major findings. First, oxygen, carbon, alkalinity, and phosphate fields are more sensitive to changes in the ocean's physical state. Only nitrate is more sensitive to changes in biological processes, and we suggest that assessment protocols for ocean biogeochemical models formally include the marine nitrogen cycle to assess their performance. Second, we show that dynamic variations in the production, remineralization, and stoichiometry of organic matter in response to changing environmental conditions benefit the simulation of ocean biogeochemistry. Third, dynamic biological functioning reduces the sensitivity of biogeochemical properties to physical change. Carbon and nitrogen inventories were 50% and 20% less sensitive to physical changes, respectively, in simulations that incorporated dynamic biological functioning. These results highlight the importance of a dynamic biology for ocean properties and climate.

Analysis of Surface Heterogeneity Effects with Mesoscale Terrestrial Modeling Platforms

NASA Astrophysics Data System (ADS)

Simmer, C.

2015-12-01

An improved understanding of the full variability in the weather and climate system is crucial for reducing the uncertainty in weather forecasting and climate prediction, and for aiding policy makers in developing adaptation and mitigation strategies. A still unquantified part of the uncertainty in predictions from numerical models is caused by neglecting unresolved land surface heterogeneity and subsurface dynamics and their potential impact on the state of the atmosphere. At the same time, mesoscale numerical models using finer horizontal grid resolution [O(1) km] can suffer from inconsistencies and neglected scale dependencies in ABL parameterizations and from unresolved effects of integrated surface-subsurface lateral flow at this scale. Our present knowledge suggests large-eddy simulation (LES) as an eventual solution to overcome the inadequacy of the physical parameterizations of the atmosphere in this transition scale, yet we are constrained by computational resources, memory management, and data volume when using LES over regional domains. For the present, there is a need for scale-aware parameterizations not only in the atmosphere but also in the land surface and subsurface model components. In this study, we use the recently developed Terrestrial Systems Modeling Platform (TerrSysMP) as a numerical tool to analyze the uncertainty in the simulation of surface exchange fluxes and boundary layer circulations at grid resolutions of the order of 1 km, and to explore the sensitivity of atmospheric boundary layer evolution and convective rainfall processes to land surface heterogeneity.

Constraining global dry deposition of ozone: observations and modeling

NASA Astrophysics Data System (ADS)

Silva, S. J.; Heald, C. L.

2016-12-01

Ozone loss through dry deposition to vegetation is a critically important process for both air quality and ecosystem health. Current estimates are that nearly 25% of all surface ozone is destroyed through dry deposition, and billions of dollars are lost annually due to losses of ecosystem services and agricultural yield associated with ozone damage. However, there are still substantial uncertainties regarding the spatial distribution and magnitude of the global depositional flux. As land cover change continues throughout this century, dry deposition of ozone will change in ways that are still poorly understood. Nearly every major atmospheric chemistry model uses a variation of the "resistor in series" parameterization for the calculation of dry deposition. By far the most commonly implemented parameterization is of the form presented in Wesely (1989), which depends on many variables, including land-type lookup tables, solar radiation, leaf area index, temperature, and more. The uncertainties contained within the various parts of this parameterization have to date not been fully explored. A lack of understanding of these uncertainties, coupled with a dearth of routine measurements of ozone deposition, ultimately challenges our ability to understand the impacts of land cover change on surface ozone. In this work, we use a suite of globally distributed observations from the past two decades and the GEOS-Chem chemical transport model to constrain global dry deposition, improve our understanding of these uncertainties, and contextualize the impact of land cover change on ozone concentrations.
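The "resistor in series" structure named in the preceding entry combines an aerodynamic resistance, a quasi-laminar boundary layer resistance, and a bulk surface resistance into a single deposition velocity. The sketch below shows only that top-level structure with made-up resistance values; the actual Wesely (1989) scheme builds the surface resistance from many parallel pathways and land-type tables that are not reproduced here.

    # Top-level structure of a resistance-in-series dry deposition velocity:
    # v_d = 1 / (R_a + R_b + R_c). The surface resistance R_c is the part a
    # Wesely (1989)-type scheme assembles from land-type tables and parallel
    # pathways; here it is simply passed in as a number (illustrative values only).

    def deposition_velocity(r_a, r_b, r_c):
        """Dry deposition velocity [m s-1] from resistances [s m-1]."""
        return 1.0 / (r_a + r_b + r_c)

    def ozone_flux(concentration, r_a, r_b, r_c):
        """Downward ozone flux [concentration units * m s-1]."""
        return deposition_velocity(r_a, r_b, r_c) * concentration

    # Example: a vegetated surface in daytime (resistance values are guesses).
    print(deposition_velocity(r_a=40.0, r_b=30.0, r_c=130.0))  # 0.005 m/s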
Aerosol-cloud interactions in a multi-scale modeling framework

NASA Astrophysics Data System (ADS)

Lin, G.; Ghan, S. J.

2017-12-01

Atmospheric aerosols play an important role in changing the Earth's climate through scattering/absorbing solar and terrestrial radiation and interacting with clouds. However, quantification of the aerosol effects remains one of the most uncertain aspects of current and future climate projection. Much of the uncertainty results from the multi-scale nature of aerosol-cloud interactions, which is very challenging to represent in traditional global climate models (GCMs). In contrast, the multi-scale modeling framework (MMF) provides a viable solution, which explicitly resolves clouds and precipitation in a cloud-resolving model (CRM) embedded in each GCM grid column. In the MMF version of the Community Atmosphere Model version 5 (CAM5), aerosol processes are treated with a parameterization called Explicit Clouds Parameterized Pollutants (ECPP). It uses the cloud/precipitation statistics derived from the CRM to treat the cloud processing of aerosols on the GCM grid. However, this approach treats clouds on the CRM grid but aerosols on the GCM grid, which is inconsistent with the reality that cloud-aerosol interactions occur on the cloud scale. To overcome this limitation, we propose a new aerosol treatment in the MMF, Explicit Clouds Explicit Aerosols (ECEP), in which we resolve both clouds and aerosols explicitly on the CRM grid. We first applied the MMF with ECPP to the Accelerated Climate Modeling for Energy (ACME) model to obtain an MMF version of ACME. Further, we also developed an alternative version of ACME-MMF with ECEP. Based on these two models, we have conducted two simulations: one with ECPP and the other with ECEP. Preliminary results showed that the ECEP simulations tend to predict higher aerosol concentrations than the ECPP simulations, because of more efficient vertical transport from the surface to the upper atmosphere and less efficient wet removal. We also found that cloud droplet number concentrations differ between the two simulations, owing to differences in cloud droplet lifetime. Next, we will explore how the ECEP treatment affects the anthropogenic aerosol forcing, particularly the aerosol indirect forcing, by comparing present-day and pre-industrial simulations.

The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor

DTIC Science & Technology

Celio, Christopher; Patterson, David; Asanović, Krste

2015-06-13

BOOM is a synthesizable, parameterized, superscalar out-of-order RISC-V core designed to serve as the prototypical baseline processor…

Antarctic sub-shelf melt rates via PICO

NASA Astrophysics Data System (ADS)

Reese, Ronja; Albrecht, Torsten; Mengel, Matthias; Asay-Davis, Xylar; Winkelmann, Ricarda

2018-06-01

Ocean-induced melting below ice shelves is one of the dominant drivers of mass loss from the Antarctic Ice Sheet at present. An appropriate representation of sub-shelf melt rates is therefore essential for model simulations of marine-based ice sheet evolution. Continental-scale ice sheet models often rely on simple melt parameterizations, in particular for long-term simulations, when fully coupled ice-ocean interaction becomes computationally too expensive. Such parameterizations can account for the influence of the local depth of the ice-shelf draft or its slope on melting. However, they do not capture the effect of ocean circulation underneath the ice shelf. Here we present the Potsdam Ice-shelf Cavity mOdel (PICO), which simulates the vertical overturning circulation in ice-shelf cavities and thus enables the computation of sub-shelf melt rates consistent with this circulation. PICO is based on an ocean box model that coarsely resolves ice shelf cavities and uses a boundary layer melt formulation. We implement it as a module of the Parallel Ice Sheet Model (PISM) and evaluate its performance under present-day conditions of the Southern Ocean. We identify a set of parameters that yield two-dimensional melt rate fields that qualitatively reproduce the typical pattern of comparably high melting near the grounding line and lower melting or refreezing towards the calving front. PICO captures the wide range of melt rates observed for Antarctic ice shelves, ranging from an average of about 0.1 m a-1 for cold sub-shelf cavities, for example underneath the Ross or Ronne ice shelves, to 16 m a-1 for warm cavities such as in the Amundsen Sea region. This makes PICO a computationally feasible and more physical alternative to melt parameterizations based purely on ice draft geometry.

Influence of parameterized small-scale gravity waves on the migrating diurnal tide in Earth's thermosphere

NASA Astrophysics Data System (ADS)

Yiğit, Erdal; Medvedev, Alexander S.

2017-04-01

Effects of subgrid-scale gravity waves (GWs) on the diurnal migrating tides are investigated from the mesosphere to the upper thermosphere for September equinox conditions, using a general circulation model coupled with the extended spectral nonlinear GW parameterization of Yiğit et al. Simulations with GW effects cut off above the turbopause and with GW effects included in the entire thermosphere have been conducted. GWs appreciably impact the mean circulation and cool the thermosphere by up to 12-18%. GWs significantly affect the winds modulated by the diurnal migrating tide, in particular in the low-latitude mesosphere and lower thermosphere and in the high-latitude thermosphere. These effects depend on the mutual correlation of the diurnal phases of the GW forcing and the tides: GWs can either enhance or reduce the tidal amplitude. In the low-latitude MLT, the correlation between the direction of the deposited GW momentum and the tidal phase is positive due to propagation of a broad spectrum of GW harmonics through the alternating winds. In the Northern Hemisphere high-latitude thermosphere, GWs act against the tide due to an anticorrelation of tidal wind and GW momentum, while in the Southern Hemisphere high latitudes they weakly enhance the tidal amplitude via a combination of a partial correlation of phases and GW-induced changes of the circulation. The variable nature of GW effects on the thermal tide can be captured in GCMs provided that a GW parameterization (1) considers a broad spectrum of harmonics, (2) properly describes their propagation, and (3) correctly accounts for the physics of wave breaking/saturation.

Climate Simulations from Super-parameterized and Conventional General Circulation Models with a Third-order Turbulence Closure

NASA Astrophysics Data System (ADS)

Xu, Kuan-Man; Cheng, Anning

2014-05-01

A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called the "Multiscale Modeling Framework" (MMF). The MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies, because circulations associated with the planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for low-level cloud climatology, seasonal variations and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, which is about 400 times that of a conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between the CRM and CAM5, the IPHOC used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above the PBL. The goal of this study is to compare the simulated climatology from these three models (CAM5, CAM5-IPHOC and SPCAM-IPHOC), with emphasis on low-level clouds and precipitation. Detailed comparisons of scatter diagrams of monthly mean low-level cloudiness, PBL height, surface relative humidity and lower-tropospheric stability (LTS) reveal the relative strengths and weaknesses of the three models for five coastal low-cloud regions. Observations from CloudSat and CALIPSO and the ECMWF Interim reanalysis are used as the truth for the comparisons. We found that the standard CAM5 underestimates cloudiness and produces small cloud fractions at low PBL heights that contradict observations. CAM5-IPHOC tends to overestimate low clouds, but its ranges of LTS and PBL height variations are the most realistic. SPCAM-IPHOC produces the most realistic results, relatively consistent from one region to another. Further comparisons with other atmospheric environmental variables will help reveal the causes of model deficiencies, so that SPCAM-IPHOC results can provide guidance to the other two models.

Toward computational models of magma genesis and geochemical transport in subduction zones

NASA Astrophysics Data System (ADS)

Katz, R.; Spiegelman, M.

2003-04-01

The chemistry of material erupted from subduction-related volcanoes records important information about the processes that lead to its formation at depth in the Earth. Self-consistent numerical simulations provide a useful tool for interpreting these data, as they can explore the non-linear feedbacks between processes that control the generation and transport of magma. A model capable of addressing such issues should include three critical components: (1) a variable-viscosity solid flow solver with smooth and accurate pressure and velocity fields, (2) a parameterization of mass transfer reactions between the solid and fluid phases and (3) a consistent fluid flow and reactive transport code. We report on progress on each of these parts. To handle variable-viscosity solid flow in the mantle wedge, we are adapting a Patankar-based FAS multigrid scheme developed by Albers (2000, J. Comp. Phys.). The pressure field in this scheme is the solution to an elliptic equation on a staggered grid. Thus we expect computed pressure fields to have smooth gradient fields suitable for porous flow calculations, unlike those of commonly used penalty-method schemes. Use of a temperature- and strain-rate-dependent mantle rheology has been shown to have important consequences for the pattern of flow and the temperature structure in the wedge. For computing thermal structure we present a novel scheme that is a hybrid of Crank-Nicolson (CN) and Semi-Lagrangian (SL) methods. We have tested the SLCN scheme on advection across a broad range of Peclet numbers and show the results. This scheme is also useful for low-diffusivity chemical transport. We also describe our parameterization of hydrous mantle melting [Katz et al., G3, 2002 in review]. This parameterization is designed to capture the melting behavior of peridotite-water systems over parameter ranges relevant to subduction. The parameterization incorporates data and intuition gained from laboratory experiments and thermodynamic calculations, yet it remains flexible and computationally efficient. Given accurate solid-flow fields, a parameterization of hydrous melting and a method for calculating thermal structure (enforcing energy conservation), the final step is to integrate these components into a consistent framework for reactive flow and chemical transport in deformable porous media. We present preliminary results for reactive flow in 2-D static and upwelling columns and discuss possible mechanical and chemical consequences of open-system reactive melting with application to arcs.

Evaluation of different parameterizations of the spatial heterogeneity of subsurface storage capacity for hourly runoff simulation in boreal mountainous watershed

NASA Astrophysics Data System (ADS)

Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur

2015-03-01

Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, studies aimed specifically at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments, hereafter called elements) and distributed (1 × 1 km2 grid) setup. We evaluated representation of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods of up to 0.84 and 0.86, respectively, and similarly up to 0.85 and 0.90 for the log-transformed streamflow. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or subgrid heterogeneities provided simulation performance equivalent to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from denser precipitation stations than what is required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in identifying parameterizations based only on calibration to catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data. In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.

Comprehensive assessment of parameterization methods for estimating clear-sky surface downward longwave radiation

NASA Astrophysics Data System (ADS)

Guo, Yamin; Cheng, Jie; Liang, Shunlin

2018-02-01

Surface downward longwave radiation (SDLR) is a key variable for calculating the earth's surface radiation budget. In this study, we evaluated seven widely used clear-sky parameterization methods using ground measurements collected from 71 globally distributed FLUXNET sites. The Bayesian model averaging (BMA) method was also introduced to obtain a multi-model ensemble estimate. As a whole, the parameterization method of Carmona et al. (2014) performs the best, with an average BIAS, RMSE, and R2 of -0.11 W/m2, 20.35 W/m2, and 0.92, respectively, followed by the parameterization methods of Idso (1981), Prata (Q J R Meteorol Soc 122:1127-1151, 1996), Brunt (Q J R Meteorol Soc 58:389-420, 1932), and Brutsaert (Water Resour Res 11:742-744, 1975). The accuracy of the BMA is close to that of the parameterization method of Carmona et al. (2014) and comparable to that of the parameterization method of Idso (1981). The advantage of the BMA is that it achieves balanced results compared to the individual parameterization methods. To fully assess the performance of the parameterization methods, the effects of climate type, land cover, and surface elevation were also investigated. The five parameterization methods and the BMA all failed over land with a tropical climate and high water vapor, and had poor results over forest, wetland, and ice. These methods achieved better results over desert, bare land, cropland, and grass, and had acceptable accuracies for sites at different elevations, except for the parameterization method of Carmona et al. (2014) over high-elevation sites. Thus, a method that can be successfully applied everywhere does not exist.
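To give the flavor of the schemes compared in the preceding entry, the snippet below sketches one representative emissivity-based form, the Brutsaert (1975) parameterization, in which the clear-sky atmospheric emissivity depends on near-surface vapour pressure and air temperature. The 1.24 coefficient and 1/7 exponent are the commonly quoted values and should be verified against the original; the example inputs are arbitrary.

    # Sketch of one clear-sky SDLR parameterization family: emissivity-based
    # schemes of the form L_down = eps(e_a, T_a) * sigma * T_a**4.
    # Here: Brutsaert (1975), eps = 1.24 * (e_a / T_a)**(1/7), e_a in hPa, T_a in K.
    # Other schemes (Brunt, Idso, Prata, ...) differ in the emissivity expression.

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W m-2 K-4]

    def sdlr_brutsaert(e_a_hpa, t_a_kelvin):
        """Clear-sky surface downward longwave radiation [W m-2]."""
        emissivity = 1.24 * (e_a_hpa / t_a_kelvin) ** (1.0 / 7.0)
        return emissivity * SIGMA * t_a_kelvin**4

    # Example: 12 hPa vapour pressure at 288 K.
    print(sdlr_brutsaert(12.0, 288.0))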
Evaluation of wave runup predictions from numerical and parametric models

USGS Publications Warehouse

Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

2014-01-01

Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and the numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
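The parameterized runup model referred to in the preceding entry is usually written in terms of offshore wave height, deep-water wavelength, and foreshore beach slope. The sketch below uses the widely cited Stockdon et al. (2006) form for the 2% exceedance runup; the coefficients are reproduced as commonly reported and should be checked against that paper before use, and the example values are arbitrary.

    import math

    # Sketch of a parameterized extreme-runup estimate in the style of
    # Stockdon et al. (2006): R2 = 1.1 * (setup + swash/2), with setup and swash
    # expressed through H0, L0 and beach slope. Coefficients as commonly quoted;
    # verify against the source.

    def runup_2percent(h0, t0, beach_slope, g=9.81):
        """2% exceedance runup elevation [m].

        h0          : deep-water significant wave height [m]
        t0          : peak wave period [s]
        beach_slope : foreshore beach slope (dimensionless)
        """
        l0 = g * t0**2 / (2.0 * math.pi)  # deep-water wavelength
        setup = 0.35 * beach_slope * math.sqrt(h0 * l0)
        swash = math.sqrt(h0 * l0 * (0.563 * beach_slope**2 + 0.004))
        return 1.1 * (setup + swash / 2.0)

    # Example: 2 m waves, 10 s period, foreshore slope of 0.08.
    print(runup_2percent(h0=2.0, t0=10.0, beach_slope=0.08))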
These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18704927','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18704927"><span>Building alternate protein structures using the elastic network model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Qingyi; Sharp, Kim A</p> <p>2009-02-15</p> <p>We describe a method for efficiently generating ensembles of alternate, all-atom protein structures that (a) differ significantly from the starting structure, (b) have good stereochemistry (bonded geometry), and (c) have good steric properties (absence of atomic overlap). The method uses reconstruction from a series of backbone framework structures that are obtained from a modified elastic network model (ENM) by perturbation along low-frequency normal modes. To ensure good quality backbone frameworks, the single force parameter ENM is modified by introducing two more force parameters to characterize the interaction between the consecutive carbon alphas and those within the same secondary structure domain. The relative stiffness of the three parameters is parameterized to reproduce B-factors, while maintaining good bonded geometry. After parameterization, violations of experimental Calpha-Calpha distances and Calpha-Calpha-Calpha pseudo angles along the backbone are reduced to less than 1%. Simultaneously, the average B-factor correlation coefficient improves to R = 0.77. Two applications illustrate the potential of the approach. (1) 102,051 protein backbones spanning a conformational space of 15 A root mean square deviation were generated from 148 nonredundant proteins in the PDB database, and all-atom models with minimal bonded and nonbonded violations were produced from this ensemble of backbone structures using the SCWRL side chain building program. (2) Improved backbone templates for homology modeling. Fifteen query sequences were each modeled on two targets. For each of the 30 target frameworks, dozens of improved templates could be produced In all cases, improved full atom homology models resulted, of which 50% could be identified blind using the D-Fire statistical potential. (c) 2008 Wiley-Liss, Inc.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.H52A..05N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.H52A..05N"><span>SUMMA and Model Mimicry: Understanding Differences Among Land Models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.</p> <p>2016-12-01</p> <p>Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. 
We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in focusing model development and improvement efforts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23551417','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23551417"><span>Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Robinson, John D; Hall, David W; Wares, John P</p> <p>2013-05-01</p> <p>Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. 
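The rejection-based model selection used in the record above can be illustrated with a short sketch: summary statistics are simulated under each candidate model, and the posterior model probabilities are approximated by each model's share of the simulations closest (in Euclidean distance) to the observed summaries. The simulator, summary statistics, and acceptance counts below are hypothetical stand-ins, not the coalescent machinery used in the study.

import numpy as np

rng = np.random.default_rng(0)

def simulate_summaries(model_id, n):
    """Stand-in simulator that draws summary statistics for a given metapopulation model.
    A real application would run the metapopulation/coalescent simulator here."""
    centers = {0: (0.20, 1.0), 1: (0.50, 1.4), 2: (0.35, 0.8)}  # hypothetical model signatures
    mu = np.array(centers[model_id])
    return mu + rng.normal(scale=0.3, size=(n, 2))

def abc_model_probabilities(observed, n_sims_per_model=100_000, n_keep=1_000):
    """Rejection ABC model selection: each model's posterior probability is approximated
    by its share of the n_keep simulated data sets closest to the observed summaries."""
    sims, labels = [], []
    for m in range(3):
        sims.append(simulate_summaries(m, n_sims_per_model))
        labels.append(np.full(n_sims_per_model, m))
    sims = np.vstack(sims)
    labels = np.concatenate(labels)
    dist = np.linalg.norm(sims - np.asarray(observed), axis=1)  # Euclidean distance to data
    keep = labels[np.argsort(dist)[:n_keep]]
    return np.bincount(keep, minlength=3) / n_keep

# Hypothetical observed summary statistics
print(abc_model_probabilities(observed=(0.45, 1.35)))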
Of the closest 750,000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150,000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography. © 2013 Blackwell Publishing Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20150002721','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20150002721"><span>Correction of Excessive Precipitation over Steep and High Mountains in a GCM: A Simple Method of Parameterizing the Thermal Effects of Subgrid Topographic Variation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chao, Winston C.</p> <p>2015-01-01</p> <p>The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19900058510&hterms=RELATIVITY&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3DRELATIVITY','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19900058510&hterms=RELATIVITY&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3DRELATIVITY"><span>Testing general relativity in space-borne and astronomical laboratories</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Will, Clifford M.</p> <p>1989-01-01</p> <p>The current status of space-based experiments and astronomical observations designed to test the theory of general relativity is surveyed. 
Consideration is given to tests of post-Newtonian gravity, searches for feeble short-range forces and gravitomagnetism, improved measurements of parameterized post-Newtonian parameter values, explorations of post-Newtonian physics, tests of the Einstein equivalence principle, observational tests of post-Newtonian orbital effects, and efforts to detect quadrupole and dipole radiation damping. Recent numerical results are presented in tables.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017OcMod.115...42P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017OcMod.115...42P"><span>Evaluation of scale-aware subgrid mesoscale eddy models in a global eddy-rich model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank</p> <p>2017-07-01</p> <p>Two parameterizations for horizontal mixing of momentum and tracers by subgrid mesoscale eddies are implemented in a high-resolution global ocean model. These parameterizations follow on the techniques of large eddy simulation (LES). The theory underlying one parameterization (2D Leith due to Leith, 1996) is that of enstrophy cascades in two-dimensional turbulence, while the other (QG Leith) is designed for potential enstrophy cascades in quasi-geostrophic turbulence. Simulations using each of these parameterizations are compared with a control simulation using standard biharmonic horizontal mixing.Simulations using the 2D Leith and QG Leith parameterizations are more realistic than those using biharmonic mixing. In particular, the 2D Leith and QG Leith simulations have more energy in resolved mesoscale eddies, have a spectral slope more consistent with turbulence theory (an inertial enstrophy or potential enstrophy cascade), have bottom drag and vertical viscosity as the primary sinks of energy instead of lateral friction, and have isoneutral parameterized mesoscale tracer transport. The parameterization choice also affects mass transports, but the impact varies regionally in magnitude and sign.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1337651-evaluation-surface-flux-parameterizations-long-term-arm-observations','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1337651-evaluation-surface-flux-parameterizations-long-term-arm-observations"><span>Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Liu, Gang; Liu, Yangang; Endo, Satoshi</p> <p>2013-02-01</p> <p>Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. 
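Surface flux parameterizations of the kind evaluated in the study above are typically built on bulk-aerodynamic formulas. The sketch below shows only a generic, neutral-stability version with constant exchange coefficients; it is not one of the six WRF/GCM schemes discussed, which additionally make the coefficients functions of stability (e.g., Monin-Obukhov similarity) and roughness length. All input values are hypothetical.

import numpy as np

RHO_AIR = 1.2    # air density, kg m-3
CP_AIR = 1004.0  # specific heat of air, J kg-1 K-1
LV = 2.5e6       # latent heat of vaporization, J kg-1

def bulk_surface_fluxes(wind, t_sfc, t_air, q_sfc, q_air, c_h=1.5e-3, c_e=1.5e-3):
    """Generic bulk-aerodynamic sensible and latent heat fluxes (W m-2) and Bowen ratio.
    c_h and c_e are illustrative constant exchange coefficients."""
    shf = RHO_AIR * CP_AIR * c_h * wind * (t_sfc - t_air)   # sensible heat flux
    lhf = RHO_AIR * LV * c_e * wind * (q_sfc - q_air)       # latent heat flux
    bowen = shf / lhf if lhf != 0.0 else np.inf
    return shf, lhf, bowen

# Hypothetical daytime Southern Great Plains conditions
print(bulk_surface_fluxes(wind=5.0, t_sfc=302.0, t_air=298.0, q_sfc=0.017, q_air=0.012))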
This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and threemore » U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.« less</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1425461-towards-improved-parameterization-macroscale-hydrologic-model-discontinuous-permafrost-boreal-forest-ecosystem','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1425461-towards-improved-parameterization-macroscale-hydrologic-model-discontinuous-permafrost-boreal-forest-ecosystem"><span>Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...</p> <p>2017-09-14</p> <p>Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which bettermore » represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. 
The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50% in the LowP sub-basin and by up to 10% in the HighP sub-basin relative to the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_9 --> <div id="page_10" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="181"> <li> <p><a href="http://adsabs.harvard.edu/abs/2013ChJME..26..784L"><span>Crashworthiness analysis on alternative square honeycomb structure under axial loading</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Meng; Deng, Zongquan; Guo, Hongwei; Liu, Rongqiang; Ding, Beichen</p> <p>2013-07-01</p> <p>Hexagonal metal honeycomb is widely used in the energy absorption field for its special construction. However, many other metal honeycomb structures also show good energy absorption characteristics. Currently, most research focuses on the hexagonal honeycomb, while few studies examine other honeycomb structures. Therefore, a new alternative square honeycomb is developed to expand non-hexagonal metal honeycomb applications in energy absorption, with the aim of designing low-mass and low-volume energy absorbers. A finite element model of the alternative square honeycomb is built to analyze its specific energy absorption. Because of the diversity of honeycomb structures, a parameterized metal honeycomb finite element analysis program is developed based on the PCL language. 
That program can automatically create the finite element model. Numerical results show that, for the same foil thickness and cell length, the alternative square honeycomb has better specific energy absorption than the hexagonal honeycomb. Using the response surface method, mathematical formulas for the honeycomb crashworthiness properties are obtained, and optimization is performed to find the honeycomb with the maximum specific energy absorption. The optimization results demonstrate that, to absorb the same energy, the alternative square honeycomb requires about 10% less buffer-structure volume than the hexagonal honeycomb. This research provides technical support for the extended application of different honeycombs as crashworthiness structures and is directly relevant to the design of low-volume, low-mass energy absorbers.</p> </li> <li> <p><a href="http://hdl.handle.net/2060/19840008568"><span>Extensions and applications of a second-order landsurface parameterization</span></a></p> <p><a href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Andreou, S. A.; Eagleson, P. S.</p> <p>1983-01-01</p> <p>Extensions and applications of a second-order land surface parameterization, proposed by Andreou and Eagleson, are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested by using the model. A sensitivity analysis with respect to the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.</p> </li> <li> <p><a href="http://hdl.handle.net/2060/20020012439"><span>Development and Testing of Coupled Land-surface, PBL and Shallow/Deep Convective Parameterizations within the MM5</span></a></p> <p><a href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.</p> <p>2000-01-01</p> <p>The objective of this investigation was to study the role of shallow convection in the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. 
Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study and the latter two have been improved significantly to extend their capabilities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMGP31A..04F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMGP31A..04F"><span>Electromagnetic forward modelling for realistic Earth models using unstructured tetrahedral meshes and a meshfree approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Farquharson, C.; Long, J.; Lu, X.; Lelievre, P. G.</p> <p>2017-12-01</p> <p>Real-life geology is complex, and so, even when allowing for the diffusive, low resolution nature of geophysical electromagnetic methods, we need Earth models that can accurately represent this complexity when modelling and inverting electromagnetic data. This is particularly the case for the scales, detail and conductivity contrasts involved in mineral and hydrocarbon exploration and development, but also for the larger scale of lithospheric studies. Unstructured tetrahedral meshes provide a flexible means of discretizing a general, arbitrary Earth model. This is important when wanting to integrate a geophysical Earth model with a geological Earth model parameterized in terms of surfaces. Finite-element and finite-volume methods can be derived for computing the electric and magnetic fields in a model parameterized using an unstructured tetrahedral mesh. A number of such variants have been proposed and have proven successful. However, the efficiency and accuracy of these methods can be affected by the "quality" of the tetrahedral discretization, that is, how many of the tetrahedral cells in the mesh are long, narrow and pointy. This is particularly the case if one wants to use an iterative technique to solve the resulting linear system of equations. One approach to deal with this issue is to develop sophisticated model and mesh building and manipulation capabilities in order to ensure that any mesh built from geological information is of sufficient quality for the electromagnetic modelling. Another approach is to investigate other methods of synthesizing the electromagnetic fields. One such example is a "meshfree" approach in which the electromagnetic fields are synthesized using a mesh that is distinct from the mesh used to parameterized the Earth model. There are then two meshes, one describing the Earth model and one used for the numerical mathematics of computing the fields. This means that there are no longer any quality requirements on the model mesh, which makes the process of building a geophysical Earth model from a geological model much simpler. In this presentation we will explore the issues that arise when working with realistic Earth models and when synthesizing geophysical electromagnetic data for them. 
We briefly consider meshfree methods as a possible means of alleviating some of these issues.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=square&id=EJ963832','ERIC'); return false;" href="https://eric.ed.gov/?q=square&id=EJ963832"><span>A Simple Parameterization of 3 x 3 Magic Squares</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Trenkler, Gotz; Schmidt, Karsten; Trenkler, Dietrich</p> <p>2012-01-01</p> <p>In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFMNG24A..01B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFMNG24A..01B"><span>Dynamically Consistent Parameterization of Mesoscale Eddies This work aims at parameterization of eddy effects for use in non-eddy-resolving ocean models and focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces eastward jet extension of the western boundary currents and its adjacent recirculation zones.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Berloff, P. S.</p> <p>2016-12-01</p> <p>This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with explicitly resolved vigorous eddy field and in the non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces eastward jet extension of the western boundary currents and its adjacent recirculation zones. The parameterization locally approximates transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced eddy forcing exerted on the large-scale flow. We find that spatial pattern and amplitude of each footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. Thus, the assumed ensemble of plunger solutions can be viewed as a simple model for the cumulative effect of the stochastic eddy forcing. 
The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.</p> </li> <li> <p><a href="https://pubs.er.usgs.gov/publication/70189185"><span>The effects of changing land cover on streamflow simulation in Puerto Rico</span></a></p> <p><a href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Van Beusekom, Ashley E.; Hay, Lauren E.; Viger, Roland; Gould, William A.; Collazo, Jaime; Henareh Khalyani, Azad</p> <p>2014-01-01</p> <p>This study quantitatively explores whether land cover changes have a substantive impact on simulated streamflow within the tropical island setting of Puerto Rico. The Precipitation Runoff Modeling System (PRMS) was used to compare streamflow simulations based on five static parameterizations of land cover with those based on dynamically varying parameters derived from four land cover scenes for the period 1953-2012. The PRMS simulations based on static land cover illustrated consistent differences in simulated streamflow across the island. It was determined that the scale of the analysis makes a difference: large regions with localized areas that have undergone dramatic land cover change may show negligible difference in total streamflow, but streamflow simulations using dynamic land cover parameters for a highly altered subwatershed clearly demonstrate the effects of changing land cover on simulated streamflow. Incorporating dynamic parameterization in these highly altered watersheds can reduce the predictive uncertainty in simulations of streamflow using PRMS. Hydrologic models that do not consider the projected changes in land cover may be inadequate for water resource management planning for future conditions.</p> </li> <li> <p><a href="https://www.osti.gov/biblio/1414351-how-we-computenmatters-estimates-mixing-stratified-flows"><span>How we compute N matters to estimates of mixing in stratified flows</span></a></p> <p><a href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.</p> <p></p> <p>Most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency $N$, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of $N$ on turbulence quantities. 
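A minimal sketch of the two ways of defining the background stratification described above is given below: one estimate of N² comes from re-sorting the entire 3-D density field into a single stable background profile, the other from sorting each vertical profile separately. The grid, density field, and reference values are hypothetical, and the 3-D sort is reduced to a single representative column for brevity; this is an illustration of the idea, not the diagnostic code used in the study.

import numpy as np

G = 9.81        # gravitational acceleration, m s-2
RHO_0 = 1025.0  # reference density, kg m-3

def n_squared_profile(rho_sorted, z):
    """N^2 = -(g/rho_0) d(rho)/dz for a stably sorted profile (z increasing upward)."""
    return -(G / RHO_0) * np.gradient(rho_sorted, z)

def n_squared_3d_sort(rho, z):
    """Background N^2 from one sort of the full 3-D field: every parcel is re-stacked
    into a single stable profile, then reduced to one representative column."""
    sorted_all = np.sort(rho, axis=None)[::-1]              # densest values first (bottom)
    background = sorted_all.reshape(rho.shape[0], -1).mean(axis=1)
    return n_squared_profile(background, z)

def n_squared_local_sort(rho, z):
    """Background N^2 from sorting each vertical profile separately, column by column."""
    stable_cols = np.sort(rho, axis=0)[::-1, ...]           # densest at the bottom of each column
    return np.apply_along_axis(n_squared_profile, 0, stable_cols, z)

# Hypothetical 16-level, 8 x 8 column density snapshot (kg m-3), with z in metres
rng = np.random.default_rng(1)
z = np.linspace(0.0, 15.0, 16)
rho = 1025.0 - 0.1 * z[:, None, None] + 0.02 * rng.standard_normal((16, 8, 8))
print(n_squared_3d_sort(rho, z)[:3])
print(n_squared_local_sort(rho, z).mean(axis=(1, 2))[:3])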
It is shown that how $N$ is calculated changes not only the flux Richardson number $R_f$, which is often used to parameterize turbulent mixing, but also the turbulence activity number, or Gibson number $Gi$, leading to potential errors in estimates of the mixing efficiency from $Gi$-based parameterizations.</p> </li> <li> <p><a href="http://adsabs.harvard.edu/abs/2016EGUGA..1817411S"><span>Modeling the relative contributions of secondary ice formation processes to ice crystal number concentrations within mixed-phase clouds</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sullivan, Sylvia; Hoose, Corinna; Nenes, Athanasios</p> <p>2016-04-01</p> <p>Measurements of in-cloud ice crystal number concentrations can be three or four orders of magnitude greater than the in-cloud ice nuclei number concentrations. This discrepancy can be explained by various secondary ice formation processes, which occur after initial ice nucleation, but the relative importance of these processes, and even the exact physics of each, is still unclear. A simple bin microphysics model (2IM) is constructed to investigate these knowledge gaps. 2IM extends the time-lag collision parameterization of Yano and Phillips (2011) to include rime splintering, ice-ice aggregation, and droplet shattering, and to incorporate the aspect ratio evolution as in Jensen and Harrington (2015). The relative contributions of the secondary processes under various conditions are shown. 
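The rime-splintering process listed above is often represented with a Hallett-Mossop-type parameterization. The sketch below uses the commonly quoted rate of roughly 350 splinters per milligram of rime accreted, peaking near -5 °C and vanishing at -3 °C and -8 °C; it is a generic textbook form, not the 2IM implementation, and the riming rate in the example is hypothetical.

import numpy as np

def hallett_mossop_efficiency(temp_c):
    """Triangular temperature efficiency for rime splintering: 1 at -5 C,
    falling linearly to 0 at -3 C and -8 C (a common textbook form)."""
    temp_c = np.asarray(temp_c, float)
    eff = np.where(temp_c > -5.0, (-3.0 - temp_c) / 2.0, (temp_c + 8.0) / 3.0)
    return np.clip(eff, 0.0, 1.0)

def splinter_production_rate(rime_rate_kg_per_s, temp_c, splinters_per_mg=350.0):
    """Number of secondary ice splinters produced per second for a given riming rate."""
    rime_rate_mg_per_s = rime_rate_kg_per_s * 1.0e6
    return splinters_per_mg * hallett_mossop_efficiency(temp_c) * rime_rate_mg_per_s

# Hypothetical riming rate of 2e-9 kg/s on a graupel particle at -5 C
print(splinter_production_rate(2.0e-9, -5.0))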
In particular, temperature-dependent efficiencies are adjusted for ice-ice aggregation versus collision around -15°C, when rime splintering is no longer active, and the effect of aspect ratio on the process weighting is explored. The resulting simulations are intended to guide secondary ice formation parameterizations in larger-scale mixed-phase cloud schemes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19900063522&hterms=moisture+condensation&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dmoisture%2Bcondensation','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19900063522&hterms=moisture+condensation&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dmoisture%2Bcondensation"><span>Development of a two-dimensional zonally averaged statistical-dynamical model. III - The parameterization of the eddy fluxes of heat and moisture</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stone, Peter H.; Yao, Mao-Sung</p> <p>1990-01-01</p> <p>A number of perpetual January simulations are carried out with a two-dimensional zonally averaged model employing various parameterizations of the eddy fluxes of heat (potential temperature) and moisture. The parameterizations are evaluated by comparing these results with the eddy fluxes calculated in a parallel simulation using a three-dimensional general circulation model with zonally symmetric forcing. The three-dimensional model's performance in turn is evaluated by comparing its results using realistic (nonsymmetric) boundary conditions with observations. Branscome's parameterization of the meridional eddy flux of heat and Leovy's parameterization of the meridional eddy flux of moisture simulate the seasonal and latitudinal variations of these fluxes reasonably well, while somewhat underestimating their magnitudes. New parameterizations of the vertical eddy fluxes are developed that take into account the enhancement of the eddy mixing slope in a growing baroclinic wave due to condensation, and also the effect of eddy fluctuations in relative humidity. 
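Eddy flux parameterizations of the kind compared in the record above are often built on a down-gradient (diffusive) closure. The sketch below shows only that generic form; the diffusivity value and the zonal-mean temperature profile are hypothetical, and the Branscome and Leovy schemes referenced in the abstract involve additional baroclinic scaling that is not reproduced here.

import numpy as np

def meridional_eddy_heat_flux(theta, y, k_eddy):
    """Down-gradient closure for the meridional eddy flux of potential temperature,
    v'theta' = -K dtheta/dy. In a real scheme K would come from baroclinic eddy theory."""
    return -k_eddy * np.gradient(theta, y)

# Hypothetical zonal-mean potential temperature decreasing poleward (single level)
y = np.linspace(0.0, 5.0e6, 51)        # meridional distance (m)
theta = 300.0 - 30.0 * y / y[-1]       # potential temperature (K)
flux = meridional_eddy_heat_flux(theta, y, k_eddy=1.0e6)  # K m s-1, with K in m2 s-1
print(flux[:3])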
The new parameterizations, when tested in the two-dimensional model, simulate the seasonal, latitudinal, and vertical variations of the vertical eddy fluxes quite well when compared with the three-dimensional model, and only underestimate the magnitude of the fluxes by 10 to 20 percent.</p> </li> <li> <p><a href="https://www.osti.gov/biblio/1295962-resolution-dependent-behavior-subgrid-scale-vertical-transport-zhang-mcfarlane-convection-parameterization"><span>Resolution-dependent behavior of subgrid-scale vertical transport in the Zhang-McFarlane convection parameterization</span></a></p> <p><a href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Xiao, Heng; Gustafson, Jr., William I.; Hagos, Samson M.</p> <p>2015-04-18</p> <p>To better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution dependence of subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km².</p> </li> <li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/23718005"><span>[Natural succession of vegetation in Tiantong National Forest Park, Zhejiang Province of East China: a simulation study].</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lü, Na; Ni, Jian</p> <p>2013-01-01</p> <p>Using the spatially explicit landscape model LANDIS 6.0 PRO, parameterized with long-term research and observation data from the Tiantong National Station of Forest Ecosystem Observation and Research, this paper simulated the natural succession of evergreen broad-leaved forest in Tiantong National Forest Park, Zhejiang Province, over the next 500 years, analyzed the spatial distribution and age structure of dominant species and major landscapes, and explored the succession pattern of the evergreen broad-leaved forest. In the park, species alternation mostly occurred before the stage of evergreen broad-leaved forest. Pinus massoniana, Quercus fabri, and Liquidambar formosana occupied a large proportion during the early succession, but gradually disappeared as succession proceeded. Schima superba and Castanopsis fargesii became dominant in the late succession and developed into the climax community. Under conditions without disturbances, the community was mainly composed of young forests in the early succession and of mature or over-mature forests in the late succession, implying insufficient regeneration ability of the community. The LANDIS model could be used for simulating the landscape dynamics of evergreen broad-leaved forest in eastern China. 
In the future research, both the model structure and the model parameters should be improved, according to the complexity and diversity of subtropical evergreen broad-leaved forest.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010018567','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010018567"><span>Parameterized Cross Sections for Pion Production in Proton-Proton Collisions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.</p> <p>2000-01-01</p> <p>An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and knowledge of the output particle spectrum is required when given the input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of different cross section parameterizations. Therefore, parameterizations of neutral and charged pion cross sections are provided that give a very accurate description of the experimental data. Lorentz invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1324448','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1324448"><span>Collaborative Research: Reducing tropical precipitation biases in CESM — Tests of unified parameterizations with ARM observations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Larson, Vincent; Gettelman, Andrew; Morrison, Hugh</p> <p></p> <p>In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. 
This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land.more » The resulting model will be compared with ARM observations.« less</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A21H2252W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A21H2252W"><span>The effect of different spectral shape parameterizations of cloud droplet size distribution on first and second aerosol indirect effects in NACR CAM5 and evaluation with satellite data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, M.; Peng, Y.; Xie, X.; Liu, Y.</p> <p>2017-12-01</p> <p>Aerosol cloud interaction continues to constitute one of the most significant uncertainties for anthropogenic climate perturbations. The parameterization of cloud droplet size distribution and autoconversion process from large scale cloud to rain can influence the estimation of first and second aerosol indirect effects in global climate models. We design a series of experiments focusing on the microphysical cloud scheme of NCAR CAM5 (Community Atmospheric Model Version 5) in transient historical run with realistic sea surface temperature and sea ice. We investigate the effect of three empirical, two semi-empirical and one analytical expressions for droplet size distribution on cloud properties and explore the statistical relationships between aerosol optical thickness (AOT) and simulated cloud variables, including cloud top droplet effective radius (CDER), cloud optical depth (COD), cloud water path (CWP). We also introduce the droplet spectral shape parameter into the autoconversion process to incorporate the effect of droplet size distribution on second aerosol indirect effect. Three satellite datasets (MODIS Terra/ MODIS Aqua/ AVHRR) are used to evaluate the simulated aerosol indirect effect from the model. Evident CDER decreasing with significant AOT increasing is found in the east coast of China to the North Pacific Ocean and the east coast of USA to the North Atlantic Ocean. Analytical and semi-empirical expressions for spectral shape parameterization show stronger first aerosol indirect effect but weaker second aerosol indirect effect than empirical expressions because of the narrower droplet size distribution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70179439','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70179439"><span>Observed and simulated hydrologic response for a first-order catchment during extreme rainfall 3 years after wildfire disturbance</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Ebel, Brian A.; Rengers, Francis K.; Tucker, Gregory E.</p> <p>2016-01-01</p> <p>Hydrologic response to extreme rainfall in disturbed landscapes is poorly understood because of the paucity of measurements. A unique opportunity presented itself when extreme rainfall in September 2013 fell on a headwater catchment (i.e., <1 ha) in Colorado, USA that had previously been burned by a wildfire in 2010. 
We compared measurements of soil-hydraulic properties, soil saturation from subsurface sensors, and estimated peak runoff during the extreme rainfall with numerical simulations of runoff generation and subsurface hydrologic response during this event. The simulations were used to explore differences in runoff generation between the wildfire-affected headwater catchment, a simulated unburned case, and for uniform versus spatially variable parameterizations of soil-hydraulic properties that affect infiltration and runoff generation in burned landscapes. Despite 3 years of elapsed time since the 2010 wildfire, observations and simulations pointed to substantial surface runoff generation in the wildfire-affected headwater catchment by the infiltration-excess mechanism while no surface runoff was generated in the unburned case. The surface runoff generation was the result of incomplete recovery of soil-hydraulic properties in the burned area, suggesting recovery takes longer than 3 years. Moreover, spatially variable soil-hydraulic property parameterizations produced longer duration but lower peak-flow infiltration-excess runoff, compared to uniform parameterization, which may have important hillslope sediment export and geomorphologic implications during long duration, extreme rainfall. The majority of the simulated surface runoff in the spatially variable cases came from connected near-channel contributing areas, which was a substantially smaller contributing area than the uniform simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H33B1663Q','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H33B1663Q"><span>Global Effects of SuperParameterization on Hydro-Thermal Land-Atmosphere Coupling on Multiple Timescales and an Amplification of the Bowen Ratio</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Qin, H.; Pritchard, M. S.; Kooperman, G. J.; Parishani, H.</p> <p>2017-12-01</p> <p>Conventional General Circulation Models (GCMs) in the Global Land-Atmosphere Coupling Experiment (GLACE) tend to produce overly strong Land-Atmosphere coupling (L-A coupling) strength. We investigate the effects of cloud SuperParameterization (SP) on L-A coupling on timescales longer than the diurnal where it has been previously shown to have a strong effect. Using the Community Atmosphere Model v3.5 (CAM3.5) and its SuperParameterized counterpart SPCAM3.5, we conducted experiments following the GLACE and Atmospheric Model Intercomparison Project (AMIP) protocols. On synoptic-to-subseasonal timescales, SP significantly mutes hydrologic L-A coupling on a global scale, through the atmospheric segment. But on longer seasonal timescales, SP does not exhibit detectable effects on hydrologic L-A coupling. Two regional effects of SP on thermal L-A coupling are also discovered and explored. Over the Arabian Peninsula, SP strikingly reduces thermal L-A coupling due to a control by mean regional rainfall reduction. Over the Southwestern US and Northern Mexico, however, SP remarkably enhances the thermal L-A coupling independent of rainfall or soil moisture. We argue that the cause may be a previously unrecognized effect of SP to amplify the simulated Bowen ratio. 
Not only does this help reconcile a puzzling local enhancement of thermal L-A coupling over the Southwestern US, but it is also demonstrated to be a robust, global effect of SP over land that is independent of model version and experiment design, and that has important consequences for climate change prediction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhDT.........9J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhDT.........9J"><span>Examining Chaotic Convection with Super-Parameterization Ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jones, Todd R.</p> <p></p> <p>This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant, SP-CAM 2.0. The new configuration (multiple-parameterization-CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large-scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems, such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, lending to examination of those weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. 
    Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large-scale.

  201. Sulfur in Earth's Mantle and Its Behavior During Core Formation

    NASA Technical Reports Server (NTRS)

    Chabot, Nancy L.; Righter, Kevin

    2006-01-01

    The density of Earth's outer core requires that about 5-10% of the outer core be composed of elements lighter than Fe-Ni; proposed choices for the "light element" component of Earth's core include H, C, O, Si, S, and combinations of these elements [e.g. 1]. Though samples of Earth's core are not available, mantle samples contain elemental signatures left behind from the formation of Earth's core. The abundances of siderophile (metal-loving) elements in Earth's mantle have been used to gain insight into the early accretion and differentiation history of Earth, the process by which the core and mantle formed, and the composition of the core [e.g. 2-4]. Similarly, the abundance of potential light elements in Earth's mantle could also provide constraints on Earth's evolution and core composition. The S abundance in Earth's mantle is 250 (±50) ppm [5]. It has been suggested that 250 ppm S is too high to be due to equilibrium core formation in a high pressure, high temperature magma ocean on early Earth and that the addition of S to the mantle from the subsequent accretion of a late veneer is consequently required [6]. However, this earlier work of Li and Agee [6] did not parameterize the metal-silicate partitioning behavior of S as a function of thermodynamic variables, limiting the different pressure and temperature conditions during core formation that could be explored.
    Here, the question of explaining the mantle abundance of S is revisited, through parameterizing existing metal-silicate partitioning data for S and applying the parameterization to core formation in Earth.

  202. Optimization and uncertainty assessment of strongly nonlinear groundwater models with high parameter dimensionality

    NASA Astrophysics Data System (ADS)

    Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun

    2010-10-01

    Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well, and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU-intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
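    The null-space projection at the heart of NSMC can be illustrated with a short sketch. The code below is a minimal conceptual example, not the workflow used in the study above: it assumes a calibrated parameter vector and a sensitivity (Jacobian) matrix are already available, identifies the approximate null space from a singular value decomposition, and projects random parameter perturbations onto that null space so that the calibration fit is approximately preserved. All names (jacobian, p_cal, sv_cutoff, and so on) are illustrative placeholders.

        import numpy as np

        def nsmc_candidates(jacobian, p_cal, n_samples, sv_cutoff=1e-3, spread=1.0, seed=0):
            """Generate parameter sets that approximately preserve the calibrated fit.

            jacobian  : (n_obs, n_par) sensitivity matrix at the calibrated parameters
            p_cal     : (n_par,) calibrated parameter vector
            sv_cutoff : relative singular-value threshold separating solution and null space
            """
            rng = np.random.default_rng(seed)
            _, s, vt = np.linalg.svd(jacobian, full_matrices=True)
            # Columns of V with negligible singular values span the (approximate) null space:
            # moving along them changes the simulated observations very little.
            n_solution_dims = int(np.sum(s > sv_cutoff * s[0]))
            v_null = vt[n_solution_dims:].T          # (n_par, n_null)

            candidates = []
            for _ in range(n_samples):
                p_rand = p_cal + spread * rng.standard_normal(p_cal.size)
                # Keep only the component of the random perturbation lying in the null space.
                delta = v_null @ (v_null.T @ (p_rand - p_cal))
                candidates.append(p_cal + delta)
            return np.array(candidates)

    In practice each candidate would still be run through the (surrogate or full) model and re-checked against a calibration criterion before entering the predictive ensemble, which is where most of the cost of NSMC lies.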
  203. Radiative flux and forcing parameterization error in aerosol-free clear skies

    DOE PAGES

    Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...

    2015-07-03

    This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m^2, while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.

  204. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis

    PubMed

    Takacs, T.; Jüttler, B.

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition, we present a modification scheme for the discretized function space in case of insufficient regularity.
    It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  205. Parameterization Interactions in Global Aquaplanet Simulations

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João

    2018-02-01

    Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics, the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.

  206. On testing two major cumulus parameterization schemes using the CSU Regional Atmospheric Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C. Y. J.; Bossert, J. E.; Winterkamp, J.

    1993-10-01

    One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system.
    We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection in an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes in reproducing the growth and life cycle of a convective system can then be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations which have the model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).

  207. Brain Surface Conformal Parameterization Using Riemann Surface Structure

    PubMed Central

    Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung

    2011-01-01

    In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks.
    PMID: 17679336

  208. Impact of Apex Model parameterization strategy on estimated benefit of conservation practices

    USDA-ARS's Scientific Manuscript database

    Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on claypan soils were developed with the objectives to 1. evaluate model performance of three parameterization strategies on a validation watershed; and 2. compare predictions of water quality benefi...

  209. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somerville, R. C. J.; Iacobellis, S. F.

    2005-03-18

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models.
    One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.

  210. Tropospheric Distribution of Trace Species during the Oxidation Mechanism Observations (OMO-2015) campaign: Model Evaluation and sensitivity simulations

    NASA Astrophysics Data System (ADS)

    Ojha, Narendra; Pozzer, Andrea; Jöckel, Patrick; Fischer, Horst; Zahn, Andreas; Tomsche, Laura; Lelieveld, Jos

    2017-04-01

    The Asian monsoon convection redistributes trace species, affecting the tropospheric chemistry and radiation budget over Asia and downwind as far as the Mediterranean. It remains challenging to model these impacts due to uncertainties, e.g. associated with the convection parameterization and input emissions. Here, we perform a series of numerical experiments using the global ECHAM5/MESSy atmospheric chemistry model (EMAC) to investigate the tropospheric distribution of O3 and related tracers measured during the Oxidation Mechanism Observations (OMO) campaign conducted during July-August 2015. The reference simulation reproduces the spatio-temporal variations to some extent (e.g. r2 = 0.7 for O3, 0.6 for CO). However, this simulation underestimates mean CO in the lower troposphere by about 30 ppbv and overestimates mean O3 by up to 35 ppbv, especially in the middle-upper troposphere. Interestingly, sensitivity simulations with 50% higher biofuel emissions of CO over South Asia had an insignificant effect on the CO underestimation, pointing to sources upwind of South Asia. Use of an alternative convection parameterization is found to significantly improve simulated O3. The study reveals the abilities as well as the limitations of the model to reproduce observations and to study atmospheric chemistry and climate implications of the monsoon.

  211. The role of competition-colonization tradeoffs and spatial heterogeneity in promoting trematode coexistence

    USGS Publications Warehouse

    Mordecai, Erin A.; Jaramillo, Alejandra G.; Ashford, Jacob E.; Hechinger, Ryan F.; Lafferty, Kevin D.

    2016-01-01

    Competition-colonization tradeoffs occur in many systems, and theory predicts that they can strongly promote species coexistence.
    However, there is little empirical evidence that observed competition-colonization tradeoffs are strong enough to maintain diversity in natural systems. This is due in part to a mismatch between theoretical assumptions and biological reality in some systems. We tested whether a competition-colonization tradeoff explains how a diverse trematode guild coexists in California horn snail populations, a system that meets the requisite criteria for the tradeoff to promote coexistence. A field experiment showed that subordinate trematode species tended to have higher colonization rates than dominant species. This tradeoff promoted coexistence in parameterized models but did not fully explain trematode diversity and abundance, suggesting a role for additional diversity maintenance mechanisms. Spatial heterogeneity is an alternative way to promote coexistence if it isolates competing species. We used scale transition theory to expand the competition-colonization tradeoff model to include spatial variation. The parameterized model showed that spatial variation in trematode prevalence did not isolate most species sufficiently to explain the overall high diversity, but could benefit some rare species. Together, the results suggest that several mechanisms combine to maintain diversity, even when a competition-colonization tradeoff occurs.
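    The kind of parameterized model referred to in this record can be illustrated with the classic hierarchical competition-colonization equations of Tilman (1994). The sketch below is a generic textbook version, not the trematode model actually fitted in the study, and the colonization and extinction rates are made-up placeholders chosen only so that the inferior competitor persists by colonizing faster.

        import numpy as np

        def competition_colonization(p, c, m):
            """Hierarchical competition-colonization model (Tilman 1994).

            p : occupancy of each species, ordered from best competitor (index 0) downward
            c : colonization rates; m : extinction (mortality) rates
            Species i can only colonize sites not held by itself or better competitors,
            and is displaced by colonists of any better competitor.
            """
            dp = np.empty_like(p)
            for i in range(len(p)):
                open_sites = 1.0 - np.sum(p[: i + 1])
                displacement = np.sum(c[:i] * p[:i]) * p[i]
                dp[i] = c[i] * p[i] * open_sites - m[i] * p[i] - displacement
            return dp

        # Toy example: the weaker competitor persists because it colonizes much faster.
        c = np.array([1.0, 8.0])     # hypothetical colonization rates
        m = np.array([0.2, 0.2])     # hypothetical extinction rates
        p = np.array([0.1, 0.1])
        dt = 0.01
        for _ in range(50_000):      # simple forward-Euler integration
            p = np.clip(p + dt * competition_colonization(p, c, m), 0.0, 1.0)
        print(p)                     # both occupancies remain positive, i.e. coexistence

    With these placeholder rates the superior competitor equilibrates near 0.8 occupancy and the subordinate near 0.075, which is the basic coexistence outcome the tradeoff is meant to produce.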
  212. Control strategies for wind farm power optimization: LES study

    NASA Astrophysics Data System (ADS)

    Ciri, Umberto; Rotea, Mario; Leonardi, Stefano

    2017-11-01

    Turbines in wind farms operate in off-design conditions as wake interactions occur for particular wind directions. Advanced wind farm control strategies aim at coordinating and adjusting turbine operations to mitigate power losses in such conditions. Coordination is achieved by controlling on upstream turbines either the wake intensity, through the blade pitch angle or the generator torque, or the wake direction, through yaw misalignment. Downstream turbines can be adapted to work in waked conditions and limit power losses, using the blade pitch angle or the generator torque. As wind conditions in wind farm operations may change significantly, it is difficult to determine and parameterize the variations of the coordinated optimal settings. An alternative is model-free control and optimization of wind farms, which does not require any parameterization and can track the optimal settings as conditions vary. In this work, we employ a model-free optimization algorithm, extremum-seeking control, to find the optimal set-points of generator torque, blade pitch and yaw angle for a three-turbine configuration. Large-Eddy Simulations are used to provide a virtual environment to evaluate the performance of the control strategies under realistic, unsteady incoming wind. This work was supported by the National Science Foundation, Grants No. 1243482 (the WINDINSPIRE project) and IIP 1362033 (I/UCRC WindSTAR). TACC is acknowledged for providing computational time.

  213. Alternative approaches to providing passenger transportation in low density cities: the case of Council Bluffs, Iowa

    DOT National Transportation Integrated Search

    1997-09-01

    This study explores transportation alternatives for Council Bluffs. The intention is to explore alternatives that may potentially prove viable for other small cities faced with scattered development patterns and limited population density. The study ...

  214. Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil

    USDA-ARS's Scientific Manuscript database

    The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...

  215. Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.

    2002-01-01

    Ranft has provided parameterizations of Lorentz-invariant differential cross sections for pion and nucleon production in pion-proton collisions that are compared to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formulae for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed are for charged pions in the initial state and both charged and neutral pions in the final state.

  216. Anisotropic Shear Dispersion Parameterization for Mesoscale Eddy Transport

    NASA Astrophysics Data System (ADS)

    Reckinger, S. J.; Fox-Kemper, B.

    2016-02-01

    The effects of mesoscale eddies are universally treated isotropically in general circulation models.
    However, the processes that the parameterization approximates, such as shear dispersion, typically have strongly anisotropic characteristics. The Gent-McWilliams/Redi mesoscale eddy parameterization is extended for anisotropy and tested using 1-degree Community Earth System Model (CESM) simulations. The sensitivity of the model to anisotropy includes a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. The parameterization is further extended to include the effects of unresolved shear dispersion, which sets the strength and direction of anisotropy. The shear dispersion parameterization is similar to drifter observations in the spatial distribution of diffusivity and to high-resolution model diagnosis in the distribution of eddy flux orientation.

  217. Controllers, observers, and applications thereof

    NASA Technical Reports Server (NTRS)

    Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)

    2011-01-01

    Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.

  218. Electronegativity Equalization Method: Parameterization and Validation for Large Sets of Organic, Organohalogene and Organometal Molecule

    PubMed Central

    Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav

    2007-01-01

    The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules.
    We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that were not yet parameterized, specifically Br, I, Fe and Zn. We have also performed crossover validation of all obtained parameters using all training sets that included the relevant elements and confirmed that the calculated parameters provide accurate charges.
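    For readers unfamiliar with EEM, the charge calculation itself reduces to one linear solve per molecule. The sketch below is a generic textbook-style illustration, not the authors' code: it assumes per-element parameters A (electronegativity-like) and B (hardness-like) of the kind produced by such a parameterization, plus interatomic distances R in consistent units, and solves for the atomic charges and the equalized electronegativity chi.

        import numpy as np

        def eem_charges(A, B, R, total_charge=0.0, kappa=1.0):
            """Solve the EEM linear system for atomic charges.

            EEM requires, for every atom i:
                A[i] + B[i]*q[i] + kappa * sum_{j != i} q[j] / R[i, j] = chi  (one common value)
            together with the constraint sum_i q[i] = total_charge.
            A, B : per-atom parameters from an EEM parameterization
            R    : matrix of interatomic distances
            """
            n = len(A)
            M = np.zeros((n + 1, n + 1))
            rhs = np.zeros(n + 1)
            for i in range(n):
                for j in range(n):
                    M[i, j] = B[i] if i == j else kappa / R[i, j]
                M[i, n] = -1.0            # coefficient of the unknown equalized electronegativity chi
                rhs[i] = -A[i]
            M[n, :n] = 1.0                # charge-conservation row
            rhs[n] = total_charge
            solution = np.linalg.solve(M, rhs)
            return solution[:n], solution[n]   # charges, chi

    Real implementations add unit conventions and element-specific handling of the interaction constant; the point here is only the structure of the equalization system that the fitted parameters feed into.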
  219. Spectral cumulus parameterization based on cloud-resolving model

    NASA Astrophysics Data System (ADS)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate, which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed and the results compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation, i.e., the new scheme reduced the positive bias of precipitation in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also simulated better features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were found to derive from the modified parameterization of the entrainment rate, i.e., the proposed parameterization suppressed excessive entrainment, thus suppressing an excessive increase of low-level clouds.

  220. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    NASA Astrophysics Data System (ADS)

    White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John

    2017-08-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km^2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as "behavioral" in that they reproduce daily mean streamflow acceptably well according to the Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that most influence the simulated outcomes of brush management. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management.
    The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
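    The GLUE-style conditioning described in this record boils down to scoring each Monte Carlo realization against observed streamflow and keeping only the realizations that clear a performance threshold. The snippet below is a generic illustration of that filtering step, not the study's actual workflow; the 0.5 threshold and all variable names are arbitrary placeholders.

        import numpy as np

        def nash_sutcliffe(sim, obs):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean of the observations."""
            sim = np.asarray(sim, dtype=float)
            obs = np.asarray(obs, dtype=float)
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def behavioral_subset(simulated_flows, observed_flow, nse_threshold=0.5):
            """Return indices of realizations whose simulated streamflow is 'behavioral'.

            simulated_flows : array of shape (n_realizations, n_days)
            observed_flow   : array of shape (n_days,)
            """
            scores = np.array([nash_sutcliffe(sim, observed_flow) for sim in simulated_flows])
            return np.where(scores >= nse_threshold)[0], scores

    Uncertainty in a prediction of interest (here, the brush-management ET difference) is then summarized over the behavioral realizations only, which is why a reduced parameterization that under-samples plausible input combinations can understate that uncertainty.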
  221. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    USGS Publications Warehouse

    White, Jeremy; Stengel, Victoria G.; Rendon, Samuel H.; Banta, John

    2017-01-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km^2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to the Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that most influence the simulated outcomes of brush management. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.

  222. FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard C. J. Somerville

    2009-02-27

    Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data.
    Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.

  223. Parametric soil water retention models: a critical evaluation of expressions for the full moisture range

    NASA Astrophysics Data System (ADS)

    Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane

    2018-02-01

    Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not rely only on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
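    As a point of reference for the kind of expressions being evaluated in that study, the snippet below writes out the most widely used retention/conductivity pair, the van Genuchten retention curve combined with Mualem's conductivity model. It is a generic illustration with placeholder parameter values, not one of the parameterizations recommended above, and it is precisely the near-saturation behavior of such conductivity curves that criteria like the one described are designed to check.

        import numpy as np

        def van_genuchten_se(h, alpha, n):
            """Effective saturation Se(h) for suction head h > 0 (van Genuchten, 1980)."""
            m = 1.0 - 1.0 / n
            return (1.0 + (alpha * h) ** n) ** (-m)

        def mualem_conductivity(se, k_sat, n, tortuosity=0.5):
            """Mualem (1976) hydraulic conductivity as a function of effective saturation."""
            m = 1.0 - 1.0 / n
            return k_sat * se ** tortuosity * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

        # Placeholder parameters loosely typical of a loam-like soil.
        alpha, n, k_sat = 2.0, 1.3, 0.1   # 1/m, dimensionless, m/day
        for h in (0.001, 0.01, 0.1, 1.0, 10.0):   # suction heads in metres
            se = van_genuchten_se(h, alpha, n)
            print(f"h = {h:7.3f} m  Se = {se:.4f}  K = {mualem_conductivity(se, k_sat, n):.3e} m/day")

    Printing the conductivity as the suction head approaches zero is a quick way to see how steeply K rises toward saturation, which is the behavior the plausibility criterion targets.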
  224. How certain are the process parameterizations in our models?

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard

    2016-04-01

    Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including system architecture (structure), process parameterization and parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise, the parameter values are the only elements of a model which can vary, while the rest of the modeling elements are often fixed a priori and therefore not subjected to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process. The only flexibility comes from the changing parameter values, thereby enabling these models to reproduce the desired observation. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. However, what remains unexplored in our view is the extent to which the process parameterization and system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model given its system architecture and data, when little or no assumption is made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different or possibly contradictory parameterization forms than would have been chosen otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to the extent to which the data can support it.

  225. Sensitivity of Pacific Cold Tongue and Double-ITCZ Bias to Convective Parameterization

    NASA Astrophysics Data System (ADS)

    Woelfle, M.; Bretherton, C. S.; Pritchard, M. S.; Yu, S.

    2016-12-01

    Many global climate models struggle to accurately simulate annual mean precipitation and sea surface temperature (SST) fields in the tropical Pacific basin. Precipitation biases are dominated by the double intertropical convergence zone (ITCZ) bias, where models exhibit precipitation maxima straddling the equator while only a single Northern Hemispheric maximum exists in observations. The major SST bias is the enhancement of the equatorial cold tongue. A series of coupled model simulations is used to investigate the sensitivity of the bias development to convective parameterization. Model components are initialized independently prior to coupling to allow analysis of the transient response of the system directly following coupling. These experiments show precipitation and SST patterns to be highly sensitive to convective parameterization. Simulations in which the deep convective parameterization is disabled, forcing all convection to be resolved by the shallow convection parameterization, showed a degradation in both the cold tongue and double-ITCZ biases as precipitation becomes focused into off-equatorial regions of local SST maxima. Simulations using superparameterization in place of traditional cloud parameterizations showed a reduced cold tongue bias at the expense of additional precipitation biases. The equatorial SST responses to changes in convective parameterization are driven by changes in near-equatorial zonal wind stress.
    The sensitivity of convection to SST is important in determining the precipitation and wind stress fields. However, differences in convective momentum transport also play a role. While no significant improvement is seen in these simulations of the double-ITCZ, the system's sensitivity to these changes reaffirms that improved convective parameterizations may provide an avenue for improving simulations of tropical Pacific precipitation and SST.

  226. New Parameterizations for Neutral and Ion-Induced Sulfuric Acid-Water Particle Formation in Nucleation and Kinetic Regimes

    NASA Astrophysics Data System (ADS)

    Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna

    2018-01-01

    We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used: this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable to the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions.
    They are also suitable for describing particle formation in the atmosphere of Venus.

  227. Reward rate optimization in two-alternative decision making: empirical tests of theoretical predictions

    PubMed

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D.

    2009-12-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
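    The optimality argument referenced in that record can be made concrete with the standard closed-form DDM expressions for an unbiased starting point given by Bogacz et al. (2006). The sketch below is illustrative only: the drift, noise, and timing values are placeholders, and it simply scans thresholds numerically rather than using the analytic optimum.

        import numpy as np

        def ddm_error_rate(drift, threshold, noise):
            """Error rate for an unbiased drift-diffusion process with bounds at +/- threshold."""
            return 1.0 / (1.0 + np.exp(2.0 * drift * threshold / noise**2))

        def ddm_decision_time(drift, threshold, noise):
            """Mean decision time for the same process."""
            return (threshold / drift) * np.tanh(drift * threshold / noise**2)

        def reward_rate(drift, threshold, noise, nondecision=0.3, rsi=1.0):
            """Correct responses per second when only correct trials are rewarded."""
            er = ddm_error_rate(drift, threshold, noise)
            dt = ddm_decision_time(drift, threshold, noise)
            return (1.0 - er) / (dt + nondecision + rsi)

        # Scan thresholds to locate the reward-rate-maximizing speed-accuracy tradeoff.
        thresholds = np.linspace(0.01, 2.0, 400)
        rates = [reward_rate(drift=1.0, threshold=a, noise=1.0) for a in thresholds]
        best = thresholds[int(np.argmax(rates))]
        print(f"reward-rate-maximizing threshold (toy settings): {best:.3f}")

    Raising the threshold lowers the error rate but lengthens decisions, so the reward rate peaks at an intermediate threshold; it is this peak that participants are hypothesized to seek.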
Quantifying the Uncertainties and Multi-parameter Trade-offs in Joint Inversion of Receiver Functions and Surface Wave Velocity and Ellipticity

NASA Astrophysics Data System (ADS)

Gao, C.; Lekic, V.

2016-12-01

When constraining the structure of the Earth's continental lithosphere, multiple seismic observables are often combined due to their complementary sensitivities. The transdimensional Bayesian (TB) approach in seismic inversion allows model parameter uncertainties and trade-offs to be quantified with few assumptions. TB sampling yields an adaptive parameterization that enables simultaneous inversion for different model parameters (Vp, Vs, density, radial anisotropy), without the need for strong prior information or regularization. We use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate different seismic observables - surface wave dispersion (SWD), Rayleigh wave ellipticity (ZH ratio), and receiver functions - into the inversion for the profiles of shear velocity (Vs), compressional velocity (Vp), density (ρ), and radial anisotropy (ξ) beneath a seismic station. By analyzing all three data types individually and together, we show that TB sampling can eliminate the need for a fixed parameterization based on prior information, and reduce trade-offs in model estimates. We then explore the effect of different types of misfit functions for receiver function inversion, which is a highly non-unique problem. We compare synthetic inversion results obtained with L2-norm, cross-correlation, and integral misfit functions in terms of their convergence rates and retrieved seismic structures. In inversions in which only one type of model parameter is inverted (Vs in the case of SWD), assumed scaling relationships are often applied to account for sensitivity to other model parameters (e.g., Vp, ρ, ξ). Here we show that under a TB framework, we can eliminate scaling assumptions while simultaneously constraining multiple model parameters to varying degrees. Furthermore, we compare the performance of TB inversion when different types of model parameters either share the same or use independent parameterizations. We show that different parameterizations can lead to differences in retrieved model parameters, consistent with limited data constraints. We then quantitatively examine the model parameter trade-offs and find that trade-offs between Vp and radial anisotropy might limit our ability to constrain shallow-layer radial anisotropy using current seismic observables.

A parameterization scheme for the x-ray linear attenuation coefficient and energy absorption coefficient.

PubMed

Midgley, S M

2004-01-21

A novel parameterization of x-ray interaction cross-sections is developed, and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy-dependent coefficients describe the Z-direction curvature of the cross-sections. The composition-dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV) with an accuracy of better than 2% using a parameterization containing not more than five coefficients. For the biologically important elements 1 ≤ Z ≤ 20 and the energy range 30-150 keV, the parameterization utilizes four coefficients. At higher energies, the parameterization uses fewer coefficients, with only two coefficients needed at megavoltage energies.
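The preceding abstract states that, once elemental cross-sections per electron are written as a function of Z with energy-dependent coefficients, a mixture enters only through its electron density and statistical moments of the elemental distribution. The sketch below illustrates that bookkeeping for a mixture; the power-law basis in Z and the coefficients a_k are purely hypothetical placeholders, not the fitted values of the published scheme.

    # Illustrative composition bookkeeping only; basis functions and coefficients are made up.
    N_A = 6.02214076e23  # Avogadro's number (1/mol)

    def electron_density(mass_fractions, Z, A, density_g_cm3):
        """Electrons per cm^3 for a mixture given mass fractions, Z, A, and bulk density."""
        electrons_per_gram = sum(w * N_A * Z[el] / A[el] for el, w in mass_fractions.items())
        return electrons_per_gram * density_g_cm3

    def z_moments(mass_fractions, Z, A, powers=(1, 2, 3)):
        """Statistical moments <Z^k> of the elemental distribution, weighted per electron."""
        weights = {el: w * Z[el] / A[el] for el, w in mass_fractions.items()}
        total = sum(weights.values())
        return {k: sum(wt * Z[el] ** k for el, wt in weights.items()) / total for k in powers}

    def linear_attenuation(mass_fractions, Z, A, density_g_cm3, coeffs):
        """Hypothetical form mu = n_e * sum_k a_k(E) * <Z^k>, with a_k(E) supplied by the caller."""
        n_e = electron_density(mass_fractions, Z, A, density_g_cm3)
        moments = z_moments(mass_fractions, Z, A, powers=tuple(coeffs))
        sigma_per_electron = sum(a * moments[k] for k, a in coeffs.items())
        return n_e * sigma_per_electron

    # Example: water at an arbitrary energy with made-up coefficients a_k (cm^2 per electron).
    Z = {"H": 1, "O": 8}
    A = {"H": 1.008, "O": 15.999}
    water = {"H": 0.112, "O": 0.888}
    mu = linear_attenuation(water, Z, A, 1.0, coeffs={1: 5e-25, 2: 1e-26, 3: 2e-28})
    print(f"illustrative mu for water: {mu:.3f} 1/cm")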
Alternate Approaches to Exploration: The Single Crew Module Concept

NASA Technical Reports Server (NTRS)

Chambliss, Joe

2011-01-01

The Cx Program envisioned exploration of the Moon and Mars using an extrapolation of the Apollo approach. If new technology development initiatives are successful, they will provide capabilities that can enable alternate approaches. This presentation will provide a brief overview of the Cx approaches for lunar and Mars missions and some of the alternatives that were considered. An alternative approach, referred to as the Single Crew Module (SCM) approach, is then described. The SCM concept employs new technologies in a way that could reduce exploration cost and possibly schedule. Options to the approaches will be presented and discussed.

A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone

NASA Astrophysics Data System (ADS)

Filipot, J.

2010-12-01

A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines the basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. The transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength, and water depth. The parameterization is further tested in the WAVEWATCH III code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep- or shallow-water parameterizations.

Methods of testing parameterizations: Vertical ocean mixing

NASA Technical Reports Server (NTRS)

Tziperman, Eli

1992-01-01

The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean is of scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In the oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner.
Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.

Comparison of Gravity Wave Temperature Variances from Ray-Based Spectral Parameterization of Convective Gravity Wave Drag with AIRS Observations

NASA Technical Reports Server (NTRS)

Choi, Hyun-Joo; Chun, Hye-Yeong; Gong, Jie; Wu, Dong L.

2012-01-01

The realism of a ray-based spectral parameterization of convective gravity wave drag, which considers the updated moving speed of the convective source and multiple wave propagation directions, is tested against the Atmospheric Infrared Sounder (AIRS) onboard the Aqua satellite. Offline parameterization calculations are performed using global reanalysis data for January and July 2005, and gravity wave temperature variances (GWTVs) are calculated at z = 2.5 hPa (unfiltered GWTV). AIRS-filtered GWTV, which is directly compared with AIRS, is calculated by applying the AIRS visibility function to the unfiltered GWTV. A comparison between the parameterization calculations and AIRS observations shows that the spatial distribution of the AIRS-filtered GWTV agrees well with that of the AIRS GWTV. However, the magnitude of the AIRS-filtered GWTV is smaller than that of the AIRS GWTV. When an additional cloud-top gravity wave momentum flux spectrum with longer horizontal wavelength components, obtained from mesoscale simulations, is included in the parameterization, both the magnitude and spatial distribution of the AIRS-filtered GWTVs from the parameterization are in good agreement with those of the AIRS GWTVs.
The AIRS GWTV can be reproduced reasonably well by the parameterization not only with multiple wave propagation directions but also with two wave propagation directions of 45 degrees (northeast-southwest) and 135 degrees (northwest-southeast), which are optimally chosen for computational efficiency.

The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures

NASA Technical Reports Server (NTRS)

Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)

2000-01-01

The microphysical parameterization of clouds and rain cells plays a central role in atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified, scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the raindrop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that, in general, all but two parameterizations produce calculated T(sub B) values that fall within the observed clear-air minima and maxima. The exceptions are the parameterizations that enhance the scattering characteristics of frozen hydrometeors.

The application of depletion curves for parameterization of subgrid variability of snow

Treesearch

C. H. Luce; D. G. Tarboton

2004-01-01

Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling.
We have taken the approach of using depletion curves that relate fractional snow-covered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...

Evaluation of Generation Alternation Models in Evolutionary Robotics

NASA Astrophysics Data System (ADS)

Oiso, Masashi; Matsumura, Yoshiyuki; Yasuda, Toshiyuki; Ohkura, Kazuhiro

For efficient implementation of Evolutionary Algorithms (EA) in a desktop grid computing environment, we propose a new generation alternation model called Grid-Oriented-Deletion (GOD), based on comparison with conventional techniques. In previous research, generation alternation models have generally been evaluated using test functions, but their exploration performance on real problems such as Evolutionary Robotics (ER) has not been made very clear. We therefore investigate the relationship between the exploration performance of an EA on an ER problem and its generation alternation model. We applied four generation alternation models to an Evolutionary Multi-Robotics (EMR) package-pushing problem to investigate their exploration performance. The results show that GOD is more effective than the other conventional models.

42 CFR 456.371 - Exploration of alternative services.

Code of Federal Regulations, 2010 CFR

2010-10-01

... 42 Public Health 4 2010-10-01 false Exploration of alternative services. 456.371 Section 456.371 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Facilities Medical, Psychological, and Social Evaluations and Admission Review § 456.371 Exploration of...

A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals

NASA Technical Reports Server (NTRS)

VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.

2014-01-01

A parameterization is presented that provides the extinction cross section sigma_e, single-scattering albedo omega, and asymmetry parameter g of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave.
Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that the optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion levels. In the standard geometric optics implementation used here, sigma_e is always twice the particle projected area. It is shown that omega is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of g with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and omega are investigated, and factors are determined to scale the parameterized g to provide values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 microns, revealing absolute differences with reference calculations of omega and g that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences with reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.

Domain-averaged snow depth over complex terrain from flat field measurements

NASA Astrophysics Data System (ADS)

Helbig, Nora; van Herwijnen, Alec

2017-04-01

Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Ground-measured snow depths at single stations provide an opportunity for snow depth data assimilation to improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these ground measurement networks provide a wealth of information, various studies have questioned the representativity of such flat field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easy-to-derive topographic parameters. To derive the two parameterizations we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km, using highly resolved snow depth maps at the peak of winter from two distinct climatic regions, in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor.
We evaluated domain-averaged snow depths derived with both parameterizations from nearby flat field measurements against the domain-averaged, highly resolved snow depths. This revealed an overall improved performance for the parameterization that combines a power-law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest that this parameterization could be used to assimilate flat field snow depths into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.
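To make the two functional forms in the preceding record concrete, the sketch below contrasts a linear elevation lapse rate with a power-law elevation trend scaled by the subgrid sky view factor. The lapse rate, exponents, and scaling constants are made-up placeholders, not the coefficients fitted in the study.

    # Illustrative comparison of the two parameterization forms; all coefficients are placeholders.
    def hs_linear(hs_flat, dz_m, lapse_m_per_km=0.3):
        """Domain-averaged snow depth from a flat field value plus a linear elevation lapse."""
        return hs_flat + lapse_m_per_km * dz_m / 1000.0

    def hs_power_svf(hs_flat, z_domain_m, z_flat_m, sky_view_factor, a=1.0, b=0.8, c=1.2):
        """Power-law elevation trend scaled by the subgrid sky view factor (assumed form)."""
        elevation_factor = a * (z_domain_m / z_flat_m) ** b
        return hs_flat * elevation_factor * sky_view_factor ** c

    hs_flat = 1.2                 # measured flat field snow depth (m)
    z_flat, z_dom = 1500.0, 2100.0
    svf = 0.85                    # domain-averaged subgrid sky view factor
    print(f"linear lapse estimate:   {hs_linear(hs_flat, z_dom - z_flat):.2f} m")
    print(f"power-law x SVF estimate: {hs_power_svf(hs_flat, z_dom, z_flat, svf):.2f} m")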
Parameterization of eddy sensible heat transports in a zonally averaged dynamic model of the atmosphere

NASA Technical Reports Server (NTRS)

Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean

1990-01-01

A Charney-Branscome-based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.

Predictive Compensator Optimization for Head Tracking Lag in Virtual Environments

NASA Technical Reports Server (NTRS)

Adelstein, Barnard D.; Jung, Jae Y.; Ellis, Stephen R.

2001-01-01

We examined the perceptual impact of plant noise parameterization for Kalman filter predictive compensation of the time delays intrinsic to head-tracked virtual environments (VEs). Subjects were tested on their ability to discriminate between the VE system's minimum latency and conditions in which artificially added latency was predictively compensated back to the system minimum. Two head tracking predictors were parameterized off-line according to cost functions that minimized prediction errors in (1) rotation, and (2) rotation projected into translational displacement with emphasis on higher-frequency human operator noise. These predictors were compared with a parameterization obtained from the VE literature for cost function (1). Results from 12 subjects showed that both parameterization type and the amount of compensated latency affected discrimination.
Analysis of the head motion used in the parameterizations and the subsequent discriminability results suggests that higher-frequency predictor artifacts are contributory cues for discriminating the presence of predictive compensation.
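The record above concerns how the assumed plant (process) noise shapes a Kalman-filter predictor for head tracking. The sketch below is a minimal constant-velocity Kalman filter on a single orientation axis that extrapolates the state one latency interval ahead; the state model, the process-noise level q, the measurement noise, and the synthetic head motion are all assumptions for illustration, not the parameterizations evaluated in the study.

    import numpy as np

    # Minimal latency-compensation sketch; all matrices and noise levels are placeholders.
    dt, lead = 0.01, 0.05                         # 100 Hz samples, 50 ms latency to compensate
    F = np.array([[1.0, dt], [0.0, 1.0]])         # constant-velocity state transition
    H = np.array([[1.0, 0.0]])                    # only the angle is measured
    q, r = 50.0, 1e-4                             # process ("plant") and measurement noise, illustrative
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])

    x = np.zeros(2)                               # state: [angle, angular velocity]
    P = np.eye(2)

    def kalman_step(z):
        """One predict/update cycle; returns the angle extrapolated 'lead' seconds ahead."""
        global x, P
        x = F @ x                                 # predict to the current sample time
        P = F @ P @ F.T + Q
        y = z - H @ x                             # measurement residual
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        F_lead = np.array([[1.0, lead], [0.0, 1.0]])
        return (F_lead @ x)[0]

    # Feed a synthetic sinusoidal head motion and report the last prediction.
    t = np.arange(0, 2, dt)
    measurements = np.sin(2 * np.pi * 0.5 * t) + 1e-2 * np.random.randn(t.size)
    pred = [kalman_step(np.array([z])) for z in measurements]
    print(f"predicted angle {lead*1000:.0f} ms ahead at t={t[-1]:.2f} s: {pred[-1]:.3f} rad")

Raising q makes the predictor track high-frequency motion (and sensor noise) more aggressively, which is one way the choice of plant noise can introduce the perceptible artifacts the study describes.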
Assessment of the GECKO-A Modeling Tool and Simplified 3D Model Parameterizations for SOA Formation

NASA Astrophysics Data System (ADS)

Aumont, B.; Hodzic, A.; La, S.; Camredon, M.; Lannuque, V.; Lee-Taylor, J. M.; Madronich, S.

2014-12-01

Explicit chemical mechanisms aim to embody the current knowledge of the transformations occurring in the atmosphere during the oxidation of organic matter. These explicit mechanisms are therefore useful tools to explore the fate of organic matter during its tropospheric oxidation and to examine how these chemical processes shape the composition and properties of the gaseous and condensed phases. Furthermore, explicit mechanisms provide powerful benchmarks to design and assess simplified parameterizations to be included in 3D models. Nevertheless, an explicit mechanism describing the oxidation of hydrocarbons with backbones larger than a few carbon atoms involves millions of secondary organic compounds, far exceeding the size of chemical mechanisms that can be written manually. Data processing tools can, however, be designed to overcome these difficulties and automatically generate consistent and comprehensive chemical mechanisms on a systematic basis. The Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) has been developed for the automatic writing of explicit chemical schemes of organic species and their partitioning between the gas and condensed phases. GECKO-A can be viewed as an expert system that mimics the steps by which chemists might develop chemical schemes. GECKO-A generates chemical schemes according to a prescribed protocol assigning reaction pathways and kinetics data on the basis of experimental data and structure-activity relationships. In its current version, GECKO-A can generate the full atmospheric oxidation scheme for most linear, branched, and cyclic precursors, including alkanes and alkenes up to C25. Assessments of the GECKO-A modeling tool based on chamber SOA observations will be presented. GECKO-A was recently used to design a parameterization for SOA formation based on a Volatility Basis Set (VBS) approach. First results will be presented.

Kinematic and Microphysical Control of Lightning Flash Rate over Northern Alabama

NASA Technical Reports Server (NTRS)

Carey, Lawrence D.; Bain, Anthony L.; Matthee, Retha; Schultz, Christopher J.; Schultz, Elise V.; Deierling, Wiebke; Petersen, Walter A.

2015-01-01

The Deep Convective Clouds and Chemistry (DC3) experiment seeks to examine the relationship between deep convection and the production of nitrogen oxides (NOx) via lightning (LNOx). A critical step in estimating LNOx production in a cloud-resolving model (CRM) without explicit lightning is to estimate the flash rate from available model parameters that are statistically and physically correlated. As such, the objective of this study is to develop, improve, and evaluate lightning flash rate parameterizations in a variety of meteorological environments and storm types using radar and lightning mapping array (LMA) observations taken over northern Alabama from 2005-2012, including during DC3. UAH's Advanced Radar for Meteorological and Operational Research (ARMOR) and the Weather Surveillance Radar-1988 Doppler (WSR-88D) located at Hytop (KHTX) comprise the dual-Doppler and polarimetric radar network, which has been in operation since 2004. The northern Alabama LMA (NA LMA), in conjunction with Vaisala's National Lightning Detection Network (NLDN), allows for a detailed depiction of total lightning during this period. This study will integrate ARMOR-KHTX dual-Doppler/polarimetric radar and NA LMA lightning observations from past and ongoing studies, including the more recent DC3 results, over northern Alabama to form a large data set of 15-20 case days and over 20 individual storms, including both ordinary multicell and supercell convection. Several flash rate parameterizations will be developed and tested, including those based on (1) graupel/small hail volume, (2) graupel/small hail mass, and (3) convective updraft volume. Sensitivity of the flash rate parameterizations to storm intensity, storm morphology, and environmental conditions will be explored.

Exploring a new method for the retrieval of urban thermophysical properties using thermal infrared remote sensing and deterministic modeling

NASA Astrophysics Data System (ADS)

De Ridder, K.; Bertrand, C.; Casanova, G.; Lefebvre, W.

2012-09-01

Increasingly, mesoscale meteorological and climate models are used to predict urban weather and climate. Yet, large uncertainties remain regarding the values of some urban surface properties. In particular, information concerning urban values for thermal roughness length and thermal admittance is scarce.
In this paper, we present a method to estimate values of thermal admittance in combination with an optimal scheme for thermal roughness length, based on METEOSAT-8/SEVIRI thermal infrared imagery in conjunction with a deterministic atmospheric model containing a simple urbanized land surface scheme. Given the spatial resolution of the SEVIRI sensor, the resulting parameter values are applicable at scales on the order of 5 km. As a study case we focused on the city of Paris, for the day of 29 June 2006. Land surface temperature was calculated from SEVIRI thermal radiances using a new split-window algorithm specifically designed to handle urban conditions, as described in Appendix A, including a correction for anisotropy effects. Land surface temperature was also calculated in an ensemble of simulations carried out with the ARPS mesoscale atmospheric model, combining different thermal roughness length parameterizations with a range of thermal admittance values. Particular care was taken to spatially match the simulated land surface temperature with the SEVIRI field of view, using the so-called point spread function of the latter. Using Bayesian inference, the best agreement between simulated and observed land surface temperature was obtained for the Zilitinkevich (1970) and Brutsaert (1975) thermal roughness length parameterizations, the latter with the coefficients obtained by Kanda et al. (2007). The retrieved thermal admittance values associated with either thermal roughness parameterization were, respectively, 1843 ± 108 J m^-2 s^-1/2 K^-1 and 1926 ± 115 J m^-2 s^-1/2 K^-1.

Assessing uncertainty and sensitivity of model parameterizations and parameters in WRF affecting simulated surface fluxes and land-atmosphere coupling over the Amazon region

NASA Astrophysics Data System (ADS)

Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.

2016-12-01

This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances.
To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We find that 5 of the 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate over interaction effects. The results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters, and structural errors, and providing insights for improving the model physics parameterizations.
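The record above attributes ensemble variance to groups of physics schemes with an analysis-of-variance approach. The sketch below shows the core bookkeeping for a single synthetic factor (three "schemes") using a one-way ANOVA-style decomposition; the study itself uses a multiple-way analysis across four scheme groups, and the numbers here are synthetic.

    import numpy as np

    # Minimal one-way variance decomposition on synthetic ensemble output.
    rng = np.random.default_rng(1)
    scheme_effect = {"scheme_A": 0.0, "scheme_B": 1.5, "scheme_C": -0.8}     # synthetic effects
    members = {s: e + rng.normal(0.0, 1.0, size=40) for s, e in scheme_effect.items()}

    values = np.concatenate(list(members.values()))
    grand_mean = values.mean()

    # Between-group (explained by the scheme choice) vs. total sum of squares.
    ss_between = sum(len(v) * (v.mean() - grand_mean) ** 2 for v in members.values())
    ss_total = ((values - grand_mean) ** 2).sum()
    print(f"fraction of ensemble variance explained by the scheme choice: {ss_between / ss_total:.2f}")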
What Is the Difference between a Calorie and a Carbohydrate?--Exploring Nutrition Education Opportunities in Alternative School Settings

ERIC Educational Resources Information Center

Norquest, Michele; Phelps, Josh; Hermann, Janice; Kennedy, Tay

2015-01-01

Extension-based nutrition educators have indicated current curricula do not engage alternative school students' interests. The study reported here explored nutrition education opportunities at alternative schools in Oklahoma. Data collection involved focus groups gathering student perspectives regarding preferred teaching and learning styles, and…

Exploring Dimensions of Social Inclusion among Alternative Learning Centres in the USA

ERIC Educational Resources Information Center

Henderson, Dawn X.; Barnes, Rachelle Redmond

2016-01-01

Increasing disparities in out-of-school suspension and dropout rates have led a number of school districts to develop alternative models of education, including alternative learning centres (ALCs). Using an exploratory mixed methods design, this study explores dimensions of social inclusion among ALCs located in the southeastern region of the…

Exploring Model Error through Post-processing and an Ensemble Kalman Filter on Fire Weather Days

NASA Astrophysics Data System (ADS)

Erickson, Michael J.

The proliferation of coupling atmospheric ensemble data to models in other related fields requires a priori knowledge of atmospheric ensemble biases specific to the desired application. In that spirit, this dissertation focuses on elucidating atmospheric ensemble model bias and error through a variety of different methods specific to fire weather days (FWDs) over the Northeast United States (NEUS). Other than a handful of studies that use models to predict fire indices for single fire seasons (Molders 2008, Simpson et al. 2014), an extensive exploration of model performance specific to FWDs has not been attempted. Two unique definitions for FWDs are proposed: one that uses pre-existing fire indices (FWD1) and another based on a new statistical fire weather index (FWD2) relating fire occurrence and near-surface meteorological observations. Ensemble model verification reveals FWDs to have warmer (> 1 K), moister (~ 0.4 g kg^-1), and less windy (~ 1 m s^-1) biases than the climatological average for both FWD1 and FWD2. These biases are not restricted to the near surface but extend through the entirety of the planetary boundary layer (PBL). Furthermore, post-processing methods are more effective when previous FWDs are incorporated into the statistical training, suggesting that model bias could be related to the synoptic flow pattern. An ensemble Kalman filter (EnKF) is used to explore the effectiveness of data assimilation during a period of extensive FWDs in April 2012. Model biases develop rapidly on FWDs, consistent with the FWD1 and FWD2 verification. However, the EnKF is effective at removing most biases for temperature, wind speed, and specific humidity. Potential sources of error in the parameterized physics of the PBL are explored by rerunning the EnKF with simultaneous state and parameter estimation (SSPE) for two relevant parameters within the ACM2 PBL scheme. SSPE helps to reduce the cool temperature bias near the surface on FWDs, with the variability in parameter estimates exhibiting some relationship to model bias for temperature. This suggests the potential for structural model error within the ACM2 PBL scheme and could lead toward the future development of improved PBL parameterizations.

Cross section parameterizations for cosmic ray nuclei. 1: Single nucleon removal

NASA Technical Reports Server (NTRS)

Norbury, John W.; Townsend, Lawrence W.

1992-01-01

Parameterizations of single nucleon removal from electromagnetic and strong interactions of cosmic rays with nuclei are presented.
These parameterizations are based upon the most accurate theoretical calculations available to date. They should be very suitable for use in calculations of cosmic ray propagation through interstellar space, the Earth's atmosphere, lunar samples, meteorites, spacecraft walls, and lunar and martian habitats.

Improvement of the GEOS-5 AGCM upon Updating the Air-Sea Roughness Parameterization

NASA Technical Reports Server (NTRS)

Garfinkel, C. I.; Molod, A.; Oman, L. D.; Song, I.-S.

2011-01-01

The impact of an air-sea roughness parameterization over the ocean that more closely matches recent observations of air-sea exchange is examined in the NASA Goddard Earth Observing System, version 5 (GEOS-5) atmospheric general circulation model. Surface wind biases in the GEOS-5 AGCM are decreased by up to 1.2 m/s. The new parameterization also has implications aloft, as improvements extend into the stratosphere. Many other GCMs (both for operational weather forecasting and climate) use a similar class of parameterization for their air-sea roughness scheme. We therefore expect that results from GEOS-5 are relevant to other models as well.

Observational and Modeling Studies of Clouds and the Hydrological Cycle

NASA Technical Reports Server (NTRS)

Somerville, Richard C. J.

1997-01-01

Our approach involved validating parameterizations directly against measurements from field programs, and using this validation to tune existing parameterizations and to guide the development of new ones. We have used a single-column model (SCM) to make the link between observations and parameterizations of clouds, including explicit cloud microphysics (e.g., prognostic cloud liquid water used to determine cloud radiative properties). Surface and satellite radiation measurements were used to provide an initial evaluation of the performance of the different parameterizations. The results of this evaluation were then used to develop improved cloud and cloud-radiation schemes, which were tested in GCM experiments.

Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

NASA Astrophysics Data System (ADS)

van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

2017-12-01

Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes.
These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal values of parameters within each scheme (parametric uncertainty). Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme (structural uncertainty). Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
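To illustrate the kind of observation-constrained parameter estimation described in the record above, the sketch below runs a random-walk Metropolis sampler that constrains the prefactor and exponent of a power-law process-rate term against synthetic observations. The rate form, priors, noise level, and data are placeholders for illustration, not the BOSS formulation.

    import numpy as np

    # Minimal Metropolis sketch of constraining a parameterized process-rate term.
    rng = np.random.default_rng(2)

    def rate(moment, a, b):
        """Hypothetical process rate as a power law of a prognostic moment."""
        return a * moment ** b

    moments = np.linspace(0.1, 2.0, 25)                        # synthetic predictor values
    obs = rate(moments, a=0.8, b=1.7) + rng.normal(0, 0.05, moments.size)

    def log_posterior(theta, sigma=0.05):
        a, b = theta
        if a <= 0 or not (0 < b < 4):                          # flat priors with simple bounds
            return -np.inf
        resid = obs - rate(moments, a, b)
        return -0.5 * np.sum((resid / sigma) ** 2)

    theta = np.array([1.0, 1.0])
    logp = log_posterior(theta)
    samples = []
    for _ in range(20000):                                     # random-walk Metropolis
        proposal = theta + rng.normal(0, 0.02, size=2)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
        samples.append(theta.copy())
    a_post, b_post = np.array(samples)[5000:].mean(axis=0)     # discard burn-in
    print(f"posterior means: a ~ {a_post:.2f}, b ~ {b_post:.2f}")

The posterior means recover the synthetic truth (a = 0.8, b = 1.7) to within sampling error, which is the basic mechanism by which such a framework constrains both parameter values and, across candidate rate forms, structural choices.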
Evaluating and Improving Wind Forecasts over South China: The Role of Orographic Parameterization in the GRAPES Model

NASA Astrophysics Data System (ADS)

Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia

2018-06-01

Unresolved small-scale orographic (SSO) drag is parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drag is represented by adding a sink term to the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as feedbacks to the momentum tendencies on the first model level in the planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias is much alleviated by adopting the SSOP scheme, in addition to a reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that simulations with the SSOP scheme provide improved wind estimates over the complex terrain in the southwest of Guangdong.

The zonally averaged transport characteristics of the atmosphere as determined by a general circulation model

NASA Technical Reports Server (NTRS)

Plumb, R. A.

1985-01-01

Two-dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three-dimensional general circulation models, while avoiding some of the gross simplifications of one-dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of these GCM-derived coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.
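The flux-gradient closure tested in the record above parameterizes the eddy flux of a zonally averaged tracer as minus a diffusivity times the mean gradient. The sketch below applies that closure in a one-dimensional (latitude-only) toy transport model; the grid spacing, eddy diffusivity, and initial tracer distribution are illustrative values only, not taken from the study.

    import numpy as np

    # Minimal flux-gradient closure demo: F = -K * dC/dy stepped in an explicit 1-D model.
    ny = 90
    dy = 2.0e5                      # meridional grid spacing (m)
    K = 1.0e6                       # eddy diffusivity (m^2/s), a typical large-scale magnitude
    dt = 0.25 * dy**2 / K           # stable explicit time step for diffusion
    y = (np.arange(ny) - ny / 2) * dy
    c = np.exp(-(y / 2.0e6) ** 2)   # initial zonally averaged tracer (Gaussian blob)

    for _ in range(500):
        flux = -K * np.diff(c) / dy                              # parameterized eddy flux at cell faces
        dcdt = -np.diff(flux, prepend=0.0, append=0.0) / dy      # flux divergence, no-flux walls
        c = c + dt * dcdt

    print(f"tracer maximum after mixing: {c.max():.3f} (initially 1.000)")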
A statistical comparison of cirrus particle size distributions measured using the 2-D stereo probe during the TC4, SPARTICUS, and MACPEX flight campaigns with historical cirrus datasets

NASA Astrophysics Data System (ADS)

Schwartz, M. Christian

2017-08-01

This paper addresses two straightforward questions. First, how similar are the statistics of cirrus particle size distribution (PSD) datasets collected using the Two-Dimensional Stereo (2D-S) probe to cirrus PSD datasets collected using older Particle Measuring Systems (PMS) 2-D Cloud (2DC) and 2-D Precipitation (2DP) probes? Second, how similar are the datasets when shatter-correcting post-processing is applied to the 2DC datasets? To answer these questions, a database of measured and parameterized cirrus PSDs is used, constructed from measurements taken during the Small Particles in Cirrus (SPARTICUS); Mid-latitude Airborne Cirrus Properties Experiment (MACPEX); and Tropical Composition, Cloud, and Climate Coupling (TC4) flight campaigns. Bulk cloud quantities are computed from the 2D-S database in three ways: first, directly from the 2D-S data; second, by applying the 2D-S data to ice PSD parameterizations developed using sets of cirrus measurements collected with the older PMS probes; and third, by applying the 2D-S data to a similar parameterization developed from the 2D-S data themselves. This is done so that measurements of the same cloud volumes by parameterized versions of the 2DC and 2D-S can be compared with one another. It is thereby seen, given the same cloud field and the same assumptions concerning ice crystal cross-sectional area, density, and radar cross section, that the parameterized 2D-S and the parameterized 2DC predict similar distributions of inferred shortwave extinction coefficient, ice water content, and 94 GHz radar reflectivity. However, the parameterization of the 2DC based on uncorrected data predicts a statistically significantly higher number of total ice crystals and a larger ratio of small ice crystals to large ice crystals than does the parameterized 2D-S. The 2DC parameterization based on shatter-corrected data also predicts statistically different numbers of ice crystals than does the parameterized 2D-S, but the comparison between the two is nevertheless more favorable. It is concluded that the older datasets continue to be useful for scientific purposes, with certain caveats, and that continuing field investigation of cirrus with more modern probes is desirable.
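The bulk cloud quantities mentioned in the record above are obtained by integrating a parameterized size distribution against assumed mass- and area-dimension relationships. The sketch below shows that integration for a generic exponential PSD; the intercept, slope, and the power-law coefficients are placeholder values, not those of the cited probe parameterizations.

    import numpy as np

    # Minimal PSD-to-bulk-quantity integration with placeholder microphysical assumptions.
    D = np.linspace(10e-6, 5e-3, 2000)          # maximum dimension grid (m)
    dD = D[1] - D[0]

    N0, lam = 1.0e7, 2.0e3                      # exponential PSD intercept (m^-4) and slope (m^-1)
    n = N0 * np.exp(-lam * D)                   # size distribution n(D), m^-4

    a_m, b_m = 0.05, 2.1                        # mass = a_m * D**b_m (kg), placeholder power law
    a_A, b_A = 0.3, 2.0                         # projected area = a_A * D**b_A (m^2), placeholder

    number_conc = np.sum(n * dD)                            # total number concentration (m^-3)
    iwc = np.sum(a_m * D**b_m * n * dD)                     # ice water content (kg m^-3)
    extinction = 2.0 * np.sum(a_A * D**b_A * n * dD)        # shortwave extinction, assuming Q_ext ~ 2

    print(f"N = {number_conc:.0f} m^-3, IWC = {iwc*1e3:.3f} g m^-3, extinction = {extinction*1e3:.2f} km^-1")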
Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

NASA Astrophysics Data System (ADS)

Khan, Tanvir R.; Perlinger, Judith A.

2017-10-01

Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and to quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. A Sobol' sensitivity analysis was conducted with the objective of determining the parameter ranking from most to least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except coniferous forest, for which it is second most accurate. From the Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27% for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.
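The Monte Carlo uncertainty analysis described in the record above propagates imprecision in input parameters through a deposition-velocity expression and summarizes the resulting spread. The sketch below does this for a single input (friction velocity) and a deliberately simple resistance-style expression; the functional form and every number are illustrative assumptions, not any of the five evaluated schemes.

    import numpy as np

    # Minimal Monte Carlo propagation of input imprecision through a toy Vd expression.
    rng = np.random.default_rng(3)

    def deposition_velocity(u_star, v_settling=1e-4, a=3e-3):
        """Toy Vd (m/s): gravitational settling plus a surface-uptake term scaling with u*."""
        return v_settling + a * u_star

    n = 10000
    u_star = rng.normal(0.35, 0.05, n)           # friction velocity (m/s) with assumed imprecision
    u_star = np.clip(u_star, 0.05, None)
    vd = deposition_velocity(u_star)

    mean_vd = vd.mean()
    normalized_uncertainty = vd.std() / mean_vd
    print(f"mean Vd = {mean_vd*100:.3f} cm/s, normalized uncertainty = {normalized_uncertainty:.1%}")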
ExaSAT: An exascale co-design tool for performance modeling

DOE PAGES

Unat, Didem; Chan, Cy; Zhang, Weiqun; ...

2015-02-09

One of the emerging challenges in designing HPC systems is understanding and projecting the requirements of exascale applications. In order to determine the performance consequences of different hardware designs, analytic models are essential because they can provide fast feedback to the co-design centers and chip designers without costly simulations. However, current attempts to analytically model program performance typically rely on the user manually specifying a performance model. Here we introduce the ExaSAT framework, which automates the extraction of parameterized performance models directly from source code using compiler analysis. The parameterized analytic model enables quantitative evaluation of a broad range of hardware design trade-offs and software optimizations on a variety of different performance metrics, with a primary focus on data movement as a metric. Finally, we demonstrate the ExaSAT framework's ability to perform deep code analysis of a proxy application from the Department of Energy Combustion Co-design Center to illustrate its value to the exascale co-design process. ExaSAT analysis provides insights into hardware and software trade-offs and lays the groundwork for exploring a more targeted set of design points using cycle-accurate architectural simulators.

Biological engineering applications of feedforward neural networks designed and parameterized by genetic algorithms.

PubMed

Ferentinos, Konstantinos P

2005-09-01

Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by an evolutionary method based on the evolutionary process of genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for the encoding of NN topologies and training parameters into genes of the genetic algorithm (GA). Some a priori knowledge of the demands in network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to some reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA also encoded the type of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm used by the backpropagation algorithm for the training of the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach that is usually used for these tasks.

Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation

NASA Astrophysics Data System (ADS)

Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei

2018-04-01

Parametric uncertainty in convection parameterization is one major source of the model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters.
Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation

NASA Astrophysics Data System (ADS)

Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei

2018-04-01

Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into a traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
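The core of ensemble-based parameter estimation is a Kalman-type update of each member's parameter value using the covariance between the parameter ensemble and the model-predicted observations. The sketch below shows that update for a single scalar parameter with a stand-in forward model; the prior spread, observation value, and error level are illustrative and unrelated to CM2.1.

    import numpy as np

    rng = np.random.default_rng(0)
    n_ens = 40

    # Ensemble of one convection parameter (illustrative prior spread)
    params = rng.normal(1.0, 0.2, n_ens)

    def forward(p):
        # Stand-in forward model mapping the parameter to an observed quantity
        return 3.0 * p + 0.5

    obs, obs_err = 3.8, 0.1
    y_ens = forward(params)

    # Scalar stochastic ensemble Kalman update of the parameter
    gain = np.cov(params, y_ens)[0, 1] / (np.var(y_ens, ddof=1) + obs_err**2)
    perturbed_obs = obs + rng.normal(0.0, obs_err, n_ens)
    params_updated = params + gain * (perturbed_obs - y_ens)

    print(f"prior mean {params.mean():.3f} -> posterior mean {params_updated.mean():.3f}")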
The results are relevant to understanding the emergent behavior of quasi-resolved low cloud decks in a multi-scale modeling framework within a previously unencountered grey zone of better resolved boundary-layer turbulence.

Shortwave radiation parameterization scheme for subgrid topography

NASA Astrophysics Data System (ADS)

Helbig, N.; Löwe, H.

2012-02-01

Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies, we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end, slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed, and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered.
The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.

Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

NASA Astrophysics Data System (ADS)

Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

2018-05-01

A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out during the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation there. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to carry out a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus scheme had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to determine the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space was performed. Consequently, the combinations of parameterizations including the Tiedtke cumulus scheme were again the best in categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

Evaluating the impacts of agricultural land management practices on water resources: A probabilistic hydrologic modeling approach

PubMed

Prada, A F; Chu, M L; Guzman, J A; Moriasi, D N

2017-05-15

Evaluating the effectiveness of agricultural land management practices in minimizing environmental impacts using models is challenged by the presence of inherent uncertainties during the model development stage. One issue faced during the model development stage is the uncertainty involved in model parameterization. Using a single optimized set of parameters (one snapshot) to represent baseline conditions of the system limits the applicability and robustness of the model to properly represent future or alternative scenarios. The objective of this study was to develop a framework that facilitates model parameter selection while evaluating uncertainty to assess the impacts of land management practices at the watershed scale. The model framework was applied to the Lake Creek watershed located in southwestern Oklahoma, USA. A two-step probabilistic approach was implemented to parameterize the Agricultural Policy/Environmental eXtender (APEX) model using global uncertainty and sensitivity analysis to estimate the full spectrum of total monthly water yield (WYLD) and total monthly nitrogen loads (N) in the watershed under different land management practices. Twenty-seven models were found to represent the baseline scenario, in which uncertainty of up to 29% and 400% in WYLD and N, respectively, is plausible. Changing the land cover to pasture manifested the highest decrease in N, of up to 30% for full pasture coverage, while changing to full winter wheat cover can increase N by up to 11%. The methodology developed in this study was able to quantify the full spectrum of system responses, the uncertainty associated with them, and the most important parameters that drive their variability. Results from this study can be used to develop strategic decisions on the risks and tradeoffs associated with different management alternatives that aim to increase productivity while also minimizing their environmental impacts.
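The two-step idea of sampling parameters globally and then retaining every set that reproduces the baseline observation can be sketched generically as below. The toy response function, parameter ranges, "observed" value, and tolerance are all assumptions for illustration and have nothing to do with the APEX model itself.

    import numpy as np

    rng = np.random.default_rng(42)

    def toy_water_yield(curve_number, soil_depth, rain=900.0):
        # Stand-in response surface, NOT the APEX model; units are arbitrary
        runoff = rain * (curve_number / 100.0) ** 2
        baseflow = 0.15 * rain * (soil_depth / 1.5)
        return runoff + baseflow

    # Step 1: global sampling of the uncertain parameters
    n = 5000
    cn = rng.uniform(55, 95, n)
    depth = rng.uniform(0.5, 2.5, n)
    wyld = toy_water_yield(cn, depth)

    # Step 2: keep every parameter set that reproduces the "observed" baseline
    observed, tolerance = 620.0, 0.10
    behavioral = np.abs(wyld - observed) / observed <= tolerance
    print(f"{behavioral.sum()} behavioral sets out of {n}")
    print("WYLD spread of behavioral sets:",
          round(wyld[behavioral].min(), 1), "-", round(wyld[behavioral].max(), 1))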
Estimating surface fluxes over middle and upper streams of the Heihe River Basin with ASTER imagery

NASA Astrophysics Data System (ADS)

Ma, W.; Ma, Y.; Hu, Z.; Su, Z.; Wang, J.; Ishikawa, H.

2011-05-01

Land surface heat fluxes are essential measures of the strengths of land-atmosphere interactions involving energy, heat and water. Correct parameterization of these fluxes in climate models is critical. Despite their importance, state-of-the-art observation techniques cannot provide representative areal averages of these fluxes comparable to the model grid. Alternative methods of estimation are thus required. These alternative approaches use (satellite) observables of the land surface conditions. In this study, the Surface Energy Balance System (SEBS) algorithm was evaluated in a cold and arid environment, using land surface parameters derived from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data. Field observations and estimates from SEBS were compared in terms of net radiation flux (Rn), soil heat flux (G0), sensible heat flux (H) and latent heat flux (λE) over a heterogeneous land surface. As a case study, this methodology was applied to the experimental area of the Watershed Allied Telemetry Experimental Research (WATER) project, located on the mid-to-upstream sections of the Heihe River in northwest China. ASTER data acquired between 3 May and 4 June 2008, under clear-sky conditions, were used to determine the surface fluxes. Ground-based measurements of land surface heat fluxes were compared with values derived from the ASTER data. The results show that the derived surface variables and the land surface heat fluxes furnished by SEBS in different months over the study area are in good agreement with the observed land surface conditions in most cases, although some cases show poor results. SEBS can therefore be used to estimate turbulent heat fluxes with acceptable accuracy in areas with partial vegetation cover, except under such unfavorable conditions. Performing calculations with ground-based observational data for the parameterization in SEBS remains important future work. Nevertheless, the remote-sensing results can provide improved explanations of land surface fluxes over varying land coverage at greater spatial scales.
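A step common to flux-retrieval schemes of this kind is closing the surface energy balance, with latent heat flux obtained as the residual of net radiation, soil heat flux, and sensible heat flux. The sketch below shows only that bookkeeping, with made-up midday values; it is not the SEBS algorithm itself.

    def latent_heat_flux(rn, g0, h):
        """Latent heat flux (W m-2) as the residual of the surface energy balance."""
        return rn - g0 - h

    def evaporative_fraction(rn, g0, h):
        """Fraction of available energy (Rn - G0) used for evaporation."""
        return latent_heat_flux(rn, g0, h) / (rn - g0)

    # Example values loosely typical of a midday satellite overpass (made up)
    print(latent_heat_flux(rn=520.0, g0=80.0, h=210.0))            # 230.0 W m-2
    print(round(evaporative_fraction(520.0, 80.0, 210.0), 3))      # ~0.523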
An Exploration of Inter-District Sharing Alternatives for Belle Plaine and HLV. Final Report

ERIC Educational Resources Information Center

Olsen, Scott A.

In an effort to address the costs incurred by declining enrollments while still maintaining quality education, two small Iowa school districts explored 84 sharing alternatives. The utility (fitness/usefulness) of each alternative was rated by the school board, school staff, and a community sample from each district via a specially constructed…

Sensitivity of Glacier Mass Balance Estimates to the Selection of WRF Cloud Microphysics Parameterization in the Indus River Watershed

NASA Astrophysics Data System (ADS)

Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.

2017-12-01

Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc.
While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a-1. The different schemes also impact the radiative budgets over the glacierized areas. Our results show that glacier MB estimates can differ by up to 45% depending on the chosen cloud microphysics scheme. These findings highlight the need to better account for uncertainties in meteorological inputs into glacier energy and mass balance models.

Using Machine learning method to estimate Air Temperature from MODIS over Berlin

NASA Astrophysics Data System (ADS)

Marzban, F.; Preusker, R.; Sodoudi, S.; Taheri, H.; Allahbakhshi, M.

2015-12-01

Land Surface Temperature (LST) is defined as the temperature of the interface between the Earth's surface and its atmosphere; it is thus a critical variable for understanding land-atmosphere interactions and a key parameter in meteorological and hydrological studies, as it is involved in the energy fluxes. Air temperature (Tair) is one of the most important input variables in spatially distributed hydrological and ecological models. The estimation of near-surface air temperature is useful for a wide range of applications. Some applications, such as traffic or energy management, require Tair data at high spatial and temporal resolution at two meters height above the ground (T2m), sometimes in near-real-time. Thus, a parameterization based on boundary layer physical principles was developed that determines the air temperature from remote sensing data (MODIS). Tair is commonly obtained from synoptic measurements at weather stations. However, the derivation of near-surface air temperature from satellite-derived LST is far from straightforward. T2m is not driven directly by the sun but indirectly by LST; thus T2m can be parameterized from the LST and other variables such as albedo, NDVI, and water vapor. Most previous studies have focused on estimating T2m based on simple and advanced statistical approaches, temperature-vegetation-index approaches, and energy-balance approaches, but the main objective of this research is to explore the relationships between T2m and LST in Berlin by using artificial intelligence methods, with the aim of studying the key variables that allow us to establish suitable techniques to obtain Tair from satellite products and ground data.
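As a rough stand-in for the kind of satellite-to-air-temperature regression described above, the sketch below fits a small feedforward network to synthetic predictors. The predictor set, the synthetic relation, and the use of scikit-learn's MLPRegressor (trained with Adam rather than the Levenberg-Marquardt algorithm used in the study) are all assumptions for illustration.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 2000
    # Synthetic stand-ins for satellite-derived predictors (LST in K, NDVI, albedo)
    lst = rng.uniform(270.0, 310.0, n)
    ndvi = rng.uniform(0.0, 0.8, n)
    albedo = rng.uniform(0.05, 0.35, n)
    # Synthetic "station" 2-m air temperature in deg C (arbitrary relation + noise)
    t2m = 0.85 * (lst - 273.15) + 4.0 * ndvi - 6.0 * albedo + rng.normal(0.0, 1.0, n)

    X = np.column_stack([lst, ndvi, albedo])
    X_train, X_test, y_train, y_test = train_test_split(X, t2m, random_state=0)

    # Small feedforward network with standardized inputs
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                       random_state=0))
    model.fit(X_train, y_train)
    print("R^2 on held-out samples:", round(model.score(X_test, y_test), 3))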
Second, an attempt was made to identify an individual mix of attributes that reveals a particular pattern, to better understand the variation of T2m during day and nighttime over different areas of Berlin. For this purpose, a three-layer feedforward neural network trained with the Levenberg-Marquardt algorithm (LMA) is considered. Considering the different relationships between T2m and LST for different land types enables a better parameterization for determining the best non-linear relation between LST and T2m over Berlin during day and nighttime. The results of the study will be presented and discussed.

Testing a common ice-ocean parameterization with laboratory experiments

NASA Astrophysics Data System (ADS)

McConnochie, C. D.; Kerr, R. C.

2017-07-01

Numerical models of ice-ocean interactions typically rely upon a parameterization for the transport of heat and salt to the ice face that has not been satisfactorily validated by observational or experimental data. We compare laboratory experiments of ice-saltwater interactions to a common numerical parameterization and find a significant disagreement in the dependence of the melt rate on the fluid velocity. We suggest a resolution to this disagreement based on a theoretical analysis of the boundary layer next to a vertical heated plate, which results in a threshold fluid velocity of approximately 4 cm/s at driving temperatures between 0.5 and 4°C, above which the form of the parameterization should be valid.

Parameterization of light absorption by components of seawater in optically complex coastal waters of the Crimea Peninsula (Black Sea)

PubMed

Dmitriev, Egor V; Khomenko, Georges; Chami, Malik; Sokolov, Anton A; Churilova, Tatyana Y; Korotaev, Gennady K

2009-03-01

The absorption of sunlight by oceanic constituents significantly contributes to the spectral distribution of the water-leaving radiance. Here it is shown that current parameterizations of absorption coefficients do not apply to the optically complex waters of the Crimea Peninsula. Based on in situ measurements, parameterizations of phytoplankton, nonalgal, and total particulate absorption coefficients are proposed. Their performance is evaluated using a log-log regression combined with a low-pass filter and the nonlinear least-square method. Statistical significance of the estimated parameters is verified using the bootstrap method. The parameterizations are relevant for chlorophyll a concentrations ranging from 0.45 up to 2 mg/m³.
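Parameterizations of this type are commonly expressed as power laws fitted in log-log space, for example a_ph = A * Chl^B at a given wavelength. The sketch below shows such a fit on made-up data; the numbers are not the Crimea Peninsula measurements, and the form is only a generic example of the log-log regression step.

    import numpy as np

    # Illustrative paired values: chlorophyll a (mg m-3) and phytoplankton
    # absorption at 440 nm (m-1); these numbers are invented for the example
    chl = np.array([0.45, 0.6, 0.8, 1.0, 1.3, 1.7, 2.0])
    a_ph = np.array([0.021, 0.026, 0.033, 0.039, 0.048, 0.059, 0.067])

    # Log-log regression for the power law a_ph = A * chl**B
    B, logA = np.polyfit(np.log(chl), np.log(a_ph), 1)
    A = np.exp(logA)
    print(f"a_ph(440) ~= {A:.3f} * chl^{B:.2f}")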
A second-order Budyko-type parameterization of land-surface hydrology

NASA Technical Reports Server (NTRS)

Andreou, S. A.; Eagleson, P. S.

1982-01-01

A simple, second-order parameterization of the water fluxes at a land surface, for use as the appropriate boundary condition in general circulation models of the global atmosphere, was developed. The derived parameterization incorporates the high nonlinearities in the relationship between the near-surface soil moisture and the evaporation, runoff and percolation fluxes. Based on the one-dimensional statistical-dynamic derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. A comparison of the suggested parameterization is made with other existing techniques and available measurements. A thermodynamic coupling is applied in order to obtain estimates of the ground surface temperature.
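The "second-order" character of such a scheme can be made explicit with a generic Taylor expansion of an annual-mean flux about the long-term average soil moisture; the notation below is illustrative rather than the paper's own:

    E(s) \approx E(\bar{s})
      + \left.\frac{dE}{ds}\right|_{s=\bar{s}} (s - \bar{s})
      + \frac{1}{2}\left.\frac{d^{2}E}{ds^{2}}\right|_{s=\bar{s}} (s - \bar{s})^{2}

where E is an evaporation (or runoff, or percolation) flux, s is the near-surface soil moisture, and \bar{s} is its long-term annual mean.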
Large ensemble and large-domain hydrologic modeling: Insights from SUMMA applications in the Columbia River Basin

NASA Astrophysics Data System (ADS)

Ou, G.; Nijssen, B.; Nearing, G. S.; Newman, A. J.; Mizukami, N.; Clark, M. P.

2016-12-01

The Structure for Unifying Multiple Modeling Alternatives (SUMMA) provides a unifying modeling framework for process-based hydrologic modeling by defining a general set of conservation equations for mass and energy, with the capability to incorporate multiple choices for spatial discretizations and flux parameterizations. In this study, we provide a first demonstration of large-scale hydrologic simulations using SUMMA through an application to the Columbia River Basin (CRB) in the northwestern United States and Canada for a multi-decadal simulation period. The CRB is discretized into 11,723 hydrologic response units (HRUs) according to the United States Geological Survey Geospatial Fabric. The soil parameters are derived from the Natural Resources Conservation Service Soil Survey Geographic (SSURGO) Database. The land cover parameters are based on the National Land Cover Database from the year 2001 created by the Multi-Resolution Land Characteristics (MRLC) Consortium. The forcing data, including hourly air pressure, temperature, specific humidity, wind speed, precipitation, and shortwave and longwave radiation, are based on Phase 2 of the North American Land Data Assimilation System (NLDAS-2) and averaged for each HRU. The simulation results are compared to simulations with the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS). We are particularly interested in SUMMA's capability to mimic the behaviors of the other two models through the selection of appropriate model parameterizations in SUMMA.

A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more

PubMed

Rivas, Elena; Lang, Raymond; Eddy, Sean R

2012-02-01

The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.
Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer

NASA Astrophysics Data System (ADS)

Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.

2016-12-01

Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed based on relatively simple conceptual representations in favor of easier model calibration. As more complexities are modeled, e.g., by adding more layers and/or zones, or by introducing transient processes, more parameters have to be estimated, and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore the use of an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics to high-resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., 'AQ' (aquifer material), 'MAQ' (marginal aquifer material), 'PCM' (partially confining material), and 'CM' (confining material), are simulated, with the hydraulic properties of each material type as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years.
The results show that the TP-based conceptualization enables much more accurate and robust calibration/simulation than that based on conventional deterministic layer/zone-based conceptual representations.

Electron Impact Ionization: A New Parameterization for 100 eV to 1 MeV Electrons

NASA Technical Reports Server (NTRS)

Fang, Xiaohua; Randall, Cora E.; Lummerzheim, Dirk; Solomon, Stanley C.; Mills, Michael J.; Marsh, Daniel; Jackman, Charles H.; Wang, Wenbin; Lu, Gang

2008-01-01

Low, medium and high energy electrons can penetrate to the thermosphere (90-400 km; 55-240 miles) and mesosphere (50-90 km; 30-55 miles). These precipitating electrons ionize that region of the atmosphere, creating positively charged atoms and molecules and knocking off other negatively charged electrons. The precipitating electrons also create nitrogen-containing compounds along with other constituents. Since the electron precipitation amounts change within minutes, it is necessary to have a rapid method of computing the ionization and production of nitrogen-containing compounds for inclusion in computationally demanding global models. A new methodology has been developed which parameterizes a more detailed model computation of the ionizing impact of precipitating electrons over the very large range of 100 eV up to 1,000,000 eV. This new parameterization method is more accurate than a previous parameterization scheme when compared with the more detailed model computation. Global models at the National Center for Atmospheric Research will use this new parameterization method in the near future.

Parameterization of Shortwave Cloud Optical Properties for a Mixture of Ice Particle Habits for use in Atmospheric Models

NASA Technical Reports Server (NTRS)

Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

2002-01-01

Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the effective particle size of a mixture of ice habits, the ice water amount, and spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits.
It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of individual ice habits.

Anisotropic shear dispersion parameterization for ocean eddy transport

NASA Astrophysics Data System (ADS)

Reckinger, Scott; Fox-Kemper, Baylor

2015-11-01

The effects of mesoscale eddies are universally treated isotropically in global ocean general circulation models. However, observations and simulations demonstrate that the mesoscale processes that the parameterization is intended to represent, such as shear dispersion, are typified by strong anisotropy. We extend the Gent-McWilliams/Redi mesoscale eddy parameterization to include anisotropy and test the effects of varying levels of anisotropy in 1-degree Community Earth System Model (CESM) simulations. Anisotropy has many effects on the simulated climate, including a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, impacts on the meridional overturning circulation and ocean energy and tracer uptake, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. A process-based parameterization to approximate the effects of unresolved shear dispersion is also used to set the strength and direction of anisotropy. The shear dispersion parameterization is similar to drifter observations in the spatial distribution of diffusivity and to high-resolution model diagnosis in the distribution of eddy flux orientation.

A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone

NASA Astrophysics Data System (ADS)

Filipot, J.-F.; Ardhuin, F.

2012-11-01

A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines basic physical quantities of wave breaking, namely, the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale.
Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep or shallow water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.

Parameterized spectral distributions for meson production in proton-proton collisions

NASA Technical Reports Server (NTRS)

Schneider, John P.; Norbury, John W.; Cucinotta, Francis A.

1995-01-01

Accurate semiempirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations, which depend on both the outgoing meson parallel momentum and the incident proton kinetic energy, can be reduced to very simple analytical formulas suitable for cosmic ray transport through spacecraft walls, interstellar space, the atmosphere, and meteorites.

Implementation of a Parameterization Framework for Cybersecurity Laboratories

DTIC Science & Technology

2017-03-01

... provide the designer of laboratory exercises with tools to parameterize labs for each student, and automate some aspects of the grading of laboratory exercises. ... support might assist the designer of laboratory exercises to achieve the following? 1.
Verify that students performed lab exercises, with some …

Relativistic three-dimensional Lippmann-Schwinger cross sections for space radiation applications

NASA Astrophysics Data System (ADS)

Werneth, C. M.; Xu, X.; Norman, R. B.; Maung, K. M.

2017-12-01

Radiation transport codes require accurate nuclear cross sections to compute particle fluences inside shielding materials. The Tripathi semi-empirical reaction cross section, which includes over 60 parameters tuned to nucleon-nucleus (NA) and nucleus-nucleus (AA) data, has been used in many of the world's best-known transport codes. Although this parameterization fits well to reaction cross section data, the predictive capability of any parameterization is questionable when it is used beyond the range of the data to which it was tuned. Using uncertainty analysis, it is shown that a relativistic three-dimensional Lippmann-Schwinger (LS3D) equation model based on Multiple Scattering Theory (MST), which uses five parameterizations (three fundamental parameterizations to nucleon-nucleon (NN) data and two nuclear charge density parameterizations), predicts NA and AA reaction cross sections as well as the Tripathi cross section parameterization for reactions in which the kinetic energy of the projectile in the laboratory frame (TLab) is greater than 220 MeV/n. The relativistic LS3D model has the additional advantage of being able to predict highly accurate total and elastic cross sections.
Consequently, it is recommended that the relativistic LS3D model be used for space radiation applications in which TLab > 220 MeV/n.

How to assess the impact of a physical parameterization in simulations of moist convection?

NASA Astrophysics Data System (ADS)

Grabowski, Wojciech

2017-04-01

A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or a large-eddy simulation model) consists of a fluid flow solver combined with required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigate the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation will present a novel modeling methodology, piggybacking, that allows studying the impact of a physical parameterization on cloud dynamics with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.

An Exploration of Alternative Scoring Methods Using Curriculum-Based Measurement in Early Writing

ERIC Educational Resources Information Center

Allen, Abigail A.; Poch, Apryl L.; Lembke, Erica S.

2018-01-01

This manuscript describes two empirical studies of alternative scoring procedures used with curriculum-based measurement in writing (CBM-W). Study 1 explored the technical adequacy of a trait-based rubric in first grade.
Study 2 explored the technical adequacy of a trait-based rubric, production-dependent, and production-independent scores in…

Sensitivity of Cirrus and Mixed-phase Clouds to the Ice Nuclei Spectra in McRAS-AC: Single Column Model Simulations

NASA Technical Reports Server (NTRS)

Betancourt, R. Morales; Lee, D.; Oreopoulos, L.; Sud, Y. C.; Barahona, D.; Nenes, A.

2012-01-01

The salient features of mixed-phase and ice clouds in a GCM cloud scheme are examined using the ice formation parameterizations of Liu and Penner (LP) and Barahona and Nenes (BN). The performance of the LP and BN ice nucleation parameterizations was assessed in the GEOS-5 AGCM using the McRAS-AC cloud microphysics framework in single column mode. Four-dimensional assimilated data from the intensive observation period of the ARM TWP-ICE campaign were used to drive the fluxes and lateral forcing. Simulation experiments were established to test the impact of each parameterization on the resulting cloud fields. Three commonly used IN spectra were utilized in the BN parameterization to describe the availability of IN for heterogeneous ice nucleation. The results show large similarities in the cirrus cloud regime between all the schemes tested, in which ice crystal concentrations were within a factor of 10 regardless of the parameterization used. In mixed-phase clouds there are some persistent differences in cloud particle number concentration and size, as well as in cloud fraction, ice water mixing ratio, and ice water path. Contact freezing in the simulated mixed-phase clouds contributed to transferring liquid to ice efficiently, so that on average the clouds were fully glaciated at T of approximately 260 K, irrespective of the ice nucleation parameterization used. Comparisons of simulated ice water path to available satellite-derived observations were also performed, finding that all the schemes tested with the BN parameterization predicted average values of IWP within plus or minus 15% of the observations.

Impacts of subgrid-scale orography parameterization on simulated atmospheric fields over Korea using a high-resolution atmospheric forecast model

NASA Astrophysics Data System (ADS)

Lim, Kyo-Sun Sunny; Lim, Jong-Myoung; Shin, Hyeyum Hailey; Hong, Jinkyu; Ji, Young-Yong; Lee, Wanno

2018-06-01

A substantial over-prediction bias at low-to-moderate wind speeds in the Weather Research and Forecasting (WRF) model has been reported in previous studies. Low-level wind fields play an important role in the dispersion of air pollutants, including radionuclides, in a high-resolution WRF framework. By implementing two subgrid-scale orography parameterizations (Jimenez and Dudhia in J Appl Meteorol Climatol 51:300-316, 2012; Mass and Ovens in WRF model physics: problems, solutions and a new paradigm for progress.
Preprints, 2010 WRF Users' Workshop, NCAR, Boulder, Colo. http://www.mmm.ucar.edu/wrf/users/workshops/WS2010/presentations/session%204/4-1_WRFworkshop2010Final.pdf, 2010), we compared the performance of the parameterizations and sought to enhance the forecast skill for low-level wind fields over the central western part of South Korea. Even though both subgrid-scale orography parameterizations significantly alleviated the positive bias in 10-m wind speed, the parameterization by Jimenez and Dudhia showed better forecast skill for wind speed under our modeling configuration. Implementation of the subgrid-scale orography parameterizations in the model did not affect the forecast skill for other meteorological fields, including 10-m wind direction. Our study also brought up the problem of a discrepancy in the definition of "10-m" wind between model physics parameterizations and observations, which can cause overestimated winds in model simulations. The overestimation was larger in stable conditions than in unstable conditions, indicating that the weak diurnal cycle in the model could be attributed to this representation error.

Finescale parameterizations of energy dissipation in a region of strong internal tides and sheared flow, the Lucky-Strike segment of the Mid-Atlantic Ridge

NASA Astrophysics Data System (ADS)

Pasquet, Simon; Bouruet-Aubertot, Pascale; Reverdin, Gilles; Turnherr, Andreas; St. Laurent, Louis

2016-06-01

The relevance of finescale parameterizations of the dissipation rate of turbulent kinetic energy is addressed using finescale and microstructure measurements collected in the Lucky Strike segment of the Mid-Atlantic Ridge (MAR). There, high-amplitude internal tides and a strongly sheared mean flow sustain a high level of dissipation and turbulent mixing. Two sets of parameterizations are considered: the first (Gregg, 1989; Kunze et al., 2006) were derived to estimate the dissipation rate of turbulent kinetic energy induced by internal wave breaking, while the second aims to estimate dissipation induced by shear instability of a strongly sheared mean flow and is a function of the Richardson number (Kunze et al., 1990; Polzin, 1996). The latter parameterization has low skill in reproducing the observed dissipation rate when shear-unstable events are resolved, presumably because there is no scale separation between the duration of unstable events and the inverse growth rate of unstable billows. Instead, GM-based parameterizations were found to be relevant, although slight biases were observed. Part of these biases result from the small value of the upper vertical wavenumber integration limit in the computation of shear variance in the Kunze et al. (2006) parameterization, which does not take into account the internal wave signal at high vertical wavenumbers.
We showed that significant improvement is obtained when the upper integration limit is set using a signal-to-noise ratio criterion, and that the spatial structure of dissipation rates is reproduced with this parameterization.

Sensitivity of CONUS Summer Rainfall to the Selection of Cumulus Parameterization Schemes in NU-WRF Seasonal Simulations

NASA Technical Reports Server (NTRS)

Iguchi, Takamichi; Tao, Wei-Kuo; Wu, Di; Peters-Lidard, Christa; Santanello, Joseph A.; Kemp, Eric; Tian, Yudong; Case, Jonathan; Wang, Weile; Ferraro, Robert

2017-01-01

This study investigates the sensitivity of daily rainfall rates in regional seasonal simulations over the contiguous United States (CONUS) to different cumulus parameterization schemes. Daily rainfall fields were simulated at 24-km resolution using the NASA-Unified Weather Research and Forecasting (NU-WRF) Model for June-August 2000. Four cumulus parameterization schemes and two options for shallow cumulus components in a specific scheme were tested. The spread in the domain-mean rainfall rates across the parameterization schemes was generally consistent between the entire CONUS and most subregions. The selection of the shallow cumulus component in a specific scheme had more impact than that of the four cumulus parameterization schemes. Regional variability in the performance of each scheme was assessed by calculating optimally weighted ensembles that minimize full root-mean-square errors against reference datasets. The spatial pattern of the seasonally averaged rainfall was insensitive to the selection of cumulus parameterization over mountainous regions because of the topographical pattern constraint, so that the simulation errors there were mostly attributed to the overall bias. In contrast, the spatial patterns over the Great Plains regions as well as the temporal variation over most parts of the CONUS were relatively sensitive to the cumulus parameterization selection. Overall, adopting a single simulation result was preferable to generating a better ensemble for the seasonally averaged daily rainfall simulation, as long as their overall biases had the same positive or negative sign. However, an ensemble of multiple simulation results was more effective in reducing errors when temporal variation was also considered.
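The optimally weighted ensemble mentioned above amounts to a least-squares problem with a sum-to-one constraint on the member weights. The sketch below solves that small constrained problem directly through its KKT system; the synthetic "truth" and member series are placeholders, not the NU-WRF simulations or reference datasets.

    import numpy as np

    rng = np.random.default_rng(3)
    n_time, n_members = 90, 4
    truth = rng.normal(3.0, 1.0, n_time)                                    # reference daily series
    members = truth[:, None] + rng.normal(0.3, 0.8, (n_time, n_members))    # biased members

    # Minimize ||members @ w - truth||^2 subject to sum(w) = 1 via the KKT system
    A, y = members, truth
    kkt = np.block([[2 * A.T @ A, np.ones((n_members, 1))],
                    [np.ones((1, n_members)), np.zeros((1, 1))]])
    rhs = np.concatenate([2 * A.T @ y, [1.0]])
    w = np.linalg.solve(kkt, rhs)[:n_members]
    print("optimal weights:", np.round(w, 3))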
However, an ensemble of multiple simulations was more effective at reducing errors when temporal variation was also considered.

Running GCM physics and dynamics on different grids: Algorithm and tests

NASA Astrophysics Data System (ADS)

Molod, A.

2006-12-01

The major drawback in the use of sigma coordinates in atmospheric GCMs, namely the error in the pressure gradient term near sloping terrain, leaves the use of eta coordinates as an important alternative. A central disadvantage of an eta coordinate, the inability to retain fine resolution in the vertical as the surface rises above sea level, is addressed here. An 'alternate grid' technique is presented which allows the tendencies of state variables due to the physical parameterizations to be computed on a vertical grid (the 'physics grid') which retains fine resolution near the surface, while the remaining terms in the equations of motion are computed using an eta coordinate (the 'dynamics grid') with coarser vertical resolution. As a simple test of the technique, a set of perpetual equinox experiments using a simplified lower boundary condition with no land and no topography were performed. The results show that for both low and high resolution alternate grid experiments, much of the benefit of increased vertical resolution for the near-surface meridional wind (and mass streamfield) can be realized by enhancing the vertical resolution of the 'physics grid' in the manner described here. In addition, approximately half of the increase in zonal jet strength seen with increased vertical resolution can be realized using the 'alternate grid' technique. A pair of full GCM experiments with realistic lower boundary conditions and topography were also performed. It is concluded that the use of the 'alternate grid' approach offers a promising way forward to alleviate a central problem associated with the use of the eta coordinate in atmospheric GCMs.

f(R) gravity modifications: from the action to the data

NASA Astrophysics Data System (ADS)

Lazkoz, Ruth; Ortiz-Baños, María; Salzano, Vincenzo

2018-03-01

It is a very well established matter nowadays that many modified gravity models can offer a sound alternative to General Relativity for the description of the accelerated expansion of the universe. But it is equally well known that no clear and sharp discrimination between any alternative theory and the classical one has been found so far. In this work, we attempt to formulate a different approach, starting from the general class of f(R) theories as test probes: we try to reformulate f(R) Lagrangian terms as explicit functions of the redshift, i.e., as f(z).
In this context, the f(R) counterpart of the consensus cosmological model, the ΛCDM model, can be written as a polynomial including just a constant and a third-order term. Starting from this result, we propose various polynomial parameterizations f(z), including new terms which would allow for deviations from ΛCDM, and we thoroughly compare them with observational data. While we have found no statistical preference for our proposals (even if some of them are as good as ΛCDM under Bayesian evidence comparison), we think that our novel approach could provide a different perspective for the development of new and observationally reliable alternative models of gravity.

Thermal Signature Identification System (TheSIS)

NASA Technical Reports Server (NTRS)

Merritt, Scott; Bean, Brian

2015-01-01

We characterize both nonlinear and high-order linear responses of fiber-optic and optoelectronic components using spread spectrum temperature cycling methods. This Thermal Signature Identification System (TheSIS) provides much more detail than conventional narrowband or quasi-static temperature profiling methods. This detail allows us to match components more thoroughly, detect subtle reversible shifts in performance, and investigate the cause of instabilities or irreversible changes. In particular, we create parameterized models of athermal fiber Bragg gratings (FBGs), delay line interferometers (DLIs), and distributed feedback (DFB) lasers, then subject the alternative models to selection via the Akaike Information Criterion (AIC). Detailed pairing of components, e.g. FBGs, is accomplished by means of weighted distance metrics or norms, rather than on the basis of a single parameter, such as center wavelength.
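An illustrative aside on the TheSIS record above: selection via the Akaike Information Criterion amounts to scoring each candidate parameterized model with AIC = 2k - 2 ln(L) (or, for least-squares fits, 2k + n ln(RSS/n)) and keeping the model with the lowest score. The minimal sketch below shows only that generic procedure; the synthetic "thermal response" data and the polynomial candidate models are illustrative assumptions, not the instrument's actual models.

    import numpy as np

    def aic_least_squares(k, rss, n):
        # AIC for a least-squares fit with Gaussian residuals (constants dropped):
        # AIC = 2k + n * ln(RSS / n), where k counts fitted parameters.
        return 2 * k + n * np.log(rss / n)

    # Placeholder "thermal response" curve standing in for a temperature sweep.
    t = np.linspace(0.0, 1.0, 200)
    y = 1.0 + 0.5 * t + 0.2 * t**2 + np.random.default_rng(0).normal(0.0, 0.05, t.size)

    candidates = {"linear": 1, "quadratic": 2, "cubic": 3}   # polynomial degree per model
    scores = {}
    for name, degree in candidates.items():
        coeffs = np.polyfit(t, y, degree)                    # fit the candidate model
        rss = float(np.sum((np.polyval(coeffs, t) - y) ** 2))
        scores[name] = aic_least_squares(degree + 1, rss, t.size)

    best = min(scores, key=scores.get)                       # lowest AIC wins
    print(scores, "->", best)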
Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

NASA Astrophysics Data System (ADS)

Han, Xiaobao; Li, Huacong; Jia, Qiusheng

2017-12-01

For dynamic decoupling of polynomial linear parameter-varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved, which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

Strain Interactions as a Mechanism for Dominant Strain Alternation and Incidence Oscillation in Infectious Diseases: Seasonal Influenza as a Case Study

PubMed Central

Zhang, Xu-Sheng

2015-01-01

Background: Many human infectious diseases are caused by pathogens that have multiple strains and show oscillation in infection incidence and alternation of dominant strains, which together are referred to as epidemic cycling. Understanding the underlying mechanisms of epidemic cycling is essential for forecasting outbreaks of epidemics and therefore important for public health planning. Current theoretical effort is mainly focused on the factors that are extrinsic to the pathogens themselves ("extrinsic factors") such as environmental variation and seasonal change in human behaviours and susceptibility. Nevertheless, co-circulation of different strains of a pathogen is usually observed, and strains therefore interact with one another within concurrent infection and during sequential infection. The existence of these intrinsic factors is common and may be involved in the generation of epidemic cycling of multi-strain pathogens.

Methods and Findings: To explore the mechanisms that are intrinsic to the pathogens themselves ("intrinsic factors") for epidemic cycling, we consider a multi-strain SIRS model including cross-immunity and infectivity enhancement, and use seasonal influenza as an example to parameterize the model. The Kullback-Leibler information distance was calculated to measure the match between the model outputs and the typical features of seasonal flu (an outbreak duration of 11 weeks and an annual attack rate of 15%). Results show that interactions among strains can generate seasonal influenza with these characteristic features, provided that: the infectivity of a single strain within concurrent infection is enhanced 2-7 times that within a single infection; cross-immunity as a result of past infection is 0.5-0.8 and lasts 2-9 years; while other parameters are within their widely accepted ranges (such as a 2-3 day infectious period and a basic reproductive number of 1.8-3.0). Moreover, the observed alternation of the dominant strain among epidemics emerges naturally from the best-fit model. Alternative modelling that also includes seasonal forcing in transmissibility shows that both external mechanisms (i.e., seasonal forcing) and the intrinsic mechanisms (i.e., strain interactions) are equally able to generate the observed time series in seasonal flu.

Conclusions: The intrinsic mechanism of strain interactions alone can generate the observed patterns of seasonal flu epidemics, but according to the Kullback-Leibler information distance the importance of extrinsic mechanisms cannot be excluded. The intrinsic mechanism illustrated here to explain seasonal flu may also apply to other infectious diseases caused by polymorphic pathogens.

PMID:26562668
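An illustrative aside on the record above: the Kullback-Leibler information distance used to score how closely model output matches the typical seasonal-flu features can be computed directly from two discretized distributions. The sketch below uses made-up weekly incidence profiles purely as placeholders; it is not the study's data or code.

    import numpy as np

    def kl_distance(p, q, eps=1e-12):
        # Kullback-Leibler information distance D(p || q) = sum_i p_i * ln(p_i / q_i),
        # computed after normalizing both inputs to probability distributions.
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    # Placeholder weekly incidence profiles over one season (arbitrary units).
    observed = np.array([1, 2, 5, 10, 18, 22, 18, 10, 6, 4, 2, 1, 1], dtype=float)
    simulated = np.array([1, 3, 7, 14, 20, 19, 15, 9, 5, 3, 2, 1, 1], dtype=float)

    print("KL distance:", kl_distance(observed, simulated))   # smaller means a closer match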
Resolution and Trade-offs in Finite Fault Inversions for Large Earthquakes Using Teleseismic Signals (Invited)

NASA Astrophysics Data System (ADS)

Lay, T.; Ammon, C. J.

2010-12-01

An unusually large number of widely distributed great earthquakes have occurred in the past six years, with extensive data sets of teleseismic broadband seismic recordings being available in near-real time for each event. Numerous research groups have implemented finite-fault inversions that utilize the rapidly accessible teleseismic recordings, and slip models are regularly determined and posted on websites for all major events. The source inversion validation project has already demonstrated that for events of all sizes there is often significant variability in models for a given earthquake. Some of these differences can be attributed to variations in data sets and procedures used for including signals with very different bandwidth and signal characteristics in joint inversions. Some differences can also be attributed to the choice of velocity structure and data weighting. However, our experience is that some of the primary causes of solution variability involve rupture model parameterization and imposed kinematic constraints such as rupture velocity and the subfault source time function description. In some cases it is viable to rapidly perform separate procedures such as teleseismic array back-projection or surface wave directivity analysis to reduce the uncertainties associated with rupture velocity, and it is possible to explore a range of subfault source parameterizations to place some constraints on which model features are robust. In general, many such tests are performed but not fully described, with single model solutions being posted or published and limited insight into solution confidence being conveyed. Using signals from recent great earthquakes in the Kuril Islands, Solomon Islands, Peru, Chile and Samoa, we explore issues of uncertainty and robustness of solutions that can be rapidly obtained by inversion of teleseismic signals. Formalizing uncertainty estimates remains a formidable undertaking, and some aspects of that challenge will be addressed.

Modeling particle nucleation and growth over northern California during the 2010 CARES campaign

NASA Astrophysics Data System (ADS)

Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.

2015-11-01

Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies.
This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) regional model, using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4, while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated the total particle number concentration for particle diameters greater than 10 nm by up to a factor of 2.5. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapor parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based on H2SO4 alone. At the CARES urban ground site, peak nucleation rates are predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times, as well as lower boundary-layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. We found that newly formed particles could explain up to 20-30% of predicted cloud condensation nuclei at 0.5% supersaturation, depending on location and the specific nucleation parameterization.
A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20-bin simulation, while reducing the associated computational cost by ~36%.

Application of new parameterizations of gas transfer velocity and their impact on regional and global marine CO2 budgets

NASA Astrophysics Data System (ADS)

Fangohr, Susanne; Woolf, David K.

2007-06-01

One of the dominant sources of uncertainty in the calculation of the air-sea flux of carbon dioxide on a global scale originates from the various parameterizations of the gas transfer velocity, k, that are in use. Whilst it is undisputed that most of these parameterizations have shortcomings and neglect processes which influence air-sea gas exchange but do not scale with wind speed alone, there is no general agreement about their relative accuracy. The most widely used parameterizations are based on non-linear functions of wind speed and, to a lesser extent, on sea surface temperature and salinity. Processes such as surface film damping and whitecapping are known to have an effect on air-sea exchange. More recently published parameterizations use friction velocity, sea surface roughness, and significant wave height. These new parameters can account to some extent for processes such as film damping and whitecapping and could potentially explain the spread of wind-speed-based transfer velocities published in the literature. We combine some of the principles of two recently published k parameterizations [Glover, D.M., Frew, N.M., McCue, S.J. and Bock, E.J., 2002. A multiyear time series of global gas transfer velocity from the TOPEX dual frequency, normalized radar backscatter algorithm. In: Donelan, M.A., Drennan, W.M., Saltzman, E.S., and Wanninkhof, R. (Eds.), Gas Transfer at Water Surfaces, Geophys. Monograph 127. AGU, Washington, DC, 325-331; Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. Tellus, 57B: 87-94] to calculate k as the sum of a linear function of the total mean square slope of the sea surface and a wave-breaking parameter. This separates contributions from direct and bubble-mediated gas transfer, as suggested by Woolf [Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. Tellus, 57B: 87-94], and allows us to quantify contributions from these two processes independently. We then apply our parameterization to a monthly TOPEX altimeter gridded 1.5° × 1.5° data set and compare our results to transfer velocities calculated using the popular wind-based k parameterizations by Wanninkhof [Wanninkhof, R., 1992. Relationship between wind speed and gas exchange over the ocean. J. Geophys. Res., 97: 7373-7382] and Wanninkhof and McGillis [Wanninkhof, R. and McGillis, W., 1999. A cubic relationship between air-sea CO2 exchange and wind speed. Geophys. Res. Lett., 26(13): 1889-1892]. We show that despite good agreement of the globally averaged transfer velocities, global and regional fluxes differ by up to 100%.
These discrepancies are a result of different spatio-temporal distributions of the processes involved in the parameterizations of k, indicating the importance of wave-field parameters and a need for further validation.

Mixing parametrizations for ocean climate modelling

NASA Astrophysics Data System (ADS)

Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir

2016-04-01

An algorithm is presented for splitting the total evolution equations for turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, the analytical solution, and the asymptotic behavior of the analytical solution. The experiments were performed with different mixing parameterizations for modelling the decadal variability of the Arctic and Atlantic climate with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with the split equations for turbulence characteristics is similar to contemporary differential turbulence models in its physical formulation, while its algorithm has high computational efficiency. Parameterizations using the split turbulence model make it possible to obtain a more adequate structure of temperature and salinity at decadal timescales, compared to the simpler Pacanowski-Philander (PP) turbulence parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step of the turbulence model leads to a better representation of ocean climate than the faster parameterization based on the asymptotic behavior of the analytical solution, while the computational efficiency remains almost unchanged relative to the simple PP parameterization. Using the PP parameterization in the circulation model yields a realistic simulation of density and circulation, but with violation of the T,S-relationships. This error is largely avoided with the proposed parameterizations containing the split turbulence model. A high sensitivity of the eddy-permitting circulation model to the definition of mixing is revealed, associated with significant changes of the density fields in the upper baroclinic ocean layer over the whole considered area. For instance, using the split turbulence parameterization instead of the PP algorithm increases the circulation velocity in the Gulf Stream and North Atlantic Current, and the subpolar cyclonic gyre in the North Atlantic and the Beaufort Gyre in the Arctic basin are reproduced more realistically. Considering the Prandtl number as a function of the Richardson number significantly increases the modelling quality.
The research was supported by the Russian Foundation for Basic Research (grant № 16-05-00534) and the Council on the Russian Federation President Grants (grant № MK-3241.2015.5).

Integrating laboratory and field data to quantify the immersion freezing ice nucleation activity of mineral dust particles

NASA Astrophysics Data System (ADS)

DeMott, P. J.; Prenni, A. J.; McMeeking, G. R.; Sullivan, R. C.; Petters, M. D.; Tobo, Y.; Niemand, M.; Möhler, O.; Snider, J. R.; Wang, Z.; Kreidenweis, S. M.

2015-01-01

Data from both laboratory studies and atmospheric measurements are used to develop an empirical parameterization for the immersion freezing activity of natural mineral dust particles. Measurements made with the Colorado State University (CSU) continuous flow diffusion chamber (CFDC) when processing mineral dust aerosols at a nominal 105% relative humidity with respect to water (RHw) are taken as a measure of the immersion freezing nucleation activity of particles. Ice-active frozen fractions vs. temperature for dusts representative of Saharan and Asian desert sources were consistent with similar measurements in atmospheric dust plumes for the limited set of comparisons available. The parameterization developed follows the form of one suggested previously for atmospheric particles of non-specific composition, quantifying ice nucleating particle concentrations as functions of temperature and the total number concentration of particles larger than 0.5 μm diameter. Such an approach does not explicitly account for surface area and time dependencies of ice nucleation, but it sufficiently encapsulates the activation properties for potential use in regional and global modeling simulations, and for possible application in developing remote sensing retrievals for ice nucleating particles. A calibration factor is introduced to account for the apparent underestimate (by a factor of approximately 3, on average) of the immersion freezing fraction of mineral dust particles for CSU CFDC data processed at an RHw of 105% vs. the maximum fractions active at higher RHw. Instrumental factors that affect activation behavior vs. RHw in CFDC instruments remain to be fully explored in future studies. Nevertheless, the use of this calibration factor is supported by comparison to ice activation data obtained for the same aerosols in Aerosol Interactions and Dynamics of the Atmosphere (AIDA) expansion chamber cloud parcel experiments. Further comparison of the new parameterization, including the calibration correction, to predictions of the immersion freezing surface active site density parameterization for mineral dust particles, developed separately from AIDA experimental data alone, shows excellent agreement for data collected in a descent through a Saharan aerosol layer.
These studies support the utility of laboratory measurements to obtain atmospherically relevant data on the ice nucleation properties of dust and other particle types, and suggest the suitability of considering all mineral dust as a single type of ice nucleating particle as a useful first-order approximation in numerical modeling investigations.
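An illustrative aside on the record above: parameterizations of this family predict ice nucleating particle (INP) concentrations from temperature and the number concentration of aerosol particles larger than 0.5 μm, typically via a power law in the aerosol concentration with a temperature-dependent exponent, multiplied by a calibration factor. The sketch below only illustrates that general form; the coefficient names and values (cf, alpha, beta, gamma, delta) are placeholders, not the published fit.

    import numpy as np

    def n_inp(temp_c, n_aer_gt05_cm3, cf=3.0,
              alpha=0.0, beta=1.25, gamma=0.46, delta=-11.6):
        # Generic DeMott-style immersion-freezing form (all coefficients are placeholders):
        #   n_INP(T) = cf * n_aer^(alpha*dT + beta) * exp(gamma*dT + delta)
        # where dT is the supercooling below 0 degC and n_aer is the number concentration
        # of particles larger than 0.5 um (per cm^3).
        d_t = -np.asarray(temp_c, dtype=float)
        return cf * n_aer_gt05_cm3 ** (alpha * d_t + beta) * np.exp(gamma * d_t + delta)

    # Example: 10 particles > 0.5 um per cm^3, evaluated at two supercooled temperatures.
    for t in (-20.0, -25.0):
        print(t, "degC ->", float(n_inp(t, 10.0)), "INP per liter (illustrative only)")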
Chemical mixtures in potable water in the U.S.

USGS Publications Warehouse

Ryker, Sarah J.

2014-01-01

In recent years, regulators have devoted increasing attention to health risks from exposure to multiple chemicals. In 1996, the US Congress directed the US Environmental Protection Agency (EPA) to study mixtures of chemicals in drinking water, with a particular focus on potential interactions affecting chemicals' joint toxicity. The task is complicated by the number of possible mixtures in drinking water and lack of toxicological data for combinations of chemicals. As one step toward risk assessment and regulation of mixtures, the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) have proposed to estimate mixtures' toxicity based on the interactions of individual component chemicals. This approach permits the use of existing toxicological data on individual chemicals, but still requires additional information on interactions between chemicals and environmental data on the public's exposure to combinations of chemicals. Large compilations of water-quality data have recently become available from federal and state agencies. This chapter demonstrates the use of these environmental data, in combination with the available toxicological data, to explore scenarios for mixture toxicity and develop priorities for future research and regulation. Occurrence data on binary and ternary mixtures of arsenic, cadmium, and manganese are used to parameterize the EPA and ATSDR models for each drinking water source in the dataset. The models' outputs are then mapped at county scale to illustrate the implications of the proposed models for risk assessment and rulemaking. For example, according to the EPA's interaction model, the levels of arsenic and cadmium found in US groundwater are unlikely to have synergistic cardiovascular effects in most areas of the country, but the same mixture's potential for synergistic neurological effects merits further study.
Similar analysis could, in future, be used to explore the implications of alternative risk models for the toxicity and interaction of complex mixtures, and to identify the communities with the highest and lowest expected value for regulation of chemical mixtures.

Subgrid-scale parameterization and low-frequency variability: a response theory approach

NASA Astrophysics Data System (ADS)

Demaeyer, Jonathan; Vannitsem, Stéphane

2016-04-01

Weather and climate models are limited in the range of spatial and temporal scales they can resolve. Due to the huge range of space and time scales involved in the Earth system's dynamics, the effects of many subgrid processes must be parameterized. These parameterizations have an impact on forecasts and projections, and they can also affect the low-frequency variability present in the system (such as that associated with ENSO or the NAO). An important question is therefore what impact stochastic parameterizations have on the low-frequency variability generated by the system and its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on Ruelle's response theory, proposed in Wouters and Lucarini (2012). We test this approach in the context of a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), for which part of the atmospheric modes is considered unresolved. A natural separation of the phase space into a slow invariant set and its fast complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, fluctuation and long-memory terms. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach to scale separation opens new avenues for subgrid-scale parameterizations in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory.
Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03003.

Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations

PubMed Central

Nigh, Gordon

2015-01-01

Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and the western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and the modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce - Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criterion and the estimated variance. Model parameterization had more influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance).
Factors related to the application of the model must be considered when selecting a model for use, as the best-fitting methods have the most barriers to application in terms of data and software requirements. PMID:25853472

Constraints to Dark Energy Using PADE Parameterizations

NASA Astrophysics Data System (ADS)

Rezaei, M.; Malekjani, M.; Basilakos, S.; Mehrabi, A.; Mota, D. F.

2017-07-01

We put constraints on dark energy (DE) properties using the PADE parameterization, and compare it to the same constraints using the Chevallier-Polarski-Linder (CPL) and ΛCDM parameterizations, at both the background and the perturbation levels. The DE equation-of-state parameter of the models is derived following the mathematical treatment of the PADE expansion. Unlike the CPL parameterization, the PADE approximation provides different forms of the equation-of-state parameter that avoid a divergence in the far future. Initially we perform a likelihood analysis in order to put constraints on the model parameters using solely background expansion data, and we find that all parameterizations are consistent with each other. Then, combining the expansion and the growth-rate data, we test the viability of the PADE parameterizations and compare them with the CPL and ΛCDM models, respectively. Specifically, we find that the growth rate of the current PADE parameterizations is lower than that of the ΛCDM model at low redshifts, while the differences among the models are negligible at high redshifts. In this context, we provide for the first time a growth index of linear matter perturbations in PADE cosmologies. Considering that DE is homogeneous, we recover the well-known asymptotic value of the growth index, namely γ_∞ = 3(w_∞ - 1)/(6w_∞ - 5), while in the case of clustered DE we obtain γ_∞ ≃ 3w_∞(3w_∞ - 5)/[(6w_∞ - 5)(3w_∞ - 1)]. Finally, we generalize the growth index analysis to the case where γ is allowed to vary with redshift, and we find that the form of γ(z) in the PADE parameterization extends that of the CPL and ΛCDM cosmologies, respectively.
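A small numerical aside on the record above: the asymptotic growth index expressions quoted there can be evaluated directly, and for w_∞ = -1 the homogeneous dark energy case recovers the familiar ΛCDM value 6/11 ≈ 0.545. A minimal check (the sample equation-of-state values are arbitrary):

    def gamma_inf_homogeneous(w):
        # Asymptotic growth index for homogeneous dark energy: 3(w - 1) / (6w - 5)
        return 3.0 * (w - 1.0) / (6.0 * w - 5.0)

    def gamma_inf_clustered(w):
        # Asymptotic growth index for clustered dark energy: 3w(3w - 5) / ((6w - 5)(3w - 1))
        return 3.0 * w * (3.0 * w - 5.0) / ((6.0 * w - 5.0) * (3.0 * w - 1.0))

    for w in (-1.0, -0.9, -1.1):    # arbitrary sample values of the asymptotic equation of state
        print(w, round(gamma_inf_homogeneous(w), 4), round(gamma_inf_clustered(w), 4))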
Electronegativity equalization method: parameterization and validation for organic molecules using the Merz-Kollman-Singh charge distribution scheme.

PubMed

Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav

2009-05-01

The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which the EEM parameters A(i), B(i), and the adjusting factor kappa are obtained, this approach can be used to calculate the average electronegativity and the charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used the methodology that was recently successfully applied to EEM parameterization for HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for the already parameterized elements C, H, N, O, and F, and we have also developed EEM parameters for S, Br, Cl, and Zn, which had not yet been parameterized for this level of theory and basis set. In the case of HF/6-31G* MK charges, we have developed EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, none of which had previously been parameterized for this level of theory and basis set. The obtained EEM parameters were verified by a previously developed validation procedure and used for charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges. 2008 Wiley Periodicals, Inc.

Comments on “A Unified Representation of Deep Moist Convection in Numerical Modeling of the Atmosphere. Part I”

DOE Office of Scientific and Technical Information (OSTI.GOV)

Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man

2015-06-01

Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ ≪ 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 - σ in order to unify the parameterization for the full range of model resolutions, so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 - σ built in, although this has not been recognized for the last forty years.
Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.

Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM

NASA Technical Reports Server (NTRS)

Yao, Mao-Sung; Cheng, Ye

2013-01-01

The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report (AR5). Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or with only the local turbulence parameterization, these two quantities improve their vertical structures, but the vertical transport of water vapor is still weak in the planetary boundary layers (PBLs); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures in all latitudes due to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of the turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in computing nonlocal transports, turbulent length scale, and PBL height, and it shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over oceans. This suggests that a cloud-producing scheme needs to be constructed in a framework that also takes the turbulence into consideration.

Modelling heterogeneous ice nucleation on mineral dust and soot with parameterizations based on laboratory experiments

NASA Astrophysics Data System (ADS)

Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.

2016-12-01

Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice-nucleating ability of different aerosol species (e.g. desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density.
Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.

Evaluation of Aerosol-cloud Interaction in the GISS Model E Using ARM Observations

NASA Technical Reports Server (NTRS)

DeBoer, G.; Bauer, S. E.; Toto, T.; Menon, Surabi; Vogelmann, A. M.

2013-01-01

Observations from the US Department of Energy's Atmospheric Radiation Measurement (ARM) program are used to evaluate the ability of the NASA GISS ModelE global climate model to reproduce observed interactions between aerosols and clouds. Included in the evaluation are comparisons of basic meteorology and aerosol properties, droplet activation, effective radius parameterizations, and surface-based evaluations of aerosol-cloud interactions (ACI). Differences between the simulated and observed ACI are generally large, but these differences may result partially from the vertical distribution of aerosol in the model rather than from the representation of the physical processes governing the interactions between aerosols and clouds. Compared to the current observations, ModelE often features elevated droplet concentrations for a given aerosol concentration, indicating that the activation parameterizations used may be too aggressive. Additionally, parameterizations for effective radius commonly used in models were tested using ARM observations, and there was no clearly superior parameterization for the cases reviewed here. This lack of consensus is demonstrated to result in potentially large, statistically significant differences to surface radiative budgets, should one parameterization be chosen over another.

Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization

NASA Astrophysics Data System (ADS)

Teixeira, J.

2015-12-01

Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this, turbulence and convection in the atmosphere have to be parameterized - i.e., equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently a variety of different models have been developed that attempt to simulate the atmosphere using variable resolution.
A key problem, however, is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, which not only is in itself a multi-scale parameterization but is also particularly well suited to dealing with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary-layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.

A Testbed for Model Development

NASA Astrophysics Data System (ADS)

Berry, J. A.; Van der Tol, C.; Kornfeld, A.

2014-12-01

Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to "connect" with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion and experience this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test canopy-scale parameterizations for solar-induced chlorophyll fluorescence, OCS exchange, and stomatal conductance. Examples of the data sets and procedures used to develop and test new parameterizations are presented.

Approaches for Subgrid Parameterization: Does Scaling Help?

NASA Astrophysics Data System (ADS)

Yano, Jun-Ichi

2016-04-01

Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, the scaling law has been slow to be introduced into "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches for traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition.
The moment expansion is a standard method for describing subgrid-scale turbulent flows in both the atmosphere and the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of variables at the subgrid scale in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis; however, the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling the subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling, and we see a link between the subgrid parameterization and downscaling problems along this line. The mode decomposition approach would also be the best framework for linking the traditional parameterizations with the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum; exploiting this knowledge in an operational parameterization is, however, a different story. It is telling that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation, and this problem is already hard enough. Looked at differently, the scaling law is a very concise way of characterizing many subgrid-scale variabilities in systems. We may even argue that the scaling law can provide almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called "closure" in the parameterization problem, and it is known to be a tough problem. We should also realize that studies of scaling behavior tend to be statistical, in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes by a scaling law when only the first few leading modes are specified? Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say under an appropriate mode-decomposition procedure. However, RNG is an analytical tool: it is extremely hard to apply to real, complex geophysical systems.
  DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu; Gao, Kai; Huang, Lianjie

    Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties of fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-based preconditioning technique. Our new inversion method uses a parameterization in terms of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to a seismic velocity model along a 2D line of seismic data acquired at Eleven-Mile Canyon, located in the southern Dixie Valley in Nevada, for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has the potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.
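    The total-variation regularization mentioned above is commonly formulated as a penalty added to the data-misfit functional; a generic form (not necessarily the exact modified scheme used by the authors) is

        J(\mathbf{m}) \;=\; \tfrac{1}{2}\,\lVert \mathbf{d}_{\mathrm{obs}} - F(\mathbf{m}) \rVert_2^{2} \;+\; \lambda \int_{\Omega} \lvert \nabla \mathbf{m} \rvert \, d\mathbf{x},

    where m collects the elasticity-constant parameters, F is the forward wave-propagation operator, d_obs are the observed waveforms, and lambda controls the strength of the edge-preserving total-variation term.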
  Development of a bioenergetics model for age-0 American Shad

    USGS Publications Warehouse

    Sauter, Sally T.

    2011-01-01

    Bioenergetics modeling can be used as a tool to investigate the impact of non-native age-0 American shad (Alosa sapidissima) on reservoir and estuary food webs. The model can increase our understanding of how these fish influence lower trophic levels as well as predatory fish populations that feed on juvenile salmonids. Bioenergetics modeling can be used to investigate ecological processes, evaluate alternative research hypotheses, provide decision support, and make quantitative predictions. Bioenergetics modeling has proven to be extremely useful in fisheries research (Ney et al. 1993, Chips and Wahl 2008, Petersen et al. 2008). If growth and diet parameters are known, the bioenergetics model can be used to quantify the relative amount of zooplankton or insects consumed by age-0 American shad. When linked with spatial and temporal information on fish abundance, model output can guide inferential hypothesis development to demonstrate where the greatest impacts of age-0 American shad might occur.

    Bioenergetics modeling is particularly useful when research questions involve multiple species and trophic levels (e.g., plankton communities). Bioenergetics models are mass-balance equations in which the energy acquired from food is partitioned among maintenance costs, waste products, and growth (Winberg 1956). Specifically, the Wisconsin bioenergetics model (Hanson et al. 1997) is widely used in fisheries science, and researchers have extensively tested, reviewed, and improved on this modeling approach for over 30 years (Petersen et al. 2008). Development of a bioenergetics model for any species requires three key components: 1) determination of physiological parameters for the model through laboratory experiments, or incorporation of data from a closely related species; 2) corroboration of the model with growth and consumption estimates from independent research; and 3) error analysis of model parameters.

    Wisconsin bioenergetics models have been parameterized for many of the salmonids and predatory fishes encountered in the lower Columbia River (Petersen and Ward 1999). The Wisconsin bioenergetics model has not been developed for American shad; however, Limburg (1996) parameterized a simplified bioenergetics growth model for this species. A common application of the Wisconsin bioenergetics model is to estimate the consumption or growth of a fish population under different temperature and feeding scenarios (Ney 1993). One advantage of the bioenergetics approach is that consumption can be estimated without direct field measurements of predation rate (prey·predator^-1·day^-1; Petersen and Ward 1999). Field estimates of fish consumption are time consuming and costly to obtain, and estimates may show wide variance due to environmental and sampling variability. However, the consumption parameters used in a newly developed bioenergetics model must be verified with field and laboratory estimates of consumption (Ney 1993).

    The objective of this research was to parameterize a Wisconsin bioenergetics model for age-0 American shad using published physiological data on American shad and closely related alosine species. The American shad bioenergetics model will be used as a tool to explore various hypotheses about how age-0 American shad directly and indirectly affect Columbia River salmon through ecological interactions in lower Columbia River food webs. One over-arching focus of the larger research project was to identify potential interactions between age-0 American shad and juvenile salmonids, addressing potential outcomes through bioenergetics modeling scenarios. This report contains two bioenergetics modeling applications that demonstrate how these models can be used to address management questions and direct research effort. The first modeling application uses the American shad bioenergetics model described in this report to explore prey consumption by age-0 American shad (Chapter 1, this report). Dietary data on age-0 American shad and previously published reports on the diet of juvenile fall Chinook salmon (Rondorf et al. 1990, USGS unpublished data) suggested there might be considerable dietary overlap between these species in the lower Columbia River. The U.S. Geological Survey (USGS) was interested in using the American shad bioenergetics model to explore hypotheses concerning dietary overlap between age-0 American shad and emigrating fall Chinook salmon. The second modeling application uses the fall Chinook salmon bioenergetics model (Koehler et al. 2006) to explore the growth potential of juvenile fall Chinook salmon preying on age-0 American shad in the lower Columbia River. This modeling was based on dietary information from a small number of age-0 fall Chinook salmon (n = 13) collected in John Day Reservoir in 1994-1996 (unpublished USGS data). Analysis of these dietary data found that the salmonids were feeding primarily on age-0 American shad (> 75% by weight).
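    As a sketch of the mass-balance structure described above, a Wisconsin-style budget partitions consumed energy among respiration, waste, and growth. The sketch below is illustrative only; the functional forms and coefficient values are placeholders, not the parameter set developed in the report.

        def daily_growth(consumption, respiration_frac=0.3, egestion_frac=0.16,
                         excretion_frac=0.07, sda_frac=0.17):
            """Wisconsin-style daily energy budget (all terms in J/g/day).

            Growth is what remains of consumption after respiration, specific
            dynamic action (SDA), egestion, and excretion are accounted for.
            The fractions here are placeholders, not fitted American shad values.
            """
            respiration = respiration_frac * consumption
            sda = sda_frac * consumption
            egestion = egestion_frac * consumption
            excretion = excretion_frac * consumption
            return consumption - (respiration + sda + egestion + excretion)

        # Example: a fish consuming 500 J/g/day under the placeholder fractions.
        print(daily_growth(500.0))

    In practice each of these terms is a temperature- and mass-dependent function fitted to laboratory data, which is where the species-specific parameterization effort goes.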
  Data error and highly parameterized groundwater models

    USGS Publications Warehouse

    Hill, M.C.

    2008-01-01

    Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.

  A scheme for parameterizing ice cloud water content in general circulation models

    NASA Technical Reports Server (NTRS)

    Heymsfield, Andrew J.; Donner, Leo J.

    1989-01-01

    A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.

  A Bayesian Ensemble Approach for Epidemiological Projections

    PubMed Central

    Lindström, Tom; Tildesley, Michael; Webb, Colleen

    2015-01-01

    Mathematical models are powerful tools for epidemiology and can be used to compare control actions. However, different models and model parameterizations may provide different predictions of outcomes. In other fields of research, ensemble modeling has been used to combine multiple projections. We explore the possibility of applying such methods to epidemiology by adapting Bayesian techniques developed for climate forecasting. We exemplify the implementation with single-model ensembles based on different parameterizations of the Warwick model, run for the 2001 United Kingdom foot and mouth disease outbreak, and compare the efficacy of different control actions. This allows us to investigate the effect that discrepancy among projections based on different modeling assumptions has on the ensemble prediction. A sensitivity analysis showed that the choice of prior can have a pronounced effect on the posterior estimates of quantities of interest, in particular for ensembles with large discrepancy among projections. However, by using a hierarchical extension of the method we show that prior sensitivity can be circumvented. We further extend the method to include a priori beliefs about different modeling assumptions and demonstrate that the effect of this can have different consequences depending on the discrepancy among projections. We propose that the method is a promising analytical tool for ensemble modeling of disease outbreaks. PMID: 25927892
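    As a minimal illustration of how an ensemble of parameterizations can be combined in a Bayesian way, the sketch below weights each member's projection by its likelihood given an observed quantity. This is generic Bayesian model averaging under simple Gaussian assumptions, not the hierarchical climate-derived method used in the study; all names and values are placeholders.

        import numpy as np

        def ensemble_projection(projections, observed, sigma=1.0, prior=None):
            """Combine member projections with likelihood-based weights.

            projections: array (n_members, n_outputs); column 0 is assumed to be
            a quantity that has been observed, the rest are quantities to project.
            """
            projections = np.asarray(projections, dtype=float)
            n_members = projections.shape[0]
            prior = np.full(n_members, 1.0 / n_members) if prior is None else np.asarray(prior)
            # Gaussian likelihood of each member's hindcast of the observed quantity.
            log_like = -0.5 * ((projections[:, 0] - observed) / sigma) ** 2
            weights = prior * np.exp(log_like - log_like.max())
            weights /= weights.sum()
            return weights @ projections, weights

        members = [[120.0, 300.0], [150.0, 420.0], [90.0, 260.0]]   # hindcast, projection
        combined, w = ensemble_projection(members, observed=130.0)
        print(combined, w)

    The prior argument is where a priori beliefs about the different modeling assumptions, of the kind discussed in the abstract, would enter.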
  Using prediction uncertainty analysis to design hydrologic monitoring networks: Example applications from the Great Lakes water availability pilot project

    USGS Publications Warehouse

    Fienen, Michael N.; Doherty, John E.; Hunt, Randall J.; Reeves, Howard W.

    2010-01-01

    The importance of monitoring networks for resource-management decisions is becoming more recognized, in both theory and application. Quantitative computer models provide a science-based framework for evaluating the efficacy and efficiency of existing and possible future monitoring networks. In the study described herein, two suites of tools were used to evaluate the worth of new data for specific predictions, which in turn can support efficient use of the resources needed to construct a monitoring network. The approach evaluates the uncertainty of a model prediction and, by using linear propagation of uncertainty, estimates how much that uncertainty could be reduced if the model were calibrated with additional information (increased a priori knowledge of parameter values or new observations). The theoretical underpinnings of the two suites of tools addressing this technique are compared, and their application to a hypothetical model based on a local model inset into the Great Lakes Water Availability Pilot model is described. Results show that meaningful guidance for monitoring-network design can be obtained by using the methods explored. The validity of this guidance, however, depends substantially on the parameterization; hence, parameterization must be considered not only when designing the parameter-estimation paradigm but also, importantly, when designing the prediction-uncertainty paradigm.
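    The linear propagation of uncertainty mentioned above is usually expressed through the sensitivity of the prediction to the parameters; a generic first-order form (symbols here are generic, not the tools' internal notation) is

        \sigma_s^{2} \;=\; \mathbf{y}^{\mathsf{T}}\, \mathbf{C}(\boldsymbol{\theta})\, \mathbf{y},

    where sigma_s^2 is the prediction variance, y is the vector of prediction sensitivities with respect to the parameters, and C(theta) is the posterior parameter covariance matrix. Data worth is then assessed by recomputing C(theta) with candidate observations added (or existing ones removed) and noting how much the prediction variance falls (or rises).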
  Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase-shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g., near higher-frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum-phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parameterization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
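    To illustrate what a z-plane-parameterized low-order recursive filter looks like in practice, the sketch below builds a second-order section from a specified pole/zero pair inside the unit circle and applies it sample by sample. It is a generic illustration of the filter class, not the optimization scheme from the paper; the pole/zero locations and signal frequencies are placeholders.

        import numpy as np

        def biquad_from_roots(zero, pole, gain=1.0):
            """Second-order recursive filter defined by one complex zero/pole pair.

            Keeping |zero| < 1 and |pole| < 1 keeps the filter minimum phase and
            stable, mirroring the z-plane restrictions described above.
            """
            b = gain * np.array([1.0, -2.0 * zero.real, abs(zero) ** 2])
            a = np.array([1.0, -2.0 * pole.real, abs(pole) ** 2])
            return b, a

        def apply_filter(b, a, x):
            """Direct-form difference equation y[n] = b*x[n..] - a*y[n-1..]."""
            y = np.zeros_like(x, dtype=float)
            for n in range(len(x)):
                y[n] = b[0] * x[n]
                if n >= 1:
                    y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
                if n >= 2:
                    y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
            return y

        # Example: attenuate a 35 Hz component while passing 2 Hz, sampled at 200 Hz.
        fs = 200.0
        t = np.arange(0, 2, 1 / fs)
        x = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 35 * t)
        b, a = biquad_from_roots(zero=0.95 * np.exp(1j * 2 * np.pi * 35 / fs),
                                 pole=0.50 * np.exp(1j * 2 * np.pi * 5 / fs))
        y = apply_filter(b, a, x)
        print(y[:5])

    Parameterizing by pole/zero location rather than by raw coefficients is what keeps the candidate designs well conditioned during a numerical search.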
  Impact of capturing rainfall scavenging intermittency using cloud superparameterization on simulated continental scale wildfire smoke transport

    NASA Astrophysics Data System (ADS)

    Pritchard, M. S.; Kooperman, G. J.; Zhao, Z.; Wang, M.; Russell, L. M.; Somerville, R. C.; Ghan, S. J.

    2011-12-01

    Evaluating the fidelity of new aerosol physics in climate models is confounded by uncertainties in source emissions, systematic error in cloud parameterizations, and inadequate sampling of long-range plume concentrations. To explore the degree to which cloud parameterizations distort aerosol processing and scavenging, the Pacific Northwest National Laboratory (PNNL) Aerosol-Enabled Multi-Scale Modeling Framework (AE-MMF), a superparameterized branch of the Community Atmosphere Model Version 5 (CAM5), is applied to represent the unusually active and well-sampled North American wildfire season in 2004. In the AE-MMF approach, the evolution of double-moment aerosols at the exterior, globally resolved scale is linked explicitly to convective statistics harvested from an interior cloud-resolving scale. The model is configured in retroactive nudged mode to observationally constrain synoptic meteorology, and Arctic wildfire activity is prescribed at high space/time resolution using data from the Global Fire Emissions Database. Comparisons against standard CAM5 bracket the effect of superparameterization to isolate the role of capturing rainfall intermittency on the bulk characteristics of 2004 Arctic plume transport. Ground-based lidar and in situ aircraft wildfire plume constraints from the International Consortium for Atmospheric Research on Transport and Transformation field campaign are used as a baseline for model evaluation.

  Regulation-Structured Dynamic Metabolic Model Provides a Potential Mechanism for Delayed Enzyme Response in Denitrification Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Hyun-Seob; Thomas, Dennis G.; Stegen, James C.

    In a recent study of denitrification dynamics in hyporheic zone sediments, we observed a significant time lag (up to several days) in the enzymatic response to changes in substrate concentration. To explore an underlying mechanism and to understand the interactive dynamics between enzymes and nutrients, we developed a trait-based model that associates a community's traits with functional enzymes, instead of the typically used species guilds (or functional guilds). This enzyme-based formulation makes it possible to describe the biogeochemical functions of microbial communities collectively, without directly parameterizing the dynamics of species guilds, and is therefore scalable to complex communities. As a key component of the modeling, we accounted for microbial regulation occurring through transcriptional and translational processes, the dynamics of which were parameterized based on the temporal profiles of enzyme concentrations measured using a new signature peptide-based method. Simulations with the resulting model showed a time lag of several days in enzymatic responses, as observed in experiments. Further, the model showed that the delayed enzymatic reactions could be primarily controlled by transcriptional responses and that the dynamics of transcripts and enzymes are closely correlated. The developed model can serve as a useful tool for predicting biogeochemical processes in natural environments, either independently or through integration with hydrologic flow simulators.
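    A minimal way to see how transcriptional and translational steps introduce a lag between a substrate signal and enzyme activity is a two-stage ODE cascade. The sketch below is a generic illustration of that mechanism, not the trait-based model itself, and all rate constants are placeholders.

        import numpy as np

        def simulate_enzyme_lag(substrate, dt=0.1, k_tx=0.5, k_tl=0.5,
                                d_m=0.3, d_e=0.1):
            """Two-stage cascade: substrate -> transcript (m) -> enzyme (e).

            dm/dt = k_tx * s(t) - d_m * m
            de/dt = k_tl * m    - d_e * e
            Forward-Euler integration; returns transcript and enzyme time series.
            """
            m, e = 0.0, 0.0
            transcripts, enzymes = [], []
            for s in substrate:
                m += dt * (k_tx * s - d_m * m)
                e += dt * (k_tl * m - d_e * e)
                transcripts.append(m)
                enzymes.append(e)
            return np.array(transcripts), np.array(enzymes)

        # Example: a step increase in substrate at t = 5 (arbitrary units).
        t = np.arange(0, 30, 0.1)
        s = np.where(t >= 5, 1.0, 0.0)
        m, e = simulate_enzyme_lag(s)
        print(t[np.argmax(e >= 0.5 * e.max())])  # time at which enzyme reaches half its peak

    Because the enzyme pool responds to the transcript pool rather than to the substrate directly, its rise is delayed and smoothed, which is the qualitative behavior the study attributes to transcriptional control.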
  Final report on "Modeling Diurnal Variations of California Land Biosphere CO2 Fluxes"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fung, Inez

    In Mediterranean climates, the season of water availability (winter) is out of phase with the season of light availability and atmospheric demand for moisture (summer). Multi-year, half-hourly observations of sap flow velocities in 26 evergreen trees in a small watershed in Northern California show that different species of evergreen trees have different seasonalities of transpiration: Douglas-firs respond immediately to the first winter rain, while Pacific madrones have peak transpiration in the dry summer. Using these observations, we have derived species-specific parameterizations of normalized sap flow velocities in terms of insolation, vapor pressure deficit, and near-surface soil moisture. A simple 1-D boundary layer model showed that afternoon temperatures may be higher by 1 degree Celsius in an area with Douglas-firs than with Pacific madrones. The results point to the need to develop a new representation of subsurface moisture, in particular pools beneath the organic soil mantle and the vadose zone. Our ongoing and future work includes coupling our new parameterization of transpiration with a new representation of subsurface moisture in saprolite and weathered bedrock. The results will be implemented in a regional climate model to explore vegetation-climate feedbacks, especially in the dry season.
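    To give a sense of what a normalized sap-flow (transpiration) parameterization in terms of insolation, vapor pressure deficit, and soil moisture can look like, the sketch below multiplies three simple response functions. The functional forms and coefficients are illustrative placeholders, not the species-specific fits described in the report.

        import numpy as np

        def normalized_sap_flow(insolation, vpd, soil_moisture,
                                r_half=200.0, g_vpd=0.8,
                                theta_wilt=0.05, theta_crit=0.25):
            """Illustrative product of three response functions, each in [0, 1].

            insolation    : shortwave radiation (W m-2); saturating response.
            vpd           : vapor pressure deficit (kPa); demand rises with VPD.
            soil_moisture : volumetric water content (m3 m-3); linear stress ramp.
            """
            f_light = insolation / (insolation + r_half)
            f_vpd = 1.0 - np.exp(-g_vpd * vpd)
            f_soil = np.clip((soil_moisture - theta_wilt) /
                             (theta_crit - theta_wilt), 0.0, 1.0)
            return f_light * f_vpd * f_soil

        # Example: midday summer conditions with moderately dry soil.
        print(normalized_sap_flow(insolation=800.0, vpd=2.0, soil_moisture=0.15))

    Species differences of the kind reported (Douglas-fir versus Pacific madrone) would show up as different coefficients, particularly in the soil-moisture response.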
  Influence of exposure assessment and parameterization on exposure response. Aspects of epidemiologic cohort analysis using the Libby Amphibole asbestos worker cohort.

    PubMed

    Bateson, Thomas F; Kopylev, Leonid

    2015-01-01

    Recent meta-analyses of occupational epidemiology studies identified two important exposure data quality factors in predicting summary effect measures for asbestos-associated lung cancer mortality risk: sufficiency of job history data and percent coverage of work history by measured exposures. The objective was to evaluate different exposure parameterizations suggested in the asbestos literature using the Libby, MT, asbestos worker cohort and to evaluate the influence of exposure measurement error, caused by historically estimated exposure data, on lung cancer risks. Focusing on workers hired after 1959, when job histories were well known and occupational exposures were predominantly based on measured exposures (85% coverage), we found that cumulative exposure alone, and with allowance for exponential decay, fit the lung cancer mortality data similarly. Residence-time-weighted metrics did not fit well. Compared with previous analyses based on the whole cohort of Libby workers hired after 1935, when job histories were less well known and exposures were less frequently measured (47% coverage), our analyses based on higher-quality exposure data yielded an effect size as much as 3.6 times higher. Future occupational cohort studies should continue to refine retrospective exposure assessment methods, consider multiple exposure metrics, and explore new methods of maintaining statistical power while minimizing exposure measurement error.
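    The cumulative-exposure metrics compared above are typically time-weighted integrals of the exposure history; a generic form of cumulative exposure with allowance for exponential decay (notation is generic, not the authors') is

        E(t) \;=\; \int_{0}^{t} x(u)\, e^{-\lambda (t-u)}\, du,

    where x(u) is the exposure intensity at time u and lambda is the decay rate. Setting lambda = 0 recovers ordinary (undecayed) cumulative exposure, while a residence-time-weighted metric instead weights each exposure increment by the time it has resided in the body, for example \int_{0}^{t} x(u)\,(t-u)\, du.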
  Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland

    NASA Astrophysics Data System (ADS)

    Habermann, M.; Truffer, M.; Maxwell, D.

    2013-06-01

    Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We investigate the basal conditions throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a Shallow Shelf Approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006, and 2008. Our ice softness, model norm, and regularization parameter choices are justified using the data-model misfit metric and the L-curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of basal yield stress in the first 7 km upstream of the 2008 grounding line and no significant changes farther upstream. The temporal evolution in the fast-flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.
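    For reference, a Mohr-Coulomb parameterization of basal shear stress of the kind evaluated above relates the till yield stress to the effective pressure at the bed; in its standard form (symbols generic)

        \tau_b \;=\; c_0 \;+\; N \tan\phi, \qquad N = P_i - P_w,

    where c_0 is the till cohesion (often taken as zero), phi is the till friction angle, and the effective pressure N is the ice overburden pressure P_i minus the basal water pressure P_w. A lowering of the basal yield stress can therefore be read either as a drop in effective pressure or as a change in the apparent friction angle.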
  The Effect of Subsurface Parameterizations on Modeled Flows in the Catchment Land Surface Model, Fortuna 2.5

    NASA Astrophysics Data System (ADS)

    Roningen, J. M.; Eylander, J. B.

    2014-12-01

    Groundwater use and management is subject to economic, legal, technical, and informational constraints and incentives at a variety of spatial and temporal scales. Planned and de facto management practices influenced by tax structures, legal frameworks, and agricultural and trade policies that vary at the country scale may have medium- and long-term effects on the ability of a region to support current and projected agricultural and industrial development. USACE is working to explore and develop global-scale, physically based frameworks to serve as a baseline for hydrologic policy comparisons and consequence assessment, and such frameworks must include a reasonable representation of groundwater systems. To this end, we demonstrate the effects of different subsurface parameterizations, scaling, and meteorological forcings on surface and subsurface components of the Catchment Land Surface Model Fortuna v2.5 (Koster et al. 2000). We use the Land Information System 7 (Kumar et al. 2006) to process model runs using meteorological components of the Air Force Weather Agency's AGRMET forcing data from 2006 through 2011. Seasonal patterns and trends are examined in areas of the Upper Nile basin, northern China, and the Mississippi Valley. We also discuss the relevance of the model's representation of the catchment deficit with respect to local hydrogeologic structures.

  Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    NASA Astrophysics Data System (ADS)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-01

    Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by the model parameterization and the acquisition arrangement. An appropriate choice of model parameterization is critical to successful field-data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I′_P), and velocity-impedance-II (α″, β″ and I′_S). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for the various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on the model parameterization, the inverted density profile can be over-estimated, under-estimated, or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field-data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.
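    The six parameterizations listed above are algebraic re-combinations of the same three isotropic-elastic fields; the standard relations linking the P- and S-wave velocities (α, β), density (ρ), impedances, and moduli are

        I_P = \rho\,\alpha, \qquad I_S = \rho\,\beta, \qquad \mu = \rho\,\beta^{2}, \qquad \kappa = \rho\left(\alpha^{2} - \tfrac{4}{3}\beta^{2}\right), \qquad \lambda = \rho\left(\alpha^{2} - 2\beta^{2}\right).

    Because each choice mixes the underlying fields differently, the scattering radiation patterns, and hence the interparameter tradeoffs, differ from one parameterization to the next.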
  Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    DOE PAGES

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-06

    We report that seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by the model parameterization and the acquisition arrangement. An appropriate choice of model parameterization is critical to successful field-data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I′_P), and velocity-impedance-II (α″, β″ and I′_S). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for the various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on the model parameterization, the inverted density profile can be over-estimated, under-estimated, or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field-data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. Finally, the heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.
  Improving and Understanding Climate Models: Scale-Aware Parameterization of Cloud Water Inhomogeneity and Sensitivity of MJO Simulation to Physical Parameters in a Convection Scheme

    NASA Astrophysics Data System (ADS)

    Xie, Xin

    Microphysics and convection parameterizations are two key components of a climate model for simulating realistic climatology and variability of cloud distributions and the cycles of energy and water. When a model has a varying grid size, or simulations have to be run at different resolutions, a scale-aware parameterization is desirable so that model parameters do not have to be tuned for a particular grid size. The subgrid variability of cloud hydrometeors is known to impact microphysics processes in climate models and is found to depend strongly on spatial scale. In this study, a scale-aware liquid cloud subgrid variability parameterization is derived and implemented in the Community Earth System Model (CESM) using long-term radar-based ground measurements from the Atmospheric Radiation Measurement (ARM) program. When used in the default CESM1 with the finite-volume dynamical core, where a constant liquid inhomogeneity parameter was assumed, the newly developed parameterization reduces the cloud inhomogeneity in high latitudes and increases it in low latitudes.
    This is due both to the smaller grid size in high latitudes and larger grid size in low latitudes on the longitude-latitude grid of CESM, and to variations in the stability of the atmosphere. Single-column model and general circulation model (GCM) sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it in low latitudes. Current CESM1 simulations suffer from both a Pacific double-ITCZ precipitation bias and a weak Madden-Julian oscillation (MJO). Previous studies show that convective parameterization with multiple plumes may have the capability to alleviate such biases in a more uniform and physical way. A multiple-plume mass-flux convective parameterization is used in the Community Atmosphere Model (CAM) to investigate the sensitivity of MJO simulations. We show that the MJO simulation is sensitive to the entrainment rate specification, and we find that shallow plumes can generate and sustain MJO propagation in the model.
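    For context on why the entrainment rate matters so much in a multiple-plume mass-flux scheme, the vertical structure of each plume is usually governed by entrainment and detrainment of environmental air; in the standard bulk form (generic notation, not the specific scheme tested here)

        \frac{\partial M_i}{\partial z} = (\varepsilon_i - \delta_i)\, M_i, \qquad \frac{\partial (M_i \phi_i)}{\partial z} = \varepsilon_i M_i \overline{\phi} - \delta_i M_i \phi_i,

    where M_i is the mass flux of plume i, epsilon_i and delta_i are its fractional entrainment and detrainment rates, phi_i is an in-plume conserved property, and the overbar denotes the grid mean. Larger entrainment dilutes plumes faster and favors shallower convection, which is the lever the MJO sensitivity experiments exercise.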
  Parameterization of plume chemistry into large-scale atmospheric models: Application to aircraft NOx emissions

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; Cuénot, B.; Cozic, A.; Paugam, R.

    2009-10-01

    A method is presented to parameterize, in large-scale models, the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources. The resulting plume parameterization is implemented in global models and used to evaluate the impact of aircraft emissions on atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on a representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, and it operates via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact of aircraft NOx emissions on atmospheric ozone are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the North Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization of transporting emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity, and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during plume dissipation. Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.

  An Evaluation of Lightning Flash Rate Parameterizations Based on Observations of Colorado Storms during DC3

    NASA Astrophysics Data System (ADS)

    Basarab, B.; Fuchs, B.; Rutledge, S. A.

    2013-12-01

    Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx, since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates flash rates predicted by existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler-derived horizontal and vertical velocities, hydrometeor identification based on polarimetric radar variables from the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash-counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare them to observed flash rates. For the 6 June storm, a preliminary analysis of aircraft observations of storm inflow and outflow is presented in order to place flash rates (and other lightning statistics) in the context of storm chemistry. An approach to a possibly improved LNOx parameterization scheme using different lightning metrics, such as flash area, will be discussed.
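    A minimal example of the kind of empirical, vertical-velocity-based flash rate parameterization being evaluated is a single power law. The sketch below uses the functional form F = a * w_max**b with placeholder coefficients; the values of a and b here are illustrative, not those derived or tested in the study.

        def flash_rate_from_wmax(w_max, a=5.0e-6, b=4.5):
            """Empirical power-law flash rate (flashes per minute).

            w_max : storm maximum updraft speed (m/s).
            a, b  : placeholder coefficients of the power law F = a * w_max**b.
            """
            return a * w_max ** b

        # Example: compare two storms with 40 m/s and 55 m/s peak updrafts.
        for w in (40.0, 55.0):
            print(w, round(flash_rate_from_wmax(w), 1))

    The steep exponent is why modest errors in the simulated updraft speed translate into large errors in predicted flash rate and hence in LNOx.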
  A new fractional snow-covered area parameterization for the Community Land Model and its effect on the surface energy balance

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Lawrence, D. M.

    2011-11-01

    One function of the Community Land Model (CLM4) is the determination of surface albedo in the Community Earth System Model (CESM1). Because the typical spatial scales of CESM1 simulations are large compared to the scales of variability of surface properties such as snow cover and vegetation, unresolved surface heterogeneity is parameterized. Fractional snow-covered area, or snow-covered fraction (SCF), within a CLM4 grid cell is parameterized as a function of the grid-cell mean snow depth and snow density. This parameterization is based on an analysis of monthly averaged SCF and snow depth that showed a seasonal shift in the snow depth-SCF relationship. In this paper, we show that this shift is an artifact of the monthly sampling and that the current parameterization does not reflect the relationship observed between snow depth and SCF at the daily time scale. We demonstrate that the snow depth analysis used in the original study exhibits a bias toward early melt when compared to satellite-observed SCF. This bias results in a tendency to overestimate SCF as a function of snow depth. Using a more consistent, higher spatial and temporal resolution snow depth analysis reveals a clear hysteresis between the snow accumulation and melt seasons. Here, a new SCF parameterization based on snow water equivalent is developed to capture the observed seasonal snow depth-SCF evolution. The effects of the new SCF parameterization on the surface energy budget are described. In CLM4, surface energy fluxes are calculated assuming a uniform snow cover. To more realistically simulate environments having patchy snow cover, we modify the model by computing the surface fluxes separately for the snow-free and snow-covered fractions of a grid cell. In this configuration, the form of the parameterized snow depth-SCF relationship is shown to greatly affect the surface energy budget. The direct exposure of the snow-free surfaces to the atmosphere leads to greater heat loss from the ground during autumn and greater heat gain during spring. The net effect is to reduce annual mean soil temperatures by up to 3°C in snow-affected regions.
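    To illustrate the kind of depth- and density-based SCF relationship being re-examined above, one widely used functional form (in the style of Niu and Yang 2007, shown only as a generic example and not as the CLM4 or the new SWE-based formulation) is a hyperbolic tangent of snow depth scaled by ground roughness and a density-dependent factor. All constants below are placeholders.

        import numpy as np

        def snow_cover_fraction(snow_depth, snow_density, z0_ground=0.01,
                                rho_new=100.0, melt_exponent=1.0):
            """Illustrative tanh-type snow-covered fraction.

            snow_depth   : grid-cell mean snow depth (m).
            snow_density : bulk snow density (kg/m3); denser (older) snow covers
                           the ground less completely for the same depth.
            z0_ground    : roughness length of the underlying ground (m).
            """
            scale = 2.5 * z0_ground * (snow_density / rho_new) ** melt_exponent
            return np.tanh(snow_depth / scale)

        # Example: 5 cm of fresh snow versus 5 cm of dense, aged snow.
        print(snow_cover_fraction(0.05, 100.0), snow_cover_fraction(0.05, 350.0))

    The accumulation/melt hysteresis reported in the paper is exactly what a single curve of this kind cannot represent, which motivates the separate SWE-based treatment of the two seasons.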
  Effects of pre-existing ice crystals on cirrus clouds and comparison between different ice nucleation parameterizations with the Community Atmosphere Model (CAM5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai

    In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmosphere Model version 5.3 (CAM5.3), the effects of pre-existing ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in the ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of the cirrus cloud rather than over the whole cloud area. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment.
    The pre-existing ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to the cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations, those of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL), are implemented in CAM5.3 for comparison. In-cloud ice crystal number concentration, the percentage contribution from heterogeneous ice nucleation to the total ice crystal number, and the pre-existing ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial) in global annual mean column ice number concentration from the KL parameterization (3.24 × 10^6 m^-2) is less than that from the LP (8.46 × 10^6 m^-2) and BN (5.62 × 10^6 m^-2) parameterizations. As a result, the experiment using the KL parameterization predicts a much smaller anthropogenic aerosol longwave indirect forcing (0.24 W m^-2) than that using the LP (0.46 W m^-2) and BN (0.39 W m^-2) parameterizations.
In-cloud ice crystal number concentration, percentage contribution from heterogeneous ice nucleation to total ice crystal number, and pre-existing ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial times) in global annual mean column ice number concentration from the KL parameterization (3.24 × 10⁶ m⁻²) is less than that from the LP (8.46 × 10⁶ m⁻²) and BN (5.62 × 10⁶ m⁻²) parameterizations. As a result, the experiment using the KL parameterization predicts a much smaller anthropogenic aerosol long-wave indirect forcing (0.24 W m⁻²) than that using the LP (0.46 W m⁻²) and BN (0.39 W m⁻²) parameterizations.

Effective degrees of freedom: a flawed metaphor

PubMed Central

Janson, Lucas; Fithian, William; Hastie, Trevor J.

2015-01-01

To most applied statisticians, a fitting procedure's degrees of freedom is synonymous with its model complexity, or its capacity for overfitting to data. In particular, it is often used to parameterize the bias-variance tradeoff in model selection. We argue that, on the contrary, model complexity and degrees of freedom may correspond very poorly. We exhibit and theoretically explore various fitting procedures for which degrees of freedom is not monotonic in the model complexity parameter, and can exceed the total dimension of the ambient space even in very simple settings. We show that the degrees of freedom for any non-convex projection method can be unbounded. PMID:26977114

Renormalization group analysis of turbulence

NASA Technical Reports Server (NTRS)

Smith, Leslie M.

1989-01-01

The objective is to understand and extend a recent theory of turbulence based on dynamic renormalization group (RNG) techniques. The application of RNG methods to hydrodynamic turbulence was explored most extensively by Yakhot and Orszag (1986). An eddy viscosity was calculated which was consistent with the Kolmogorov inertial range by systematic elimination of the small scales in the flow. Further, the assumed smallness of the nonlinear terms in the redefined equations for the large scales results in predictions for important flow constants such as the Kolmogorov constant. It is emphasized that no adjustable parameters are needed.
The parameterization of the small scales in a self-consistent manner has important implications for sub-grid modeling.

Scaling of hydrologic and erosion parameters derived from rainfall simulation

NASA Astrophysics Data System (ADS)

Sheridan, Gary; Lane, Patrick; Noske, Philip; Sherwin, Christopher

2010-05-01

Rainfall simulation experiments conducted at the temporal scale of minutes and the spatial scale of meters are often used to derive parameters for erosion and water quality models that operate at much larger temporal and spatial scales. While such parameterization is convenient, there has been little effort to validate this approach via nested experiments across these scales. In this paper we first review the literature relevant to some of these long-acknowledged issues. We then present rainfall simulation and erosion plot data from a range of sources, including mining, roading, and forestry, to explore the issues associated with the scaling of parameters such as infiltration properties and erodibility coefficients.

Modeling particle nucleation and growth over northern California during the 2010 CARES campaign

NASA Astrophysics Data System (ADS)

Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.

2015-07-01

Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the Weather Research and Forecasting coupled with chemistry (WRF-Chem) regional model using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4, while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated the total particle number concentration for particle diameters greater than 10 nm by up to a factor of 2.5.
Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapors parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based on only H2SO4. At the CARES urban ground site, peak nucleation rates were predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times as well as lower boundary layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. Differences among the three simulations for the 40-100 nm particle diameter range are mostly associated with the timing of the peak total tendencies that shift the morning increase and afternoon decrease in particle number concentration by up to two hours. We found that newly formed particles could explain up to 20-30% of predicted cloud condensation nuclei at 0.5% supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20 size bin simulation, while reducing the associated computational cost by ~36%.

The parameterization of the planetary boundary layer in the UCLA general circulation model - Formulation and results

NASA Technical Reports Server (NTRS)

Suarez, M. J.; Arakawa, A.; Randall, D. A.

1983-01-01

A planetary boundary layer (PBL) parameterization for general circulation models (GCMs) is presented. It uses a mixed-layer approach in which the PBL is assumed to be capped by discontinuities in the mean vertical profiles. Both clear and cloud-topped boundary layers are parameterized. Particular emphasis is placed on the formulation of the coupling between the PBL and both the free atmosphere and cumulus convection. For this purpose a modified sigma-coordinate is introduced in which the PBL top and the lower boundary are both coordinate surfaces. The use of a bulk PBL formulation with this coordinate is extensively discussed. Results are presented from a July simulation produced by the UCLA GCM.
PBL-related variables are shown to illustrate the various regimes the parameterization is capable of simulating.

Impact of Physics Parameterization Ordering in a Global Atmosphere Model

DOE PAGES

Donahue, Aaron S.; Caldwell, Peter M.

2018-02-02

Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another, with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a large impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role in the model solution.

Stochastic parameterization of shallow cumulus convection estimated from high-resolution model data

NASA Astrophysics Data System (ADS)

Dorrestijn, Jesse; Crommelin, Daan T.; Siebesma, A. Pier; Jonker, Harm J. J.

2013-02-01

In this paper, we report on the development of a methodology for stochastic parameterization of convective transport by shallow cumulus convection in weather and climate models. We construct a parameterization based on Large-Eddy Simulation (LES) data. These simulations resolve the turbulent fluxes of heat and moisture and are based on a typical case of non-precipitating shallow cumulus convection above sea in the trade-wind region. Using clustering, we determine a finite number of turbulent flux pairs for heat and moisture that are representative of the pairs of flux profiles observed in these simulations.
In the stochastic parameterization scheme proposed here, the convection scheme jumps randomly between these pre-computed pairs of turbulent flux profiles. The transition probabilities are estimated from the LES data, and they are conditioned on the resolved-scale state in the model column. Hence, the stochastic parameterization is formulated as a data-inferred conditional Markov chain (CMC), where each state of the Markov chain corresponds to a pair of turbulent heat and moisture fluxes. The CMC parameterization is designed to emulate, in a statistical sense, the convective behaviour observed in the LES data. The CMC is tested in single-column model (SCM) experiments. The SCM is able to reproduce the ensemble spread of the temperature and humidity that was observed in the LES data. Furthermore, there is a good similarity between time series of the fractions of the discretized fluxes produced by the SCM and observed in LES.
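To illustrate the data-inferred conditional Markov chain idea in the abstract above, here is a minimal sketch. The flux-pair states, the two-bin discretization of the resolved-scale state, and the transition counts are invented placeholders; in the actual scheme they are estimated from LES data as described.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: 3 pre-computed (heat flux, moisture flux) profile pairs
    # (the Markov chain "states") and 2 bins of the resolved-scale column state.
    n_states, n_cond = 3, 2

    # Transition counts per resolved-scale condition, as if tallied from LES output.
    counts = np.array([
        [[8, 1, 1], [2, 6, 2], [1, 2, 7]],   # condition 0
        [[5, 4, 1], [1, 7, 2], [1, 3, 6]],   # condition 1
    ], dtype=float)

    # Row-normalize to get conditional transition probabilities P(next | current, condition).
    P = counts / counts.sum(axis=2, keepdims=True)

    def step(current_state, condition):
        """Jump to the next flux-pair state given the resolved-scale condition."""
        return rng.choice(n_states, p=P[condition, current_state])

    state = 0
    for t in range(5):
        condition = t % n_cond          # stand-in for the SCM's resolved-scale state
        state = step(state, condition)
        print(t, "-> flux-pair state", state)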
Empirical parameterization of setup, swash, and runup

USGS Publications Warehouse

Stockdon, H. F.; Holman, R. A.; Howd, P. A.; Sallenger, A. H.

2006-01-01

Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a -17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm.
On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
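The structure of such a parameterization can be sketched as follows. This is a hedged schematic of the functional form implied by the abstract, with the fitted coefficients omitted; the quadrature combination of the two swash bands is the convention commonly used with parameterizations of this type.

    R_{2\%} \;\approx\; \langle \eta \rangle + \frac{S}{2},
    \qquad
    \langle \eta \rangle \;\propto\; \beta_f \sqrt{H_0 L_0},
    \qquad
    S \;=\; \sqrt{S_{\mathrm{inc}}^{2} + S_{\mathrm{ig}}^{2}},

    S_{\mathrm{inc}} \;\propto\; \beta_f \sqrt{H_0 L_0},
    \qquad
    S_{\mathrm{ig}} \;\propto\; \sqrt{H_0 L_0},

where \beta_f is the foreshore beach slope and H_0 and L_0 are the deep-water wave height and wavelength.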
Comparison of observed and simulated spatial patterns of ice microphysical processes in tropical oceanic mesoscale convective systems

DOE Office of Scientific and Technical Information (OSTI.GOV)

Barnes, Hannah C.; Houze, Robert A.

To equitably compare the spatial pattern of ice microphysical processes produced by three microphysical parameterizations with each other, observations, and theory, simulations of tropical oceanic mesoscale convective systems (MCSs) in the Weather Research and Forecasting (WRF) model were forced to develop the same mesoscale circulations as observations by assimilating radial velocity data from a Doppler radar. The same general layering of microphysical processes was found in observations and simulations, with deposition anywhere above the 0°C level, aggregation at and above the 0°C level, melting at and below the 0°C level, and riming near the 0°C level. Thus, this study is consistent with the layered ice microphysical pattern portrayed in previous conceptual models and indicated by dual-polarization radar data. Spatial variability of riming in the simulations suggests that riming in the midlevel inflow is related to convective-scale vertical velocity perturbations. Finally, this study sheds light on limitations of current generally available bulk microphysical parameterizations. In each parameterization, the layers in which aggregation and riming took place were generally too thick and the frequency of riming was generally too high compared to the observations and theory. Additionally, none of the parameterizations produced similar details in every microphysical spatial pattern. Discrepancies in the patterns of microphysical processes between parameterizations likely factor into creating substantial differences in model reflectivity patterns. It is concluded that improved parameterizations of ice-phase microphysics will be essential to obtain reliable, consistent model simulations of tropical oceanic MCSs.

Straddling Interdisciplinary Seams: Working Safely in the Field, Living Dangerously With a Model

NASA Astrophysics Data System (ADS)

Light, B.; Roberts, A.

2016-12-01

Many excellent proposals for observational work have included language detailing how the proposers will appropriately archive their data and publish their results in peer-reviewed literature so that they may be readily available to the modeling community for parameterization development. While such division of labor may be both practical and inevitable, the assimilation of observational results and the development of observationally based parameterizations of physical processes require care and feeding. Key questions include: (1) Is an existing parameterization accurate, consistent, and general?
If not, it may be ripe for additional physics. (2) Do there exist functional working relationships between human modeler and human observationalist? If not, one or more may need to be initiated and cultivated. (3) If empirical observation and model development are a chicken/egg problem, how, given our lack of prescience and foreknowledge, can we better design observational science plans to meet the eventual demands of model parameterization? (4) Will the addition of new physics "break" the model? If so, then the addition may be imperative. In the context of these questions, we will make retrospective and forward-looking assessments of a now-decade-old numerical parameterization to treat the partitioning of solar energy at the Earth's surface where sea ice is present. While this so-called "Delta-Eddington Albedo Parameterization" is currently employed in the widely used Los Alamos Sea Ice Model (CICE) and appears to be standing the tests of accuracy, consistency, and generality, we will highlight some ideas for its ongoing development and improvement.

RACORO Continental Boundary Layer Cloud Investigations: 3. Separation of Parameterization Biases in Single-Column Model CAM5 Simulations of Shallow Cumulus

NASA Technical Reports Server (NTRS)

Lin, Wuyin; Liu, Yangang; Vogelmann, Andrew M.; Fridlind, Ann; Endo, Satoshi; Song, Hua; Feng, Sha; Toto, Tami; Li, Zhijin; Zhang, Minghua

2015-01-01

Climatically important low-level clouds are commonly misrepresented in climate models. The FAst-physics System TEstbed and Research (FASTER) Project has constructed case studies from the Atmospheric Radiation Measurement Climate Research Facility's Southern Great Plains site during the RACORO aircraft campaign to facilitate research on model representation of boundary-layer clouds. This paper focuses on using single-column Community Atmosphere Model version 5 (SCAM5) simulations of a multi-day continental shallow cumulus case to identify specific parameterization causes of low-cloud biases. Consistent model biases among the simulations driven by a set of alternative forcings suggest that uncertainty in the forcing plays only a relatively minor role. In-depth analysis reveals that the model's shallow cumulus convection scheme tends to significantly under-produce clouds during the times when shallow cumuli exist in the observations, while the deep convective and stratiform cloud schemes significantly over-produce low-level clouds throughout the day. The links between model biases and the underlying assumptions of the shallow cumulus scheme are further diagnosed with the aid of large-eddy simulations and aircraft measurements, and by suppressing the triggering of the deep convection scheme. It is found that the weak boundary layer turbulence simulated is directly responsible for the weak cumulus activity and the simulated boundary layer stratiform clouds.
Increased vertical and temporal resolutions are shown to lead to stronger boundary layer turbulence and a reduction of low-cloud biases.
Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence

NASA Astrophysics Data System (ADS)

Parishani, Hossein; Pritchard, Michael S.; Bretherton, Christopher S.; Wyant, Matthew C.; Khairoutdinov, Marat

2017-07-01

Systematic biases in the representation of boundary layer (BL) clouds are a leading source of uncertainty in climate projections. A variation on superparameterization (SP) called "ultraparameterization" (UP) is developed, in which the grid spacing of the cloud-resolving models (CRMs) is fine enough (250 × 20 m) to explicitly capture the BL turbulence, associated clouds, and entrainment in a global climate model capable of multiyear simulations. UP is implemented within the Community Atmosphere Model using 2° resolution (~14,000 embedded CRMs) with one-moment microphysics. By using a small domain and mean-state acceleration, UP is computationally feasible today and promising for exascale computers. Short-duration global UP hindcasts are compared with SP and satellite observations of top-of-atmosphere radiation and cloud vertical structure. The most encouraging improvement is a deeper BL and more realistic vertical structure of subtropical stratocumulus (Sc) clouds, due to stronger vertical eddy motions that promote entrainment. Results from 90 day integrations show climatological errors that are competitive with SP, with a significant improvement in the diurnal cycle of offshore Sc liquid water. Ongoing concerns with the current UP implementation include a dim bias for near-coastal Sc that also occurs less prominently in SP and a bright bias over tropical continental deep convection zones. Nevertheless, UP makes global eddy-permitting simulation a feasible and interesting alternative to conventionally parameterized GCMs or SP-GCMs with turbulence parameterizations for studying BL cloud-climate and cloud-aerosol feedback.

Improving the automated optimization of profile extrusion dies by applying appropriate optimization areas and strategies

NASA Astrophysics Data System (ADS)

Hopmann, Ch.; Windeck, C.; Kurth, K.; Behr, M.; Siegbert, R.; Elgeti, S.

2014-05-01

The rheological design of profile extrusion dies is one of the most challenging tasks in die design.
As no analytical solution is available, the quality and the development time for a new design depend strongly on the empirical knowledge of the die manufacturer. Usually, before production starts, several time-consuming, iterative running-in trials need to be performed to check the profile accuracy, and the die geometry is reworked accordingly. Numerical flow simulations are an alternative. These simulations make it possible to calculate the melt flow through a die so that the quality of the flow distribution can be analyzed. The objective of a current research project is to improve the automated optimization of profile extrusion dies. Special emphasis is put on choosing a convenient starting geometry and parameterization, which allow for possible deformations. In this work, three commonly used design features are examined with regard to their influence on the optimization results. Based on the results, a strategy is derived to select the most relevant areas of the flow channels for the optimization. For these characteristic areas, recommendations are given concerning an efficient parameterization setup that still enables adequate deformations of the flow channel geometry. As an example, this approach is applied to an L-shaped profile with different wall thicknesses. The die is optimized automatically and simulation results are compared qualitatively with experimental results. Furthermore, the strategy is applied to a complex extrusion die for a floor skirting profile to demonstrate its general applicability.

Impacts of land cover data selection and trait parameterisation on dynamic modelling of species' range expansion

PubMed

Heikkinen, Risto K.; Bocedi, Greta; Kuussaari, Mikko; Heliölä, Janne; Leikola, Niko; Pöyry, Juha; Travis, Justin M. J.

2014-01-01

Dynamic models for range expansion provide a promising tool for assessing species' capacity to respond to climate change by shifting their ranges to new areas. However, these models include a number of uncertainties which may affect how successfully they can be applied to climate change oriented conservation planning. We used RangeShifter, a novel dynamic and individual-based modelling platform, to study two potential sources of such uncertainties: the selection of land cover data and the parameterization of key life-history traits. As an example, we modelled the range expansion dynamics of two butterfly species, one habitat specialist (Maniola jurtina) and one generalist (Issoria lathonia). Our results show that projections of total population size, number of occupied grid cells and the mean maximal latitudinal range shift were all clearly dependent on the choice made between using CORINE land cover data vs. using more detailed grassland data from three alternative national databases. Range expansion was also sensitive to the parameterization of the four considered life-history traits (magnitude and probability of long-distance dispersal events, population growth rate and carrying capacity), with carrying capacity and magnitude of long-distance dispersal showing the strongest effect.
Our results highlight the sensitivity of dynamic species population models to the selection of existing land cover data and to uncertainty in the model parameters, and indicate that these need to be carefully evaluated before the models are applied to conservation planning.

Statistical properties of the normalized ice particle size distribution

NASA Astrophysics Data System (ADS)

Delanoë, Julien; Protat, Alain; Testud, Jacques; Bouniol, Dominique; Heymsfield, A. J.; Bansemer, A.; Brown, P. R. A.; Forbes, R. M.

2005-05-01

Testud et al. (2001) have recently developed a formalism, known as the "normalized particle size distribution (PSD)", which consists of scaling the diameter and concentration axes in such a way that the normalized PSDs are independent of water content and mean volume-weighted diameter. In this paper we investigate the statistical properties of the normalized PSD for the particular case of ice clouds, which are known to play a crucial role in the Earth's radiation balance. To do so, an extensive database of airborne in situ microphysical measurements has been constructed. A remarkable stability in shape of the normalized PSD is obtained. The impact of using a single analytical shape to represent all PSDs in the database is estimated through an error analysis on the instrumental (radar reflectivity and attenuation) and cloud (ice water content, effective radius, terminal fall velocity of ice crystals, visible extinction) properties. This resulted in a roughly unbiased estimate of the instrumental and cloud parameters, with small standard deviations ranging from 5 to 12%. This error is found to be roughly independent of the temperature range. This stability in shape and its single analytical approximation imply that two parameters are now sufficient to describe any normalized PSD in ice clouds: the intercept parameter N*0 and the mean volume-weighted diameter Dm. Statistical relationships (parameterizations) between N*0 and Dm have then been evaluated in order to further reduce the number of unknowns. It has been shown that a parameterization of N*0 and Dm by temperature could not be envisaged to retrieve the cloud parameters. Nevertheless, Dm-T and mean-maximum-dimension-T parameterizations have been derived and compared to the parameterization of Kristjánsson et al. (2000) currently used to characterize particle size in climate models. The new parameterization generally produces larger particle sizes at any temperature than the Kristjánsson et al. (2000) parameterization. These new parameterizations are believed to better represent particle size at the global scale, owing to the better representativeness of the in situ microphysical database used to derive them. We then evaluated the potential of a direct N*0-Dm relationship. While the model parameterized by temperature produces strong errors in the cloud parameters, the N*0-Dm model parameterized by radar reflectivity produces accurate cloud parameters (less than 3% bias and 16% standard deviation). This result implies that the cloud parameters can be estimated from the estimate of only one parameter of the normalized PSD (N*0 or Dm) and a radar reflectivity measurement.
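The normalization underlying this analysis can be summarized compactly. The sketch below gives only the generic structure; the paper's exact definition of N*0 for ice involves additional assumptions about particle density, and the moment-ratio definition of Dm shown here applies strictly to the constant-density (liquid-equivalent) case.

    N(D) \;=\; N_0^{*}\, F\!\left(\frac{D}{D_m}\right),
    \qquad
    D_m \;=\; \frac{\displaystyle\int_0^{\infty} D^{4}\, N(D)\, \mathrm{d}D}
                   {\displaystyle\int_0^{\infty} D^{3}\, N(D)\, \mathrm{d}D},

where F is a single dimensionless shape function which, per the abstract, is remarkably stable across the ice-cloud database, so that specifying N_0^{*} and D_m fixes the full distribution.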
Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models

PubMed

Allen, R. J.; Rieger, T. R.; Musante, C. J.

2016-03-01

Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.

A FORTRAN program for multivariate survival analysis on the personal computer

PubMed

Mulder, P. G.

1988-01-01

In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained from the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.

Parametric Modeling Investigation of a Radially-Staged Low-Emission Aviation Combustor

NASA Technical Reports Server (NTRS)

Heath, Christopher M.

2016-01-01

Aviation gas-turbine combustion demands high efficiency, wide operability and minimal trace gas emissions.
Performance-critical design parameters include injector geometry, combustor layout, fuel-air mixing and engine cycle conditions. The present investigation explores these factors and their impact on a radially staged low-emission aviation combustor sized for a next-generation 24,000-lbf-thrust engine. By coupling multi-fidelity computational tools, a design exploration was performed using a parameterized annular combustor sector at projected 100% takeoff power conditions. Design objectives included nitrogen oxide emission indices and overall combustor pressure loss. From the design space, an optimal configuration was selected and simulated at 7.1, 30 and 85% part-power operation, corresponding to landing-takeoff cycle idle, approach and climb segments. All results were obtained by solution of the steady-state Reynolds-averaged Navier-Stokes equations. Species concentrations were solved directly using a reduced 19-step reaction mechanism for Jet-A. Turbulence closure was obtained using a nonlinear K-epsilon model. This research demonstrates revolutionary combustor design exploration enabled by multi-fidelity physics-based simulation.

Reinforced dynamics for enhanced sampling in large atomic and molecular systems

NASA Astrophysics Data System (ADS)

Zhang, Linfeng; Wang, Han; E, Weinan

2018-03-01

A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.
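A toy one-dimensional illustration of the adaptive-biasing idea sketched in this abstract (not the authors' neural-network implementation): an ensemble of simple models is fitted to accumulated free-energy estimates along a collective variable, the ensemble spread plays the role of the uncertainty indicator, and the bias is applied only where the ensemble agrees.

    import numpy as np

    rng = np.random.default_rng(1)

    def free_energy(s):
        """Toy double-well free-energy surface along one collective variable."""
        return (s**2 - 1.0)**2

    def fit_ensemble(s_data, f_data, n_models=5, degree=6):
        """Fit an ensemble of polynomial models to noisy free-energy samples."""
        models = []
        for _ in range(n_models):
            idx = rng.integers(0, len(s_data), len(s_data))   # bootstrap resample
            models.append(np.polyfit(s_data[idx], f_data[idx], degree))
        return models

    def bias(s, models, uncertainty_cutoff=0.2):
        """Apply -<F_predicted> only where the ensemble agrees (low uncertainty)."""
        preds = np.array([np.polyval(m, s) for m in models])
        if preds.std() > uncertainty_cutoff:
            return 0.0          # unexplored region: leave the dynamics unbiased
        return -preds.mean()    # explored region: flatten the estimated free energy

    # Collect noisy free-energy estimates at states visited so far (toy data),
    # then check that the biased potential is flatter than the original one.
    s_visited = rng.uniform(-1.5, 1.5, 200)
    f_noisy = free_energy(s_visited) + 0.05 * rng.standard_normal(200)
    models = fit_ensemble(s_visited, f_noisy)

    grid = np.linspace(-1.5, 1.5, 7)
    biased = [free_energy(s) + bias(s, models) for s in grid]
    print(np.round(biased, 3))   # roughly flat where the ensemble is confident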
Sensitivity of single column model simulations of Arctic springtime clouds to different cloud cover and mixed phase cloud parameterizations

NASA Astrophysics Data System (ADS)

Zhang, Junhua; Lohmann, Ulrike

2003-08-01

The single column model of the Canadian Centre for Climate Modelling and Analysis (CCCma) climate model is used to simulate Arctic spring cloud properties observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. The model is driven by rawinsonde-observation-constrained European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data. Five cloud parameterizations, including three statistical and two explicit schemes, are compared, and the sensitivity to mixed phase cloud parameterizations is studied. Using the original mixed phase cloud parameterization of the model, the statistical cloud schemes produce more cloud cover, cloud water, and precipitation than the explicit schemes and in general agree better with observations. The mixed phase cloud parameterization from ECMWF decreases the initial saturation specific humidity threshold of cloud formation. This improves the simulated cloud cover in the explicit schemes and reduces the difference between the different cloud schemes. On the other hand, because the ECMWF mixed phase cloud scheme does not consider the Bergeron-Findeisen process, fewer ice crystals are formed. This leads to a higher liquid water path and less precipitation than was observed.

Active Subspaces of Airfoil Shape Parameterizations

NASA Astrophysics Data System (ADS)

Grey, Zachary J.; Constantine, Paul G.

2018-05-01

Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
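A minimal sketch of how an active subspace is typically estimated from gradient samples follows. The quadratic test function, its coefficient vector, and the Monte Carlo sample count are placeholders, not the airfoil quantities of interest studied above.

    import numpy as np

    rng = np.random.default_rng(2)

    def grad_f(x):
        """Gradient of a toy quantity of interest f(x) = (a.x)^2 + 0.005*|x|^2,
        standing in for, e.g., a lift or drag coefficient over shape parameters."""
        a = np.array([1.0, 0.5, 0.1, 0.02, 0.01])
        return 2.0 * (a @ x) * a + 0.01 * x

    # Monte Carlo estimate of C = E[grad f grad f^T] over the design space.
    m = 5                                  # number of design variables
    samples = rng.uniform(-1.0, 1.0, size=(2000, m))
    C = np.zeros((m, m))
    for x in samples:
        g = grad_f(x)
        C += np.outer(g, g)
    C /= len(samples)

    # Eigendecomposition: large gaps in the spectrum indicate a low-dimensional
    # active subspace spanned by the leading eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    print("eigenvalues:", np.round(eigvals[order], 4))
    print("leading active direction:", np.round(eigvecs[:, order[0]], 3))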
The multifacet graphically contracted function method. II. A general procedure for the parameterization of orthogonal matrices and its application to arc factors

DOE Office of Scientific and Technical Information (OSTI.GOV)

Shepard, Ron; Brozell, Scott R.; Gidofalvi, Gergely

2014-08-14

Practical algorithms are presented for the parameterization of orthogonal matrices Q ∈ R^{m×n} in terms of the minimal number of essential parameters (φ). Both square (n = m) and rectangular (n < m) situations are examined. Two separate kinds of parameterizations are considered, one in which the individual columns of Q are distinct, and the other in which only Span(Q) is significant. The latter is relevant to chemical applications such as the representation of the arc factors in the multifacet graphically contracted function method and the representation of orbital coefficients in SCF and DFT methods. The parameterizations are represented formally using products of elementary Householder reflector matrices. Standard mathematical libraries, such as LAPACK, may be used to perform the basic low-level factorization, reduction, and other algebraic operations. Some care must be taken with the choice of phase factors in order to ensure stability and continuity. The transformation of gradient arrays between the Q and (φ) parameterizations is also considered. Operation counts for all factorizations and transformations are determined. Numerical results are presented which demonstrate the robustness, stability, and accuracy of these algorithms.
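As a small illustration of building an orthogonal matrix from elementary Householder reflectors (a generic sketch, not the minimal essential-parameter scheme or phase conventions developed in the paper):

    import numpy as np

    def householder_q(vectors):
        """Build an m x m orthogonal matrix as a product of Householder reflectors.

        Each row of `vectors` defines one reflector H = I - 2 v v^T / (v^T v);
        the number of reflectors used here is illustrative, not the minimal
        essential-parameter count described in the abstract.
        """
        m = vectors.shape[1]
        Q = np.eye(m)
        for v in vectors:
            v = v / np.linalg.norm(v)
            Q = Q @ (np.eye(m) - 2.0 * np.outer(v, v))
        return Q

    rng = np.random.default_rng(3)
    V = rng.standard_normal((3, 5))          # three reflectors acting in R^5
    Q = householder_q(V)
    print(np.allclose(Q.T @ Q, np.eye(5)))   # True: Q is orthogonal
    Q_rect = Q[:, :2]                        # rectangular case: only Span(Q_rect) matters
    print(np.allclose(Q_rect.T @ Q_rect, np.eye(2)))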
  Infrared radiation parameterizations for the minor CO2 bands and for several CFC bands in the window region

    NASA Technical Reports Server (NTRS)

    Kratz, David P.; Chou, Ming-Dah; Yan, Michael M.-H.

    1993-01-01

    Fast and accurate parameterizations have been developed for the transmission functions of the CO2 9.4- and 10.4-micron bands, as well as the CFC-11, CFC-12, and CFC-22 bands located in the 8-12-micron region. The parameterizations are based on line-by-line calculations of transmission functions for the CO2 bands and on high spectral resolution laboratory measurements of the absorption coefficients for the CFC bands. Also developed are the parameterizations for the H2O transmission functions for the corresponding spectral bands. Compared to the high-resolution calculations, fluxes at the tropopause computed with the parameterizations are accurate to within 10 percent when overlapping of gas absorptions within a band is taken into account. For individual gas absorption, the accuracy is of order 0-2 percent. The climatic effects of these trace gases have been studied using a zonally averaged multilayer energy balance model, which includes seasonal cycles and a simplified deep ocean. With the trace gas abundances taken to follow the Intergovernmental Panel on Climate Change Low Emissions 'B' scenario, the transient response of the surface temperature is simulated for the period 1900-2060.
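For readers unfamiliar with how band-averaged transmission functions are typically parameterized from line-by-line data, one common generic form is an exponential-sum fit, T(u) ≈ Σ_i w_i exp(-k_i u). The sketch below fits such a sum to synthetic "line-by-line" transmissions; the absorber amounts, coefficients, and weights are invented and are not the CO2 or CFC parameterizations of this paper.

```python
# Generic illustration only: fit a band-averaged transmission T(u) with an
# exponential sum using non-negative least squares. Data are synthetic.
import numpy as np
from scipy.optimize import nnls

u = np.logspace(-3, 2, 60)                                   # absorber amounts (arbitrary units)
T_lbl = 0.7 * np.exp(-0.05 * u) + 0.3 * np.exp(-2.0 * u)     # stand-in "line-by-line" transmission

k_grid = np.logspace(-3, 1, 12)                              # candidate absorption coefficients
A = np.exp(-np.outer(u, k_grid))                             # basis functions exp(-k_i * u)
w, _ = nnls(A, T_lbl)                                        # non-negative weights

T_fit = A @ w
print("max fit error:", np.max(np.abs(T_fit - T_lbl)))
```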
  Effective Atomic Number, Mass Attenuation Coefficient Parameterization, and Implications for High-Energy X-Ray Cargo Inspection Systems

    NASA Astrophysics Data System (ADS)

    Langeveld, Willem G. J.

    The most widely used technology for the non-intrusive active inspection of cargo containers and trucks is x-ray radiography at high energies (4-9 MeV). Technologies such as dual-energy imaging, spectroscopy, and statistical waveform analysis can be used to estimate the effective atomic number (Zeff) of the cargo from the x-ray transmission data, because the mass attenuation coefficient depends on energy as well as atomic number Z. The estimated effective atomic number, Zeff, of the cargo then leads to improved detection capability for contraband and threats, including special nuclear materials (SNM) and shielding. In this context, the exact meaning of effective atomic number (for mixtures and compounds) is generally not well-defined. Physics-based parameterizations of the mass attenuation coefficient have been given in the past, but usually for a limited low-energy range. Definitions of Zeff have been based, in part, on such parameterizations. Here, we give an improved parameterization at low energies (20-1000 keV) which leads to a well-defined Zeff. We then extend this parameterization up to energies relevant for cargo inspection (10 MeV), and examine what happens to the Zeff definition at these higher energies.

  Parameterization of planetary wave breaking in the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Garcia, Rolando R.

    1991-01-01

    A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate.
    The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.

  Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Chou, Ming-Dah

    1994-01-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the schemes compute very accurately the clear-sky fluxes and cooling rates from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed, the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.

  76 FR 27850 - Irish Potatoes Grown in Washington; Modification of the Rules and Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-13

    ... continue exploring alternative marketing strategies. DATES: Effective July 1, 2011; comments received by... opportunity to continue exploring alternative marketing strategies. The authority for regulation is provided... DEPARTMENT OF AGRICULTURE Agricultural Marketing Service 7 CFR Part 946 [Doc. No. AMS-FV-11-0024...
  What Next? Promoting Alternatives to Ability Grouping.

    ERIC Educational Resources Information Center

    Wheelock, Anne; Hawley, Willis D.

    With new knowledge and tools at their disposal, educators at all levels are exploring alternatives to ability grouping in order to improve schooling for all students. Bringing about positive results requires the development and utilization of knowledge about how ability grouping affects schools, exploration of beliefs that support grouping, and…

  SU-F-T-147: An Alternative Parameterization of Scatter Behavior Allows Significant Reduction of Beam Characterization for Pencil Beam Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van den Heuvel, F; Fiorini, F; George, B

    2016-06-15

    Purpose: 1) To describe the characteristics of pencil beam proton dose deposition kernels in a homogeneous medium using a novel parameterization. 2) To propose a method utilizing this novel parameterization to reduce the measurements and pre-computation required in commissioning a pencil beam proton therapy system. Methods: Using beam data from a clinical, pencil beam proton therapy center, Monte Carlo simulations were performed to characterize the dose depositions at a range of energies from 100.32 to 226.08 MeV in 3.6 MeV steps. At each energy, the beam is defined at the surface of the phantom by a two-dimensional Normal distribution. Using FLUKA, the in-medium dose distribution is calculated in a 200×200×350 mm cube with 1 mm^3 tally volumes. The calculated dose distribution in each 200×200 slice perpendicular to the beam axis is then characterized using a symmetric alpha-stable distribution centered on the beam axis. This results in two parameters, α and γ, that completely describe the shape of the distribution. In addition, the total dose deposited on each slice is calculated. The alpha-stable parameters are plotted as a function of the depth in-medium, providing a representation of dose deposition along the pencil beam. We observed that these curves are isometric: a scaling of both abscissa and ordinate maps the curve for one energy onto that for another. Results: Using interpolation of the scaling factors of two source curves representative of different beam energies, we predicted the parameters of a third curve at an intermediate energy. The errors are quantified by the maximal difference and provide a fit better than previous methods. The maximal energy difference between the source curves generating identical curves was 21.14 MeV. Conclusion: We have introduced a novel method to parameterize the in-phantom properties of pencil beam proton dose depositions. For the case of the Knoxville IBA system, no more than nine pencil beams have to be fully characterized.
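A minimal sketch of the parameterization idea, assuming invented values of α and γ: each depth slice's lateral dose profile is modeled with a symmetric alpha-stable distribution (scipy's levy_stable with β = 0), and profiles that share the same α map onto one another under a joint scaling of abscissa and ordinate, which is the isometry noted in the abstract.

```python
# Illustration only (parameter values are invented, not measured beam data):
# lateral spread of a pencil beam at one depth as a symmetric alpha-stable pdf.
import numpy as np
from scipy.stats import levy_stable

x = np.linspace(-50.0, 50.0, 201)            # lateral position [mm]

def lateral_profile(x, alpha, gamma, dose_total=1.0):
    """Lateral dose profile at one depth slice; beta = 0 gives the symmetric case."""
    return dose_total * levy_stable.pdf(x, alpha, 0.0, loc=0.0, scale=gamma)

profile_a = lateral_profile(x, alpha=1.8, gamma=4.0)    # hypothetical depth/energy A
profile_b = lateral_profile(x, alpha=1.8, gamma=6.5)    # hypothetical depth/energy B

# Rescaling abscissa and ordinate maps one profile onto the other when alpha is shared.
scale = 6.5 / 4.0
print(np.allclose(profile_b, lateral_profile(x / scale, 1.8, 4.0) / scale))
```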
  Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan

    2012-05-15

    Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited-state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2x over the version that used default optimizations but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.
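The workflow can be illustrated with a toy CPU analogue (the paper itself targets GPU tensor contractions): sweep a small parameter space with a representative micro-kernel, time each variant, and reuse the best setting for the full problem. The blocked matrix product, block sizes, and timing harness below are invented for illustration.

```python
# Toy analogue of parameterized micro-benchmarking: choose a blocking parameter
# from a small micro-benchmark sweep, then reuse it for larger runs.
import time
import numpy as np

def contract_blocked(A, B, block):
    """Blocked matrix product standing in for a tensor-contraction kernel."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, block):
        for j in range(0, n, block):
            for k in range(0, n, block):
                C[i:i+block, j:j+block] += A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
    return C

def time_kernel(fn, *args, repeats=3):
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

rng = np.random.default_rng(0)
A_small = rng.standard_normal((256, 256))        # micro-benchmark problem size
B_small = rng.standard_normal((256, 256))

candidate_blocks = [32, 64, 128, 256]            # the "parameter space"
timings = {b: time_kernel(contract_blocked, A_small, B_small, b) for b in candidate_blocks}
best_block = min(timings, key=timings.get)
print("micro-benchmark timings:", timings, "-> chosen block:", best_block)
```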
  Stepping towards new parameterizations for non-canonical atmospheric surface-layer conditions

    NASA Astrophysics Data System (ADS)

    Calaf, M.; Margairaz, F.; Pardyjak, E.

    2017-12-01

    Representing land-atmosphere exchange processes as a lower boundary condition remains a challenge. This is partially a result of the fact that land-surface heterogeneity exists at all spatial scales and its variability does not "average" out with decreasing scales. Such variability need not rapidly blend away from the boundary, thereby impacting the near-surface region of the atmosphere. Traditionally, momentum and energy fluxes linking the land surface to the flow in NWP models have been parameterized using atmospheric surface layer (ASL) similarity theory. There is ample evidence that such representation is acceptable for stationary and planar-homogeneous flows in the absence of subsidence. However, heterogeneity remains a ubiquitous feature eliciting appreciable deviations when using ASL similarity theory, especially in scalars such as moisture and air temperature, whose blending is less efficient when compared to momentum. The focus of this project is to quantify the effect of surface thermal heterogeneity with scales O(1/10) the height of the atmospheric boundary layer and characterized by uniform roughness. Such near-canonical cases describe inhomogeneous scalar transport in an otherwise planar homogeneous flow when thermal stratification is weak or absent. In this work we present a large-eddy simulation study that characterizes the effect of surface thermal heterogeneities on the atmospheric flow using the concept of dispersive fluxes. Results illustrate a regime in which the flow is mostly driven by the surface thermal heterogeneities, in which the contribution of the dispersive fluxes can account for up to 40% of the total sensible heat flux. Results also illustrate an alternative regime in which the effect of the surface thermal heterogeneities is quickly blended, and the dispersive fluxes instead provide a quantification of the flow spatial heterogeneities produced by coherent turbulent structures resulting from the surface shear stress. A threshold flow-dynamics parameter is introduced to differentiate dispersive fluxes driven by surface thermal heterogeneities from those induced by surface shear. We believe that results from this research are a first step in developing new parameterizations appropriate for non-canonical ASL conditions.
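A short sketch of the dispersive-flux diagnostic used in such LES studies, on synthetic fields: the dispersive sensible heat flux at a given height is the horizontal average of the product of the deviations of the time-mean vertical velocity and potential temperature from their horizontal means. Array sizes and magnitudes below are arbitrary.

```python
# Dispersive-flux sketch on synthetic fields: <w'' theta''>, where '' denotes the
# deviation of the time-averaged field from its horizontal (planar) mean and <>
# is the horizontal average at one height.
import numpy as np

rng = np.random.default_rng(0)
nt, ny, nx = 100, 64, 64                             # time steps and horizontal grid
w = 0.1 * rng.standard_normal((nt, ny, nx))          # vertical velocity [m/s]
theta = 300.0 + rng.standard_normal((nt, ny, nx))    # potential temperature [K]

w_bar = w.mean(axis=0)                               # time averages
theta_bar = theta.mean(axis=0)

w_dev = w_bar - w_bar.mean()                         # deviations from horizontal mean
theta_dev = theta_bar - theta_bar.mean()

dispersive_flux = (w_dev * theta_dev).mean()                       # <w'' theta''>
turbulent_flux = ((w - w_bar) * (theta - theta_bar)).mean()        # resolved turbulent flux
print(dispersive_flux, turbulent_flux)
```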
  Testing cloud microphysics parameterizations in NCAR CAM5 with ISDAC and M-PACE observations

    NASA Astrophysics Data System (ADS)

    Liu, Xiaohong; Xie, Shaocheng; Boyle, James; Klein, Stephen A.; Shi, Xiangjun; Wang, Zhien; Lin, Wuyin; Ghan, Steven J.; Earle, Michael; Liu, Peter S. K.; Zelenyuk, Alla

    2011-01-01

    Arctic clouds simulated by the National Center for Atmospheric Research (NCAR) Community Atmospheric Model version 5 (CAM5) are evaluated with observations from the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Indirect and Semi-Direct Aerosol Campaign (ISDAC) and Mixed-Phase Arctic Cloud Experiment (M-PACE), which were conducted at its North Slope of Alaska site in April 2008 and October 2004, respectively. Model forecasts for the Arctic spring and fall seasons performed under the Cloud-Associated Parameterizations Testbed framework generally reproduce the spatial distributions of cloud fraction for single-layer boundary-layer mixed-phase stratocumulus and multilayer or deep frontal clouds. However, for low-level stratocumulus, the model significantly underestimates the observed cloud liquid water content in both seasons. As a result, CAM5 significantly underestimates the surface downward longwave radiative fluxes by 20-40 W m-2. Introducing a new ice nucleation parameterization slightly improves the model performance for low-level mixed-phase clouds by increasing cloud liquid water content through the reduction of the conversion rate from cloud liquid to ice by the Wegener-Bergeron-Findeisen process. The CAM5 single-column model testing shows that changing the instantaneous freezing temperature of rain to form snow from -5°C to -40°C causes a large increase in modeled cloud liquid water content through the slowing down of cloud liquid- and rain-related processes (e.g., autoconversion of cloud liquid to rain). The underestimation of aerosol concentrations in CAM5 in the Arctic also plays an important role in the low bias of cloud liquid water in the single-layer mixed-phase clouds. In addition, numerical issues related to the coupling of model physics and time stepping in CAM5 are responsible for the model biases and will be explored in future studies.

  Comparisons of Cloud Properties over the Southern Ocean between In situ Observations and WRF Simulations

    NASA Astrophysics Data System (ADS)

    D'Alessandro, J.; Diao, M.; Wu, C.; Liu, X.

    2017-12-01

    Numerical weather models often struggle to represent clouds since small-scale cloud processes must be parameterized.
    For example, models often utilize simple parameterizations for transitioning from liquid to ice, usually set as a function of temperature. However, supercooled liquid water (SLW) often persists at temperatures much lower than the threshold values used in microphysics parameterizations. Previous observational studies of clouds over the Southern Ocean have found high frequencies of SLW (e.g., Morrison et al., 2011). Many of these studies have relied on satellite retrievals, which provide relatively low resolution observations and are often associated with large uncertainties due to assumptions of microphysical properties (e.g., particle size distributions). Recently, the NSF/NCAR O2/N2 Ratio and CO2 Airborne Southern Ocean Study (ORCAS) campaign took observations via the NSF/NCAR HIAPER research aircraft during January and February of 2016, providing in situ observations over the Southern Ocean (50°W to 92°W). We compare simulated results from the Weather Research and Forecasting (WRF) model with in situ observations from ORCAS. Differences between observations and simulations are evaluated via statistical analyses. Initial results from ORCAS reveal a high frequency of SLW at temperatures as low as -15°C, and the existence of SLW around -30°C. Recent studies have found that boundary layer clouds are underestimated by WRF in regions unaffected by cyclonic activity (Huang et al., 2014), suggesting a lack of low-level moisture due to local processes. To explore this, relative humidity distributions are examined and controlled by cloud microphysical characteristics (e.g., total water content) and relevant ambient properties (e.g., vertical velocity). A relatively low frequency of simulated SLW may in part explain the discrepancies in WRF, as cloud-top SLW results in stronger radiative cooling and turbulent motions conducive to long-lived cloud regimes. Results presented in this study will help improve our understanding of Southern Ocean clouds and the observed discrepancies seen in WRF simulations.

  Investigating the impact of surface wave breaking on modeling the trajectories of drifters in the northern Adriatic Sea during a wind-storm event

    USGS Publications Warehouse

    Carniel, S.; Warner, J.C.; Chiggiato, J.; Sclavo, M.

    2009-01-01

    An accurate numerical prediction of the oceanic upper layer velocity is a demanding requirement for many applications at sea and is a function of several near-surface processes that need to be incorporated in a numerical model. Among them, we assess the effects of vertical resolution, different vertical mixing parameterizations (the so-called Generic Length Scale (GLS) set of k-ε, k-ω, gen, and the Mellor-Yamada scheme), and surface roughness values on turbulent kinetic energy (k) injection from breaking waves. First, we modified the GLS turbulence closure formulation in the Regional Ocean Modeling System (ROMS) to incorporate the surface flux of turbulent kinetic energy due to wave breaking. Then, we applied the model to idealized test cases, exploring the sensitivity to the above-mentioned factors. Last, the model was applied to a realistic situation in the Adriatic Sea driven by numerical meteorological forcings and river discharges. In this case, numerical drifters were released during an intense episode of Bora winds that occurred in mid-February 2003, and their trajectories were compared to the displacement of satellite-tracked drifters deployed during the ADRIA02-03 sea-truth campaign. Results indicated that the inclusion of the wave breaking process helps improve the accuracy of the numerical simulations, subject to an increase in the typical value of the surface roughness z0. Specifically, the best performance was obtained with αCH = 56,000 in the Charnock formula, the wave breaking parameterization activated, and k-ε as the turbulence closure model. With these options, the relative error with respect to the average distance of the drifter was about 25% (5.5 km/day). The most sensitive factor in the model was found to be the value of αCH enhanced with respect to a standard value, followed by the adoption of the wave breaking parameterization and the particular turbulence closure model selected. © 2009 Elsevier Ltd.
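Two of the surface ingredients discussed above can be sketched directly: a Charnock-type roughness z0 = αCH u*² / g with an enhanced αCH, and a surface injection of turbulent kinetic energy proportional to u*³ when wave breaking is active. The bulk drag coefficient and the TKE-flux constant c_w below are generic, literature-style values, not values taken from this study.

```python
# Hedged sketch of the surface ingredients: Charnock-type roughness with an
# enhanced coefficient, and a Craig-and-Banner-type surface TKE flux ~ u*^3.
import numpy as np

G = 9.81                           # gravitational acceleration [m/s^2]
RHO_AIR, RHO_WATER = 1.2, 1025.0   # air and seawater densities [kg/m^3]

def friction_velocity(wind_speed_10m, drag_coeff=1.3e-3):
    """Water-side friction velocity from a simple bulk formula (illustrative)."""
    tau = RHO_AIR * drag_coeff * wind_speed_10m**2
    return np.sqrt(tau / RHO_WATER)

def surface_roughness(u_star, alpha_ch=56000.0):
    """Charnock-type roughness z0 = alpha_CH * u*^2 / g with the enhanced value quoted above."""
    return alpha_ch * u_star**2 / G

def surface_tke_flux(u_star, c_w=100.0, wave_breaking=True):
    """Surface TKE injection ~ c_w * u*^3 when breaking is parameterized (c_w is a typical literature value)."""
    return c_w * u_star**3 if wave_breaking else 0.0

u_star = friction_velocity(wind_speed_10m=15.0)      # strong, Bora-like wind
print(u_star, surface_roughness(u_star), surface_tke_flux(u_star))
```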
  Synthesizing 3D Surfaces from Parameterized Strip Charts

    NASA Technical Reports Server (NTRS)

    Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri

    2004-01-01

    We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods to allow for rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows trends and events to be identified. Strip charts can be overlaid when multiple parameters need to be visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area to illustrate the differences between the parameters and the overlaid strip charts, and it can rapidly be scanned by humans to gain insight. The selection of the third dimension must be a parallel or parameterized homogeneous resource in the target domain, defined using a finite, ordered, enumerated type, and not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark and drilling down to source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.
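A minimal matplotlib sketch of the core idea: stack the strip charts for each member of an enumerated resource (for example, one row per onboard computer) into a 2D array and render it as a 3D surface, so trends across both time and the enumerated dimension can be scanned at a glance. The data and labels are invented.

```python
# Sketch: synthesize a 3D surface from a stack of parameterized strip charts.
import numpy as np
import matplotlib.pyplot as plt

n_items, n_times = 12, 200                         # e.g., 12 computers, 200 time samples
t = np.linspace(0.0, 10.0, n_times)
rng = np.random.default_rng(0)
strip_charts = np.array([np.sin(t + 0.3 * i) + 0.1 * rng.standard_normal(n_times)
                         for i in range(n_items)])  # one strip chart per item

T, I = np.meshgrid(t, np.arange(n_items))
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(T, I, strip_charts, cmap="viridis")
ax.set_xlabel("time")
ax.set_ylabel("item index (enumerated resource)")
ax.set_zlabel("parameter value")
plt.show()
```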
  A clinically parameterized mathematical model of Shigella immunity to inform vaccine design

    PubMed Central

    Wahid, Rezwanul; Toapanta, Franklin R.; Simon, Jakub K.; Sztein, Marcelo B.

    2018-01-01

    We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacterium that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that, at a group level, a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate at which Shigella migrates into the lamina propria or epithelium, 2) the rate at which memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine. PMID:29304144
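A sketch of the fitting workflow only, with the immune model replaced by a stand-in: draw a Latin hypercube sample over hypothetical parameter ranges, simulate each draw, score it against observations, and keep the best-fitting parameter sets. The parameter names, bounds, the simulate function, and the data are placeholders, not the paper's model.

```python
# Latin hypercube sampling + simple scoring, as a stand-in for the paper's
# parameter-estimation workflow. All names, bounds, and data are hypothetical.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
param_bounds = {                              # hypothetical ranges for illustration
    "migration_rate": (1e-3, 1.0),
    "bm_to_asc_rate": (1e-3, 1.0),
    "antibody_production": (0.1, 10.0),
    "bm_carrying_capacity": (1e2, 1e5),
}
lower = np.array([b[0] for b in param_bounds.values()])
upper = np.array([b[1] for b in param_bounds.values()])

sampler = qmc.LatinHypercube(d=len(param_bounds), seed=rng)
samples = qmc.scale(sampler.random(n=1000), lower, upper)

observed = np.array([0.2, 1.5, 3.0, 2.2])     # stand-in antibody-titer time series

def simulate(params):
    """Placeholder for the immune-response model; returns a mock trajectory."""
    return params[2] * np.tanh(np.linspace(0.0, 1.0, observed.size) * params[1] * 5.0)

errors = np.array([np.sum((simulate(p) - observed) ** 2) for p in samples])
best = samples[np.argsort(errors)[:10]]        # ten best-fit parameterizations
print(best[0])
```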
  A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.

    PubMed

    Davis, Courtney L; Wahid, Rezwanul; Toapanta, Franklin R; Simon, Jakub K; Sztein, Marcelo B

    2018-01-01

    We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacterium that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that, at a group level, a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate at which Shigella migrates into the lamina propria or epithelium, 2) the rate at which memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.

  Radar and microphysical characteristics of convective storms simulated from a numerical model using a new microphysical parameterization

    NASA Technical Reports Server (NTRS)

    Ferrier, Brad S.; Tao, Wei-Kuo; Simpson, Joanne

    1991-01-01

    The basic features of a new and improved bulk-microphysical parameterization capable of simulating the hydrometeor structure of convective systems in all types of large-scale environments (with minimal adjustment of coefficients) are studied. Reflectivities simulated from the model are compared with radar observations of an intense midlatitude convective system. Reflectivities simulated at 105 min using the new four-class ice scheme and its parameterized rain distribution are illustrated.
    Preliminary results indicate that this new ice scheme works efficiently in simulating midlatitude continental storms.

  Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2013-12-01

    In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic, even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.

  A comprehensive parameterization of heterogeneous ice nucleation of dust surrogate: laboratory study with hematite particles and its application to atmospheric models

    NASA Astrophysics Data System (ADS)

    Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.

    2014-12-01

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is important to accurately simulate the ice nucleation processes in cirrus clouds. The ice nucleation active surface-site density (ns) of hematite particles, used as a proxy for atmospheric dust particles, was derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water-subsaturated conditions. These conditions were achieved by continuously changing the temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the two other T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T lower than -60 °C revealed that higher RHice was necessary to maintain a constant ns, whereas T may have played a significant role in ice nucleation at T higher than -50 °C. We implemented the new hematite-derived ns parameterization, which agrees well with previous AIDA measurements of desert dust, into two conceptual cloud models to investigate their sensitivity to the new parameterization in comparison to existing ice nucleation schemes for simulating cirrus cloud properties. Our results show that the new AIDA-based parameterization leads to an order of magnitude higher ice crystal concentrations and to an inhibition of homogeneous nucleation in lower-temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties, such as cloud longevity and initiation, compared to previous parameterizations.
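To make the ns framework concrete, the sketch below applies a hypothetical ns(T, RHice) to a monodisperse particle population via the usual frozen-fraction relation f_ice = 1 - exp(-ns A). The functional form and coefficients are invented placeholders, not the hematite fit reported above.

```python
# How an ice-nucleation active surface-site density parameterization is applied
# (functional form and coefficients are invented placeholders, not the AIDA fit).
import numpy as np

def ns_placeholder(T_celsius, rh_ice):
    """Hypothetical n_s in m^-2, increasing with supercooling and with RH_ice above saturation."""
    return 1.0e8 * np.exp(-0.2 * (T_celsius + 36.0)) * np.maximum(rh_ice - 1.0, 0.0)

def frozen_fraction(T_celsius, rh_ice, diameter_m=1.0e-6):
    """Frozen fraction of a monodisperse population: f_ice = 1 - exp(-n_s * A)."""
    area = np.pi * diameter_m**2              # geometric surface area of a sphere
    return 1.0 - np.exp(-ns_placeholder(T_celsius, rh_ice) * area)

for T in (-40.0, -55.0, -70.0):
    print(T, frozen_fraction(T, rh_ice=1.3))
```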
  A Physical Parameterization of Snow Albedo for Use in Climate Models.

    NASA Astrophysics Data System (ADS)

    Marshall, Susan Elaine

    The albedo of a natural snow cover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically based parameterization which is accurate (within +/- 3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal-cycle, fixed-hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine. However, this parameterization offers greater predictability for climate change experiments outside the range of current snow conditions because it is physically based and not tuned to current empirical results.
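As a purely illustrative stand-in for a grain-size-dependent scheme (not the SNOALB subroutine itself), the sketch below uses the common qualitative behavior that spectral snow albedo falls roughly with the square root of the effective grain radius, and more strongly in the near-infrared than in the visible. All coefficients and the spectral weighting are invented.

```python
# Illustrative grain-size dependence of snow albedo; NOT the SNOALB scheme.
import numpy as np

def snow_albedo(grain_radius_um):
    """Return (visible, near-infrared, broadband) albedo for an effective grain radius in microns."""
    r = np.sqrt(np.maximum(grain_radius_um, 10.0))        # sqrt dependence on grain radius
    vis = np.clip(0.98 - 0.002 * r, 0.0, 1.0)             # visible band (weak sensitivity)
    nir = np.clip(0.85 - 0.012 * r, 0.0, 1.0)             # near-infrared band (strong sensitivity)
    broadband = 0.55 * vis + 0.45 * nir                   # crude spectral weighting
    return vis, nir, broadband

for radius in (50.0, 200.0, 1000.0):                      # new, aged, and old melting snow
    print(radius, snow_albedo(radius))
```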
  Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Hälg, Roger A.; Baiocco, Giorgio; Lomax, Tony

    2016-08-01

    The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate not only the dose but also the energy spectra, in order to obtain quantities which could be a measure of the biological effectiveness and to test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation-corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was simulated with the GEANT Monte Carlo code. Based on the simulated neutron spectra for three different proton beam energies, a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with satisfying precision up to 85 cm away from the proton pencil beam when compared to the results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between the Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factor and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy.
    The pencil beam algorithm has been extended using the developed parameterizations in order to calculate the neutron energy, quality factor and RBE.

  The Separate Physics and Dynamics Experiment (SPADE) framework for determining resolution awareness: A case study of microphysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, William I.; Ma, Po-Lun; Xiao, Heng

    2013-08-29

    The ability to use multi-resolution dynamical cores for weather and climate modeling is pushing the atmospheric community towards developing scale-aware or, more specifically, resolution-aware parameterizations that will function properly across a range of grid spacings. Determining the resolution dependence of specific model parameterizations is difficult due to strong resolution dependencies in many pieces of the model. This study presents the Separate Physics and Dynamics Experiment (SPADE) framework that can be used to isolate the resolution-dependent behavior of specific parameterizations without conflating resolution dependencies from other portions of the model. To demonstrate the SPADE framework, the resolution dependence of the Morrison microphysics from the Weather Research and Forecasting model and the Morrison-Gettelman microphysics from the Community Atmosphere Model are compared for grid spacings spanning the cloud modeling gray zone. It is shown that the Morrison scheme has stronger resolution dependence than Morrison-Gettelman, and that the ability of Morrison-Gettelman to use partial cloud fractions is not the primary reason for this difference. This study also discusses how to frame the issue of resolution dependence, the meaning of which has often been assumed, but not clearly expressed, in the atmospheric modeling community.
    It is proposed that parameterization resolution dependence can be expressed in terms of "resolution dependence of the first type," RA1, which implies that the parameterization behavior converges towards observations with increasing resolution, or as "resolution dependence of the second type," RA2, which requires that the parameterization reproduce the same behavior across a range of grid spacings when compared at a given coarser resolution. RA2 behavior is considered the ideal, but brings with it serious implications due to the limitations of parameterizations in accurately estimating reality at coarse grid spacing. The type of resolution awareness developers should target depends upon the particular modeler's application.

  Parameterizing Coefficients of a POD-Based Dynamical System

    NASA Technical Reports Server (NTRS)

    Kalb, Virginia L.

    2010-01-01

    A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system, has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter-continuation software can be used on the parameterized dynamical system to derive a bifurcation diagram that accurately predicts the temporal flow behavior.
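The end product of such a parameterization can be sketched as a low-dimensional Galerkin system whose coefficients vary with Reynolds number. Below, a two-mode system with linear coefficients interpolated between two anchor Reynolds numbers is integrated in time; the coefficient values and the simple linear interpolation are placeholders and do not reproduce the paper's kinetic-energy-transfer construction.

```python
# Sketch of a Reynolds-number-parameterized ROM: da/dt = L(Re) a + n(a), with L
# interpolated between two anchor Reynolds numbers. Coefficients are made up.
import numpy as np
from scipy.integrate import solve_ivp

L_lo = np.array([[0.05, -1.0], [1.0, 0.05]])      # linear coefficients at Re = 100
L_hi = np.array([[0.20, -1.3], [1.3, 0.20]])      # linear coefficients at Re = 200

def L_of_Re(Re):
    w = (Re - 100.0) / (200.0 - 100.0)             # linear interpolation in Re
    return (1.0 - w) * L_lo + w * L_hi

def rhs(t, a, Re):
    # cubic saturation standing in for the quadratic Galerkin interactions
    return L_of_Re(Re) @ a - 0.1 * (a @ a) * a

sol = solve_ivp(rhs, (0.0, 50.0), y0=[0.01, 0.0], args=(150.0,), max_step=0.05)
print("limit-cycle amplitude estimate:", np.abs(sol.y[:, -200:]).max())
```

Sweeping Re in such a parameterized system is what a continuation tool would automate to trace out the bifurcation diagram mentioned above.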
  Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE.

    PubMed

    Schneider, Uwe; Hälg, Roger A; Baiocco, Giorgio; Lomax, Tony

    2016-08-21

    The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate not only the dose but also the energy spectra, in order to obtain quantities which could be a measure of the biological effectiveness and to test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation-corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was simulated with the GEANT Monte Carlo code. Based on the simulated neutron spectra for three different proton beam energies, a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with satisfying precision up to 85 cm away from the proton pencil beam when compared to the results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between the Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factor and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy. The pencil beam algorithm has been extended using the developed parameterizations in order to calculate the neutron energy, quality factor and RBE.

  Are quantitative sensitivity analysis methods always reliable?

    NASA Astrophysics Data System (ADS)

    Huang, X.

    2016-12-01

    Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs).
    Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the dimensionality of the parameter space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the former are retained while the latter are eliminated from further study. However, these approaches ignore the loss of the interaction effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. Therefore, the wrong sensitive parameters might be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with different sample sizes ranging from 7000 to 280000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a 10% improvement over Sobol'. Furthermore, the computational cost of calibration was reduced to 1/6 of the original. In the future, it is necessary to explore alternative SA methods emphasizing parameter interactions.

  Assessment of band gaps for alkaline-earth chalcogenides using improved Tran Blaha-modified Becke Johnson potential

    NASA Astrophysics Data System (ADS)

    Yedukondalu, N.; Kunduru, Lavanya; Roshan, S. C. Rakesh; Sainath, M.

    2018-04-01

    An assessment of the band gaps of nine alkaline-earth chalcogenides, namely the MX (M = Ca, Sr, Ba and X = S, Se, Te) compounds, is reported using the Tran Blaha-modified Becke Johnson (TB-mBJ) potential and its new parameterization. From the computed electronic band structures at the equilibrium lattice constants, these materials are found to be indirect band gap semiconductors at ambient conditions. The calculated band gaps are improved using TB-mBJ and its new parameterization when compared to the local density approximation (LDA) and Becke Johnson potentials. We also observe that the new TB-mBJ parameterization for semiconductors with gaps below 7 eV reproduces the experimental trends very well for the small-band-gap semiconducting alkaline-earth chalcogenides.
Assessment of band gaps for alkaline-earth chalcogenides using improved Tran Blaha-modified Becke Johnson potential

NASA Astrophysics Data System (ADS)

Yedukondalu, N.; Kunduru, Lavanya; Roshan, S. C. Rakesh; Sainath, M.

2018-04-01

Band gaps for nine alkaline-earth chalcogenides, namely MX (M = Ca, Sr, Ba and X = S, Se, Te), are reported using the Tran Blaha-modified Becke Johnson (TB-mBJ) potential and its new parameterization. From the computed electronic band structures at the equilibrium lattice constants, these materials are found to be indirect band gap semiconductors at ambient conditions. The calculated band gaps are improved using TB-mBJ and its new parameterization when compared to the local density approximation (LDA) and Becke Johnson potentials. We also observe that the new TB-mBJ parameterization for semiconductors with gaps below 7 eV reproduces the experimental trends very well for the small-band-gap semiconducting alkaline-earth chalcogenides. The calculated band profiles look similar for the MX compounds (electronic band structures are shown for BaS as a representative case) using LDA and the new parameterization of the TB-mBJ potential.

Cloud-radiation interactions and their parameterization in climate models

NASA Technical Reports Server (NTRS)

1994-01-01

This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models, which met on 18-20 October 1993 in Camp Springs, Maryland, USA. It was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth's surface, and to refine the process models which are used to develop advanced cloud parameterizations.

Parameterized reduced-order models using hyper-dual numbers.

DOE Office of Scientific and Technical Information (OSTI.GOV)

Fike, Jeffrey A.; Brake, Matthew Robert

2013-10-01

The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses, and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system.
This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
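Hyper-dual numbers deliver exact first and second derivatives from a single function evaluation, which is the ingredient the report combines with Craig-Bampton component mode synthesis to differentiate the reduced-order model with respect to its parameters. The class below is a self-contained sketch of the arithmetic (only addition, multiplication, and sine are implemented), and the example function is invented for illustration; none of it is taken from the report itself.

```python
import math

class HyperDual:
    """Hyper-dual number x + a*e1 + b*e2 + c*e1*e2 with e1**2 = e2**2 = 0.
    Propagating one through a smooth function yields the value and exact
    first and second derivatives, free of truncation and cancellation error."""
    def __init__(self, real, e1=0.0, e2=0.0, e1e2=0.0):
        self.real, self.e1, self.e2, self.e1e2 = real, e1, e2, e1e2

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.real + o.real, self.e1 + o.e1,
                         self.e2 + o.e2, self.e1e2 + o.e1e2)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.real * o.real,
                         self.real * o.e1 + self.e1 * o.real,
                         self.real * o.e2 + self.e2 * o.real,
                         self.real * o.e1e2 + self.e1 * o.e2
                         + self.e2 * o.e1 + self.e1e2 * o.real)
    __rmul__ = __mul__

    def sin(self):
        s, c = math.sin(self.real), math.cos(self.real)
        return HyperDual(s, c * self.e1, c * self.e2,
                         c * self.e1e2 - s * self.e1 * self.e2)

def value_first_second(f, x):
    """Evaluate f(x), f'(x) and f''(x) from one hyper-dual evaluation."""
    out = f(HyperDual(x, 1.0, 1.0, 0.0))
    return out.real, out.e1, out.e1e2

# Hypothetical parameter-dependent response k(x) = x**2 * sin(x), for illustration.
k = lambda x: x * x * x.sin()
print(value_first_second(k, 0.7))   # (value, d/dx, d2/dx2), all exact
```

In a parameterized ROM these derivatives would be taken with respect to the uncertain parameter (a thickness, a modulus, a joint stiffness, and so on) so that the reduced matrices can be expanded around the nominal design.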
Effect of a sheared flow on iceberg motion and melting

NASA Astrophysics Data System (ADS)

FitzMaurice, A.; Straneo, F.; Cenedese, C.; Andres, M.

2016-12-01

Icebergs account for approximately half the freshwater flux into the ocean from the Greenland and Antarctic ice sheets and play a major role in the distribution of meltwater into the ocean. Global climate models distribute this freshwater by parameterizing iceberg motion and melt, but these parameterizations are presently informed by limited observations. Here we present a record of speed and draft for 90 icebergs from Sermilik Fjord, southeastern Greenland, collected in conjunction with wind and ocean velocity data over an 8 month period. It is shown that icebergs subject to strongly sheared flows predominantly move with the vertical average of the ocean currents. If, as is typical in iceberg parameterizations, only the surface ocean velocity is taken into account, iceberg speed and basal melt may have errors in excess of 60%. These results emphasize the need for parameterizations to consider ocean properties over the entire iceberg draft.

A review of recent research on improvement of physical parameterizations in the GLA GCM

NASA Technical Reports Server (NTRS)

Sud, Y. C.; Walker, G. K.

1990-01-01

A systematic assessment of the effect of a series of improvements in physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with the earlier slab soil hydrology GCM (SSH-GCM) simulations. In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. The perpetual July simulations with and without interactive soil moisture show that 30 to 40 day oscillations may be a natural mode of the simulated earth-atmosphere system.

The Impact of Parameterized Convection on Climatological Precipitation in Atmospheric Global Climate Models

NASA Astrophysics Data System (ADS)

Maher, Penelope; Vallis, Geoffrey K.; Sherwood, Steven C.; Webb, Mark J.; Sansom, Philip G.

2018-04-01

Convective parameterizations are widely believed to be essential for realistic simulations of the atmosphere. However, their deficiencies also result in model biases. The role of convection schemes in modern atmospheric models is examined using Selected Process On/Off Klima Intercomparison Experiment simulations without parameterized convection and forced with observed sea surface temperatures. Convection schemes are not required for reasonable climatological precipitation. However, they are essential for reasonable daily precipitation and for constraining the extreme daily precipitation that otherwise develops. Systematic effects on lapse rate and humidity are likewise modest compared with the intermodel spread. Without parameterized convection, Kelvin waves are more realistic. An unexpectedly large moist Southern Hemisphere storm track bias is identified. This storm track bias persists without convection schemes, as do the double Intertropical Convergence Zone and excessive ocean precipitation biases. This suggests that these model biases originate from processes other than convection or that convection schemes are missing key processes.

Final Technical Report for "High-resolution global modeling of the effects of subgrid-scale clouds and turbulence on precipitating cloud systems"

DOE Office of Scientific and Technical Information (OSTI.GOV)

Larson, Vincent

2016-11-25

The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). An MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization ("CLUBB") within an MMF model. This involved interfacing CLUBB's clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations.
The chief benefit of the project is to provide an MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.

Coordinated Parameterization Development and Large-Eddy Simulation for Marine and Arctic Cloud-Topped Boundary Layers

NASA Technical Reports Server (NTRS)

Bretherton, Christopher S.

2002-01-01

The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two- and three-dimensional eddy resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type, and thickness as functions of the large-scale conditions that are predicted by global climate models. The principal achievements of the project were as follows: (1) development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall, the forecast model did predict most of the major precipitation events and synoptic variability observed over the year of observation of the SHEBA ice camp.

A note on: "A Gaussian-product stochastic Gent-McWilliams parameterization"

NASA Astrophysics Data System (ADS)

Jansen, Malte F.

2017-02-01

This note builds on a recent article by Grooms (2016), which introduces a new stochastic parameterization for eddy buoyancy fluxes. The closure proposed by Grooms accounts for the fact that eddy fluxes arise as the product of two approximately Gaussian variables, which in turn leads to a distinctly non-Gaussian distribution. The directionality of the stochastic eddy fluxes, however, remains somewhat ad hoc and depends on the reference frame of the chosen coordinate system. This note presents a modification of the approach proposed by Grooms, which eliminates this shortcoming. Eddy fluxes are computed based on a stochastic mixing length model, which leads to a frame-invariant formulation.
As in the original closure proposed by Grooms, eddy fluxes are proportional to the product of two Gaussian variables, and the parameterization reduces to the Gent and McWilliams parameterization for the mean buoyancy fluxes.
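The statistical point behind the closure, that the product of two correlated, approximately Gaussian variables is itself distinctly non-Gaussian, is easy to check numerically. The sketch below is illustrative only: the correlation value and the reading of the two factors as eddy velocity and buoyancy anomalies are assumptions, not quantities taken from either paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
rho = 0.6   # assumed correlation between the two Gaussian factors

# Two correlated standard Gaussian samples, standing in for the two fluctuating
# fields whose product forms the eddy flux.
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
flux = z1 * z2

mean_flux = flux.mean()   # ~rho: the mean part carried by the deterministic GM term
excess_kurtosis = ((flux - mean_flux) ** 4).mean() / flux.var() ** 2 - 3.0

print(f"mean flux ~ {mean_flux:.3f} (expected {rho})")
print(f"excess kurtosis ~ {excess_kurtosis:.2f} (0 for a Gaussian; strongly heavy-tailed here)")
```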
Final Technical Report for "Reducing tropical precipitation biases in CESM"

DOE Office of Scientific and Technical Information (OSTI.GOV)

Larson, Vincent

In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we have created a climate model that contains a unified cloud parameterization ("CLUBB") and a unified microphysics parameterization ("MG2"). In this model, all cloud types, including marine stratocumulus, shallow cumulus, and deep cumulus, are represented with a single equation set. This model improves the representation of convection in the Tropics. The model has been compared with ARM observations. The chief benefit of the project is to provide a climate model that is based on a more theoretically rigorous formulation.

Atmospheric solar heating rate in the water vapor bands

NASA Technical Reports Server (NTRS)

Chou, Ming-Dah

1986-01-01

The total absorption of solar radiation by water vapor in clear atmospheres is parameterized as a simple function of the scaled water vapor amount. For applications to cloudy and hazy atmospheres, the flux-weighted k-distribution functions are computed for individual absorption bands and for the total near-infrared region. The parameterization is based upon monochromatic calculations and follows essentially the scaling approximation of Chou and Arking, but the effect of temperature variation with height is taken into account in order to enhance the accuracy. Furthermore, the spectral range is extended to cover the two weak bands centered at 0.72 and 0.82 micron. Comparisons with monochromatic calculations show that the atmospheric heating rate and the surface radiation can be accurately computed from the parameterization. Comparisons are also made with other parameterizations. It is found that the absorption of solar radiation can be computed reasonably well using the Goody band model and the Curtis-Godson approximation.
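Once a flux-weighted k-distribution is available for a band, the band transmission reduces to an exponential sum over the k-bins, and layer absorption follows from transmission differences along the scaled water-vapor path. The snippet below shows only that structure; the weights, absorption coefficients, path values, and incident-flux number are invented placeholders rather than values from this parameterization.

```python
import numpy as np

# Hypothetical k-distribution for one band: weights a_i (summing to 1) and
# absorption coefficients k_i (cm^2 g^-1). Real values would come from
# line-by-line calculations; these are placeholders.
a = np.array([0.45, 0.30, 0.15, 0.07, 0.03])
k = np.array([0.002, 0.02, 0.2, 2.0, 20.0])

def band_transmission(w):
    """Band transmission T(w) = sum_i a_i * exp(-k_i * w) for a scaled
    water-vapor amount w (g cm^-2)."""
    return np.sum(a * np.exp(-k * np.asarray(w, dtype=float)[..., None]), axis=-1)

def layer_absorption(w_top, w_bot, incident_flux=200.0):
    """Flux absorbed between two levels, from the difference in band
    transmission (a stand-in for the heating-rate calculation)."""
    return incident_flux * (band_transmission(w_top) - band_transmission(w_bot))

w = np.array([0.01, 0.1, 1.0, 3.0])
print("T(w) =", band_transmission(w))
print("absorbed between w=0.1 and w=1.0:", layer_absorption(0.1, 1.0), "W m^-2")
```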
A technology assessment of alternative communications systems for the space exploration initiative

NASA Technical Reports Server (NTRS)

Ponchak, Denise S.; Zuzek, John E.; Whyte, Wayne A., Jr.; Spence, Rodney L.; Sohn, Philip Y.

1990-01-01

Telecommunications, Navigation, and Information Management (TNIM) services are vital to accomplish the ambitious goals of the Space Exploration Initiative (SEI). A technology assessment is provided for four alternative lunar and Mars operational TNIM systems based on detailed communications link analyses. The four alternative systems range from a minimum to a fully enhanced capability and use frequencies from S-band, through Ka-band, and up to optical wavelengths. Included are technology development schedules as they relate to present SEI mission architecture time frames.

Understanding Conic Sections Using Alternate Graph Paper

ERIC Educational Resources Information Center

Brown, Elizabeth M.; Jones, Elizabeth

2006-01-01

This article describes two alternative coordinate systems and their use in graphing conic sections. This alternative graph paper helps students explore the idea of eccentricity using the definitions of the conic sections.

The "Grey Zone" cold air outbreak global model intercomparison: A cross evaluation using large-eddy simulations

NASA Astrophysics Data System (ADS)

Tomassini, Lorenzo; Field, Paul R.; Honnert, Rachel; Malardel, Sylvie; McTaggart-Cowan, Ron; Saitou, Kei; Noda, Akira T.; Seifert, Axel

2017-03-01

A stratocumulus-to-cumulus transition as observed in a cold air outbreak over the North Atlantic Ocean is compared in global climate and numerical weather prediction models and a large-eddy simulation model as part of the Working Group on Numerical Experimentation "Grey Zone" project. The focus of the project is to investigate to what degree current convection and boundary layer parameterizations behave in a scale-adaptive manner in situations where the model resolution approaches the scale of convection. Global model simulations were performed at a wide range of resolutions, with convective parameterizations turned on and off. The models successfully simulate the transition between the observed boundary layer structures, from a well-mixed stratocumulus to a deeper, partly decoupled cumulus boundary layer. There are indications that surface fluxes are generally underestimated. The amounts of both cloud liquid water and cloud ice, and likely precipitation, are under-predicted, suggesting deficiencies in the strength of vertical mixing in shear-dominated boundary layers. Regulation by precipitation and mixed-phase cloud microphysical processes also plays an important role in this case. With convection parameterizations switched on, the profiles of atmospheric liquid water and cloud ice are essentially resolution-insensitive. This, however, does not imply that convection parameterizations are scale-aware. Even at the highest resolutions considered here, simulations with convective parameterizations do not converge toward the results of convection-off experiments. Convection and boundary layer parameterizations strongly interact, suggesting the need for a unified treatment of convective and turbulent mixing when addressing scale-adaptivity.

Using Alternate Energy Sources. The Illinois Plan for Industrial Education.

ERIC Educational Resources Information Center

Illinois State Univ., Normal.

This guide, which is one in the "Exploration" series of curriculum guides intended to assist junior high and middle school industrial educators in helping their students explore diverse industrial situations and technologies used in industry, deals with using alternate energy sources. The following topics are covered in the individual lessons:…

Evaluation of Planetary Boundary Layer Scheme Sensitivities for the Purpose of Parameter Estimation

EPA Science Inventory

Meteorological model errors caused by imperfect parameterizations generally cannot be overcome simply by optimizing initial and boundary conditions.
However, advanced data assimilation methods are capable of extracting significant information about parameterization behavior from ...

Parameterization guidelines and considerations for hydrologic models

Treesearch

R. W. Malone; G. Yagow; C. Baffaut; M. W. Gitau; Z. Qi; Devendra Amatya; P. B. Parajuli; J. V. Bonta; T. R. Green

2015-01-01

Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) are important and difficult tasks. An exponential...

Parameterized cross sections for Coulomb dissociation in heavy-ion collisions

NASA Technical Reports Server (NTRS)

Norbury, John W.; Cucinotta, F. A.; Townsend, L. W.; Badavi, F. F.

1988-01-01

Simple parameterizations of Coulomb dissociation cross sections for use in heavy-ion transport calculations are presented and compared to available experimental dissociation data.
The agreement between calculation and experiment is satisfactory considering the simplicity of the calculations.

Radiative flux and forcing parameterization error in aerosol-free clear skies.

PubMed

Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; Ackerman, Andrew S.; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A.; Cady-Pereira, Karen E.; Cole, Jason N. S.; Dufresne, Jean-Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J.; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M.

2015-07-16

Radiation parameterizations in GCMs are more accurate than their predecessors. Errors in estimates of 4×CO2 forcing are large, especially for solar radiation. Errors depend on atmospheric state, so the global mean error is unknown.

On the usage of classical nucleation theory in predicting the impact of bacteria on weather and climate

NASA Astrophysics Data System (ADS)

Sahyoun, Maher; Woetmann Nielsen, Niels; Havskov Sørensen, Jens; Finster, Kai; Bay Gosewinkel Karlson, Ulrich; Šantl-Temkiv, Tina; Smith Korsholm, Ulrik

2014-05-01

Bacteria such as Pseudomonas syringae have previously been found to be efficient heterogeneous ice nuclei at temperatures close to -2°C in laboratory tests. Therefore, ice nucleation active (INA) bacteria may be involved in the formation of precipitation in mixed-phase clouds and could potentially influence weather and climate. Investigations into the impact of INA bacteria on climate have shown that emissions were too low to significantly impact the climate (Hoose et al., 2010). The goal of this study is to clarify the reason for the marginal impact found when INA bacteria were considered, by investigating the usability of ice nucleation rate parameterizations based on classical nucleation theory (CNT). For this purpose, two parameterizations of heterogeneous ice nucleation were compared. Both parameterizations were implemented and tested in a 1-d version of the operational weather model HIRLAM (Lynch et al., 2000; Unden et al., 2002) in two different meteorological cases. The first parameterization, based on CNT and denoted CH08 (Chen et al., 2008), is a function of temperature and the size of the IN. The second parameterization, denoted HAR13, was derived from nucleation measurements of SnomaxTM (Hartmann et al., 2013); it is a function of temperature and the number of protein complexes on the outer membranes of the cell. The fraction of cloud droplets containing each type of IN, expressed as a percentage of the cloud droplet population, was used, and the sensitivity of cloud ice production for each parameterization was compared. In this study, HAR13 produces more cloud ice and precipitation than CH08 when the bacteria fraction increases.
In CH08, increasing the bacteria fraction leads to a decrease in the cloud ice mixing ratio. Ice production with HAR13 was found to be sensitive to changes in the bacterial fraction, whereas CH08 did not show a similar sensitivity. This may explain the marginal impact of IN bacteria in climate models when CH08 was used. The number of cell fragments carrying protein complexes appears to be a more important parameter to consider than the size of the cell when parameterizing the heterogeneous freezing of bacteria.

Historical and projected carbon balance of mature black spruce ecosystems across North America: The role of carbon-nitrogen interactions

USGS Publications Warehouse

Clein, Joy S.; McGuire, A. D.; Zhang, X.; Kicklighter, D. W.; Melillo, J. M.; Wofsy, S. C.; Jarvis, P. G.; Massheder, J. M.

2002-01-01

The role of carbon (C) and nitrogen (N) interactions on sequestration of atmospheric CO2 in black spruce ecosystems across North America was evaluated with the Terrestrial Ecosystem Model (TEM) by applying parameterizations of the model in which C-N dynamics were either coupled or uncoupled. First, the performance of the parameterizations, which were developed for the dynamics of black spruce ecosystems at the Bonanza Creek Long-Term Ecological Research site in Alaska, was evaluated by simulating C dynamics at eddy correlation tower sites in the Boreal Ecosystem Atmosphere Study (BOREAS) for black spruce ecosystems in the northern study area (northern site) and the southern study area (southern site) with local climate data. We compared simulated monthly growing season (May to September) estimates of gross primary production (GPP), total ecosystem respiration (RESP), and net ecosystem production (NEP) from 1994 to 1997 to available field-based estimates at both sites. At the northern site, monthly growing season estimates of GPP and RESP for the coupled and uncoupled simulations were highly correlated with the field-based estimates (coupled: R2 = 0.77, 0.88 for GPP and RESP; uncoupled: R2 = 0.67, 0.92 for GPP and RESP). Although the simulated seasonal pattern of NEP generally matched the field-based data, the correlations between field-based and simulated monthly growing season NEP were lower (R2 = 0.40, 0.00 for coupled and uncoupled simulations, respectively) in comparison to the correlations for GPP and RESP. The annual NEP simulated by the coupled parameterization fell within the uncertainty of field-based estimates in two of three years. On the other hand, annual NEP simulated by the uncoupled parameterization only fell within the field-based uncertainty in one of three years. At the southern site, simulated NEP generally matched field-based NEP estimates, and the correlation between monthly growing season field-based and simulated NEP (R2 = 0.36, 0.20 for coupled and uncoupled simulations, respectively) was similar to the correlations at the northern site. To evaluate the role of N dynamics in the C balance of black spruce ecosystems across North America, we simulated historical and projected C dynamics from 1900 to 2100 with a global-based climatology at 0.5° resolution (latitude × longitude) with both the coupled and uncoupled parameterizations of TEM. From analyses at the northern site, several consistent patterns emerge. There was greater inter-annual variability in net primary production (NPP) simulated by the uncoupled parameterization as compared to the coupled parameterization, which led to substantial differences in inter-annual variability in NEP between the parameterizations. The divergence between NPP and heterotrophic respiration was greater in the uncoupled simulation, resulting in more C sequestration during the projected period. These responses were the result of fundamentally different responses of the coupled and uncoupled parameterizations to changes in CO2 and climate. Across North American black spruce ecosystems, the range of simulated decadal changes in C storage was substantially greater for the uncoupled parameterization than for the coupled parameterization. Analysis of the spatial variability in decadal responses of C dynamics revealed that C fluxes simulated by the coupled and uncoupled parameterizations have different sensitivities to climate and that the climate sensitivities of the fluxes change over the temporal scope of the simulations. The results of this study suggest that uncertainties can be reduced through (1) factorial studies focused on elucidating the role of C and N interactions in the response of mature black spruce ecosystems to manipulations of atmospheric CO2 and climate, (2) establishment of a network of continuous, long-term measurements of C dynamics across the range of mature black spruce ecosystems in North America, and (3) ancillary measureme…

Why different gas flux velocity parameterizations result in so similar flux results in the North Atlantic?

NASA Astrophysics Data System (ADS)

Piskozub, Jacek; Wróbel, Iwona

2016-04-01

The North Atlantic is a crucial region for both ocean circulation and the carbon cycle. Most of the ocean's deep waters are produced in the basin, making it a large CO2 sink. The region, close to the major oceanographic centres, has been well covered with cruises. This is why we have performed a study of the dependence of the net CO2 flux upon the choice of gas transfer velocity (k) parameterization for this very region: the North Atlantic, including the European Arctic seas. The study has been a part of the ESA-funded OceanFlux GHG Evolution project and, at the same time, a PhD thesis (of I.W.) funded by the Centre of Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). Early results were presented last year at EGU 2015 as a PICO presentation (EGU2015-11206-1). We have used FluxEngine, a tool created within an earlier ESA-funded project (OceanFlux Greenhouse Gases), to calculate the North Atlantic and global fluxes with different gas transfer velocity formulas. During the processing of the data, we noticed that the North Atlantic results for different k formulas are more similar (in the sense of relative error) than the global ones. This was true both for parameterizations using the same power of wind speed and when comparing wind-squared and wind-cubed parameterizations.
This result was interesting because North Atlantic winds are stronger than the global average. Was the similarity of the flux results caused by the fact that the parameterizations were tuned to the North Atlantic, where many of the early cruises measuring CO2 fugacities were performed? A closer look at the parameterizations and their history showed that not all of them were based on North Atlantic data. Some of them were tuned to the Southern Ocean, with even stronger winds, while some were based on global budgets of 14C. However, we have found two reasons, not reported before in the literature, for North Atlantic fluxes being more similar than global ones across different gas transfer velocity parameterizations. The first is the fact that most of the k functions intersect close to 9 m/s, the typical North Atlantic wind speed. The squared and cubed functions need to intersect in order to have similar global averages: the higher values of the cubic functions at strong winds are offset by the higher values of the squared ones at weak winds. The wind speed at the intersection has to be higher than the global average wind speed because discrepancies between different parameterizations increase with wind speed. The North Atlantic region seems, by chance, to have just the right average wind speeds to make all the parameterizations result in similar annual fluxes. There is, however, a second reason for smaller inter-parameterization discrepancies in the North Atlantic than in many other ocean basins. North Atlantic CO2 fluxes are downward in every month. In many regions of the world, the direction of the flux changes between winter and summer, with wind speeds much stronger in the cold season. We show, using the actual formulas, that in such a case the differences between the parameterizations partly cancel out, which is not the case when the flux never changes its direction. Both mechanisms accidentally make the North Atlantic an area where the choice of k parameterization causes very small uncertainty in annual fluxes. On the other hand, this makes North Atlantic data not very useful for choosing the parameterizations that most closely represent real fluxes.
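The crossing-point argument is easy to make concrete. In the sketch below the quadratic and cubic forms and their coefficients are placeholders, chosen only so that the two curves cross at 9 m/s, the typical North Atlantic wind speed cited above; they are not the published coefficients of any particular scheme.

```python
import numpy as np

# Illustrative quadratic and cubic gas-transfer-velocity forms (k in cm/h, U10 in m/s).
k_quad = lambda u: 0.27 * u ** 2
k_cube = lambda u: 0.03 * u ** 3   # 0.27*u**2 == 0.03*u**3 exactly at u = 9 m/s

for u in [3.0, 6.0, 9.0, 12.0, 15.0]:
    kq, kc = k_quad(u), k_cube(u)
    print(f"U10 = {u:4.1f} m/s   quadratic {kq:6.1f}   cubic {kc:6.1f}   "
          f"relative difference {abs(kc - kq) / kq:5.1%}")
```

Near the crossing point the two forms give almost the same k, so a basin whose winds cluster around that speed sees little spread between schemes, while at low and high winds the curves diverge strongly, which is the behaviour the abstract describes.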
Silhouette-Slice Theorems.

DTIC Science & Technology

1986-09-01

... necessary to define "canonical" parameterizations.

Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model

EPA Science Inventory

This presentation describes year 1 field measurements of N2O fluxes and crop yields, which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.

Parameterization of Transport and Period Matrices with X-Y Coupling

DOE Office of Scientific and Technical Information (OSTI.GOV)

Courant, E. D.

A parameterization of 4x4 matrices describing linear beam transport systems has been obtained by Edwards and Teng. Here we extend their formalism to include dispersive effects, and give prescriptions for incorporating it in the program SYNCH.

Accuracy of parameterized proton range models; A comparison

NASA Astrophysics Data System (ADS)

Pettersen, H. E. S.; Chaar, M.; Meric, I.; Odland, O. H.; Sølie, J. R.; Röhrich, D.

2018-03-01

An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four different parameterization models for proton range in water: two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables.
In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy-loss curve is best reproduced with the differentiated Bragg-Kleeman equation.
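Two of the approaches compared above, a Bragg-Kleeman power law and spline interpolation of a range-energy table, can be sketched directly. The constants alpha ≈ 0.0022 cm MeV^-p and p ≈ 1.77 are commonly quoted approximate values for protons in water, not the fitted parameters from this study, and the table below is synthetic (generated from the same power law) purely to show the interpolation mechanics; a real application would interpolate tabulated data such as PSTAR.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Bragg-Kleeman power law R = alpha * E**p for proton range in water.
# alpha and p are approximate literature values, used here as assumptions.
ALPHA, P = 2.2e-3, 1.77

def range_bragg_kleeman(energy_mev):
    """Range in cm of water for a proton of the given energy (MeV)."""
    return ALPHA * np.asarray(energy_mev, dtype=float) ** P

def stopping_power(energy_mev):
    """Differentiated Bragg-Kleeman: dE/dx = 1 / (alpha * p * E**(p - 1))."""
    return 1.0 / (ALPHA * P * np.asarray(energy_mev, dtype=float) ** (P - 1.0))

# Synthetic range-energy table and a cubic-spline interpolant over it.
energy_table = np.array([10.0, 30.0, 70.0, 100.0, 150.0, 200.0, 230.0])
range_table = range_bragg_kleeman(energy_table)
range_spline = CubicSpline(energy_table, range_table)

E = 160.0  # MeV, a typical therapeutic energy
print("Bragg-Kleeman range     :", float(range_bragg_kleeman(E)), "cm")
print("spline-interpolated     :", float(range_spline(E)), "cm")
print("dE/dx from the power law:", float(stopping_power(E)), "MeV/cm")
```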
R-parametrization and its role in classification of linear multivariable feedback systems

NASA Technical Reports Server (NTRS)

Chen, Robert T. N.

1988-01-01

A classification of all the compensators that stabilize a given general plant in a linear, time-invariant multi-input, multi-output feedback system is developed. This classification, along with the associated necessary and sufficient conditions for stability of the feedback system, is achieved through the introduction of a new parameterization, referred to as R-Parameterization, which is a dual of the familiar Q-Parameterization. The classification is made according to the stability conditions of the compensators and the plant by themselves, and the necessary and sufficient conditions are based on the stability of Q and R themselves.

Implementation of a generalized actuator line model for wind turbine parameterization in the Weather Research and Forecasting model

DOE Office of Scientific and Technical Information (OSTI.GOV)

Marjanovic, Nikola; Mirocha, Jeffrey D.; Kosović, Branko

A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near-wake physics, including vorticity shedding and wake expansion.

Parameterization of Cloud Optical Properties for a Mixture of Ice Particles for use in Atmospheric Models

NASA Technical Reports Server (NTRS)

Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

2002-01-01

Based on single-scattering optical properties pre-computed using an improved geometric optics method, the bulk mass absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the mean effective particle size of a mixture of ice habits. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. Compared to the parameterization for a single habit of hexagonal columns, the solar heating of clouds computed with the parameterization for a mixture of habits is smaller due to a smaller single-scattering co-albedo, whereas the net downward fluxes at the TOA and surface are larger due to a larger asymmetry factor. The maximum difference in the cloud heating rate is approximately 0.2 C per day, which occurs in clouds with an optical thickness greater than 3 and a solar zenith angle less than 45 degrees. The flux difference is less than 10 W per square meter for optical thicknesses ranging from 0.6 to 10 and the entire range of solar zenith angles. The maximum flux difference is approximately 3%, which occurs around an optical thickness of 1 and at high solar zenith angles.
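Parameterizations of this kind are usually delivered as low-order fits of the bulk optical properties against the mean effective size. The sketch below uses a common functional form (extinction proportional to IWC times a0 + a1/De, with the co-albedo and asymmetry factor expressed as polynomials in De); the coefficients are invented for illustration and are not the fits derived in the paper.

```python
import numpy as np

# Placeholder fits of bulk ice-cloud optical properties versus effective size De
# (micrometres) and ice water content IWC (g m^-3). The functional forms follow
# common practice; every coefficient below is an assumption, not a published value.
A0, A1 = 3.0e-3, 2.5           # extinction fit
B0, B1 = 0.05, 5.0e-4          # co-albedo (1 - omega) fit, linear in De
C0, C1 = 0.75, 8.0e-4          # asymmetry-factor fit, linear in De

def ice_optics(iwc, de):
    beta_ext = iwc * (A0 + A1 / de)   # volume extinction coefficient (m^-1)
    omega = 1.0 - (B0 + B1 * de)      # single-scattering albedo
    g = C0 + C1 * de                  # asymmetry factor
    return beta_ext, omega, g

for de in [30.0, 60.0, 120.0]:
    beta, omega, g = ice_optics(iwc=0.02, de=de)
    print(f"De = {de:5.1f} um   beta_ext = {beta:.2e} m^-1   omega = {omega:.3f}   g = {g:.3f}")
```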
A Survey of Phase Variable Candidates of Human Locomotion

PubMed Central

Villarreal, Dario J.; Gregg, Robert D.

2014-01-01

Studies show that the human nervous system is able to parameterize gait cycle phase using sensory feedback. In the field of bipedal robots, the concept of a phase variable has been successfully used to mimic this behavior by parameterizing the gait cycle in a time-independent manner. This approach has been applied to control a powered transfemoral prosthetic leg, but the proposed phase variable was limited to the stance period of the prosthesis only. In order to achieve a more robust controller, we attempt to find a new phase variable that fully parameterizes the gait cycle of a prosthetic leg. The hip angle with respect to a global reference frame is able to monotonically parameterize both the stance and swing periods of the gait cycle. This survey looks at multiple phase variable candidates involving the hip angle with respect to a global reference frame across multiple tasks, including level-ground walking, running, and stair negotiation. In particular, we propose a novel phase variable candidate that monotonically parameterizes the whole gait cycle across all tasks, and does so particularly well for level-ground walking. In addition to furthering the design of robust robotic prosthetic leg controllers, this survey could help neuroscientists and physicians study human locomotion across tasks from a time-independent perspective. PMID:25570873
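One common way to obtain a monotonic phase from a roughly periodic hip-angle signal is to pair it with a 90-degree-shifted companion signal, here its scaled, de-meaned integral, and take the unwrapped polar angle of the resulting phase portrait. The sketch below illustrates that construction on a synthetic hip-angle trace; the companion-signal choice, the scaling, and the synthetic signal itself are assumptions for illustration, not the specific candidates evaluated in the survey.

```python
import numpy as np

def hip_phase_variable(hip_angle, dt):
    """Map a roughly periodic hip-angle trajectory to a phase in [0, 1).

    The phase is the unwrapped polar angle of the point
    (theta, scale * integral(theta)) in the phase portrait, after removing
    the means; a common construction for a monotonic gait phase."""
    theta = np.asarray(hip_angle, dtype=float)
    theta = theta - theta.mean()
    theta_int = np.cumsum(theta) * dt                 # ~90-degree-shifted companion
    theta_int = theta_int - theta_int.mean()
    scale = theta.std() / (theta_int.std() + 1e-12)   # balance the two axes
    phi = np.unwrap(np.arctan2(scale * theta_int, theta))
    return ((phi - phi[0]) / (2.0 * np.pi)) % 1.0

# Synthetic "global hip angle" over two gait cycles at a 1 Hz cadence.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
hip = 30.0 * np.sin(2.0 * np.pi * t) + 5.0 * np.sin(4.0 * np.pi * t)

phase = hip_phase_variable(hip, dt)
print("phase at start / mid / end of the first cycle:", phase[0], phase[50], phase[99])
```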
Non-perturbational surface-wave inversion: A Dix-type relation for surface waves

USGS Publications Warehouse

Haney, Matt; Tsai, Victor C.

2015-01-01

We extend the approach underlying the well-known Dix equation in reflection seismology to surface waves. Within the context of surface wave inversion, the Dix-type relation we derive for surface waves allows accurate depth profiles of shear-wave velocity to be constructed directly from phase velocity data, in contrast to perturbational methods. The depth profiles can subsequently be used as an initial model for nonlinear inversion. We provide examples of the Dix-type relation for under-parameterized and over-parameterized cases. In the under-parameterized case, we use the theory to estimate crustal thickness, crustal shear-wave velocity, and mantle shear-wave velocity across the Western U.S. from phase velocity maps measured at 8-, 20-, and 40-s periods. By adopting a thin-layer formalism and an over-parameterized model, we show how a regularized inversion based on the Dix-type relation yields smooth depth profiles of shear-wave velocity. In the process, we quantitatively demonstrate the depth sensitivity of surface-wave phase velocity as a function of frequency and the accuracy of the Dix-type relation. We apply the over-parameterized approach to a near-surface data set within the frequency band from 5 to 40 Hz and find overall agreement between the inverted model and the result of full nonlinear inversion.

Prototype MCS Parameterization for Global Climate Models

NASA Astrophysics Data System (ADS)

Moncrieff, M. W.

2017-12-01

Excellent progress has been made with observational, numerical, and theoretical studies of MCS processes, but the parameterization of those processes remains in a dire state and is missing from GCMs. The perceived complexity of the distribution, type, and intensity of organized precipitation systems has arguably daunted attention and stifled the development of adequate parameterizations. TRMM observations imply links between convective organization and large-scale meteorological features in the tropics and subtropics that are inadequately treated by GCMs. This calls for an improved physical-dynamical treatment of organized convection to enable the next generation of GCMs to reliably address a slew of challenges. The multiscale coherent structure parameterization (MCSP) paradigm is based on the fluid-dynamical concept of coherent structures in turbulent environments. The effects of vertical shear on MCS dynamics, implemented as second-baroclinic convective heating and convective momentum transport, are based on Lagrangian conservation principles, nonlinear dynamical models, and self-similarity. The prototype MCS parameterization, a minimalist proof-of-concept, is applied in the NCAR Community Climate Model, Version 5.5 (CAM 5.5). The MCSP generates convectively coupled tropical waves and large-scale precipitation features, notably in the Indo-Pacific warm-pool and Maritime Continent region, a center of action for weather and climate variability around the globe.
Gas transfer under high wind and its dependence on wave breaking and sea state

NASA Astrophysics Data System (ADS)

Brumer, Sophia; Zappa, Christopher; Fairall, Christopher; Blomquist, Byron; Brooks, Ian; Yang, Mingxi

2016-04-01

Quantifying greenhouse gas fluxes on regional and global scales relies on parameterizations of the gas transfer velocity K. To first order, K is dictated by wind speed (U) and is typically parameterized as a non-linear function of U. There is, however, a large spread in the K predicted by traditional parameterizations at high wind speed. This is because a large variety of environmental forcings and processes (wind, currents, rain, waves, breaking, surfactants, fetch) influence K, and wind speed alone cannot capture the variability of air-water gas exchange. At high wind speed especially, breaking waves become a key factor to take into account when estimating gas fluxes. The High Wind Gas exchange Study (HiWinGS) presents a unique opportunity to gain new insights into this poorly understood aspect of air-sea interaction under high winds. The HiWinGS cruise took place in the North Atlantic during October and November 2013. Wind speeds exceeded 15 m s-1 25% of the time, including 48 hrs with U10 > 20 m s-1. Continuous measurements of turbulent fluxes of heat, momentum, and gas (CO2, DMS, acetone and methanol) were taken from the bow of the R/V Knorr. The wave field was sampled by a wave rider buoy, and breaking events were tracked in visible imagery acquired from the port and starboard sides of the flying bridge during daylight hours at 20 Hz. Taking advantage of the range of physical forcing and wave conditions sampled during HiWinGS, we test existing parameterizations and explore ways of better constraining K based on whitecap coverage, sea state, and breaking statistics, contrasting pure wind seas with swell-dominated periods. We distinguish between wind seas and swell based on a separation algorithm applied to directional wave spectra; for mixed seas, system alignment is considered when interpreting results. The four gases sampled during HiWinGS ranged from being mostly waterside controlled to almost entirely airside controlled. While bubble-mediated transfer appears to be small for moderately soluble gases like DMS, the importance of wave-breaking turbulence transport has yet to be determined for all gases regardless of their solubility. This will be addressed by correlating measured K with estimates of the active whitecap fraction (WA) and the turbulent kinetic energy dissipation rate (ɛ). WA and ɛ are estimated from moments of the breaking crest length distribution derived from the imagery, focusing on young seas, when it is likely that large-scale breaking waves (i.e., whitecapping) dominate ɛ.

The global geochemistry of bomb-produced tritium - General circulation model compared to available observations and traditional interpretations

NASA Technical Reports Server (NTRS)

Koster, Randal D.; Broecker, Wallace S.; Jouzel, Jean; Suozzo, Robert J.; Russell, Gary L.; Rind, David

1989-01-01

Observational evidence suggests that of the tritium produced during nuclear bomb tests that has already reached the ocean, more than twice as much arrived through vapor impact as through precipitation. In the present study, the Goddard Institute for Space Studies 8 x 10 deg atmospheric general circulation model is used to simulate tritium transport from the upper atmosphere to the ocean. The simulation indicates that tritium delivery to the ocean via vapor impact is about equal to that via precipitation. The model result is relatively insensitive to several imposed changes in tritium source location, in model parameterizations, and in model resolution. Possible reasons for the discrepancy are explored.

Microwave anisotropies in the light of the data from the COBE satellite

NASA Technical Reports Server (NTRS)

Dodelson, Scott; Jubas, Jay M.

1993-01-01

The recent measurement of anisotropies in the cosmic microwave background by the Cosmic Background Explorer (COBE) satellite and the recent South Pole experiment offer an excellent opportunity to probe cosmological theories. We test a class of theories in which the universe today is flat and matter dominated, and primordial perturbations are adiabatic, parameterized by an index n. In this class of theories the predicted signal in the South Pole experiment depends on n, the Hubble constant, and the baryon density.
  420. A distributed algorithm to maintain and repair the trail networks of arboreal ants

    PubMed

    Chandrasekhar, Arjun; Gordon, Deborah M; Navlakha, Saket

    2018-06-18

    We study how the arboreal turtle ant (Cephalotes goniodontus) solves a fundamental computing problem: maintaining a trail network and finding alternative paths to route around broken links in the network. Turtle ants form a routing backbone of foraging trails linking several nests and temporary food sources. This species travels only in the trees, so their foraging trails are constrained to lie on a natural graph formed by overlapping branches and vines in the tangled canopy. Links between branches, however, can be ephemeral, easily destroyed by wind, rain, or animal movements. Here we report a biologically feasible distributed algorithm, parameterized using field data, that can plausibly describe how turtle ants maintain the routing backbone and find alternative paths to circumvent broken links in the backbone. We validate the ability of this probabilistic algorithm to circumvent simulated breaks in synthetic and real-world networks, and we derive an analytic explanation for why certain features are crucial to improve the algorithm's success. Our proposed algorithm uses fewer computational resources than common distributed graph search algorithms, and thus may be useful in other domains, such as for swarm computing or for coordinating molecular robots.
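    To make the flavor of such probabilistic trail maintenance concrete, the sketch below shows a generic pheromone-weighted random walk with edge reinforcement and decay. It is an illustrative toy, not the published algorithm: the graph, the choice rule, and the reinforcement and decay constants are all assumptions made for this example.

    # Illustrative sketch (not the published algorithm): agents pick the next node in
    # proportion to edge weights and reinforce the edges they traverse, letting a
    # detour emerge when a direct link is missing. All parameters are assumptions.
    import random

    def repair_walk(adjacency, weights, start, goal, reinforce=1.0, decay=0.1, max_steps=50):
        """Walk from start toward goal, biased by and reinforcing edge weights."""
        node, path = start, [start]
        for _ in range(max_steps):
            if node == goal:
                break
            neighbors = adjacency[node]
            totals = [weights[(node, n)] for n in neighbors]
            node = random.choices(neighbors, weights=totals, k=1)[0]
            path.append(node)
        # Reinforce traversed edges and let all weights decay slightly.
        for u, v in zip(path, path[1:]):
            weights[(u, v)] += reinforce
        for edge in weights:
            weights[edge] = max(weights[edge] * (1.0 - decay), 1e-3)
        return path

    # Tiny example graph with no direct A-D link, forcing a detour through node C.
    adjacency = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D"], "D": ["C"]}
    weights = {(u, v): 1.0 for u in adjacency for v in adjacency[u]}
    print(repair_walk(adjacency, weights, "A", "D"))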
  421. Increasing beef production could lower greenhouse gas emissions in Brazil if decoupled from deforestation

    NASA Astrophysics Data System (ADS)

    de Oliveira Silva, R.; Barioni, L. G.; Hall, J. A. J.; Folegatti Matsuura, M.; Zanett Albertini, T.; Fernandes, F. A.; Moran, D.

    2016-05-01

    Recent debate about agricultural greenhouse gas emissions mitigation highlights trade-offs inherent in the way we produce and consume food, with increasing scrutiny on emissions-intensive livestock products. Although most research has focused on mitigation through improved productivity, systemic interactions resulting from reduced beef production at the regional level are still unexplored. A detailed optimization model of beef production encompassing pasture degradation and recovery processes, animal and deforestation emissions, soil organic carbon (SOC) dynamics and upstream life-cycle inventory was developed and parameterized for the Brazilian Cerrado. Economic return was maximized considering two alternative scenarios: decoupled livestock-deforestation (DLD), assuming baseline deforestation rates controlled by effective policy; and coupled livestock-deforestation (CLD), where shifting beef demand alters deforestation rates. In DLD, reduced consumption actually leads to less productive beef systems, associated with higher emission intensities and total emissions, whereas increased production leads to more efficient systems with boosted SOC stocks, reducing both per-kilogram and total emissions. Under CLD, increased production leads to 60% higher emissions than in DLD. The results indicate the extent to which deforestation control contributes to sustainable intensification in Cerrado beef systems, and how alternative life-cycle analytical approaches result in significantly different emission estimates.
  422. A Generalized Simple Formulation of Convective Adjustment Timescale for Cumulus Convection Parameterizations

    EPA Science Inventory

    Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a pres...

  423. IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION IN MM5

    EPA Science Inventory

    The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (~1-km horizontal grid spacing). The UCP accounts for drag ...
  424. A Comparative Study of Nucleation Parameterizations: 2. Three-Dimensional Model Application and Evaluation

    EPA Science Inventory

    Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster-activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...

  425. 75 FR 19979 - National Center for Complementary and Alternative Medicine Announcement of Workshop on the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-16

    ... Complementary and Alternative Medicine Announcement of Workshop on the Deconstruction of Back Pain ACTION: Notice. SUMMARY: The National Center for Complementary and Alternative Medicine (NCCAM) invites the... Alternative Medicine (NCCAM) was established in 1999 with the mission of exploring complementary and...

  426. Exploring alternative methods for vegetation control and maintenance along roadsides

    DOT National Transportation Integrated Search

    2003-02-01

    The search for alternative methods for controlling and maintaining vegetation along roadsides has just begun. This work was initiated to find alternatives to the traditional methods for roadside vegetation maintenance that include the use of...
  427. Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?

    DOE PAGES

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...

    2016-10-20

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud-aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities. The scale dependence of resolved vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  428. Parameterization of single-scattering properties of snow

    NASA Astrophysics Data System (ADS)

    Räisänen, P.; Kokhanovsky, A.; Guyot, G.; Jourdan, O.; Nousiainen, T.

    2015-02-01

    Snow consists of non-spherical grains of various shapes and sizes. Still, in many radiative transfer applications, single-scattering properties of snow have been based on the assumption of spherical grains. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat phase function typical of deformed non-spherical particles, this is still a rather ad hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ = 0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function P11 as functions of the size parameter and the real and imaginary parts of the refractive index. The parameterizations are analytic and simple to use in radiative transfer models. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons to spheres and distorted Koch fractals.
  429. Parameterization of single-scattering properties of snow

    NASA Astrophysics Data System (ADS)

    Räisänen, P.; Kokhanovsky, A.; Guyot, G.; Jourdan, O.; Nousiainen, T.

    2015-06-01

    Snow consists of non-spherical grains of various shapes and sizes. Still, in many radiative transfer applications, single-scattering properties of snow have been based on the assumption of spherical grains. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat phase function typical of deformed non-spherical particles, this is still a rather ad hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ = 0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function P11 as functions of the size parameter and the real and imaginary parts of the refractive index. The parameterizations are analytic and simple to use in radiative transfer models. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons to spheres and distorted Koch fractals.
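    The two records above state that the fits take the size parameter and the complex refractive index as inputs but do not reproduce the equations. The sketch below only illustrates that interface: the size parameter definition (x = 2*pi*r_vp/lambda) is standard, while the fit function and its coefficients are placeholders, not the published parameterization.

    # Interface sketch only: inputs to a snow single-scattering parameterization of the
    # kind described above. `snow_single_scattering` is a placeholder, not the fitted
    # equations of the record; only the size parameter definition is standard.
    import numpy as np

    def size_parameter(r_vp_um, wavelength_um):
        """Dimensionless size parameter x = 2*pi*r_vp / wavelength."""
        return 2.0 * np.pi * r_vp_um / wavelength_um

    def snow_single_scattering(x, m_real, m_imag):
        """Placeholder fit returning (co-albedo beta, asymmetry parameter g)."""
        beta = np.clip(0.5 * m_imag * x / (1.0 + 0.5 * m_imag * x), 0.0, 1.0)  # toy absorption scaling
        g = 0.75 + 0.1 * (1.0 - np.exp(-beta))                                 # toy asymmetry value
        return beta, g

    x = size_parameter(r_vp_um=500.0, wavelength_um=1.6)
    print(snow_single_scattering(x, m_real=1.29, m_imag=1e-4))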
  430. Numerical Study of the Role of Shallow Convection in Moisture Transport and Climate

    NASA Technical Reports Server (NTRS)

    Seaman, Nelson L.; Stauffer, David R.; Munoz, Ricardo C.

    2001-01-01

    The objective of this investigation was to study the role of shallow convection in the regional water cycle of the Mississippi and Little Washita Basins of the Southern Great Plains (SGP) using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. At the beginning of the study, it was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having high-quality parameterizations for the key physical processes controlling the water cycle. These included a detailed land-surface parameterization (the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) sub-model of Wetzel and Boone), an advanced boundary-layer parameterization (the 1.5-order turbulent kinetic energy (TKE) predictive scheme of Shafran et al.), and a more complete shallow convection parameterization (the hybrid-closure scheme of Deng et al.) than are available in most current models. PLACE is a product of researchers working at NASA's Goddard Space Flight Center in Greenbelt, MD. The TKE and shallow-convection schemes are the result of model development at Penn State. The long-range goal is to develop an integrated suite of physical sub-models that can be used for regional and perhaps global climate studies of the water budget. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the SGP. These schemes have been tested extensively through the course of this study and the latter two have been improved significantly as a consequence.

  431. Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud-aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities. The scale dependence of resolved vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.
  432. Incorporation of New Convective Ice Microphysics into the NASA GISS GCM and Impacts on Cloud Ice Water Path (IWP) Simulation

    NASA Technical Reports Server (NTRS)

    Elsaesser, Greg; Del Genio, Anthony

    2015-01-01

    The CMIP5 configurations of the GISS Model-E2 GCM simulated a mid- and high-latitude ice IWP that decreased by ~50% relative to that simulated for CMIP3 (Jiang et al. 2012; JGR). Tropical IWP increased by ~15% in CMIP5. While the tropical IWP was still within the published upper bounds of IWP uncertainty derived using NASA A-Train satellite observations, it was found that the upper-troposphere (~200 mb) ice water content (IWC) exceeded the published upper bound by a factor of ~2. This was largely driven by IWC in deep-convecting regions of the tropics. Recent advances in the Model-E2 convective parameterization have been found to have a substantial impact on tropical IWC. These advances include the development of both a cold pool parameterization (Del Genio et al. 2015) and a new convective ice parameterization. In this presentation, we focus on the new parameterization of convective cloud ice that was developed using data from the NASA TC4 Mission. Ice particle terminal velocity formulations now include information from a number of NASA field campaigns. The new parameterization predicts both an ice water mass-weighted average particle diameter and a particle cross-sectional area-weighted average diameter as a function of temperature and ice water content. By assuming a gamma-distribution functional form for the particle size distribution, these two diameter estimates are all that are needed to explicitly predict the distribution of ice particles as a function of particle diameter. GCM simulations with the improved convective parameterization yield a ~50% decrease in upper-tropospheric IWC, bringing the tropical and global mean IWP climatologies into even closer agreement with the A-Train satellite observation best estimates.
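    The record states that the two predicted diameters fully determine a gamma particle size distribution. The sketch below works that logic through under the simplifying assumptions that particle mass scales as D^3 and projected area as D^2 (so the mass-weighted mean diameter is (mu+4)/lambda and the area-weighted mean diameter is (mu+3)/lambda); the actual mass-dimension relations used in the GISS scheme may differ.

    # Sketch of the gamma-PSD logic described above: recover the shape (mu) and slope
    # (lam) of N(D) = N0 * D**mu * exp(-lam*D) from a mass-weighted mean diameter Dm and
    # an area-weighted mean diameter Da. Assumes mass ~ D**3, area ~ D**2, solid ice.
    import math

    def gamma_psd_from_diameters(d_mass_weighted, d_area_weighted, iwc, rho_ice=917.0):
        """Return (N0, mu, lam) for N(D) = N0 * D**mu * exp(-lam * D), SI units."""
        r = d_mass_weighted / d_area_weighted          # = (mu + 4) / (mu + 3), so 1 < r < 4/3
        mu = (4.0 - 3.0 * r) / (r - 1.0)
        lam = (mu + 4.0) / d_mass_weighted
        # Normalize N0 so the third moment matches the ice water content (solid-ice spheres assumed).
        n0 = 6.0 * iwc * lam ** (mu + 4.0) / (math.pi * rho_ice * math.gamma(mu + 4.0))
        return n0, mu, lam

    # Example: Dm = 300 um, Da = 250 um, IWC = 1e-4 kg m^-3.
    print(gamma_psd_from_diameters(300e-6, 250e-6, 1e-4))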
  433. Parameterization of Rocket Dust Storms on Mars in the LMD Martian GCM: Modeling Details and Validation

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Forget, François; Bertrand, Tanguy; Spiga, Aymeric; Millour, Ehouarn; Navarro, Thomas

    2018-04-01

    The origin of the detached dust layers observed by the Mars Climate Sounder aboard the Mars Reconnaissance Orbiter is still debated. Spiga et al. (2013, https://doi.org/10.1002/jgre.20046) revealed that deep mesoscale convective "rocket dust storms" are likely to play an important role in forming these dust layers. To investigate how the detached dust layers are generated by this mesoscale phenomenon and subsequently evolve at larger scales, a parameterization of rocket dust storms to represent the mesoscale dust convection is designed and included in the Laboratoire de Météorologie Dynamique (LMD) Martian Global Climate Model (GCM). The new parameterization allows dust particles in the GCM to be transported to higher altitudes than in traditional GCMs. Combined with the horizontal transport by large-scale winds, the dust particles spread out and form detached dust layers. During the Martian dusty seasons, the LMD GCM with the new parameterization is able to form detached dust layers. The formation, evolution, and decay of the simulated dust layers are largely in agreement with the Mars Climate Sounder observations. This suggests that mesoscale rocket dust storms are among the key factors to explain the observed detached dust layers on Mars. However, the detached dust layers remain absent in the GCM during the clear seasons, even with the new parameterization. This implies that other relevant atmospheric processes, operating when no dust storms are occurring, are needed to explain the Martian detached dust layers. More observations of local dust storms could improve the ad hoc aspects of this parameterization, such as the trigger and timing of dust injection.
  434. Assessing the CAM5 Physics Suite in the WRF-Chem Model: Implementation, Resolution Sensitivity, and a First Evaluation for a Regional Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Po-Lun; Rasch, Philip J.; Fast, Jerome D.

    A suite of physical parameterizations (deep and shallow convection, turbulent boundary layer, aerosols, cloud microphysics, and cloud fraction) from the global climate model Community Atmosphere Model version 5.1 (CAM5) has been implemented in the regional model Weather Research and Forecasting with chemistry (WRF-Chem). A downscaling modeling framework with consistent physics has also been established in which both global and regional simulations use the same emissions and surface fluxes. The WRF-Chem model with the CAM5 physics suite is run at multiple horizontal resolutions over a domain encompassing the northern Pacific Ocean, northeast Asia, and northwest North America for April 2008, when the ARCTAS, ARCPAC, and ISDAC field campaigns took place. These simulations are evaluated against field campaign measurements, satellite retrievals, and ground-based observations, and are compared with simulations that use a set of common WRF-Chem parameterizations. This manuscript describes the implementation of the CAM5 physics suite in WRF-Chem, provides an overview of the modeling framework and an initial evaluation of the simulated meteorology, clouds, and aerosols, and quantifies the resolution dependence of the cloud and aerosol parameterizations. We demonstrate that some of the CAM5 biases, such as high estimates of cloud susceptibility to aerosols and the underestimation of aerosol concentrations in the Arctic, can be reduced simply by increasing horizontal resolution. We also show that the CAM5 physics suite performs similarly to a set of parameterizations commonly used in WRF-Chem, but produces higher ice and liquid water condensate amounts and near-surface black carbon concentrations. Further evaluations that use other mesoscale model parameterizations and perform other case studies are needed to infer whether one parameterization consistently produces results more consistent with observations.

  435. Parameterized reduced order models from a single mesh using hyper-dual numbers

    NASA Astrophysics Data System (ADS)

    Brake, M. R. W.; Fike, J. A.; Topping, S. D.

    2016-06-01

    In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, potentially could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is two-fold: accuracy of the derivatives to machine precision, and the need to generate only a single mesh of the system of interest. The theory is applied to a stepped beam system in order to demonstrate proof of concept. The results demonstrate that the hyper-dual number multivariate parameterization of geometric variations, which largely is neglected in the literature, is accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the foundation to create a parameterized reduced order model based on a single mesh is expected to reduce dramatically the necessary time to analyze multiple realizations of a component's possible geometry.
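    The hyper-dual mechanism the record relies on can be shown in a few lines: evaluating a function at x + eps1 + eps2 (with eps1^2 = eps2^2 = 0) returns the value, first derivative, and second derivative exactly, with no step-size truncation error. The example function and point below are illustrative only, not taken from the stepped-beam application.

    # Minimal hyper-dual number sketch: f(x + eps1 + eps2) yields f(x), f'(x), f''(x)
    # to machine precision. Only +, * are implemented; enough for polynomial examples.
    class HyperDual:
        def __init__(self, a, b=0.0, c=0.0, d=0.0):
            # a + b*eps1 + c*eps2 + d*eps1*eps2, with eps1**2 = eps2**2 = 0
            self.a, self.b, self.c, self.d = a, b, c, d

        def __add__(self, other):
            other = other if isinstance(other, HyperDual) else HyperDual(other)
            return HyperDual(self.a + other.a, self.b + other.b,
                             self.c + other.c, self.d + other.d)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, HyperDual) else HyperDual(other)
            return HyperDual(self.a * other.a,
                             self.a * other.b + self.b * other.a,
                             self.a * other.c + self.c * other.a,
                             self.a * other.d + self.b * other.c
                             + self.c * other.b + self.d * other.a)

        __rmul__ = __mul__

    def f(x):
        # Example: f(x) = x**3 + 2*x, so f'(x) = 3*x**2 + 2 and f''(x) = 6*x.
        return x * x * x + 2 * x

    x0 = 1.5
    val = f(HyperDual(x0, 1.0, 1.0, 0.0))   # seed both eps1 and eps2 with 1
    print(val.a, val.b, val.d)              # f(1.5) = 6.375, f'(1.5) = 8.75, f''(1.5) = 9.0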
  436. A cloudy planetary boundary layer oscillation arising from the coupling of turbulence with precipitation in climate simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, X.; Klein, S. A.; Ma, H.-Y.

    The Community Atmosphere Model (CAM) adopts the Cloud Layers Unified By Binormals (CLUBB) scheme and an updated microphysics (MG2) scheme for a more unified treatment of cloud processes. This makes interactions between parameterizations tighter and more explicit. In this study, a cloudy planetary boundary layer (PBL) oscillation related to the interaction between CLUBB and MG2 is identified in CAM. This highlights the need for consistency between the coupled subgrid processes in climate model development. This oscillation occurs most often in the marine cumulus cloud regime. The oscillation occurs only if the modeled PBL is strongly decoupled and precipitation evaporates below the cloud. Two aspects of the parameterized coupling assumptions between the CLUBB and MG2 schemes cause the oscillation: (1) a parameterized relationship between rain evaporation and CLUBB's subgrid spatial variance of moisture and heat that induces an extra cooling in the lower PBL, and (2) rain evaporation that occurs at too low an altitude because of the precipitation fraction parameterization in MG2. Either one of these two conditions can overly stabilize the PBL and reduce the upward moisture transport to the cloud layer so that the PBL collapses. Global simulations show that turning off the evaporation-variance coupling and improving the precipitation fraction parameterization effectively reduces the cloudy PBL oscillation in marine cumulus clouds. By evaluating the causes of the oscillation in CAM, we have identified the PBL processes that should be examined in models having similar oscillations. This study may draw the attention of the modeling and observational communities to the issue of coupling between parameterized physical processes.
  437. Modeling canopy-induced turbulence in the Earth system: a unified parameterization of turbulent exchange within plant canopies and the roughness sublayer (CLM-ml v0)

    NASA Astrophysics Data System (ADS)

    Bonan, Gordon B.; Patton, Edward G.; Harman, Ian N.; Oleson, Keith W.; Finnigan, John J.; Lu, Yaqiong; Burakowski, Elizabeth A.

    2018-04-01

    Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0) to test if this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5) at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different compared with profiles obtained using Monin-Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but the improvements are related less systematically to the roughness sublayer parameterization in these canopies. The multilayer canopy with the roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.
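    For orientation, the sketch below evaluates the neutral-stability log-law profile that Monin-Obukhov similarity theory reduces to above the roughness sublayer, i.e. the kind of surface-layer profile the record contrasts with its within-canopy results. The friction velocity, displacement height, and roughness length values are placeholders, not taken from CLM-ml or CLM4.5.

    # Illustrative neutral log-law wind profile above a canopy; all values are placeholders.
    import numpy as np

    KAPPA = 0.4  # von Karman constant

    def log_wind_profile(z, ustar, d, z0):
        """Mean wind speed (m/s) at height z (m), friction velocity ustar, displacement d, roughness z0."""
        return (ustar / KAPPA) * np.log((z - d) / z0)

    for z in (25.0, 30.0, 40.0):   # heights above a ~20 m canopy (d ~ 13 m, z0 ~ 2 m assumed)
        print(z, round(log_wind_profile(z, ustar=0.5, d=13.0, z0=2.0), 2))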
  438. A cloudy planetary boundary layer oscillation arising from the coupling of turbulence with precipitation in climate simulations

    DOE PAGES

    Zheng, X.; Klein, S. A.; Ma, H.-Y.; ...

    2017-08-24

    The Community Atmosphere Model (CAM) adopts the Cloud Layers Unified By Binormals (CLUBB) scheme and an updated microphysics (MG2) scheme for a more unified treatment of cloud processes. This makes interactions between parameterizations tighter and more explicit. In this study, a cloudy planetary boundary layer (PBL) oscillation related to the interaction between CLUBB and MG2 is identified in CAM. This highlights the need for consistency between the coupled subgrid processes in climate model development. This oscillation occurs most often in the marine cumulus cloud regime. The oscillation occurs only if the modeled PBL is strongly decoupled and precipitation evaporates below the cloud. Two aspects of the parameterized coupling assumptions between the CLUBB and MG2 schemes cause the oscillation: (1) a parameterized relationship between rain evaporation and CLUBB's subgrid spatial variance of moisture and heat that induces an extra cooling in the lower PBL, and (2) rain evaporation that occurs at too low an altitude because of the precipitation fraction parameterization in MG2. Either one of these two conditions can overly stabilize the PBL and reduce the upward moisture transport to the cloud layer so that the PBL collapses. Global simulations show that turning off the evaporation-variance coupling and improving the precipitation fraction parameterization effectively reduces the cloudy PBL oscillation in marine cumulus clouds. By evaluating the causes of the oscillation in CAM, we have identified the PBL processes that should be examined in models having similar oscillations. This study may draw the attention of the modeling and observational communities to the issue of coupling between parameterized physical processes.

  439. Incorporation of New Convective Ice Microphysics into the NASA GISS GCM and Impacts on Cloud Ice Water Path (IWP) Simulation

    NASA Astrophysics Data System (ADS)

    Elsaesser, G.; Del Genio, A. D.

    2015-12-01

    The CMIP5 configurations of the GISS Model-E2 GCM simulated a mid- and high-latitude ice IWP that decreased by ~50% relative to that simulated for CMIP3 (Jiang et al. 2012; JGR). Tropical IWP increased by ~15% in CMIP5. While the tropical IWP was still within the published upper bounds of IWP uncertainty derived using NASA A-Train satellite observations, it was found that the upper-troposphere (~200 mb) ice water content (IWC) exceeded the published upper bound by a factor of ~2. This was largely driven by IWC in deep-convecting regions of the tropics. Recent advances in the Model-E2 convective parameterization have been found to have a substantial impact on tropical IWC. These advances include the development of both a cold pool parameterization (Del Genio et al. 2015) and a new convective ice parameterization. In this presentation, we focus on the new parameterization of convective cloud ice that was developed using data from the NASA TC4 Mission. Ice particle terminal velocity formulations now include information from a number of NASA field campaigns. The new parameterization predicts both an ice water mass-weighted average particle diameter and a particle cross-sectional area-weighted average diameter as a function of temperature and ice water content. By assuming a gamma-distribution functional form for the particle size distribution, these two diameter estimates are all that are needed to explicitly predict the distribution of ice particles as a function of particle diameter. GCM simulations with the improved convective parameterization yield a ~50% decrease in upper-tropospheric IWC, bringing the tropical and global mean IWP climatologies into even closer agreement with the A-Train satellite observation best estimates.
  440. Estimating Discharge, Depth and Bottom Friction in Sand Bed Rivers Using Surface Currents and Water Surface Elevation Observations

    NASA Astrophysics Data System (ADS)

    Simeonov, J.; Czapiga, M. J.; Holland, K. T.

    2017-12-01

    We developed an inversion model for river bathymetry estimation using measurements of surface currents, water surface elevation slope and shoreline position. The inversion scheme is based on explicit velocity-depth and velocity-slope relationships derived from the along-channel momentum balance and mass conservation. The velocity-depth relationship requires the discharge value to quantitatively relate the depth to the measured velocity field. The ratio of the discharge and the bottom friction enters as a coefficient in the velocity-slope relationship and is determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. Completing the inversion requires an estimate of the bulk friction, which in the case of sand bed rivers is a strong function of the size of dune bedforms. We explored the accuracy of existing and new empirical closures that relate the bulk roughness to parameters such as the median grain size diameter, the ratio of shear velocity to sediment fall velocity, or the Froude number. For a given roughness parameterization, the inversion solution is determined iteratively, since the hydraulic roughness depends on the unknown depth. We first test the new hydraulic roughness parameterization using estimates of the Manning roughness in sand bed rivers based on field measurements. The coupled inversion and roughness model is then tested using in situ and remote sensing measurements of the Kootenai River east of Bonners Ferry, ID.
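    The record states only that the velocity-depth relationship follows from mass conservation and needs the discharge. The sketch below works that idea through under stated assumptions (a rectangular cross-section and an assumed ratio alpha of depth-averaged to surface velocity); neither assumption comes from the record itself.

    # Minimal mass-conservation depth estimate: with discharge Q, local width w, and a
    # measured surface velocity u_s, depth follows from Q = alpha * u_s * w * h.
    # Rectangular section and alpha = 0.85 are illustrative assumptions.
    def depth_from_surface_velocity(discharge, width, surface_velocity, alpha=0.85):
        """Local depth h (m) from Q (m^3/s), width (m), and surface velocity (m/s)."""
        return discharge / (alpha * surface_velocity * width)

    # Example: Q = 300 m^3/s, a 100 m wide section, and a 1.2 m/s surface current.
    print(round(depth_from_surface_velocity(300.0, 100.0, 1.2), 2), "m")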
  441. Infrared consistency and the weak gravity conjecture

    DOE PAGES

    Cheung, Clifford; Remmen, Grant N.

    2014-12-11

    The weak gravity conjecture (WGC) asserts that an Abelian gauge theory coupled to gravity is inconsistent unless it contains a particle of charge q and mass m such that q ≥ m/m_Pl. This criterion is obeyed by all known ultraviolet completions and is needed to evade pathologies from stable black hole remnants. In this paper, we explore the WGC from the perspective of low-energy effective field theory. Below the charged particle threshold, the effective action describes a photon and graviton interacting via higher-dimension operators. We derive infrared consistency conditions on the parameters of the effective action using (i) analyticity of light-by-light scattering, (ii) unitarity of the dynamics of an arbitrary ultraviolet completion, and (iii) absence of superluminality and causality violation in certain non-trivial backgrounds. For convenience, we begin our analysis in three spacetime dimensions, where gravity is non-dynamical but has a physical effect on photon-photon interactions. We then consider four dimensions, where propagating gravity substantially complicates all of our arguments, but bounds can still be derived. Operators in the effective action arise from two types of diagrams: those that involve electromagnetic interactions (parameterized by a charge-to-mass ratio q/m) and those that do not (parameterized by a coefficient γ). In conclusion, infrared consistency implies that q/m is bounded from below for small γ.
  442. Climate SPHINX: High-resolution present-day and future climate simulations with an improved representation of small-scale variability

    NASA Astrophysics Data System (ADS)

    Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim

    2016-04-01

    The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km up to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will be used by the end of the project (March 2016), and about 150 TBytes of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic region following the resolution increase. More specifically, the well-known atmospheric blocking negative bias over Europe is definitely resolved. High-resolution runs also show improved fidelity in the representation of tropical variability - such as the MJO and its propagation - over the low-resolution simulations. It is shown that including stochastic parameterization in the low-resolution runs helps to improve some of the aspects of the MJO propagation further. These findings show the importance of representing the impact of small-scale processes on large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).

  443. Atmospheric water vapor transport: Estimation of continental precipitation recycling and parameterization of a simple climate model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brubaker, Kaye L.; Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    The advective transport of atmospheric water vapor and its role in global hydrology and the water balance of continental regions are discussed and explored. The data set consists of ten years of global wind and humidity observations interpolated onto a regular grid by objective analysis. Atmospheric water vapor fluxes across the boundaries of selected continental regions are displayed graphically. The water vapor flux data are used to investigate the sources of continental precipitation. The total amount of water that precipitates on large continental regions is supplied by two mechanisms: (1) advection from surrounding areas external to the region; and (2) evaporation and transpiration from the land surface, i.e., recycling of precipitation over the continental area. The degree to which regional precipitation is supplied by recycled moisture is a potentially significant climate feedback mechanism and land surface-atmosphere interaction, which may contribute to the persistence and intensification of droughts. A simplified model of the atmospheric moisture over continents and simultaneous estimates of regional precipitation are employed to estimate, for several large continental regions, the fraction of precipitation that is locally derived. In a separate but related study, estimates of ocean-to-land water vapor transport are used to parameterize an existing simple climate model, containing both land and ocean surfaces, that is intended to mimic the dynamics of continental climates.
  444. SURA-IOOS Coastal Inundation Testbed Inter-Model Evaluation of Tides, Waves, and Hurricane Surge in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Kerr, P. C.; Donahue, A.; Westerink, J. J.; Luettich, R.; Zheng, L.; Weisberg, R. H.; Wang, H. V.; Slinn, D. N.; Davis, J. R.; Huang, Y.; Teng, Y.; Forrest, D.; Haase, A.; Kramer, A.; Rhome, J.; Feyen, J. C.; Signell, R. P.; Hanson, J. L.; Taylor, A.; Hope, M.; Kennedy, A. B.; Smith, J. M.; Powell, M. D.; Cardone, V. J.; Cox, A. T.

    2012-12-01

    The Southeastern Universities Research Association (SURA), in collaboration with the NOAA Integrated Ocean Observing System program and other federal partners, developed a testbed to help accelerate progress in both research and the transition to operational use of models for both coastal and estuarine prediction. This testbed facilitates cyber-based sharing of data and tools, archival of observation data, and the development of cross-platform tools to efficiently access, visualize, skill assess, and evaluate model results. In addition, this testbed enables the modeling community to quantitatively assess the behavior (e.g., skill, robustness, execution speed) and implementation requirements (e.g., resolution, parameterization, computer capacity) that characterize the suitability and performance of selected models from both operational and fundamental science perspectives. This presentation focuses on the tropical coastal inundation component of the testbed and compares a variety of model platforms as well as grids in simulating tides and the wave and surge environments for two extremely well documented historical hurricanes, Hurricanes Rita (2005) and Ike (2008). Model platforms included are ADCIRC, FVCOM, SELFE, SLOSH, SWAN, and WWMII. Model validation assessments were performed on simulation results using numerous station observation data in the form of decomposed harmonic constituents, water level high water marks, and hydrographs of water level and wave data. In addition, execution speed, inundation extents defined by differences in wetting/drying schemes, and resolution and parameterization sensitivities are also explored.
  445. Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland

    NASA Astrophysics Data System (ADS)

    Habermann, M.; Truffer, M.; Maxwell, D.

    2013-11-01

    Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We invert for basal conditions from surface velocity data throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a shallow-shelf approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006 and 2008. Our ice-softness, model norm, and regularization parameter choices are justified using the data-model misfit metric and the L-curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of effective basal yield stress in the first 7 km upstream from the 2008 grounding line and no significant changes farther upstream. The temporal evolution in the fast-flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of effective basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.
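    The Mohr-Coulomb parameterization the record refers to has the standard form tau_b = c + N tan(phi), with N the effective pressure and phi the till friction angle. The sketch below evaluates that relation; the cohesion, friction angles, and effective pressure are illustrative values, not the inversion results of the record.

    # Minimal Mohr-Coulomb basal yield stress sketch: tau_b = c + N * tan(phi).
    import math

    def mohr_coulomb_yield_stress(effective_pressure_pa, friction_angle_deg, cohesion_pa=0.0):
        """Basal yield stress (Pa) from effective pressure N (Pa) and till friction angle (deg)."""
        return cohesion_pa + effective_pressure_pa * math.tan(math.radians(friction_angle_deg))

    # Example: N = 100 kPa with a low inferred friction angle versus a typical
    # laboratory till value (~30 degrees).
    for phi in (2.0, 30.0):
        print(phi, "deg ->", round(mohr_coulomb_yield_stress(1.0e5, phi) / 1e3, 1), "kPa")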
Tunneling current in HfO2 and Hf0.5Zr0.5O2-based ferroelectric tunnel junction

NASA Astrophysics Data System (ADS)

Dong, Zhipeng; Cao, Xi; Wu, Tong; Guo, Jing

2018-03-01

Ferroelectric tunnel junctions (FTJs) have been intensively explored for future low-power data storage and information processing applications. Among the various ferroelectric (FE) materials studied, HfO2 and Hf0.5Zr0.5O2 (HZO) have the advantage of CMOS process compatibility. The validity of the simple effective mass approximation for describing the tunneling process in these materials is examined by computing the complex band structure from ab initio simulations. The results show that the simple effective mass approximation is insufficient to describe the tunneling current in HfO2 and HZO materials, and quantitatively accurate descriptions of the complex band structures are indispensable for calculation of the tunneling current. A compact k·p Hamiltonian is parameterized to, and validated by, ab initio complex band structures, which provides a method for efficiently and accurately computing the tunneling current in HfO2 and HZO. The device characteristics of a metal/FE/metal structure and a metal/FE/semiconductor (M-F-S) structure are investigated by using the non-equilibrium Green's function formalism with the parameterized effective Hamiltonian. The results show that the M-F-S structure offers a larger resistance window due to an extra barrier in the semiconductor region in the off-state. An FTJ utilizing the M-F-S structure is therefore beneficial for memory design.

A Method of Accounting for Enzyme Costs in Flux Balance Analysis Reveals Alternative Pathways and Metabolite Stores in an Illuminated Arabidopsis Leaf.

PubMed

Cheung, C Y Maurice; Ratcliffe, R George; Sweetlove, Lee J

2015-11-01

Flux balance analysis of plant metabolism is an established method for predicting metabolic flux phenotypes and for exploring the way in which the plant metabolic network delivers specific outcomes in different cell types, tissues, and temporal phases. A recurring theme is the need to explore the flexibility of the network in meeting its objectives and, in particular, to establish the extent to which alternative pathways can contribute to achieving specific outcomes. Unfortunately, predictions from conventional flux balance analysis minimize the simultaneous operation of alternative pathways, but by introducing flux-weighting factors to allow for the variable intrinsic cost of supporting each flux, it is possible to activate different pathways in individual simulations and, thus, to explore alternative pathways by averaging thousands of simulations. This new method has been applied to a diel genome-scale model of Arabidopsis (Arabidopsis thaliana) leaf metabolism to explore the flexibility of the network in meeting the metabolic requirements of the leaf in the light. This identified alternative flux modes in the Calvin-Benson cycle, revealed the potential for alternative transitory carbon stores in leaves, and led to predictions about the light-dependent contribution of alternative electron flow pathways and futile cycles in energy rebalancing. Notable features of the analysis include the light-dependent tradeoff between the use of carbohydrates and four-carbon organic acids as transitory storage forms and the way in which multiple pathways for the consumption of ATP and NADPH can contribute to the balancing of the requirements of photosynthetic metabolism with the energy available from photon capture. © 2015 American Society of Plant Biologists. All Rights Reserved.
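The flux-weighting idea in this record lends itself to a compact demonstration. The sketch below applies random flux-weighting factors to a hypothetical two-pathway toy network (not the diel genome-scale Arabidopsis model) and averages many linear-programming solutions, so that both alternative routes appear in the ensemble mean even though each individual optimum uses only one of them.

import numpy as np
from scipy.optimize import linprog

# Toy illustration of flux-weighted flux balance analysis: random weights on each
# flux stand in for variable enzyme costs, so alternative pathways are picked in
# different simulations and both show up in the ensemble average.  This is a
# hypothetical two-metabolite network, not the genome-scale leaf model above.

# Reactions (columns): uptake -> A, pathway1 A -> B, pathway2 A -> B, demand B ->
S = np.array([[ 1, -1, -1,  0],   # metabolite A balance
              [ 0,  1,  1, -1]])  # metabolite B balance
bounds = [(0, 10), (0, 10), (0, 10), (1, 1)]  # demand flux fixed to 1

rng = np.random.default_rng(1)
fluxes = []
for _ in range(2000):
    w = rng.uniform(1.0, 2.0, size=4)          # random flux-weighting factors
    res = linprog(c=w, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    fluxes.append(res.x)

mean_flux = np.mean(fluxes, axis=0)
print("mean fluxes [uptake, path1, path2, demand]:", np.round(mean_flux, 2))
# Expect path1 and path2 to each carry roughly half of the demand on average.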
Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models

PubMed Central

Rieger, TR; Musante, CJ

2016-01-01

Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777

Carrying BioMath education in a Leaky Bucket.

PubMed

Powell, James A; Kohler, Brynja R; Haefner, James W; Bodily, Janice

2012-09-01

In this paper, we describe a project-based mathematical lab implemented in our Applied Mathematics in Biology course. The Leaky Bucket Lab allows students to parameterize and test Torricelli's law and to develop and compare their own alternative models describing the dynamics of water draining from perforated containers. In the context of this lab, students build facility with a variety of applied biomathematical tools and gain confidence in applying these tools in data-driven environments. We survey analytic approaches developed by students to illustrate the creativity this encourages, as well as to prepare other instructors to scaffold the student learning experience. Pedagogical results based on classroom videography support the notion that the Biology-Applied Math Instructional Model, the teaching framework encompassing the lab, is effective in encouraging and maintaining high-level cognition among students. Research-based pedagogical approaches that support the lab are discussed.
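The core of the lab is a one-parameter differential equation, so a minimal worked example is easy to give. The sketch below (synthetic data, hypothetical parameter values) integrates Torricelli's law dh/dt = -k*sqrt(h) analytically and recovers the lumped drain constant k by linear least squares on sqrt(h) versus time, which is only one of several fitting strategies students might try.

import numpy as np

# Sketch of the Torricelli's-law fit at the heart of a leaky-bucket exercise
# (illustrative, with synthetic data): dh/dt = -k * sqrt(h), where k lumps together
# the orifice area, container cross-section, discharge coefficient, and sqrt(2g).

k_true, h0 = 0.04, 0.30          # hypothetical drain constant (m^0.5/s) and initial depth (m)
t = np.linspace(0.0, 25.0, 26)
# Analytic solution of dh/dt = -k sqrt(h):  sqrt(h) = sqrt(h0) - k t / 2
h = np.maximum(np.sqrt(h0) - 0.5 * k_true * t, 0.0) ** 2
h_obs = h + np.random.default_rng(2).normal(0.0, 0.002, size=t.size)  # "measured" depths

# Because sqrt(h) is linear in t, the drain constant can be fit by linear least squares.
mask = h_obs > 0.005                       # ignore readings once the bucket is nearly empty
slope, intercept = np.polyfit(t[mask], np.sqrt(h_obs[mask]), 1)
k_fit = -2.0 * slope
print(f"fitted k = {k_fit:.4f}  (true value {k_true})")

# The fitted k can then be compared with the physical prediction k = Cd*(a/A)*sqrt(2g)
# for a measured orifice area a and container cross-section A.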
Stochastic Convection Parameterizations

NASA Technical Reports Server (NTRS)

Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios

2012-01-01

Keywords: computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection, stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts.

Improving Convection and Cloud Parameterization Using ARM Observations and NCAR Community Atmosphere Model CAM5

DOE Office of Scientific and Technical Information (OSTI.GOV)

Zhang, Guang J.

2016-11-07

The fundamental scientific objectives of our research are to use ARM observations and the NCAR CAM5 to understand the large-scale control on convection, and to develop improved convection and cloud parameterizations for use in GCMs.

Use of Rare Earth Elements in investigations of aeolian processes

USDA-ARS's Scientific Manuscript database

The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission.
However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...

IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION FOR FINE-SCALE SIMULATIONS

EPA Science Inventory

The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (1-km horizontal grid spacing). The UCP accounts for dr...

A Novel Shape Parameterization Approach

NASA Technical Reports Server (NTRS)

Samareh, Jamshid A.

1999-01-01

This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber.
The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.

A subdivision-based parametric deformable model for surface extraction and statistical shape modeling of the knee cartilages

NASA Astrophysics Data System (ADS)

Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

2006-03-01

Subdivision surfaces and parameterization are desirable for many algorithms that are commonly used in Medical Image Analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology-preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including shape-based interpolation, subdivision remeshing and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.

Actinide electronic structure and atomic forces

NASA Astrophysics Data System (ADS)

Albers, R. C.; Rudin, Sven P.; Trinkle, Dallas R.; Jones, M. D.

2000-07-01

We have developed a new method [1] of fitting tight-binding parameterizations based on functional forms developed at the Naval Research Laboratory [2]. We have applied these methods to actinide metals and report our success using them (see below). The fitting procedure uses first-principles local-density-approximation (LDA) linear augmented plane-wave (LAPW) band structure techniques [3] to first calculate an electronic-structure band structure and total energy for fcc, bcc, and simple cubic crystal structures for the actinide of interest. The tight-binding parameterization is then chosen to fit the detailed energy eigenvalues of the bands along symmetry directions, and the symmetry of the parameterization is constrained to agree with the correct symmetry of the LDA band structure at each eigenvalue and k-vector that is fit to.
By fitting to a range of different volumes and the three different crystal structures, we find that the resulting parameterization is robust and appears to accurately calculate other crystal structures and properties of interest.

Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles

PubMed Central

Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.

2009-01-01

In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. PMID:22399956

The roles of dry convection, cloud-radiation feedback processes and the influence of recent improvements in the parameterization of convection in the GLA GCM

NASA Technical Reports Server (NTRS)

Sud, Y.; Molod, A.

1988-01-01

The Goddard Laboratory for Atmospheres GCM is used to study the sensitivity of the simulated July circulation to modifications in the parameterization of dry and moist convection, evaporation from falling raindrops, and cloud-radiation interaction. It is shown that the Arakawa-Schubert (1974) cumulus parameterization and a more realistic dry convective mixing calculation yielded a better intertropical convergence zone over North Africa than the previous convection scheme. It is found that the physical mechanism for the improvement was the upward mixing of PBL moisture by vigorous dry convective mixing.
A modified rain-evaporation parameterization, which accounts for raindrop size distribution, the atmospheric relative humidity, and a typical spatial rainfall intensity distribution for convective rain, was developed and implemented. This scheme led to major improvements in the monthly mean vertical profiles of relative humidity and temperature, convective and large-scale cloudiness, rainfall distributions, and mean relative humidity in the PBL.

Whys and Hows of the Parameterized Interval Analyses: A Guide for the Perplexed

NASA Astrophysics Data System (ADS)

Elishakoff, I.

2013-10-01

Novel elements of the parameterized interval analysis developed in [1, 2] are emphasized in this response to Professor E. D. Popova, or possibly to others who may be perplexed by the parameterized interval analysis. It is also shown that the overwhelming majority of the comments by Popova [3] are based on a misreading of our paper [1]. Partial responsibility for this misreading can be attributed to the fact that the explanations provided in [1] were laconic. These could have been more extensive in view of the novelty of our approach [1, 2]. It is our duty, therefore, to reiterate, in this response, the whys and hows of the parameterization of intervals, introduced in [1] to incorporate possibly available information on dependencies between the various intervals describing the problem at hand. This possibility appears to have been discarded by standard interval analysis, which may, as a result, lead to overdesign and to the possible divorce of engineers from the otherwise beautiful interval analysis.

Dissipative particle dynamics parameterization and simulations to predict negative volume excess and structure of PEG and water mixtures

NASA Astrophysics Data System (ADS)

Kacar, Gokhan

2017-12-01

We report the results of dissipative particle dynamics (DPD) parameterization and simulations of a mixture of a hydrophilic polymer, PEG 400, and water, which are known to exhibit a negative volume excess upon mixing. The addition of a Morse potential to the conventional DPD potential mimics the hydrogen-bond attraction, where the parameterization takes the internal chemistry of the beads into account. The results indicate that the mixing of PEG and water is maintained by the influence of hydrogen bonds, and the mesoscopic structure is characterized by the trade-off of enthalpic and entropic effects.
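The interaction described in this record, a soft DPD repulsion supplemented by a Morse well standing in for hydrogen bonding, is easy to sketch in reduced units. The parameter values in the example below are placeholders rather than the fitted PEG/water values.

import numpy as np

# Sketch of a DPD pair interaction with a Morse attraction added to the standard
# soft conservative force.  All parameters are placeholders in reduced units.

def dpd_conservative(r, a=25.0, rc=1.0):
    """Soft DPD repulsion: F = a (1 - r/rc) for r < rc, else 0."""
    return np.where(r < rc, a * (1.0 - r / rc), 0.0)

def morse_force(r, D=2.0, alpha=5.0, r0=0.7, rcut=1.5):
    """Force from U(r) = D (1 - exp(-alpha (r - r0)))^2, truncated at rcut."""
    e = np.exp(-alpha * (r - r0))
    f = -2.0 * D * alpha * e * (1.0 - e)   # F = -dU/dr
    return np.where(r < rcut, f, 0.0)

r = np.linspace(0.05, 1.6, 200)
total = dpd_conservative(r) + morse_force(r)
r_min = r[np.argmin(np.abs(total))]
print(f"total force crosses zero near r ~ {r_min:.2f} (preferred bead-bead distance)")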
Summary of Cumulus Parameterization Workshop

NASA Technical Reports Server (NTRS)

Tao, Wei-Kuo; Starr, David O'C.; Hou, Arthur; Newman, Paul; Sud, Yogesh

2002-01-01

A workshop on cumulus parameterization took place at the NASA Goddard Space Flight Center from December 3-5, 2001. The major objectives of this workshop were (1) to review the problem of representation of moist processes in large-scale models (mesoscale models, Numerical Weather Prediction models and Atmospheric General Circulation Models), (2) to review the state-of-the-art in cumulus parameterization schemes, and (3) to discuss the need for future research and applications. There were a total of 31 presentations and about 100 participants from the United States, Japan, the United Kingdom, France and South Korea. The specific presentations and discussions during the workshop are summarized in this paper.

Seeking Educational Excellence Everywhere: An Exploration into the Impact of Academisation on Alternative Education Provision in England

ERIC Educational Resources Information Center

Dean, Charlotte

2016-01-01

This article presents a policy analysis of the UK Government's Academies programme and explores the impact that this might have on young people who have become disengaged from the mainstream education system and are thus educated in "alternative provision" (AP) settings.
It argues that the academisation proposals curtail some of the…

Alternative Break Programs and the Factors that Contribute to Changes in Students' Lives

ERIC Educational Resources Information Center

Niehaus, Elizabeth Kathleen

2012-01-01

The purpose of this study was to explore the extent to and ways in which student participants in Alternative Break (AB) programs report that their AB experience influenced their intentions or plans to volunteer, engage in advocacy, or study or travel abroad, or their major or career plans. Additional analysis explored the specific program…

Exploring the persistence of stream-dwelling trout populations under alternative real-world turbidity regimes with an individual-based model

Treesearch

Bret C. Harvey; Steven F. Railsback

2009-01-01

We explored the effects of elevated turbidity on stream-resident populations of coastal cutthroat trout Oncorhynchus clarkii clarkii using a spatially explicit individual-based model. Turbidity regimes were contrasted by means of 15-year simulations in a third-order stream in northwestern California. The alternative regimes were based on multiple-year, continuous...

A Subjective Assessment of Alternative Mission Architecture Operations Concepts for the Human Exploration of Mars at NASA Using a Three-Dimensional Multi-Criteria Decision Making Model

NASA Technical Reports Server (NTRS)

Tavana, Madjid

2003-01-01

The primary driver for developing missions to send humans to other planets is to generate significant scientific return. NASA plans human planetary explorations with an acceptable level of risk consistent with other manned operations. Space exploration risks cannot be completely eliminated. Therefore, an acceptable level of cost, technical, safety, schedule, and political risks and benefits must be established for exploratory missions. This study uses a three-dimensional multi-criteria decision making model to identify the risks and benefits associated with three alternative mission architecture operations concepts for the human exploration of Mars identified by the Mission Operations Directorate at Johnson Space Center. The three alternatives considered in this study include split, combo lander, and dual scenarios. The model considers the seven phases of the mission: (1) Earth Vicinity/Departure; (2) Mars Transfer; (3) Mars Arrival; (4) Planetary Surface; (5) Mars Vicinity/Departure; (6) Earth Transfer; and (7) Earth Arrival. The Analytic Hierarchy Process (AHP) and subjective probability estimation are used to capture the experts' beliefs concerning the risks and benefits of the three alternative scenarios through a series of sequential, rational, and analytical processes.
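The AHP step mentioned at the end of this record reduces a reciprocal matrix of pairwise judgments to a set of priority weights via its principal eigenvector, with a consistency ratio as a sanity check. The judgments in the sketch below are invented purely for illustration and are not those elicited in the study.

import numpy as np

# Generic illustration of the Analytic Hierarchy Process (AHP): priority weights for
# three alternatives (e.g., split, combo lander, dual) from a reciprocal pairwise
# comparison matrix.  The judgment values below are invented.

A = np.array([[1.0,   3.0, 2.0],
              [1/3.0, 1.0, 0.5],
              [0.5,   2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                       # priority weights of the three scenarios

# Saaty's consistency index/ratio (random index RI = 0.58 for a 3x3 matrix).
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58
print("priority weights:", np.round(w, 3), " consistency ratio:", round(CR, 3))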
A multitrophic model to quantify the effects of marine viruses on microbial food webs and ecosystem processes

PubMed Central

Weitz, Joshua S; Stock, Charles A; Wilhelm, Steven W; Bourouiba, Lydia; Coleman, Maureen L; Buchan, Alison; Follows, Michael J; Fuhrman, Jed A; Jover, Luis F; Lennon, Jay T; Middelboe, Mathias; Sonderegger, Derek L; Suttle, Curtis A; Taylor, Bradford P; Frede Thingstad, T; Wilson, William H; Eric Wommack, K

2015-01-01

Viral lysis of microbial hosts releases organic matter that can then be assimilated by nontargeted microorganisms. Quantitative estimates of virus-mediated recycling of carbon in marine waters, first established in the late 1990s, were originally extrapolated from marine host and virus densities, host carbon content and inferred viral lysis rates. Yet, these estimates did not explicitly incorporate the cascade of complex feedbacks associated with virus-mediated lysis. To evaluate the role of viruses in shaping community structure and ecosystem functioning, we extend dynamic multitrophic ecosystem models to include a virus component, specifically parameterized for processes taking place in the ocean euphotic zone. Crucially, we are able to solve this model analytically, facilitating evaluation of model behavior under many alternative parameterizations. Analyses reveal that the addition of a virus component promotes the emergence of complex communities. In addition, biomass partitioning of the emergent multitrophic community is consistent with well-established empirical norms in the surface oceans. At steady state, ecosystem fluxes can be probed to characterize the effects that viruses have when compared with putative marine surface ecosystems without viruses. The model suggests that ecosystems with viruses will have (1) increased organic matter recycling, (2) reduced transfer to higher trophic levels and (3) increased net primary productivity. These model findings support hypotheses that viruses can have significant stimulatory effects across whole-ecosystem scales. We suggest that existing efforts to predict carbon and nutrient cycling without considering virus effects are likely to miss essential features of marine food webs that regulate global biogeochemical cycles. PMID:25635642
Computer program for parameterization of nucleus-nucleus electromagnetic dissociation cross sections

NASA Technical Reports Server (NTRS)

Norbury, John W.; Townsend, Lawrence W.; Badavi, Forooz F.

1988-01-01

A computer subroutine parameterization of electromagnetic dissociation cross sections for nucleus-nucleus collisions is presented that is suitable for implementation in a heavy-ion transport code.
The only inputs required are the projectile kinetic energy and the projectile and target charge and mass numbers.

SENSITIVITY OF THE NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION MULTILAYER MODEL TO INSTRUMENT ERROR AND PARAMETERIZATION UNCERTAINTY

EPA Science Inventory

The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...

A Dynamically Computed Convective Time Scale for the Kain–Fritsch Convective Parameterization Scheme

EPA Science Inventory

Many convective parameterization schemes define a convective adjustment time scale τ as the time allowed for dissipation of convective available potential energy (CAPE).
The Kain–Fritsch scheme defines τ based on an estimate of the advective time period for deep con...

Parameterization of air temperature in high temporal and spatial resolution from a combination of the SEVIRI and MODIS instruments

NASA Astrophysics Data System (ADS)

Zakšek, Klemen; Schroedter-Homscheidt, Marion

Some applications, e.g. from traffic or energy management, require air temperature data in high spatial and temporal resolution at two metres height above the ground (T2m), sometimes in near-real-time. Thus, a parameterization based on boundary layer physical principles was developed that determines the air temperature from remote sensing data (SEVIRI data aboard the MSG and MODIS data aboard the Terra and Aqua satellites). The method consists of two parts. First, a downscaling procedure from the SEVIRI pixel resolution of several kilometres to a one-kilometre spatial resolution is performed using a regression analysis between the land surface temperature (LST) and the normalized differential vegetation index (NDVI) acquired by the MODIS instrument. Second, the lapse rate between the LST and T2m is removed using an empirical parameterization that requires albedo, down-welling surface short-wave flux, relief characteristics and NDVI data. The method was successfully tested for Slovenia, the French region Franche-Comté and southern Germany for the period from May to December 2005, indicating that the parameterization is valid for Central Europe. This parameterization results in a root mean square deviation (RMSD) of 2.0 K during the daytime, with a bias of -0.01 K and a correlation coefficient of 0.95. This is promising, especially considering the high temporal (30 min) and spatial (1000 m) resolution of the results.
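The two regression steps described in this record can be illustrated on synthetic fields. In the sketch below, invented NDVI and LST arrays stand in for the MODIS and SEVIRI data, the coarse-scale LST-NDVI fit is re-evaluated at fine-scale NDVI to downscale LST, and a placeholder linear relation stands in for the fuller empirical lapse-rate correction (which also uses albedo, short-wave flux and relief).

import numpy as np

# Schematic two-step air-temperature parameterization on synthetic fields:
# (1) regress coarse-resolution LST on NDVI and re-evaluate the fit at fine-scale
#     NDVI to downscale LST; (2) apply an empirical regression from LST to T2m.
# Coefficients and fields are invented for illustration.

rng = np.random.default_rng(3)

def aggregate(field):
    """Average 5x5 blocks of a 40x40 field down to 8x8 (coarse satellite footprint)."""
    return field.reshape(8, 5, 8, 5).mean(axis=(1, 3))

# "Fine" 1-km NDVI and the fine-scale LST it would correspond to (synthetic truth).
ndvi_fine = np.clip(rng.normal(0.5, 0.15, size=(40, 40)), 0.0, 1.0)
lst_fine_true = 305.0 - 12.0 * ndvi_fine + rng.normal(0.0, 0.5, size=ndvi_fine.shape)
lst_coarse, ndvi_coarse = aggregate(lst_fine_true), aggregate(ndvi_fine)

# Step 1: LST-NDVI regression at coarse scale, applied at fine-scale NDVI.
slope, intercept = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
lst_downscaled = intercept + slope * ndvi_fine

# Step 2: placeholder empirical lapse between LST and 2-m air temperature.
t2m = 0.7 * (lst_downscaled - 273.15) + 8.0 + 273.15

rmsd = np.sqrt(np.mean((lst_downscaled - lst_fine_true) ** 2))
print(f"downscaling RMSD vs synthetic truth: {rmsd:.2f} K; mean T2m {t2m.mean():.1f} K")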
Inducing Tropical Cyclones to Undergo Brownian Motion

NASA Astrophysics Data System (ADS)

Hodyss, D.; McLay, J.; Moskaitis, J.; Serra, E.

2014-12-01

Stochastic parameterization has become commonplace in numerical weather prediction (NWP) models used for probabilistic prediction. Here, a specific stochastic parameterization will be related to the theory of stochastic differential equations and shown to be affected strongly by the choice of stochastic calculus. From an NWP perspective our focus will be on ameliorating a common trait of the ensemble distributions of tropical cyclone (TC) tracks (or position), namely that they generally contain a bias and an underestimate of the variance. With this trait in mind we present a stochastic track variance inflation parameterization. This parameterization makes use of a properly constructed stochastic advection term that follows a TC and induces its position to undergo Brownian motion. A central characteristic of Brownian motion is that its variance increases with time, which allows for an effective inflation of an ensemble's TC track variance. Using this stochastic parameterization we present a comparison of the behavior of TCs from the perspective of the stochastic calculi of Itô and Stratonovich within an operational NWP model. The central difference between these two perspectives as pertains to TCs is shown to be properly predicted by the stochastic calculus and the Itô correction. In the cases presented here these differences will manifest as overly intense TCs, which, depending on the strength of the forcing, could lead to problems with numerical stability and physical realism.
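A toy one-dimensional version of the idea makes the variance behavior concrete. In the sketch below, each ensemble member's storm position receives a deterministic steering increment plus a Brownian increment (an Euler-Maruyama update with constant noise amplitude, so the Itô/Stratonovich distinction does not arise); the ensemble track variance then grows approximately linearly in time. All numbers are illustrative, and the operational scheme acts on the full model state rather than a scalar position.

import numpy as np

# Toy 1-D sketch of stochastic track-variance inflation: positions advected by a mean
# steering flow plus a random increment spread out like Brownian motion, with
# variance growing roughly as sigma^2 * t.

rng = np.random.default_rng(4)
n_members, n_steps, dt = 50, 48, 3600.0      # 48 hourly steps
u_steer = 5.0                                # mean steering flow, m/s
sigma = 2.0e3 / np.sqrt(3600.0)              # noise amplitude (m / sqrt(s)), placeholder

x = np.zeros(n_members)                      # along-track position of each member, m
variance = []
for _ in range(n_steps):
    # Euler-Maruyama update: deterministic advection + Brownian increment.
    x = x + u_steer * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_members)
    variance.append(x.var())

t_hours = np.arange(1, n_steps + 1)
growth = np.polyfit(t_hours, variance, 1)[0]
print(f"track variance grows ~linearly: {growth:.3g} m^2 per hour "
      f"(theoretical rate {sigma**2 * 3600:.3g})")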
The QBO in Two GISS Global Climate Models: 1. Generation of the QBO

NASA Technical Reports Server (NTRS)

Rind, David; Jonas, Jeffrey A.; Balachandra, Nambath; Schmidt, Gavin A.; Lean, Judith

2014-01-01

The adjustment of parameterized gravity waves associated with model convection and finer vertical resolution has made possible the generation of the quasi-biennial oscillation (QBO) in two Goddard Institute for Space Studies (GISS) models, GISS Middle Atmosphere Global Climate Model III and a climate/middle atmosphere version of Model E2. Both extend from the surface to 0.002 hPa, with 2° × 2.5° resolution and 102 layers. Many realistic features of the QBO are simulated, including magnitude and variability of its period and amplitude. The period itself is affected by the magnitude of parameterized convective gravity wave momentum fluxes and interactive ozone (which also affects the QBO amplitude and variability), among other forcings. Although varying sea surface temperatures affect the parameterized momentum fluxes, neither aspect is responsible for the modeled variation in QBO period. Both the parameterized and resolved waves act to produce the respective easterly and westerly wind descent, although their effect is offset in altitude at each level. The modeled and observed QBO influences on tracers in the stratosphere, such as ozone, methane, and water vapor, are also discussed. Due to the link between the gravity wave parameterization and the models' convection, and the dependence on the ozone field, the models may also be used to investigate how the QBO may vary with climate change.

A basal stress parameterization for modeling landfast ice

NASA Astrophysics Data System (ADS)

Lemieux, Jean-François; Tremblay, L. Bruno; Dupont, Frédéric; Plante, Mathieu; Smith, Gregory C.; Dumont, Dany

2015-04-01

Current large-scale sea ice models represent very crudely or are unable to simulate the formation, maintenance and decay of coastal landfast ice. We present a simple landfast ice parameterization representing the effect of grounded ice keels. This parameterization is based on bathymetry data and the mean ice thickness in a grid cell. It is easy to implement and can be used for two-thickness and multithickness category models. Two free parameters are used to determine the critical thickness required for large ice keels to reach the bottom and to calculate the basal stress associated with the weight of the ridge above hydrostatic balance. A sensitivity study was conducted and demonstrates that the parameter associated with the critical thickness has the largest influence on the simulated landfast ice area. A 6 year (2001-2007) simulation with a 20 km resolution sea ice model was performed. The simulated landfast ice areas for regions off the coast of Siberia and for the Beaufort Sea were calculated and compared with data from the National Ice Center. With optimal parameters, the basal stress parameterization leads to a slightly shorter landfast ice season but overall provides a realistic seasonal cycle of the landfast ice area in the East Siberian, Laptev and Beaufort Seas. However, in the Kara Sea, where ice arches between islands are key to the stability of the landfast ice, the parameterization consistently leads to an underestimation of the landfast area.
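The structure of such a grounded-keel scheme, two free parameters with one setting the critical grounding thickness as a fraction of the water depth and one converting the grounded excess thickness into a basal stress, can be sketched as follows. The functional form and the constants k1 and k2 below are placeholders, not the tuned values reported in the record above.

import numpy as np

# Schematic grounded-keel basal stress for landfast ice.  k1 links the critical
# grounding thickness to water depth; k2 converts the thickness in excess of
# grounding into a basal stress.  Both values are placeholders.

def basal_stress(h_mean, a_conc, depth, k1=8.0, k2=15.0):
    """Basal stress (N/m^2) for a grid cell of mean ice thickness h_mean (m),
    ice concentration a_conc, over water of the given depth (m)."""
    h_crit = depth / k1                       # thickness needed for keels to ground
    excess = np.maximum(h_mean - h_crit, 0.0) # ridge portion above hydrostatic balance
    return k2 * excess * a_conc

depth = np.array([5.0, 10.0, 20.0, 40.0])     # shallow shelf to deeper water
h_mean = np.full_like(depth, 1.5)             # 1.5 m mean ice thickness everywhere
print("basal stress by depth:", basal_stress(h_mean, 0.95, depth))
# Only the shallow cells (where depth/k1 < h_mean) feel a basal stress and can fasten.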
Impact of APEX parameterization and soil data on runoff, sediment, and nutrients transport assessment

USDA-ARS's Scientific Manuscript database

Hydrological models have become essential tools for environmental assessments. This study's objective was to evaluate a best professional judgment (BPJ) parameterization of the Agricultural Policy and Environmental eXtender (APEX) model with soil-survey data against the calibrated model with either ...

Parameterization guidelines and considerations for hydrologic models

USDA-ARS's Scientific Manuscript database

Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) is an important and difficult task. An exponential increase in literature has been devoted to the use and develo...

Newton Algorithms for Analytic Rotation: An Implicit Function Approach

ERIC Educational Resources Information Center

Boik, Robert J.

2008-01-01

In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

Parameterization of Norfolk sandy loam properties for stochastic modeling of light in-wheel motor UGV

USDA-ARS's Scientific Manuscript database

To accurately develop a mathematical model for an In-Wheel Motor Unmanned Ground Vehicle (IWM UGV) on soft terrain, parameterization of terrain properties is essential to stochastically model tire-terrain interaction for each wheel independently. Operating in off-road conditions requires paying clos...

Parameterization of daily solar global ultraviolet irradiation.

PubMed

Feister, U; Jäkel, E; Gericke, K

2002-09-01

Daily values of solar global ultraviolet (UV) B and UVA irradiation as well as erythemal irradiation have been parameterized to be estimated from pyranometer measurements of daily global and diffuse irradiation as well as from atmospheric column ozone. Data recorded at the Meteorological Observatory Potsdam (52 degrees N, 107 m asl) in Germany over the time period 1997-2000 have been used to derive sets of regression coefficients. The validation of the method against independent data sets of measured UV irradiation shows that the parameterization provides a gain of information for UVB, UVA and erythemal irradiation referring to their averages. A comparison between parameterized daily UV irradiation and independent values of UV irradiation measured at a mountain station in southern Germany (Meteorological Observatory Hohenpeissenberg at 48 degrees N, 977 m asl) indicates that the parameterization also holds even under completely different climatic conditions. On a long-term average (1953-2000), parameterized annual UV irradiation values are 15% and 21% higher for UVA and UVB, respectively, at Hohenpeissenberg than they are at Potsdam.
Daily global and diffuse irradiation measured at 28 weather stations of the Deutscher Wetterdienst German Radiation Network and grid values of column ozone from the EPTOMS satellite experiment served as inputs to calculate the estimates of the spatial distribution of daily and annual values of UV irradiation across Germany. Using daily values of global and diffuse irradiation recorded at Potsdam since 1937 as well as atmospheric column ozone measured since 1964 at the same site, estimates of daily and annual UV irradiation have been derived for this site over the period from 1937 through 2000, which include the effects of changes in cloudiness, in aerosols and, at least for the period of ozone measurements from 1964 to 2000, in atmospheric ozone. It is shown that the extremely low ozone values observed mainly after the eruption of Mt. Pinatubo in 1991 have substantially enhanced UVB irradiation in the first half of the 1990s. According to the measurements and calculations, the nonlinear long-term changes observed between 1968 and 2000 amount to +4% to +5% for annual global irradiation and UVA irradiation, mainly because of changing cloudiness, and +14% to +15% for UVB and erythemal irradiation, because of both changing cloudiness and decreasing column ozone. At the mountain site, Hohenpeissenberg, measured global irradiation and parameterized UVA irradiation decreased during the same time period by -3% to -4%, probably because of the enhanced occurrence and increasing optical thickness of clouds, whereas UVB and erythemal irradiation derived by the parameterization have increased by +3% to +4% because of the combined effect of clouds and decreasing ozone. The parameterizations described here should be applicable to other regions with similar atmospheric and geographic conditions, whereas for regions with significantly different climatic conditions, such as high mountainous areas and arctic or tropical regions, the representativeness of the regression coefficients would have to be approved. It is emphasized here that parameterizations, as the one described in this article, cannot replace measurements of solar UV radiation, but they can use existing measurements of solar global and diffuse radiation as well as data on atmospheric ozone to provide estimates of UV irradiation in regions and over time periods for which UV measurements are not available.

Acquisition of Nominal Morphophonological Alternations in Russian

ERIC Educational Resources Information Center

Tomas, Ekaterina; van de Vijver, Ruben; Demuth, Katherine; Petocz, Peter

2017-01-01

Morphophonological alternations can make target-like production of grammatical morphemes challenging due to changes in form depending on the phonological environment.
This article explores the acquisition of morphophonological alternations involving the interacting patterns of vowel deletion and stress shift in Russian-speaking children (aged…

Short Lesson Plan Associated with Increased Acceptance of Evolutionary Theory and Potential Change in Three Alternate Conceptions of Macroevolution in Undergraduate Students

ERIC Educational Resources Information Center

Abraham, Joel K.; Perez, Kathryn E.; Downey, Nicholas; Herron, Jon C.; Meir, Eli

2012-01-01

Undergraduates commonly harbor alternate conceptions about evolutionary biology; these alternate conceptions often persist, even after intensive instruction, and may influence acceptance of evolution. We interviewed undergraduates to explore their alternate conceptions about macroevolutionary patterns and designed a 2-h lesson plan to present…

Multiscale Modeling of Grain-Boundary Fracture: Cohesive Zone Models Parameterized From Atomistic Simulations

NASA Technical Reports Server (NTRS)

Glaessgen, Edward H.; Saether, Erik; Phillips, Dawn R.; Yamakov, Vesselin

2006-01-01

A multiscale modeling strategy is developed to study grain boundary fracture in polycrystalline aluminum. Atomistic simulation is used to model fundamental nanoscale deformation and fracture mechanisms and to develop a constitutive relationship for separation along a grain boundary interface. The nanoscale constitutive relationship is then parameterized within a cohesive zone model to represent variations in grain boundary properties.
  483. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations

    PubMed

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques that aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed conduction velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
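As a rough illustration of the iterative idea described above, the sketch below tunes a bulk conductivity until a mock simulated conduction velocity matches a target, using the approximate square-root dependence of velocity on conductivity, and then splits the result into bidomain conductivities for a prescribed intracellular-to-extracellular anisotropy ratio. The function simulate_cv() is a stand-in for a real tissue-scale simulation, and all numbers are placeholders.

    # Illustrative conductivity-tuning loop followed by a bidomain split.
    import math

    def simulate_cv(sigma):
        # Placeholder for an actual electrophysiology simulation; here we just
        # assume v = c * sqrt(sigma) with an arbitrary constant c.
        return 0.8 * math.sqrt(sigma)

    def tune_conductivity(v_target, sigma=0.2, tol=1e-4, max_iter=20):
        for _ in range(max_iter):
            v = simulate_cv(sigma)
            if abs(v - v_target) < tol:
                break
            sigma *= (v_target / v) ** 2   # fixed-point update from v ~ sqrt(sigma)
        return sigma

    def split_bidomain(sigma_bulk, ratio_i_to_e=2.0):
        # The monodomain-equivalent conductivity is the harmonic mean of the
        # intra- and extracellular conductivities; invert that relation for a
        # prescribed anisotropy ratio sigma_i / sigma_e.
        sigma_e = sigma_bulk * (ratio_i_to_e + 1.0) / ratio_i_to_e
        sigma_i = sigma_bulk * (ratio_i_to_e + 1.0)
        return sigma_i, sigma_e

    sigma_bulk = tune_conductivity(v_target=0.6)
    print(sigma_bulk, split_bidomain(sigma_bulk))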
  484. An improvement of quantum parametric methods by using the SGSA parameterization technique and new elementary parametric functionals

    NASA Astrophysics Data System (ADS)

    Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.

    A systematic improvement of parametric quantum methods (PQM) is performed by considering (a) a new application of the parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPF) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized, and comparisons with MOPAC/2007 (PM6) and MINDO/SR were performed for a set of molecules with C-C, C-H, and H-H bonds. Results showed better accuracy than MINDO/SR and MOPAC/2007 for the selected trial set of molecules.

  485. Comparison of Ice Cloud Particle Sizes Retrieved from Satellite Data and Derived from In Situ Measurements

    NASA Technical Reports Server (NTRS)

    Han, Qingyuan; Rossow, William B.; Chou, Joyce; Welch, Ronald M.

    1997-01-01

    Cloud microphysical parameterizations have attracted a great deal of attention in recent years due to their effect on cloud radiative properties and cloud-related hydrological processes in large-scale models. The parameterization of cirrus particle size has been demonstrated to be an indispensable component of climate feedback analysis. Therefore, global-scale, long-term observations of cirrus particle sizes are required both as a basis for and as a validation of parameterizations for climate models. While there is a global-scale, long-term survey of water cloud droplet sizes (Han et al.), there is no comparable study for cirrus ice crystals. This study is an effort to supply such a data set.

  486. Parameterization of a coarse-grained model for linear alkylbenzene sulfonate surfactants and molecular dynamics studies of their self-assembly in aqueous solution

    NASA Astrophysics Data System (ADS)

    He, Xibing; Shinoda, Wataru; DeVane, Russell; Anderson, Kelly L.; Klein, Michael L.

    2010-02-01

    A coarse-grained (CG) forcefield for linear alkylbenzene sulfonates (LAS) was systematically parameterized. Thermodynamic data from experiments and structural data obtained from all-atom molecular dynamics were used as targets to parameterize CG potentials for the bonded and non-bonded interactions. The added computational efficiency permits one to employ computer simulation to probe the self-assembly of LAS in aqueous solution into different morphologies starting from a random configuration. The present CG model is shown to accurately reproduce the phase behavior of solutions of pure isomers of sodium dodecylbenzene sulfonate, even though phase behavior was not directly taken into account in the forcefield parameterization.
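The abstract does not spell out the fitting procedure; as a generic illustration of deriving one bonded coarse-grained parameter from all-atom structural data, the sketch below applies simple Boltzmann inversion to a made-up bond-length sample. This is a common starting point for such parameterizations, not necessarily the authors' method.

    # Generic illustration: derive a harmonic CG bond force constant from the
    # variance of a mapped bond-length distribution sampled in an all-atom
    # simulation (direct Boltzmann inversion of an assumed Gaussian
    # distribution, ignoring the r^2 Jacobian). The sample data are made up.
    import statistics

    KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

    def harmonic_bond_from_samples(bond_lengths_angstrom, temperature_k=298.15):
        """Return (r0, k) with k in kcal/(mol*A^2), assuming U = 0.5*k*(r - r0)^2."""
        r0 = statistics.mean(bond_lengths_angstrom)
        var = statistics.variance(bond_lengths_angstrom)
        k = KB * temperature_k / var      # equipartition for a harmonic bond
        return r0, k

    samples = [4.61, 4.55, 4.72, 4.49, 4.66, 4.58, 4.70, 4.63]  # hypothetical CG bond lengths (A)
    print(harmonic_bond_from_samples(samples))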
  487. Alternative alignments development and evaluation for the US 220 project in Maryland

    DOT National Transportation Integrated Search

    2011-06-01

    This project aims to find the preferred alternative alignments for the Maryland section of existing US 220, using the highway alignment optimization (HAO) model. The model was used to explore alternative alignments within a 4,000-foot-wide buffer o…

  488. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses of forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes (RANS) simulation. The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and the TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM: channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods, and it has been investigated extensively by DNS, so the highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re_tau = 180 and Re_tau = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
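For readers unfamiliar with the deconvolution idea underlying the ADM, the sketch below applies the truncated-series approximate inverse, the sum over k of (I - G)^k, to a signal filtered with a simple one-dimensional top-hat filter G. The filter choice and truncation order are illustrative, not those of the cited work.

    # Minimal 1-D illustration of approximate deconvolution: apply the truncated
    # series Q_N = sum_{k=0..N} (I - G)^k to a filtered signal to approximately
    # invert the filter G. The top-hat filter and N = 5 are illustrative choices.
    import numpy as np

    def box_filter(u):
        """Simple periodic three-point top-hat filter."""
        return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

    def approximate_deconvolution(u_filtered, n_terms=5):
        """Van Cittert-type truncated inverse of the filter."""
        v = u_filtered.copy()                # k = 0 term
        term = u_filtered.copy()
        for _ in range(n_terms):
            term = term - box_filter(term)   # apply (I - G) repeatedly
            v += term
        return v

    x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
    u = np.sin(3.0 * x) + 0.3 * np.sin(9.0 * x)
    u_bar = box_filter(u)
    u_star = approximate_deconvolution(u_bar)
    # The deconvolved field recovers the original more closely than the filtered one.
    print(np.max(np.abs(u - u_bar)), np.max(np.abs(u - u_star)))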
  489. Investigation of Maternal Effects, Maternal-Fetal Interactions and Parent-of-Origin Effects (Imprinting), Using Mothers and Their Offspring

    PubMed Central

    Ainsworth, Holly F; Unwin, Jennifer; Jamison, Deborah L; Cordell, Heather J

    2011-01-01

    Many complex genetic effects, including epigenetic effects, may be expected to operate via mechanisms in the intrauterine environment. A popular design for the investigation of such effects, including effects of parent-of-origin (imprinting), maternal genotype, and maternal-fetal genotype interactions, is to collect DNA from affected offspring and their mothers (case/mother duos) and to compare with an appropriate control sample. An alternative design uses data from cases and both parents (case/parent trios) but does not require controls. In this study, we describe a novel implementation of a multinomial modeling approach that allows the estimation of such genetic effects using either case/mother duos or case/parent trios. We investigate the performance of our approach using computer simulations and explore the sample sizes and data structures required to provide high power for detection of effects and accurate estimation of the relative risks conferred. Through the incorporation of additional assumptions (such as Hardy-Weinberg equilibrium, random mating and known allele frequencies) and/or the incorporation of additional types of control sample (such as unrelated controls, controls and their mothers, or both parents of controls), we show that the (relative risk) parameters of interest are identifiable and well estimated. Nevertheless, parameter interpretation can be complex, as we illustrate by demonstrating the mathematical equivalence between various different parameterizations. Our approach scales up easily to allow the analysis of large-scale genome-wide association data, provided both mothers and affected offspring have been genotyped at all variants of interest. Genet. Epidemiol. 35:19-45, 2011. PMID: 21181895

  490. Properties and biomedical applications of magnetic nanoparticles

    NASA Astrophysics Data System (ADS)

    Regmi, Rajesh Kumar

    Magnetic nanoparticles have a number of unique properties that make them promising agents for applications in medicine, including magnetically targeted drug delivery, magnetic hyperthermia, magnetic resonance imaging, and radiation therapy. They are biocompatible and can also be coated with biocompatible surfactants, which may be further functionalized with optically and therapeutically active molecules. These nanoparticles can be manipulated with a non-invasive external magnetic field to produce heat, target specific sites, and monitor their distribution in vivo. Within this framework, we have investigated a number of biomedical applications of these nanoparticles. We synthesized a thermosensitive microgel with iron oxide adsorbed on its surface; an alternating magnetic field applied to these nanocomposites heated the system and triggered the release of the anticancer drug mitoxantrone. We also parameterized the chain-length dependence of drug release from dextran-coated iron oxide nanoparticles, finding that both the release rate and the equilibrium release fraction depend on the molecular mass of the surfactant. Finally, we localized dextran-coated iron oxide nanoparticles labeled with tat peptide to the cell nucleus, which permits this system to be used for a variety of biomedical applications.
    Beyond investigating magnetic nanoparticles for biomedical applications, we also studied their magnetohydrodynamic and dielectric properties in solution. The magnetohydrodynamic properties of a ferrofluid can be controlled by appropriate selection of the surfactant, and dielectric measurements showed magnetodielectric coupling in this system. We also established that some complex low-temperature spin structures are suppressed in Mn3O4 nanoparticles, which has important implications for nanomagnetic devices. Furthermore, we explored exchange bias effects in Ni-NiO core-shell nanoparticles. Finally, we performed extensive magnetic studies of nickel metal hydride (NiMH) batteries to determine the size of the Ni clusters, which plays an important role in catalyzing the electrochemical reaction and powering NiMH batteries.

  491. Climate variability and change scenarios for a marine commodity: Modelling small pelagic fish, fisheries and fishmeal in a globalized market

    NASA Astrophysics Data System (ADS)

    Merino, Gorka; Barange, Manuel; Mullon, Christian

    2010-04-01

    The world's small pelagic fish populations, their fisheries, and the fishmeal and fish oil production industries and markets are part of a globalised production and consumption system. The potential for climate variability and change to alter the balance in this system is explored by means of bioeconomic models at two temporal scales, with the objective of investigating the interactive nature of environmental and human-induced changes on this globalised system. Short-term (interannual) environmental impacts on fishmeal production are considered by including an annually variable production rate for individual small pelagic fish stocks over a 10-year simulation period. These impacts on the resources are perceived by the fishmeal markets, where they are confronted with two aquaculture expansion hypotheses. Long-term (2080) environmental impacts on the same stocks are estimated using long-term primary production predictions as proxies for the species' carrying capacities, rather than using variable production rates, and are confronted on the market side with two alternative fishmeal management scenarios consistent with IPCC-type storylines. The two scenarios, World Markets and Global Commons, are parameterized through classic equilibrium solutions of a global surplus production bioeconomic model, namely maximum sustainable yield and open access, respectively. The fisheries explicitly modelled in this paper represent 70% of total fishmeal production, thus encapsulating the expected dynamics of the global production and consumption system. Both short- and long-term simulations suggest that the sustainability of the small pelagic resources, in the face of climate variability and change, depends more on how society responds to climate impacts than on the magnitude of the climate alterations per se.
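The two equilibrium solutions mentioned above can be illustrated with the standard Gordon-Schaefer surplus production model; the sketch below evaluates the maximum-sustainable-yield equilibrium (biomass K/2, yield rK/4) and the open-access equilibrium where revenue equals cost, using placeholder parameter values rather than those of the cited study.

    # Illustrative Gordon-Schaefer surplus production model with placeholder
    # parameters: logistic growth G(B) = r*B*(1 - B/K), harvest H = q*E*B.
    r, K = 0.6, 10.0e6            # intrinsic growth rate (1/yr), carrying capacity (t)
    q = 5.0e-7                    # catchability (1/effort-unit)
    price, cost = 120.0, 40.0     # price per tonne, cost per unit effort (arbitrary units)

    # Maximum sustainable yield: growth is maximal at B = K/2
    b_msy = K / 2.0
    msy = r * K / 4.0
    e_msy = r / (2.0 * q)

    # Open access (bionomic) equilibrium: profit = 0, i.e. price*q*B = cost
    b_oa = cost / (price * q)
    h_oa = r * b_oa * (1.0 - b_oa / K)   # equilibrium yield at that biomass

    print(f"MSY: B={b_msy:.3e} t, yield={msy:.3e} t/yr, effort={e_msy:.3e}")
    print(f"Open access: B={b_oa:.3e} t, yield={h_oa:.3e} t/yr")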
  492. Changes in physiological attributes of ponderosa pine from seedling to mature tree

    Treesearch

    Nancy E. Grulke; William A. Retzlaff

    2001-01-01

    Plant physiological models are generally parameterized from many different sources of data, including chamber experiments and plantations, from seedlings to mature trees. We obtained a comprehensive data set for a natural stand of ponderosa pine (Pinus ponderosa Laws.) and used these data to parameterize the physiologically based model TREGRO…

  493. High Resolution Electro-Optical Aerosol Phase Function Database PFNDAT2006

    DTIC Science & Technology

    2006-08-01

    Excerpts from the report: the snow models use the gamma distribution (equation 12) with m = 0 … [3.4.1 Rain Model] The most widely used analytical parameterization for raindrop size … Uijlenhoet and Stricker (22), as the result of an analytical derivation based on a theoretical parameterization for the raindrop size distribution … [2.2 Particle Size Distribution Models]
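The gamma distribution mentioned in the excerpt is the standard drop-size distribution N(D) = N0 * D^m * exp(-Lambda*D), which reduces to an exponential (Marshall-Palmer-type) form for m = 0; the sketch below evaluates it with placeholder intercept and slope parameters rather than values from PFNDAT2006.

    # Gamma drop-size distribution N(D) = N0 * D**m * exp(-lam * D); with shape
    # parameter m = 0 it reduces to an exponential (Marshall-Palmer-type) form.
    # N0 and lam below are placeholder values, not PFNDAT2006 numbers.
    import math

    def gamma_dsd(d_mm, n0=8.0e3, lam=2.0, m=0):
        """Number density (per mm per m^3) at drop diameter d_mm (mm)."""
        return n0 * d_mm ** m * math.exp(-lam * d_mm)

    for d in (0.5, 1.0, 2.0, 4.0):
        print(d, gamma_dsd(d))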
  494. Parameterization of N2O5 Reaction Probabilities on the Surface of Particles Containing Ammonium, Sulfate, and Nitrate

    EPA Science Inventory

    A comprehensive parameterization was developed for the heterogeneous reaction probability (γ) of N2O5 as a function of temperature, relative humidity, particle composition, and phase state, for use in advanced air quality models. The reaction probabilities o…

  495. Merged data models for multi-parameterized querying: Spectral data base meets GIS-based map archive

    NASA Astrophysics Data System (ADS)

    Naß, A.; D'Amore, M.; Helbert, J.

    2017-09-01

    Current and upcoming planetary missions deliver a huge amount of different data (remote-sensing data, in-situ data, and derived products). In this contribution we present how different data(bases) can be managed and merged to enable multi-parameterized querying based on a common spatial context.

  496. Develop and Test Coupled Physical Parameterizations and Tripolar Wave Model Grid: NAVGEM / WaveWatch III / HYCOM

    DTIC Science & Technology

    2013-09-30

    W. Erick Rogers, Naval Research Laboratory, Code 7322, Stennis Space Center, MS 39529 …

  497. Parameterization of Movement Execution in Children with Developmental Coordination Disorder

    ERIC Educational Resources Information Center

    Van Waelvelde, Hilde; De Weerdt, Willy; De Cock, Paul; Janssens, Luc; Feys, Hilde; Smits-Engelsman, Bouwien C. M.

    2006-01-01

    The Rhythmic Movement Test (RMT) evaluates temporal and amplitude parameterization and fluency of movement execution in a series of rhythmic arm movements under different sensory conditions. The RMT was used in combination with a jumping task and a drawing task to evaluate 36 children with Developmental Coordination Disorder (DCD) and a matched…

  498. Framework to parameterize and validate APEX to support deployment of the nutrient tracking tool

    USDA-ARS Scientific Manuscript database

    Guidelines have been developed to parameterize and validate the Agricultural Policy Environmental eXtender (APEX) to support the Nutrient Tracking Tool (NTT).
    This follow-up paper presents (1) a case study to illustrate how the developed guidelines are applied in a headwater watershed located in cent…

  499. Modeling of the Wegener-Bergeron-Findeisen process—implications for aerosol indirect effects

    NASA Astrophysics Data System (ADS)

    Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.

    2008-10-01

    A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the treatment of phase transitions in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation in which the onset of the WBF effect depended on a threshold value of the cloud ice mixing ratio. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds than the previous approach. The new parameterization yields a model state in reasonable agreement with observed quantities and allows calculations of aerosol effects on mixed-phase clouds with a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.
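As background on the WBF process itself, the sketch below checks the classical condition for it, namely that the ambient vapor pressure lies between saturation with respect to ice and with respect to liquid water, using Magnus-type saturation formulas. This is illustrative background only, not the CAM-Oslo scheme described above.

    # Illustrative check of the Wegener-Bergeron-Findeisen condition: ice grows
    # at the expense of liquid droplets when the ambient vapor pressure lies
    # between saturation over ice and saturation over liquid water. Magnus-type
    # saturation formulas (hPa, temperature in degrees Celsius) are used as
    # simple approximations.
    import math

    def e_sat_water(t_c):
        return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

    def e_sat_ice(t_c):
        return 6.112 * math.exp(22.46 * t_c / (272.62 + t_c))

    def wbf_active(t_c, vapor_pressure_hpa):
        """True when droplets evaporate while ice crystals grow."""
        return t_c < 0.0 and e_sat_ice(t_c) < vapor_pressure_hpa < e_sat_water(t_c)

    t = -12.0                                    # mixed-phase cloud temperature (C)
    e = 0.5 * (e_sat_ice(t) + e_sat_water(t))    # ambient vapor pressure between the two
    print(e_sat_ice(t), e, e_sat_water(t), wbf_active(t, e))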
  500. A Thermal Infrared Radiation Parameterization for Atmospheric Studies

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)

    2001-01-01

    This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN database, the parameterization includes absorption by the major gases (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as by clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, different approaches to computing the transmission function are applied to different spectral bands and gases; the gaseous transmission function is computed using either the k-distribution method or a table look-up method. To include the effect of scattering by clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization computes fluxes to within 1% of high-spectral-resolution line-by-line calculations, and the cooling rate is computed accurately in the region extending from the surface to the 0.01-hPa level.
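The abstract notes that the optical thickness is scaled by the single-scattering albedo and asymmetry factor without giving the expression; a commonly used similarity-type scaling of this kind is tau_eff = (1 - omega*(1 + g)/2) * tau, evaluated below with placeholder cloud optical properties. This illustrates the general technique, not necessarily the exact form used in the documented scheme.

    # A commonly used similarity-type scaling that folds scattering into an
    # effective absorption optical thickness: extinction is reduced by the
    # forward-scattered fraction, leaving absorption plus backscatter.
    # The cloud optical properties below are placeholders.

    def scaled_optical_thickness(tau, omega, g):
        """Effective (absorption + backscatter) optical thickness."""
        return (1.0 - omega * (1.0 + g) / 2.0) * tau

    # Example: a cloud layer with tau = 4, single-scattering albedo 0.5, asymmetry 0.9
    print(scaled_optical_thickness(4.0, 0.5, 0.9))   # -> 2.1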