How to assess the impact of a physical parameterization in simulations of moist convection?
NASA Astrophysics Data System (ADS)
Grabowski, Wojciech
2017-04-01
A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or large-eddy simulation model) consists of a fluid flow solver combined with required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation describes a novel modeling methodology, piggybacking, that allows the impact of a physical parameterization on cloud dynamics to be studied with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep-convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
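The piggybacking idea can be sketched in a few lines. In this toy model (not Grabowski's actual setup; all coefficients are hypothetical), one flow is driven by microphysics scheme D, while scheme P diagnoses its own tendencies from the same flow without ever feeding back, so D-versus-P differences isolate the microphysical impact from flow-realization noise:

```python
# Toy illustration of piggybacking: one flow, two microphysics schemes.
# Scheme D (driver) feeds its heating back into the dynamics; scheme P
# (piggybacker) sees the same flow but never influences it.

def heating_D(w):
    # hypothetical driver scheme: heating proportional to updraft speed
    return 0.10 * max(w, 0.0)

def heating_P(w):
    # hypothetical alternative scheme: weaker heating, same flow input
    return 0.07 * max(w, 0.0)

def run(n_steps=100, dt=1.0):
    w = 1.0                      # toy "updraft speed"
    diag_D, diag_P = [], []
    for _ in range(n_steps):
        hD = heating_D(w)        # drives the flow
        hP = heating_P(w)        # diagnosed from the SAME flow, no feedback
        w += dt * (0.05 * hD - 0.01 * w)   # toy dynamics: buoyancy minus drag
        diag_D.append(hD)
        diag_P.append(hP)
    return w, diag_D, diag_P

w, dD, dP = run()
# Both schemes experienced identical dynamics, so comparing dD and dP gives
# a clean microphysical signal.
```
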
The application of depletion curves for parameterization of subgrid variability of snow
C. H. Luce; D. G. Tarboton
2004-01-01
Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snow-covered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...
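A depletion-curve parameterization of this kind can be sketched as follows. The tabulated curve below is hypothetical and for illustration only; the idea is that fractional snow-covered area (SCA) is looked up from the element-average SWE normalized by its seasonal maximum, and element-average melt is the point melt scaled by that fraction:

```python
import numpy as np

# Hypothetical dimensionless depletion curve: SCA as a function of
# SWE / max SWE (illustrative values, not from Luce & Tarboton).
_w_norm = np.array([0.0, 0.1, 0.3, 0.6, 1.0])   # SWE / max SWE
_sca    = np.array([0.0, 0.4, 0.8, 0.95, 1.0])  # fractional snow cover

def snow_covered_fraction(swe, swe_max):
    """Interpolate fractional SCA from the dimensionless depletion curve."""
    if swe_max <= 0.0:
        return 0.0
    return float(np.interp(swe / swe_max, _w_norm, _sca))

def element_melt(potential_melt, swe, swe_max):
    """Element-average melt: point melt scaled by the snow-covered fraction."""
    return potential_melt * snow_covered_fraction(swe, swe_max)
```
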
Parameterized reduced-order models using hyper-dual numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
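The hyper-dual-number technique itself is compact enough to sketch. A minimal class (a sketch of the technique, not Sandia's implementation) evaluates f(x + ε₁ + ε₂), where ε₁² = ε₂² = 0, and reads off f(x), f′(x) (twice), and f″(x) exactly, with no truncation error:

```python
# Minimal hyper-dual number: components (f0, f1, f2, f12) represent
# f0 + f1*e1 + f2*e2 + f12*e1*e2 with e1^2 = e2^2 = 0.

class HyperDual:
    def __init__(self, f0, f1=0.0, f2=0.0, f12=0.0):
        self.f0, self.f1, self.f2, self.f12 = f0, f1, f2, f12

    @staticmethod
    def _lift(x):
        return x if isinstance(x, HyperDual) else HyperDual(x)

    def __add__(self, o):
        o = HyperDual._lift(o)
        return HyperDual(self.f0 + o.f0, self.f1 + o.f1,
                         self.f2 + o.f2, self.f12 + o.f12)
    __radd__ = __add__

    def __mul__(self, o):
        o = HyperDual._lift(o)
        return HyperDual(self.f0 * o.f0,
                         self.f0 * o.f1 + self.f1 * o.f0,
                         self.f0 * o.f2 + self.f2 * o.f0,
                         self.f0 * o.f12 + self.f1 * o.f2
                         + self.f2 * o.f1 + self.f12 * o.f0)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2.0 * x   # f(x) = x^3 + 2x

x = HyperDual(2.0, 1.0, 1.0, 0.0)   # seed both dual parts with 1
y = f(x)
# y.f0 = f(2) = 12, y.f1 = y.f2 = f'(2) = 14, y.f12 = f''(2) = 12
```

Seeding both dual parts turns a single function evaluation into an exact first- and second-derivative computation, which is what makes the approach attractive for building parameterized ROMs.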
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Katsaros, Kristina B.
1994-01-01
Based on a geometric optics model and the assumption of an isotropic Gaussian surface slope distribution, the component of ocean surface microwave emissivity variation due to large-scale surface roughness is parameterized for the frequencies and approximate viewing angle of the Special Sensor Microwave/Imager. Independent geophysical variables in the parameterization are the effective (microwave frequency dependent) slope variance and the sea surface temperature. Using the same physical model, the change in the effective zenith angle of reflected sky radiation arising from large-scale roughness is also parameterized. Independent geophysical variables in this parameterization are the effective slope variance and the atmospheric optical depth at the frequency in question. Both of the above model-based parameterizations are intended for use in conjunction with empirical parameterizations relating effective slope variance and foam coverage to near-surface wind speed. These empirical parameterizations are the subject of a separate paper.
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in the case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
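The kind of singularity at issue can be seen in a toy example (illustrative, not a NURBS patch): the map F(u, v) = ((1 − v)·u, v) takes the unit square onto a triangle by collapsing the edge v = 1 to a single point, and its Jacobian determinant vanishes there, which is exactly what can spoil the regularity of test functions composed with F⁻¹:

```python
import numpy as np

# Degenerate parameterization of a triangle from the unit square.
def F(u, v):
    return np.array([(1.0 - v) * u, v])

def jacobian_det(u, v):
    dF_du = np.array([1.0 - v, 0.0])   # dF/du
    dF_dv = np.array([-u, 1.0])        # dF/dv
    return dF_du[0] * dF_dv[1] - dF_du[1] * dF_dv[0]   # = 1 - v

# The determinant is positive in the interior but vanishes on the
# collapsed edge v = 1, where every u maps to the same point (0, 1).
```
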
Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?
Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...
2016-10-20
Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities, and parameterizations which do provide vertical velocities have been subject to limited evaluation against what have until recently been scant observations. Atmospheric observations imply that the distribution of vertical velocities depends on the areas over which the vertical velocities are averaged. Distributions of vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Po-Lun; Rasch, Philip J.; Fast, Jerome D.
A suite of physical parameterizations (deep and shallow convection, turbulent boundary layer, aerosols, cloud microphysics, and cloud fraction) from the global climate model Community Atmosphere Model version 5.1 (CAM5) has been implemented in the regional model Weather Research and Forecasting with chemistry (WRF-Chem). A downscaling modeling framework with consistent physics has also been established in which both global and regional simulations use the same emissions and surface fluxes. The WRF-Chem model with the CAM5 physics suite is run at multiple horizontal resolutions over a domain encompassing the northern Pacific Ocean, northeast Asia, and northwest North America for April 2008, when the ARCTAS, ARCPAC, and ISDAC field campaigns took place. These simulations are evaluated against field campaign measurements, satellite retrievals, and ground-based observations, and are compared with simulations that use a set of common WRF-Chem parameterizations. This manuscript describes the implementation of the CAM5 physics suite in WRF-Chem, provides an overview of the modeling framework and an initial evaluation of the simulated meteorology, clouds, and aerosols, and quantifies the resolution dependence of the cloud and aerosol parameterizations. We demonstrate that some of the CAM5 biases, such as high estimates of cloud susceptibility to aerosols and the underestimation of aerosol concentrations in the Arctic, can be reduced simply by increasing horizontal resolution. We also show that the CAM5 physics suite performs similarly to a set of parameterizations commonly used in WRF-Chem, but produces higher ice and liquid water condensate amounts and near-surface black carbon concentrations. Further evaluations that use other mesoscale model parameterizations and perform other case studies are needed to determine whether one set of parameterizations consistently agrees better with observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Melkamu; Ye, Sheng; Li, Hongyi
2014-07-19
Subsurface stormflow is an important component of the rainfall-runoff response, especially in steep forested regions. However, its contribution is poorly represented in the current generation of land surface hydrological models (LSMs) and catchment-scale rainfall-runoff models. The lack of a physical basis in common parameterizations precludes a priori estimation (i.e., without calibration), which is a major drawback for prediction in ungauged basins, or for use in global models. This paper is aimed at deriving physically based parameterizations of the storage-discharge relationship relating to subsurface flow. These parameterizations are derived through a two-step up-scaling procedure: firstly, through simulations with a physically based (Darcian) subsurface flow model for idealized three-dimensional rectangular hillslopes, accounting for within-hillslope random heterogeneity of soil hydraulic properties, and secondly, through subsequent up-scaling to the catchment scale by accounting for between-hillslope and within-catchment heterogeneity of topographic features (e.g., slope). These theoretical simulation results produced parameterizations of the storage-discharge relationship in terms of soil hydraulic properties, topographic slope and their heterogeneities, which were consistent with results of previous studies. Yet, regionalization of the resulting storage-discharge relations across 50 actual catchments in the eastern United States, and a comparison of the regionalized results with equivalent empirical results obtained on the basis of analysis of observed streamflow recession curves, revealed a systematic inconsistency. It was found that the difference between the theoretical and empirically derived results could be explained, to first order, by climate in the form of a climatic aridity index.
This suggests a possible codependence of climate, soils, vegetation and topographic properties, and suggests that the subsurface flow parameterization needed for ungauged locations must account for both the physics of flow in heterogeneous landscapes, and the co-dependence of soil and topographic properties with climate, including possibly the mediating role of vegetation.
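The empirical side of this comparison, recession-curve analysis, can be sketched directly. For a power-law storage-discharge relation Q = a·Sᵇ, recession obeys −dQ/dt = α·Qᵝ; below, a synthetic recession with known (α, β) is generated and the parameters are recovered by log-log regression (all numbers hypothetical, for illustration):

```python
import numpy as np

# Synthetic recession curve from -dQ/dt = alpha * Q^beta, then recover
# (alpha, beta) by linear regression in log-log space.
alpha_true, beta_true = 0.05, 1.5
dt = 1.0                                   # days
Q = [10.0]                                 # initial discharge, mm/day (toy)
for _ in range(200):                       # explicit-Euler recession
    Q.append(Q[-1] - dt * alpha_true * Q[-1] ** beta_true)
Q = np.array(Q)

dQdt = -(Q[1:] - Q[:-1]) / dt              # -dQ/dt, paired with Q[:-1]
mask = (dQdt > 0) & (Q[:-1] > 0)
beta_est, log_alpha_est = np.polyfit(np.log(Q[:-1][mask]),
                                     np.log(dQdt[mask]), 1)
alpha_est = np.exp(log_alpha_est)
# alpha_est, beta_est recover (0.05, 1.5), since the Euler recursion makes
# the discrete -dQ/dt an exact power law of Q
```
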
A Physical Parameterization of Snow Albedo for Use in Climate Models.
NASA Astrophysics Data System (ADS)
Marshall, Susan Elaine
The albedo of a natural snow cover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically-based parameterization which is accurate (within +/- 3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal cycle, fixed hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine.
However, this parameterization offers a greater predictability for climate change experiments outside the range of current snow conditions because it is physically-based and not tuned to current empirical results.
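The qualitative behavior such a parameterization must capture can be sketched as follows. This is NOT the SNOALB formulation; it is a hedged toy in which albedo decreases roughly with the square root of grain radius, far more strongly in the near-IR than in the visible (the behavior predicted by Wiscombe-Warren-type theory), with all constants hypothetical:

```python
import math

# Toy grain-size-dependent snow albedo (hypothetical constants).
def snow_albedo(grain_radius_um, vis_fraction=0.5):
    """Return (visible, near-IR, broadband) albedo for a grain radius in microns."""
    r = math.sqrt(max(grain_radius_um, 1.0))
    vis = max(0.0, 0.98 - 0.002 * r)    # visible: weak grain-size dependence
    nir = max(0.0, 0.85 - 0.020 * r)    # near-IR: strong grain-size dependence
    broadband = vis_fraction * vis + (1.0 - vis_fraction) * nir
    return vis, nir, broadband

# New snow (~100 um grains) comes out brighter than old snow (~1000 um),
# the aging signal a physically based scheme provides and a fixed empirical
# albedo cannot.
```
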
NASA Astrophysics Data System (ADS)
Grell, G. A.; Freitas, S. R.; Olson, J.; Bela, M.
2017-12-01
We will start by providing a summary of the latest cumulus parameterization modeling efforts at NOAA's Earth System Research Laboratory (ESRL), on both regional and global scales. The physics package includes a scale-aware parameterization of subgrid cloudiness feedback to radiation (coupled PBL, microphysics, radiation, shallow and congestus-type convection), the stochastic Grell-Freitas (GF) scale- and aerosol-aware convective parameterization, and an aerosol-aware microphysics package. GF is based on a stochastic approach originally implemented by Grell and Devenyi (2002) and described in more detail in Grell and Freitas (2014, ACP). It was expanded to include PDFs for vertical mass flux, as well as modifications to improve the diurnal cycle. This physics package will be used on different scales, spanning global to cloud-resolving, to look at the impact on scalar transport and numerical weather prediction.
Straddling Interdisciplinary Seams: Working Safely in the Field, Living Dangerously With a Model
NASA Astrophysics Data System (ADS)
Light, B.; Roberts, A.
2016-12-01
Many excellent proposals for observational work have included language detailing how the proposers will appropriately archive their data and publish their results in peer-reviewed literature so that they may be readily available to the modeling community for parameterization development. While such division of labor may be both practical and inevitable, the assimilation of observational results and the development of observationally-based parameterizations of physical processes require care and feeding. Key questions include: (1) Is an existing parameterization accurate, consistent, and general? If not, it may be ripe for additional physics. (2) Do there exist functional working relationships between human modeler and human observationalist? If not, one or more may need to be initiated and cultivated. (3) If empirical observation and model development are a chicken/egg problem, how, given our lack of prescience and foreknowledge, can we better design observational science plans to meet the eventual demands of model parameterization? (4) Will the addition of new physics "break" the model? If so, then the addition may be imperative. In the context of these questions, we will make retrospective and forward-looking assessments of a now-decade-old numerical parameterization to treat the partitioning of solar energy at the Earth's surface where sea ice is present. While this so called "Delta-Eddington Albedo Parameterization" is currently employed in the widely-used Los Alamos Sea Ice Model (CICE) and appears to be standing the tests of accuracy, consistency, and generality, we will highlight some ideas for its ongoing development and improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, Taylor; Guo, Yi; Veers, Paul
Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.
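How a parameterized load spectrum feeds a fatigue check can be sketched generically. This is not DriveSE's actual spectrum or S-N data; the spectrum shape, scalings, and material constants below are all hypothetical, and damage is accumulated with Miner's rule against a power-law S-N curve:

```python
# Toy parameterized fatigue-load spectrum plus Miner's-rule damage sum.
def load_spectrum(rotor_diameter_m, design_life_yr, n_bins=20):
    """Hypothetical spectrum: cycle counts fall off as load amplitude rises."""
    total_cycles = 1.0e8 * design_life_yr / 20.0
    bins = []
    for i in range(1, n_bins + 1):
        amplitude = 50.0 * rotor_diameter_m * i / n_bins   # kN*m, toy scaling
        cycles = total_cycles / i ** 3                      # fewer large cycles
        bins.append((amplitude, cycles))
    return bins

def miner_damage(bins, s_ref=2.0e5, m=8.0):
    """Miner's rule with S-N curve N(S) = (S / s_ref)^(-m)."""
    damage = 0.0
    for amplitude, cycles in bins:
        n_allow = (amplitude / s_ref) ** (-m)
        damage += cycles / n_allow
    return damage   # design requires damage < 1
```

Because damage grows with amplitude to the power m, larger rotors accumulate damage much faster, which is why fatigue can overtake extreme loads as the governing sizing criterion.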
Development of Turbulent Biological Closure Parameterizations
2011-09-30
LONG-TERM GOAL: The long-term goals of this project are: (1) to develop a theoretical framework to quantify turbulence-induced NPZ interactions; and (2) to apply the theory to develop parameterizations to be used in realistic coupled physical-biological numerical models. OBJECTIVES: Connect the Goodman and Robinson (2008) statistically based PDF theory to Advection-Diffusion-Reaction (ADR) modeling of NPZ interaction.
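The reaction terms that an ADR framework transports can be illustrated with a minimal NPZ (nutrient-phytoplankton-zooplankton) model. The parameter values below are hypothetical; the useful structural property is that the closed system conserves total nitrogen, a standard check on any implementation:

```python
# Minimal NPZ reaction terms (Franks-style; hypothetical parameters).
def npz_step(N, P, Z, dt=0.01,
             vmax=1.0, ks=0.5,      # max uptake rate, half-saturation
             g=0.5, gamma=0.3,      # grazing rate, assimilation efficiency
             d=0.1):                # zooplankton loss rate
    uptake = vmax * N / (ks + N) * P
    grazing = g * P * Z
    loss = d * Z
    dN = -uptake + (1.0 - gamma) * grazing + loss
    dP = uptake - grazing
    dZ = gamma * grazing - loss
    return N + dt * dN, P + dt * dP, Z + dt * dZ

N, P, Z = 1.0, 0.2, 0.05
for _ in range(1000):
    N, P, Z = npz_step(N, P, Z)
# dN + dP + dZ = 0 by construction, so N + P + Z stays at its initial value.
```

In an ADR model these same source terms appear on the right-hand side of advection-diffusion equations for each tracer; turbulence closure then concerns how sub-grid fluctuations modify the nonlinear uptake and grazing terms.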
A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...
Physics-based distributed snow models in the operational arena: Current and future challenges
NASA Astrophysics Data System (ADS)
Winstral, A. H.; Jonas, T.; Schirmer, M.; Helbig, N.
2017-12-01
The demand for modeling tools robust to climate change and weather extremes, along with coincident increases in computational capabilities, has led to an increase in the use of physics-based snow models in operational applications. Current operational applications include those of the WSL-SLF across Switzerland, the ASO in California, and the USDA-ARS in Idaho. While physics-based approaches offer many advantages, there remain limitations and modeling challenges. The most evident limitation remains computation times that often limit forecasters to a single, deterministic model run. Other limitations, however, remain less conspicuous amidst the assumption that these models require little to no calibration because of their foundation on physical principles. Yet all energy balance snow models seemingly contain parameterizations or simplifications of processes where validation data are scarce or present understanding is limited. At the research-basin scale where many of these models were developed, these modeling elements may prove adequate. However, when applied over large areas, spatially invariable parameterizations of snow albedo, roughness lengths and atmospheric exchange coefficients - all vital to determining the snowcover energy balance - become problematic. Moreover, as we apply models over larger grid cells, the representation of sub-grid variability, such as the snow-covered fraction, adds to the challenges. Here, we will demonstrate some of the major sensitivities of distributed energy balance snow models to particular model constructs, highlight the need for advanced and spatially flexible methods and parameterizations, and prompt the community toward open dialogue and future collaborations to further modeling capabilities.
NASA Astrophysics Data System (ADS)
Lamraoui, F.; Booth, J. F.; Naud, C. M.
2017-12-01
The representation of subgrid-scale processes in low-level marine clouds located in the post-cold-frontal region poses a serious challenge for climate models. More precisely, the boundary layer parameterizations are predominantly designed for individual regimes that evolve gradually over time and do not accommodate a cold-front passage that can modify the boundary layer rapidly. Also, the microphysics schemes respond differently to the rapid boundary layer development, especially under unstable conditions. To improve the understanding of cloud physics in the post-cold-frontal region, the present study focuses on exploring the relationship between cloud properties, local processes and large-scale conditions. In order to address these questions, we explore the WRF sensitivity to the interaction between various combinations of boundary layer and microphysics parameterizations, including the Community Atmosphere Model version 5 (CAM5) physics package, in a perturbed-physics ensemble. Then, we evaluate these simulations against ground-based ARM observations over the Azores. The WRF-based simulations demonstrate particular sensitivities of the marine cold front passage and the associated post-cold-frontal clouds to the domain size, the resolution and the physical parameterizations. First, it is found in multiple case studies that the model cannot generate the cold front passage when the domain size is larger than 3000 km². Instead, the modeled cold front stalls, which shows the importance of properly capturing the synoptic-scale conditions. The simulation reveals a persistent delay in capturing the cold front passage and also an underestimated duration of the post-cold-frontal conditions. Analysis of the perturbed-physics ensemble shows that changing the microphysics scheme leads to larger differences in the modeled clouds than changing the boundary layer scheme.
The in-cloud heating tendencies are analyzed to explain this sensitivity.
NASA Astrophysics Data System (ADS)
Langeveld, Willem G. J.
The most widely used technology for the non-intrusive active inspection of cargo containers and trucks is x-ray radiography at high energies (4-9 MeV). Technologies such as dual-energy imaging, spectroscopy, and statistical waveform analysis can be used to estimate the effective atomic number (Zeff) of the cargo from the x-ray transmission data, because the mass attenuation coefficient depends on energy as well as atomic number Z. The estimated effective atomic number, Zeff, of the cargo then leads to improved detection capability of contraband and threats, including special nuclear materials (SNM) and shielding. In this context, the exact meaning of effective atomic number (for mixtures and compounds) is generally not well-defined. Physics-based parameterizations of the mass attenuation coefficient have been given in the past, but usually for a limited low-energy range. Definitions of Zeff have been based, in part, on such parameterizations. Here, we give an improved parameterization at low energies (20-1000 keV) which leads to a well-defined Zeff. We then extend this parameterization up to energies relevant for cargo inspection (10 MeV), and examine what happens to the Zeff definition at these higher energies.
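For context, the classical power-law ("Mayneord-type") effective atomic number for a mixture can be computed directly; the paper's point is that such definitions are only well-founded once the mass attenuation coefficient is parameterized carefully, and at high energies the picture changes. The exponent p ≈ 2.94 is the textbook value for the photoelectric-dominated low-energy regime:

```python
# Classical power-law effective atomic number for a mixture or compound.
def z_eff(mass_fractions, Z, A, p=2.94):
    """mass_fractions, Z, A: parallel lists over the elements in the mixture."""
    electron_frac = [w * z / a for w, z, a in zip(mass_fractions, Z, A)]
    total = sum(electron_frac)
    weighted = sum(f * z ** p for f, z in zip(electron_frac, Z))
    return (weighted / total) ** (1.0 / p)

# Water: 11.19% H, 88.81% O by mass
zw = z_eff([0.1119, 0.8881], [1, 8], [1.008, 15.999])
# zw comes out close to the textbook value of ~7.4 for water
```
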
FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard C. J. Somerville
2009-02-27
Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments ofmore » cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.« less
Parameterized Cross Sections for Pion Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.
2000-01-01
An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and knowledge of the output particle spectrum is required given the input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of different cross section parameterizations. Based on this comparison, parameterizations of neutral and charged pion cross sections are provided that give a very accurate description of the experimental data. Lorentz-invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.
R. B. Foltz; W. J. Elliot; N. S. Wagenbrenner
2011-01-01
Forested areas disturbed by access roads produce large amounts of sediment. One method to predict erosion and, hence, manage forest roads is the use of physically based soil erosion models. A perceived advantage of a physically based model is that it can be parameterized at one location and applied at another location with similar soil texture or geological parent...
Spatio-temporal Eigenvector Filtering: Application on Bioenergy Crop Impacts
NASA Astrophysics Data System (ADS)
Wang, M.; Kamarianakis, Y.; Georgescu, M.
2017-12-01
A suite of 10-year ensemble-based simulations was conducted to investigate the hydroclimatic impacts of large-scale deployment of perennial bioenergy crops across the continental United States. Given the large size of the simulated dataset (about 60 TB), traditional hierarchical spatio-temporal statistical modelling cannot be implemented for the evaluation of physics parameterizations and biofuel impacts. In this work, we propose a filtering algorithm that takes into account the spatio-temporal autocorrelation structure of the data while avoiding spatial confounding. This method is used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations and observational datasets. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.
NASA Astrophysics Data System (ADS)
Bezruczko, N.; Stanley, T.; Battle, M.; Latty, C.
2016-11-01
Despite broad sweeping pronouncements by international research organizations that social sciences are being integrated into global research programs, little attention has been directed toward obstacles blocking productive collaborations. In particular, social sciences routinely implement nonlinear, ordinal measures, which fundamentally inhibit integration with overarching scientific paradigms. The widely promoted general linear model in contemporary social science methods is largely based on untransformed scores and ratings, which are neither objective nor linear. This issue has historically separated the physical and social sciences, a separation this report asserts is unnecessary. In this research, nonlinear, subjective caregiver ratings of confidence to care for children supported by complex medical technologies were transformed to an objective scale defined by logits (N=70). Transparent linear units from this transformation provided foundational insights into the measurement properties of a social-humanistic caregiving construct, which clarified physical and social caregiver implications. Parameterized items and ratings were also subjected to multivariate hierarchical analysis, then decomposed to demonstrate theoretical coherence (R² > .50), which provided further support for convergence of mathematical parameterization, physical expectations, and a social-humanistic construct. These results present substantial support for improving integration of the social sciences with contemporary scientific research programs by emphasizing construction of common variables with objective, linear units.
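The core of the rating-to-logit transformation described above is the log-odds mapping, which converts bounded, nonlinear proportions into an unbounded linear scale. The helper below is a minimal illustration of that mapping only; a full Rasch-style calibration estimates person and item parameters jointly and is not reproduced here.

```python
import math

def proportion_to_logit(endorsed, total):
    """Map an ordinal endorsement proportion to log-odds (logit) units.
    0.5 maps to 0 logits; higher endorsement maps to positive logits."""
    p = endorsed / total
    return math.log(p / (1.0 - p))

print(proportion_to_logit(35, 70))  # 0.0: 50% endorsement sits at the scale origin
print(round(proportion_to_logit(63, 70), 2))  # 90% endorsement -> ~2.2 logits
```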
NASA Technical Reports Server (NTRS)
Bretherton, Christopher S.
2002-01-01
The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two- and three-dimensional eddy-resolving and large-eddy simulation (LES) models of turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize boundary layer structure and cloud amount, type, and thickness as functions of the large-scale conditions predicted by global climate models. The principal achievements of the project were as follows: (1) development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall, the forecast model predicted most of the major precipitation events and synoptic variability observed over the year of observation at the SHEBA ice camp.
Parameterization Interactions in Global Aquaplanet Simulations
NASA Astrophysics Data System (ADS)
Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João.
2018-02-01
Global climate simulations rely on parameterizations of physical processes that occur at scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model, employing moist convection and boundary layer (BL) parameterizations that span a range of properties. Significant differences are noted in the simulated precipitation amounts and their partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics, the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that differ in the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.
Sensitivity analysis with the regional climate model COSMO-CLM over the CORDEX-MENA domain
NASA Astrophysics Data System (ADS)
Bucchignani, E.; Cattaneo, L.; Panitz, H.-J.; Mercogliano, P.
2016-02-01
The results of a sensitivity study based on ERA-Interim-driven COSMO-CLM simulations over the Middle East-North Africa (CORDEX-MENA) domain are presented. All simulations were performed at 0.44° spatial resolution. The purpose of this study was to ascertain model performance with respect to changes in physical and tuning parameters, mainly related to the surface, convection, radiation and cloud parameterizations. Evaluation was performed for the whole CORDEX-MENA region and six sub-regions, comparing a set of 26 COSMO-CLM runs against a combination of available ground observations, satellite products and reanalysis data to assess temperature, precipitation, cloud cover and mean sea level pressure. The model proved to be very sensitive to changes in physical parameters. The optimized configuration allows COSMO-CLM to better simulate the main climate features of this area. Its main characteristics are a new parameterization of albedo, based on Moderate Resolution Imaging Spectroradiometer data, and a new parameterization of aerosol, based on NASA-GISS AOD distributions. With this configuration, Mean Absolute Error values for the considered variables are as follows: about 1.2 °C for temperature, about 15 mm/month for precipitation, about 9 % for total cloud cover, and about 0.6 hPa for mean sea level pressure.
NASA Technical Reports Server (NTRS)
Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.
2000-01-01
The objective of this investigation was to study the role of shallow convection in the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study, and the latter two have been improved significantly to extend their capabilities.
Spectral cumulus parameterization based on cloud-resolving model
NASA Astrophysics Data System (ADS)
Baba, Yuya
2018-02-01
We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated shallower and more diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive precipitation bias in the western Pacific region and the positive bias in outgoing shortwave radiation over the ocean. The new scheme also better simulated convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements derive from the modified parameterization of the entrainment rate, which suppresses an excessive increase of entrainment and thereby an excessive increase of low-level clouds.
NASA Astrophysics Data System (ADS)
Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia
2018-06-01
Unresolved small-scale orographic (SSO) drags are parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drags are represented by adding a sink term in the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as the feedbacks to the momentum tendencies on the first model level in planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias has been much alleviated by adopting the SSOP scheme, in addition to reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimation over the complex regions in the southwest of Guangdong.
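The "sink term in the momentum equations" can be illustrated schematically as a quadratic drag on the wind components that always opposes the resolved flow. The functional form and coefficient below are assumptions chosen for illustration, not the exact GRAPES TMM scheme, which additionally scales the drag with the unresolved mountain height.

```python
def sso_momentum_sink(u, v, c_d):
    """Schematic SSO drag tendency: (du/dt, dv/dt) = -c_d * |V| * (u, v).
    The sink is anti-parallel to the wind, so it can only decelerate it."""
    speed = (u * u + v * v) ** 0.5
    return -c_d * speed * u, -c_d * speed * v

du, dv = sso_momentum_sink(3.0, 4.0, 0.1)  # |V| = 5 m/s
print(du, dv)  # -1.5 -2.0
```

Because the tendency is proportional to wind speed times the wind component, the scheme damps strong low-level winds most, which is consistent with the reported reduction of the surface wind speed bias.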
Prototype Mcs Parameterization for Global Climate Models
NASA Astrophysics Data System (ADS)
Moncrieff, M. W.
2017-12-01
Excellent progress has been made with observational, numerical and theoretical studies of MCS processes, but the parameterization of those processes remains in a dire state and is missing from GCMs. The perceived complexity of the distribution, type, and intensity of organized precipitation systems has arguably deterred attention and stifled the development of adequate parameterizations. TRMM observations imply links between convective organization and large-scale meteorological features in the tropics and subtropics that are inadequately treated by GCMs. This calls for an improved physical-dynamical treatment of organized convection to enable the next generation of GCMs to reliably address a slew of challenges. The multiscale coherent structure parameterization (MCSP) paradigm is based on the fluid-dynamical concept of coherent structures in turbulent environments. The effects of vertical shear on MCS dynamics, implemented as 2nd-baroclinic convective heating and convective momentum transport, are based on Lagrangian conservation principles, nonlinear dynamical models, and self-similarity. The prototype MCS parameterization, a minimalist proof-of-concept, is applied in the NCAR Community Atmosphere Model, Version 5.5 (CAM 5.5). The MCSP generates convectively coupled tropical waves and large-scale precipitation features, notably in the Indo-Pacific warm-pool and Maritime Continent region, a center of action for weather and climate variability around the globe.
NASA Technical Reports Server (NTRS)
Tapiador, Francisco; Tao, Wei-Kuo; Angelis, Carlos F.; Martinez, Miguel A.; Marcos, Cecilia; Rodriguez, Antonio; Hou, Arthur; Shi, Jainn-Jong
2012-01-01
Ensembles of numerical model forecasts are of interest to operational early warning forecasters as the spread of the ensemble provides an indication of the uncertainty of the alerts, and the mean value is deemed to outperform the forecasts of the individual models. This paper explores two ensembles on a severe weather episode in Spain, aiming to ascertain the relative usefulness of each one. One ensemble uses sensible choices of physical parameterizations (precipitation microphysics, land surface physics, and cumulus physics) while the other follows a perturbed initial conditions approach. The results show that, depending on the parameterizations, large differences can be expected in terms of storm location, spatial structure of the precipitation field, and rain intensity. It is also found that the spread of the perturbed initial conditions ensemble is smaller than the dispersion due to physical parameterizations. This confirms that in severe weather situations operational forecasts should address moist physics deficiencies to realize the full benefits of the ensemble approach, in addition to optimizing initial conditions. The results also provide insights into differences in simulations arising from ensembles of weather models using several combinations of different physical parameterizations.
A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone
NASA Astrophysics Data System (ADS)
Filipot, J.
2010-12-01
A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. The transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. The parameterization is further tested in the WAVEWATCH III code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep- or shallow-water parameterizations.
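For orientation, the classical shallow-water ancestor of this "probability times dissipation rate" structure is the Battjes-Janssen (1978) expression, in which the fraction of breaking waves multiplies a bore-like dissipation rate. It is sketched below only to illustrate that structure; the parameterization in the abstract generalizes the depth and wavelength dependence rather than using this exact form.

```python
def bj_mean_dissipation(q_b, f_mean, h_max, rho=1025.0, g=9.81, alpha=1.0):
    """Battjes-Janssen-type mean dissipation per unit area [W/m^2]:
    D = (alpha / 4) * Q_b * rho * g * f_mean * H_max**2,
    where Q_b is the fraction of breaking waves, f_mean a mean frequency [Hz],
    and H_max the depth-limited maximum wave height [m]."""
    return 0.25 * alpha * q_b * rho * g * f_mean * h_max ** 2

# Half the waves breaking, 10 s mean period, 2 m depth-limited height:
print(round(bj_mean_dissipation(0.5, 0.1, 2.0), 1))  # ~502.8 W/m^2
```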
Physically-based modeling of drag force caused by natural woody vegetation
NASA Astrophysics Data System (ADS)
Järvelä, J.; Aberle, J.
2014-12-01
Riparian areas and floodplains are characterized by woody vegetation, which is an essential feature to be accounted for in many hydro-environmental models. For applications including flood protection, river restoration and modelling of sediment processes, there is a need to improve the reliability of flow resistance estimates. Conventional methods such as the use of lumped resistance coefficients or simplistic cylinder-based drag force equations can result in significant errors, as these methods do not adequately address the effect of foliage and reconfiguration of flexible plant parts under flow action. To tackle the problem, physically-based methods relying on objective and measurable vegetation properties are advantageous for describing complex vegetation. We have conducted flume and towing tank investigations with living and artificial plants, both in arrays and with isolated plants, providing new insight into advanced parameterization of natural vegetation. The stem, leaf and total areas of the trees were confirmed to be suitable characteristic dimensions for estimating flow resistance. Consequently, we propose the use of leaf area index and leaf-to-stem-area ratio to achieve better drag force estimates. Novel remote sensing techniques including laser scanning have become available for effective collection of the required data. The benefits of the proposed parameterization have been clearly demonstrated in our newest experimental studies, but it remains to be investigated to what extent the parameter values are species-specific and how they depend on local habitat conditions. The purpose of this contribution is to summarize developments in the estimation of vegetative drag force based on physically-based approaches, as the latest research results are somewhat dispersed.
In particular, concerning woody vegetation we seek to discuss three issues: 1) parameterization of reconfiguration with the Vogel exponent; 2) the advantages of parameterizing plants with the leaf area index and leaf-to-stem-area ratio; and 3) the effect of plant scale (sizes from twigs to mature trees). To analyze these issues we use experimental data from the authors' research teams as well as from other researchers. The results are expected to be useful for the design of future experimental campaigns and the development of drag force models.
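The Vogel-exponent idea (issue 1 above) can be sketched as a modification of the classical quadratic drag law: a negative exponent chi weakens the velocity dependence as the flexible plant streamlines. The reference velocity and parameter values below are illustrative, not species-specific calibrations.

```python
def vegetation_drag(u, c_d, area, chi, u_ref=1.0, rho=1000.0):
    """Drag force [N] on flexible vegetation with reconfiguration:
    F = 0.5 * rho * c_d * area * u**2 * (u / u_ref)**chi.
    chi = 0 recovers the rigid (cylinder-like) quadratic law;
    chi < 0 reduces drag growth as the plant reconfigures."""
    return 0.5 * rho * c_d * area * u ** 2 * (u / u_ref) ** chi

print(vegetation_drag(2.0, 1.0, 1.0, 0.0))   # rigid: 2000.0 N
print(vegetation_drag(2.0, 1.0, 1.0, -1.0))  # reconfiguring: 1000.0 N
```

In leaf-area-based formulations of this family, `area` would be replaced by a leaf area (via leaf area index and leaf-to-stem-area ratio) rather than a projected frontal area.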
NASA Astrophysics Data System (ADS)
Park, Jun; Hwang, Seung-On
2017-11-01
The impact of a spectral nudging technique on the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulation sets combining two shortwave radiation and four land surface model schemes, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than that from using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by securing consistency with the large-scale forcing over the regional domain. This in turn indirectly helps the two physical parameterizations to produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in a high-resolution downscaled climate.
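The essence of spectral nudging is that only the largest-scale Fourier modes of a model field are relaxed toward the driving analysis, leaving the small scales free to develop. The sketch below illustrates that idea on a periodic 2-D field; the wavenumber cutoff and relaxation coefficient are illustrative, not the values used in the study.

```python
import numpy as np

def spectral_nudge(field, analysis, n_keep=3, alpha=0.1):
    """Relax only low-wavenumber modes of a periodic 2-D field toward an analysis.
    n_keep: retained wavenumbers per dimension (each sign of the FFT spectrum);
    alpha: relaxation strength per call (0 = no nudging, 1 = full replacement)."""
    f_hat = np.fft.fft2(field)
    a_hat = np.fft.fft2(analysis)
    mask = np.zeros(field.shape, dtype=bool)   # select the large-scale corners
    mask[:n_keep, :n_keep] = True
    mask[:n_keep, -n_keep:] = True
    mask[-n_keep:, :n_keep] = True
    mask[-n_keep:, -n_keep:] = True
    f_hat[mask] += alpha * (a_hat[mask] - f_hat[mask])
    return np.real(np.fft.ifft2(f_hat))
```

With `alpha = 0` the field is untouched, while a mask covering all wavenumbers with `alpha = 1` reproduces the analysis exactly; intermediate settings constrain only the large scales, mirroring how the outermost-domain nudging preserves consistency with the driving analysis.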
Evaluation of Aerosol-cloud Interaction in the GISS Model E Using ARM Observations
NASA Technical Reports Server (NTRS)
DeBoer, G.; Bauer, S. E.; Toto, T.; Menon, Surabi; Vogelmann, A. M.
2013-01-01
Observations from the US Department of Energy's Atmospheric Radiation Measurement (ARM) program are used to evaluate the ability of the NASA GISS ModelE global climate model to reproduce observed interactions between aerosols and clouds. Included in the evaluation are comparisons of basic meteorology and aerosol properties, droplet activation, effective radius parameterizations, and surface-based evaluations of aerosol-cloud interactions (ACI). Differences between the simulated and observed ACI are generally large, but these differences may result partially from the vertical distribution of aerosol in the model rather than from the representation of the physical processes governing the interactions between aerosols and clouds. Compared to the observations, ModelE often features elevated droplet concentrations for a given aerosol concentration, indicating that the activation parameterizations used may be too aggressive. Additionally, parameterizations for effective radius commonly used in models were tested against ARM observations, and no clearly superior parameterization emerged for the cases reviewed here. This lack of consensus is demonstrated to result in potentially large, statistically significant differences in surface radiative budgets, should one parameterization be chosen over another.
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE), with the objective function tailored to the various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization, ideally relying not only on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
Water Quality Monitoring for Lake Constance with a Physically Based Algorithm for MERIS Data.
Odermatt, Daniel; Heege, Thomas; Nieke, Jens; Kneubühler, Mathias; Itten, Klaus
2008-08-05
A physically based algorithm is used for automatic processing of MERIS level 1B full resolution data. The algorithm is originally used with input variables for optimization with different sensors (i.e. channel recalibration and weighting), aquatic regions (i.e. specific inherent optical properties) or atmospheric conditions (i.e. aerosol models). For operational use, however, a lake-specific parameterization is required, representing an approximation of the spatio-temporal variation in atmospheric and hydrooptic conditions, and accounting for sensor properties. The algorithm performs atmospheric correction with a LUT for at-sensor radiance, and a downhill simplex inversion of chl-a, sm and y from subsurface irradiance reflectance. These outputs are enhanced by a selective filter, which makes use of the retrieval residuals. Regular chl-a sampling measurements by the Lake's protection authority coinciding with MERIS acquisitions were used for parameterization, training and validation.
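The downhill-simplex inversion step can be sketched as below with a toy forward model. The band set, coefficients, and parameter names (chl, sm, y) are illustrative stand-ins, not the actual MERIS bio-optical model or its calibrated inherent optical properties.

```python
import numpy as np
from scipy.optimize import minimize

BANDS = np.array([0.44, 0.49, 0.56, 0.62])  # wavelengths in micrometers (assumed)

def forward(params):
    """Toy subsurface reflectance model R(lambda; chl, sm, y):
    suspended matter brightens, chlorophyll absorbs near 0.44 um,
    yellow substance absorbs toward short wavelengths."""
    chl, sm, y = params
    return (0.02 + 0.01 * sm
            - 0.004 * chl * np.exp(-((BANDS - 0.44) / 0.05) ** 2)
            - 0.003 * y / BANDS)

def invert(r_obs, x0=(1.0, 1.0, 0.1)):
    """Nelder-Mead (downhill simplex) least-squares inversion of the forward model,
    mirroring the inversion strategy described in the abstract."""
    cost = lambda p: np.sum((forward(p) - r_obs) ** 2)
    res = minimize(cost, x0, method='Nelder-Mead',
                   options={'xatol': 1e-10, 'fatol': 1e-14, 'maxiter': 5000})
    return res.x

truth = np.array([3.0, 2.0, 0.5])
estimate = invert(forward(truth))  # recovers (chl, sm, y) up to solver tolerance
```

Because the simplex method is derivative-free, it tolerates forward models built from lookup tables, as in the atmospheric-correction LUT mentioned above.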
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, M. S.; Keene, William C.; Zhang, J.
2016-11-08
Primary marine aerosol (PMA) is emitted into the atmosphere via breaking wind waves on the ocean surface. Most parameterizations of PMA emissions use 10-meter wind speed as a proxy for wave action. This investigation coupled the third-generation prognostic WAVEWATCH-III wind-wave model within a coupled Earth system model (ESM) to drive PMA production using wave energy dissipation rate – analogous to whitecapping – in place of 10-meter wind speed. The wind speed parameterization did not capture basin-scale variability in the relations between wind and wave fields. Overall, the wave parameterization did not improve the comparison between simulated and measured AOD or Na+, highlighting large remaining uncertainties in model physics. Results confirm the efficacy of prognostic wind-wave models for air-sea exchange studies coupled with laboratory- and field-based characterizations of the primary physical drivers of PMA production. No discernible correlations were evident between simulated PMA fields and observed chlorophyll or sea surface temperature.
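For reference, the canonical 10-m wind-speed proxy that wave-based schemes of this kind aim to replace is the Monahan and O'Muircheartaigh (1980) whitecap-coverage power law, sketched below; whether it stands in for the exact source function used in this ESM is an assumption.

```python
def whitecap_fraction(u10):
    """Fractional whitecap coverage from 10-m wind speed [m/s]:
    W = 3.84e-6 * U10**3.41 (Monahan & O'Muircheartaigh, 1980)."""
    return 3.84e-6 * u10 ** 3.41

print(round(whitecap_fraction(10.0), 4))  # ~0.0099, i.e. about 1% coverage at 10 m/s
```

The steep U10**3.41 dependence is exactly why a single-variable wind proxy struggles to capture basin-scale differences in wave development for the same wind speed.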
NASA Astrophysics Data System (ADS)
Xie, Xin
Microphysics and convection parameterizations are two key components in a climate model for simulating realistic climatology and variability of cloud distribution and the cycles of energy and water. When a model has varying grid size or simulations have to be run at different resolutions, a scale-aware parameterization is desirable so that model parameters do not have to be tuned to a particular grid size. The subgrid variability of cloud hydrometeors is known to impact microphysics processes in climate models and is found to depend strongly on spatial scale. A scale-aware liquid cloud subgrid variability parameterization is derived and implemented in the Community Earth System Model (CESM) in this study using long-term radar-based ground measurements from the Atmospheric Radiation Measurement (ARM) program. When used in the default CESM1 with the finite-volume dynamical core, where a constant liquid inhomogeneity parameter was assumed, the newly developed parameterization reduces the cloud inhomogeneity in high latitudes and increases it in low latitudes. This is due both to the smaller grid size in high latitudes and larger grid size in low latitudes in the longitude-latitude grid setting of CESM, and to the variation of the stability of the atmosphere. Single-column model and general circulation model (GCM) sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it in low latitudes. The current CESM1 simulation suffers from biases in both the Pacific double-ITCZ precipitation and a weak Madden-Julian oscillation (MJO). Previous studies show that convective parameterization with multiple plumes may have the capability to alleviate such biases in a more uniform and physical way. A multiple-plume mass flux convective parameterization is used in the Community Atmosphere Model (CAM) to investigate the sensitivity of MJO simulations.
We show that the MJO simulation is sensitive to the entrainment rate specification and that shallow plumes can generate and sustain MJO propagation in the model.
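The scale dependence of subgrid variability described above can be illustrated with a toy calculation (not the ARM-derived parameterization itself): block-averaging a synthetic fine-scale cloud water field shows that a larger grid box contains more subgrid inhomogeneity, measured here as the within-box relative variance. The field construction and box sizes are purely illustrative assumptions.

```python
import math
import random

random.seed(0)

# Synthetic fine-scale cloud water field: a smooth large-scale trend plus
# small-scale noise (an illustrative stand-in, not a real ARM retrieval).
n = 10000
field = [1.0 + 0.5 * math.sin(2.0 * math.pi * i / 2000.0) + random.gauss(0.0, 0.05)
         for i in range(n)]

def subgrid_relative_variance(field, box):
    """Within-box relative variance var(q)/mean(q)^2, averaged over all boxes."""
    vals = []
    for start in range(0, len(field) - box + 1, box):
        chunk = field[start:start + box]
        m = sum(chunk) / box
        v = sum((x - m) ** 2 for x in chunk) / box
        vals.append(v / (m * m))
    return sum(vals) / len(vals)

fine = subgrid_relative_variance(field, 50)      # small ("high-resolution") grid box
coarse = subgrid_relative_variance(field, 2000)  # large ("coarse") grid box
print(fine, coarse)  # the larger box contains more subgrid inhomogeneity
```

A coarse box absorbs the large-scale trend into its subgrid variance, which is why a constant inhomogeneity parameter cannot serve all grid sizes at once.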
Monitoring Marine Weather Systems Using Quikscat and TRMM Data
NASA Technical Reports Server (NTRS)
Liu, W.; Tang, W.; Datta, A.; Hsu, C.
1999-01-01
We neither understand nor are able to predict marine storms, particularly tropical cyclones, sufficiently well, because ground-based measurements are sparse and operational numerical weather prediction models have neither sufficient spatial resolution nor accurate parameterization of the physics.
Principal axes estimation using the vibration modes of physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2008-06-01
This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using principal axes estimation. The approach relies on the object's frequency-based features, which are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both orientation and scaling estimation.
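For orientation estimation specifically, a standard baseline (the classical image-moment method, not the deformable-model technique of this paper) recovers the principal axis of a 2-D point set from its second central moments:

```python
import math

def principal_axis_angle(points):
    """Orientation of a 2-D point set from its second central moments.

    This is the standard moment-based baseline for principal-axis estimation,
    shown for comparison only; it is not the paper's frequency-based method.
    Returns the angle (radians) of the major axis relative to the x-axis.
    """
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    mu20 = sum((p[0] - cx) ** 2 for p in points) / n
    mu02 = sum((p[1] - cy) ** 2 for p in points) / n
    mu11 = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    # The major axis bisects the angle returned by atan2(2*mu11, mu20 - mu02).
    return 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)

# Example: points along a line inclined at 30 degrees.
theta = math.radians(30.0)
pts = [(t * math.cos(theta), t * math.sin(theta)) for t in range(-50, 51)]
print(math.degrees(principal_axis_angle(pts)))  # close to 30
```

Moment-based estimates degrade for near-circular objects (mu20 ≈ mu02, mu11 ≈ 0), which is one motivation for richer shape parameterizations like the one proposed here.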
A Goddard Multi-Scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.;
2008-01-01
Numerical cloud resolving models (CRMs), which are based the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that Numerical Weather Prediction (NWP) and regional scale model can be run in grid size similar to cloud resolving model through nesting technique. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a szrper-parameterization or multi-scale modeling -framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign can provide initial conditions as well as validation through utilizing the Earth Satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists a coupled GCM-CRM (or MMF); a state-of-the-art weather research forecast model (WRF) and a cloud-resolving model (Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. 
In addition, a comprehensive unified Earth Satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator and examples is presented in this article.
NASA Astrophysics Data System (ADS)
Huang, Dong; Liu, Yangang
2014-12-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
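As a minimal illustration of the statistical quantity involved (not the authors' radiation scheme), the sketch below estimates a spatial autocorrelation function from a synthetic 1-D field with spatial memory; the first-order autoregressive field is an assumed stand-in for along-track cloud structure.

```python
import random

random.seed(1)

def autocorrelation(field, lag):
    """Sample autocorrelation of a 1-D field at the given lag (illustrative)."""
    n = len(field)
    mean = sum(field) / n
    var = sum((x - mean) ** 2 for x in field) / n
    cov = sum((field[i] - mean) * (field[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

# Synthetic field with spatial memory: an AR(1) process whose one-step
# correlation is phi, so the true autocorrelation decays like phi**lag.
phi = 0.9
field = [0.0]
for _ in range(4999):
    field.append(phi * field[-1] + random.gauss(0.0, 1.0))

r1 = autocorrelation(field, 1)    # near phi
r20 = autocorrelation(field, 20)  # much smaller: correlation decays with lag
print(r1, r20)
```

A decaying autocorrelation function like this is exactly the structural information that a subgrid probability distribution function alone discards.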
NASA Astrophysics Data System (ADS)
Kitanidis, P. K.
2017-08-01
Dispersion in porous media is the combined effect of variability in fluid velocity and concentration at scales smaller than those resolved, which contributes to spreading and mixing. It is usually introduced in textbooks and taught in classes through the Fick-Scheidegger parameterization, which is presented as a scientific law of universal validity. This parameterization is based on observations in bench-scale laboratory experiments using homogeneous media. Fickian means that the dispersive flux is proportional to the gradient of the resolved concentration, while the Scheidegger parameterization is a particular way to compute the dispersion coefficients. The unresolved scales are thus associated with the pore-grain geometry that is ignored when the composite pore-grain medium is replaced by a homogeneous continuum. However, the challenge faced in practice is how to account for dispersion in numerical models that discretize the domain into blocks, often cubic meters in size, that contain multiple geologic facies. Although the Fick-Scheidegger parameterization is by far the most commonly used, its validity has been questioned. This work presents a method of teaching dispersion that emphasizes its physical basis and highlights the conditions under which a Fickian dispersion model is justified. In particular, we show that Fickian dispersion has a solid physical basis provided that an equilibrium condition is met. The issue of the Scheidegger parameterization is more complex, but it is shown that the assumption that the dispersion coefficients scale linearly with the mean velocity is often reasonable, at least as a practical approximation, though not necessarily always appropriate. In hydrogeology, the Scheidegger feature of constant dispersivity is generally regarded as a physical law inseparable from the Fickian model, but both perceptions are wrong.
We also explain why Fickian dispersion fails under certain conditions, such as dispersion inside and directly upstream of a contaminant source. Other issues discussed are the relevance of column tests and confusion regarding the meaning of the terms dispersion and Fickian.
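The two ingredients discussed above are often written schematically as follows (generic textbook notation, not equations quoted from this work): the Fickian flux law and the Scheidegger scaling of the dispersion coefficients,

```latex
% Fickian: dispersive flux proportional to the gradient of the resolved concentration
\mathbf{J}_d \;=\; -\,\theta\,\mathbf{D}\,\nabla \bar{c}
% Scheidegger: longitudinal and transverse coefficients scale linearly with mean velocity
D_L \;=\; \alpha_L\,|\bar{\mathbf{v}}| \;+\; D_m, \qquad
D_T \;=\; \alpha_T\,|\bar{\mathbf{v}}| \;+\; D_m
```

where $\alpha_L$ and $\alpha_T$ are the (assumed constant) longitudinal and transverse dispersivities, $D_m$ is the effective molecular diffusion coefficient, and $\theta$ is porosity. The constancy of $\alpha_L$ and $\alpha_T$ is precisely the "Scheidegger feature" whose law-like status the abstract disputes.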
A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone
NASA Astrophysics Data System (ADS)
Filipot, J.-F.; Ardhuin, F.
2012-11-01
A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale. Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep or shallow water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.
Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution
NASA Astrophysics Data System (ADS)
Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike
2011-04-01
Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: first, reducing the dose distribution to a histogram discards spatial information; second, the histogram bins are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We use a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assess its predictive power using data from the MRC RT01 trial (ISRCTN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a specifically low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse, with AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable, nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had higher predictive power than models based on standard DVHs, and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of forecast error, caused in part by considerable uncertainty about the optimal values of parameters within each scheme (parametric uncertainty). Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme (structural uncertainty). Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
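As a toy illustration of the Bayesian machinery described (not the actual BOSS code), the sketch below uses a Metropolis sampler to constrain the prefactor and exponent of a hypothetical power-law process-rate term against synthetic observations; all names, values, and the observation-error model are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Hypothetical power-law process rate: rate(q) = a * q**b
def rate(q, a, b):
    return a * q ** b

# Synthetic "observations": true parameters plus Gaussian noise.
TRUE_A, TRUE_B, SIGMA = 2.0, 1.5, 0.05
qs = [0.1 * i for i in range(1, 11)]
obs = [rate(q, TRUE_A, TRUE_B) + random.gauss(0.0, SIGMA) for q in qs]

def log_likelihood(a, b):
    # Gaussian observation error with known SIGMA
    return sum(-0.5 * ((o - rate(q, a, b)) / SIGMA) ** 2 for q, o in zip(qs, obs))

def metropolis(n_steps=20000, step=0.05):
    """Random-walk Metropolis over (a, b) with flat priors on a, b > 0."""
    a, b = 1.0, 1.0
    ll = log_likelihood(a, b)
    samples = []
    for _ in range(n_steps):
        a_new = a + random.gauss(0.0, step)
        b_new = b + random.gauss(0.0, step)
        if a_new > 0 and b_new > 0:
            ll_new = log_likelihood(a_new, b_new)
            if math.log(random.random()) < ll_new - ll:
                a, b, ll = a_new, b_new, ll_new
        samples.append((a, b))
    return samples

samples = metropolis()
burn = samples[len(samples) // 2:]  # discard first half as burn-in
a_mean = sum(s[0] for s in burn) / len(burn)
b_mean = sum(s[1] for s in burn) / len(burn)
print(a_mean, b_mean)  # posterior means should land near the true values
```

BOSS extends this idea to whole families of process-rate formulations, so the sampler constrains structural choices alongside the parameters.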
Integrating Cloud Processes in the Community Atmosphere Model, Version 5.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, S.; Bretherton, Christopher S.; Rasch, Philip J.
2014-09-15
This paper provides a description of the parameterization of the global cloud system in CAM5. Compared to previous versions, the CAM5 cloud parameterization has the following unique characteristics: (1) a transparent cloud macrophysical structure that has horizontally non-overlapped deep cumulus, shallow cumulus and stratus in each grid layer, each of which has its own cloud fraction, mass and number concentrations of cloud liquid droplets and ice crystals; (2) stratus-radiation-turbulence interaction that allows CAM5 to simulate marine stratocumulus solely from grid-mean RH without relying on a stability-based empirical stratus fraction; (3) prognostic treatment of the number concentrations of stratus liquid droplets and ice crystals, with activated aerosols and detrained in-cumulus condensate as the main sources and evaporation-sedimentation-precipitation of stratus condensate as the main sinks; and (4) radiatively active cumulus. By imposing consistency between the diagnosed stratus fraction and the prognosed stratus condensate, CAM5 is free from empty or highly dense stratus at the end of stratus macrophysics. CAM5 also prognoses mass and number concentrations of various aerosol species. Thanks to aerosol activation and the parameterizations of radiation and stratiform precipitation production as functions of droplet size, CAM5 simulates various aerosol indirect effects associated with stratus as well as direct effects; that is, aerosol controls both the radiative and hydrological budgets. Detailed analysis of various simulations revealed that CAM5 is much better than CAM3/4 in global performance as well as physical formulation. However, several problems were also identified, which can be attributed to inappropriate regional tuning, inconsistency between various physics parameterizations, and incomplete model physics. Continuous efforts are under way to further improve CAM5.
NASA Astrophysics Data System (ADS)
Oh, D.; Noh, Y.; Hoffmann, F.; Raasch, S.
2017-12-01
The Lagrangian cloud model (LCM) is a fundamentally new approach to cloud simulation, in which the flow field is simulated by large eddy simulation and droplets are treated as Lagrangian particles undergoing cloud microphysics. The LCM enables us to investigate raindrop formation and examine the parameterization of cloud microphysics directly by tracking the history of individual Lagrangian droplets. Analysis of the magnitude of raindrop formation and the background physical conditions at the moment at which each Lagrangian droplet grows from a cloud droplet to a raindrop in a shallow cumulus cloud reveals how, and under which conditions, raindrops are formed. It also provides information on how autoconversion and accretion appear and evolve within a cloud, and how they are affected by various factors such as cloud water mixing ratio, rain water mixing ratio, aerosol concentration, drop size distribution, and dissipation rate. Based on these results, parameterizations of autoconversion and accretion, such as Kessler (1969), Tripoli and Cotton (1980), Beheng (1994), and Khairoutdinov and Kogan (2000), are examined, and modifications to improve them are proposed.
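Two of the bulk autoconversion schemes named above have simple closed forms. The sketch below implements them with commonly quoted coefficient values; the specific constants are typical choices rather than definitive, so verify against the original papers before use.

```python
def kessler_autoconversion(qc, k=1.0e-3, qc_crit=5.0e-4):
    """Kessler (1969)-type autoconversion: linear above a cloud-water threshold.

    qc: cloud water mixing ratio [kg/kg]; k [1/s]; qc_crit [kg/kg].
    The coefficient values here are typical textbook choices, not definitive.
    """
    return k * max(qc - qc_crit, 0.0)

def kk2000_autoconversion(qc, nc):
    """Khairoutdinov and Kogan (2000) autoconversion rate [kg/kg/s].

    qc: cloud water mixing ratio [kg/kg]; nc: droplet number concentration [1/cm^3].
    """
    return 1350.0 * qc ** 2.47 * nc ** (-1.79)

# Example: in KK2000 a polluted cloud (high nc) autoconverts more slowly,
# while the Kessler rate is insensitive to droplet number.
qc = 1.0e-3  # 1 g/kg of cloud water
print(kessler_autoconversion(qc))
print(kk2000_autoconversion(qc, nc=50.0))   # clean case
print(kk2000_autoconversion(qc, nc=500.0))  # polluted case: smaller rate
```

The droplet-number dependence in KK2000 is exactly the kind of behavior the droplet-tracking analysis above can test directly.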
Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.
2011-01-01
Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. 
Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.
A review of recent research on improvement of physical parameterizations in the GLA GCM
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.
1990-01-01
A systematic assessment of the effect of a series of improvements in the physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with those of the earlier slab soil hydrology GCM (SSH-GCM). In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using the Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. The perpetual-July simulations with and without interactive soil moisture show that 30- to 40-day oscillations may be a natural mode of the simulated earth-atmosphere system.
A moist aquaplanet variant of the Held–Suarez test for atmospheric model dynamical cores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thatcher, Diana R.; Jablonowski, Christiane
A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held–Suarez (HS) test that was developed for dry simulations on “a flat Earth” and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics–physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics–dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach and suggestions are made for further application areas. Furthermore, the new moist variant of the HS test can be considered a test case of intermediate complexity.
The aquatic ecosystem simulation model AQUATOX was parameterized and applied to Contentnea Creek in the coastal plain of North Carolina to determine the response of fish to moderate levels of physical and chemical habitat alterations. Biomass of four fish groups was most sensiti...
10 Ways to Improve the Representation of MCSs in Climate Models
NASA Astrophysics Data System (ADS)
Schumacher, C.
2017-12-01
1. The first way to improve the representation of mesoscale convective systems (MCSs) in global climate models (GCMs) is to recognize that MCSs are important to climate. That may be obvious to most of the people attending this session, but it cannot be taken for granted in the wider community. The fact that MCSs produce a large fraction of global rainfall and dramatically impact the atmosphere via transports of heat, moisture, and momentum must be continuously stressed. 2-4. There have traditionally been three approaches to representing MCSs and/or their impacts in GCMs. The first is to focus on improving cumulus parameterizations by implementing features like cold pools that are assumed to better organize convection. The second is to include mesoscale processes, such as mesoscale vertical motions, in the cumulus parameterization. The third is to simply buy your way out with higher resolution, using techniques like super-parameterization or global cloud-resolving model runs. All of these approaches have their pros and cons, but none of them satisfactorily solves the MCS climate modeling problem. 5-10. Looking forward, there is active discussion of new ideas in the modeling community on how to better represent convective organization in models. A number of these ideas are a dramatic shift from the traditional plume-based cumulus parameterizations of most GCMs, such as implementing mesoscale parameterizations based on their physical impacts (e.g., via heating), on empirical relationships derived from big data/machine learning, or on stochastic approaches. Regardless of the technique employed, smart evaluation processes using observations are paramount to refining and constraining the inevitable tunable parameters in any parameterization.
A physically constrained classical description of the homogeneous nucleation of ice in water.
Koop, Thomas; Murray, Benjamin J
2016-12-07
Liquid water can persist in a supercooled state to below 238 K in the Earth's atmosphere, a temperature range where homogeneous nucleation becomes increasingly probable. However, the rate of homogeneous ice nucleation in supercooled water is poorly constrained, in part, because supercooled water eludes experimental scrutiny in the region of the homogeneous nucleation regime where it can exist only fleetingly. Here we present a new parameterization of the rate of homogeneous ice nucleation based on classical nucleation theory. In our approach, we constrain the key terms in classical theory, i.e., the diffusion activation energy and the ice-liquid interfacial energy, with physically consistent parameterizations of the pertinent quantities. The diffusion activation energy is related to the translational self-diffusion coefficient of water for which we assess a range of descriptions and conclude that the most physically consistent fit is provided by a power law. The other key term is the interfacial energy between the ice embryo and supercooled water whose temperature dependence we constrain using the Turnbull correlation, which relates the interfacial energy to the difference in enthalpy between the solid and liquid phases. The only adjustable parameter in our model is the absolute value of the interfacial energy at one reference temperature. That value is determined by fitting this classical model to a selection of laboratory homogeneous ice nucleation data sets between 233.6 K and 238.5 K. On extrapolation to temperatures below 233 K, into a range not accessible to standard techniques, we predict that the homogeneous nucleation rate peaks between about 227 and 231 K at a maximum nucleation rate many orders of magnitude lower than previous parameterizations suggest. 
This extrapolation to temperatures below 233 K is consistent with the most recent measurement of the ice nucleation rate in micrometer-sized droplets at temperatures of 227-232 K on very short time scales using an X-ray laser technique. In summary, we present a new physically constrained parameterization for homogeneous ice nucleation which is consistent with the latest literature nucleation data and our physical understanding of the properties of supercooled water.
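The classical-nucleation-theory rate being constrained has the schematic form (generic CNT notation, not copied from the paper):

```latex
J(T) \;=\; A(T)\,
\exp\!\left[-\frac{\Delta F_{\mathrm{diff}}(T)}{kT}\right]
\exp\!\left[-\frac{\Delta G^{*}(T)}{kT}\right],
\qquad
\Delta G^{*}(T) \;=\; \frac{16\pi\,\sigma_{sl}^{3}(T)\,v_{\mathrm{ice}}^{2}}
{3\,\bigl(kT\,\ln S_{i}\bigr)^{2}}
```

Here $\Delta F_{\mathrm{diff}}$ is the diffusion activation energy (tied above to the self-diffusion coefficient of water), $\sigma_{sl}$ is the ice-liquid interfacial energy (with temperature dependence from the Turnbull correlation), $v_{\mathrm{ice}}$ is the volume per water molecule in ice, and $S_i$ is the ice saturation ratio. Because both exponents are steep functions of temperature, modest changes in the parameterized terms shift $J$ by many orders of magnitude, which is why the constrained extrapolation below 233 K differs so strongly from earlier fits.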
Parameterization guidelines and considerations for hydrologic models
R. W. Malone; G. Yagow; C. Baffaut; M.W Gitau; Z. Qi; Devendra Amatya; P.B. Parajuli; J.V. Bonta; T.R. Green
2015-01-01
 Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) are important and difficult tasks. An exponential...
NASA Astrophysics Data System (ADS)
Madhulatha, A.; Rajeevan, M.
2018-02-01
The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted with various planetary boundary layer, microphysics, and cumulus parameterization schemes. The performance of the different schemes is evaluated by examining boundary layer, reflectivity, and precipitation features of the MCS using ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is able to simulate the reflectivity through a reasonable distribution of different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through a proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent-kinetic-energy boundary layer scheme that accounts for strong vertical mixing; THM, a six-class hybrid-moment microphysics scheme that considers number concentration along with the mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme that adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. A numerical simulation carried out using the above combination of schemes captures storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.
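In WRF, this scheme combination is selected in the &physics block of namelist.input. The option indices below are the commonly documented values for these schemes, but this is a hedged config sketch (not taken from the paper) and should be verified against the WRF version in use:

```
&physics
 mp_physics        = 8,   ! Thompson microphysics
 bl_pbl_physics    = 2,   ! Mellor-Yamada-Janjic (MYJ) PBL
 sf_sfclay_physics = 2,   ! Eta similarity surface layer, paired with MYJ
 cu_physics        = 2,   ! Betts-Miller-Janjic (BMJ) cumulus
/
```

Note that the MYJ PBL scheme requires a compatible surface-layer option, which is why sf_sfclay_physics is shown alongside the three schemes named in the study.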
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu
2018-03-01
Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β, ρ′), modulus-density (κ, μ, ρ), Lamé-density (λ, μ′, ρ‴), impedance-density (I_P, I_S, ρ″), velocity-impedance-I (α′, β′, I′_P), and velocity-impedance-II (α″, β″, I′_S). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations.
Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.
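The six parameter sets above are linked by standard isotropic-elastic identities (e.g., μ = ρβ², λ = ρ(α² − 2β²), I_P = ρα, I_S = ρβ). A minimal sketch of these conversions, with illustrative parameter values rather than inverted field values:

```python
def velocity_density_to_others(alpha, beta, rho):
    """Map the velocity-density parameterization (P velocity alpha, S velocity
    beta, density rho) to the other isotropic-elastic parameter sets discussed:
    moduli, Lame parameters, and impedances (standard isotropic relations)."""
    mu = rho * beta**2                       # shear modulus
    lam = rho * (alpha**2 - 2.0 * beta**2)   # first Lame parameter
    kappa = lam + 2.0 * mu / 3.0             # bulk modulus
    ip = rho * alpha                         # P impedance
    is_ = rho * beta                         # S impedance
    return {"lambda": lam, "mu": mu, "kappa": kappa, "Ip": ip, "Is": is_}

# Round trip: recover the P velocity from impedance and density.
pars = velocity_density_to_others(alpha=2500.0, beta=1200.0, rho=2100.0)
alpha_back = pars["Ip"] / 2100.0
```

The primes in the abstract indicate that the "same" physical quantity behaves differently as an inversion variable under each parameterization, which is exactly the interparameter-tradeoff issue the kernels quantify.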
NASA Astrophysics Data System (ADS)
Zakšek, Klemen; Schroedter-Homscheidt, Marion
Some applications, e.g. from traffic or energy management, require air temperature data in high spatial and temporal resolution at two metres height above the ground (T2m), sometimes in near-real-time. Thus, a parameterization based on boundary layer physical principles was developed that determines the air temperature from remote sensing data (SEVIRI data aboard the MSG and MODIS data aboard the Terra and Aqua satellites). The method consists of two parts. First, a downscaling procedure from the SEVIRI pixel resolution of several kilometres to a one kilometre spatial resolution is performed using a regression analysis between the land surface temperature (LST) and the normalized differential vegetation index (NDVI) acquired by the MODIS instrument. Second, the lapse rate between the LST and T2m is removed using an empirical parameterization that requires albedo, down-welling surface short-wave flux, relief characteristics and NDVI data. The method was successfully tested for Slovenia, the French region Franche-Comté and southern Germany for the period from May to December 2005, indicating that the parameterization is valid for Central Europe. This parameterization results in a root mean square deviation (RMSD) of 2.0 K during the daytime with a bias of -0.01 K and a correlation coefficient of 0.95. This is promising, especially considering the high temporal (30 min) and spatial resolution (1000 m) of the results.
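The first (downscaling) step can be sketched as a simple linear regression between coarse-scale LST and NDVI, then applied to fine-scale NDVI. The function name and the synthetic values below are illustrative, not from the paper:

```python
import numpy as np

def downscale_lst(lst_coarse, ndvi_coarse, ndvi_fine):
    """Fit a linear LST-NDVI regression at coarse resolution and apply it
    at fine resolution (sketch of a regression-based downscaling step)."""
    slope, intercept = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    return slope * ndvi_fine + intercept

# Synthetic check: if LST really is linear in NDVI, downscaling is exact.
ndvi_c = np.linspace(0.1, 0.8, 16)
lst_c = 310.0 - 20.0 * ndvi_c           # warmer where vegetation is sparse
ndvi_f = np.linspace(0.1, 0.8, 64)
lst_f = downscale_lst(lst_c, ndvi_c, ndvi_f)
```

In practice the regression residual at coarse scale also indicates where the linear LST-NDVI assumption breaks down (e.g. over water or bare rock).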
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean occurs on scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In the oceanic general circulation models typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly resolve the small-scale mixing processes and must therefore parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and practical to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the least known and least understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization and the methods available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly reviewed, together with possible approaches to parameterizing them in general circulation models.
We then discuss the role of vertical mixing in the physics of the large-scale ocean circulation and examine methods of validating mixing parameterizations using large-scale ocean models.
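As a toy stand-in for what a vertical mixing parameterization must accomplish numerically, a single explicit diffusion step on a discretized column (with an assumed constant mixing coefficient, not a realistic one) looks like:

```python
import numpy as np

def mix_column(temp, kappa, dz, dt):
    """One explicit time step of vertical diffusion,
    d(temp)/dt = d/dz (kappa * dT/dz), on a uniform column grid.
    Zero-flux boundaries conserve the column heat content."""
    flux = -kappa * np.diff(temp) / dz           # flux at interior interfaces
    flux = np.concatenate(([0.0], flux, [0.0]))  # no flux through top/bottom
    return temp - dt * np.diff(flux) / dz

t = np.array([20.0, 18.0, 10.0, 9.0, 8.5])       # degC, top to bottom
t1 = mix_column(t, kappa=1e-4, dz=10.0, dt=600.0)
```

Real schemes differ chiefly in how `kappa` is computed from the resolved fields (Richardson-number closures, TKE schemes, bulk mixed-layer models), which is exactly the parameterization choice discussed above.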
NASA Astrophysics Data System (ADS)
Lim, Kyo-Sun Sunny; Lim, Jong-Myoung; Shin, Hyeyum Hailey; Hong, Jinkyu; Ji, Young-Yong; Lee, Wanno
2018-06-01
A substantial over-prediction bias at low-to-moderate wind speeds in the Weather Research and Forecasting (WRF) model has been reported in previous studies. Low-level wind fields play an important role in the dispersion of air pollutants, including radionuclides, in a high-resolution WRF framework. By implementing two subgrid-scale orography parameterizations (Jimenez and Dudhia in J Appl Meteorol Climatol 51:300-316, 2012; Mass and Ovens in WRF model physics: problems, solutions and a new paradigm for progress. Preprints, 2010 WRF Users' Workshop, NCAR, Boulder, Colo. http://www.mmm.ucar.edu/wrf/users/workshops/WS2010/presentations/session%204/4-1_WRFworkshop2010Final.pdf, 2010), we compared the performance of the parameterizations and sought to enhance the forecast skill for low-level wind fields over the central western part of South Korea. Even though both subgrid-scale orography parameterizations significantly alleviated the positive bias in 10-m wind speed, the parameterization by Jimenez and Dudhia showed better forecast skill in wind speed under our modeling configuration. Implementation of the subgrid-scale orography parameterizations in the model did not affect the forecast skill for other meteorological fields, including 10-m wind direction. Our study also raises the problem of a discrepancy in the definition of the "10-m" wind between model physics parameterizations and observations, which can cause overestimated winds in model simulations. The overestimation was larger in stable conditions than in unstable conditions, indicating that the weak diurnal cycle in the model could be attributed to this representation error.
NASA Technical Reports Server (NTRS)
Johnson, Kevin D.; Entekhabi, Dara; Eagleson, Peter S.
1991-01-01
Land-surface hydrological parameterizations are implemented in the NASA Goddard Institute for Space Studies (GISS) General Circulation Model (GCM). These parameterizations are: (1) runoff and evapotranspiration functions that include the effects of subgrid-scale spatial variability and use physically based equations of hydrologic flux at the soil surface, and (2) a realistic soil moisture diffusion scheme for the movement of water in the soil column. A one-dimensional climate model with a complete hydrologic cycle is used to screen the basic sensitivities of the hydrological parameterizations before implementation into the full three-dimensional GCM. Results of the final simulation with the GISS GCM and the new land-surface hydrology indicate that the runoff rate, especially in the tropics, is significantly improved. As a result, the remaining components of the heat and moisture balance show comparable improvements when compared to observations. The validation of model results is carried out from the large global (ocean and land-surface) scale down to the zonal, continental, and finally the finer river-basin scales.
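One common way to represent subgrid-scale runoff variability of the kind described is to assume a power-law distribution of storage capacities within a grid cell. The sketch below is a generic illustration of that idea with a hypothetical shape parameter, not the specific GISS formulation:

```python
def runoff_fraction(soil_moisture, capacity, shape=0.3):
    """Saturated-area fraction implied by an assumed power-law distribution
    of subgrid storage capacities ('shape' is a hypothetical parameter).
    Precipitation on the saturated fraction becomes direct runoff, so the
    grid cell produces runoff well before it is saturated on average."""
    w = min(max(soil_moisture / capacity, 0.0), 1.0)
    return 1.0 - (1.0 - w) ** shape
```

The key qualitative behavior is that runoff begins as soon as any part of the cell saturates, which is what uniform (non-subgrid) bucket schemes miss.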
A skeleton family generator via physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2009-01-01
This paper presents a novel approach for object skeleton family extraction. The introduced technique utilizes a 2-D physics-based deformable model that parameterizes the object's shape. The deformation equations are solved using modal analysis, and varying the model's physical characteristics produces a different skeleton each time, thereby generating a family of skeletons. The theoretical properties and the experiments presented demonstrate that the obtained skeletons match hand-labeled skeletons provided by human subjects, even in the presence of significant noise, shape variations, cuts and tears, and have the same topology as the original skeletons. In particular, the proposed approach produces no spurious branches, without the need for any skeleton pruning method.
NASA Astrophysics Data System (ADS)
Alexander, M. Joan; Stephan, Claudia
2015-04-01
In climate models, gravity waves remain too poorly resolved to be directly modelled. Instead, simplified parameterizations are used to include gravity wave effects on model winds. A few climate models link some of the parameterized waves to convective sources, providing a mechanism for feedback between changes in convection and gravity wave-driven changes in circulation in the tropics and above high-latitude storms. These convective wave parameterizations are based on limited case studies with cloud-resolving models, but they are poorly constrained by observational validation, and their tuning parameters have large uncertainties. Our new work distills results from complex, full-physics cloud-resolving model studies to the essential variables for gravity wave generation. We use the Weather Research and Forecasting (WRF) model to study the relationships between precipitation, latent heating/cooling, and other cloud properties and the spectrum of gravity wave momentum flux above midlatitude storm systems. Results show that the gravity wave spectrum is surprisingly insensitive to the representation of microphysics in WRF. This is good news for the use of these models in gravity wave parameterization development, since microphysical properties are a key uncertainty. We further use the full-physics cloud-resolving model as a tool to directly link observed precipitation variability to gravity wave generation. We show that waves in an idealized model forced with radar-observed precipitation can quantitatively reproduce instantaneous satellite-observed features of the gravity wave field above storms, which is a powerful validation of our understanding of waves generated by convection. The idealized model directly links observations of surface precipitation to observed waves in the stratosphere, and the simplicity of the model permits deep, large-area domains for studies of wave-mean flow interactions. 
This unique validated model tool permits quantitative studies of gravity wave driving of regional circulation and provides a new method for future development of realistic convective gravity wave parameterizations.
NASA Astrophysics Data System (ADS)
Sobel, A. H.; Wang, S.; Bellon, G.; Sessions, S. L.; Woolnough, S.
2013-12-01
Parameterizations of large-scale dynamics have been developed in the past decade for studying the interaction between tropical convection and large-scale dynamics, based on our physical understanding of the tropical atmosphere. A principal advantage of these methods is that they offer a pathway to attack the key question of what controls large-scale variations of tropical deep convection. These methods have been used with both single column models (SCMs) and cloud-resolving models (CRMs) to study the interaction of deep convection with several kinds of environmental forcings. While much has been learned from these efforts, different groups' efforts are somewhat hard to compare. Different models, different versions of the large-scale parameterization methods, and experimental designs that differ in other ways are used. It is not obvious which choices are consequential to the scientific conclusions drawn and which are not. The methods have matured to the point that there is value in an intercomparison project. In this context, the Global Atmospheric Systems Study - Weak Temperature Gradient (GASS-WTG) project was proposed at the Pan-GASS meeting in September 2012. The weak temperature gradient approximation is one method to parameterize large-scale dynamics, and is used in the project name for historical reasons and simplicity, but another method, the damped gravity wave (DGW) method, will also be used in the project. The goal of the GASS-WTG project is to develop community understanding of the parameterization methods currently in use. Their strengths, weaknesses, and functionality in models with different physics and numerics will be explored in detail, and their utility to improve our understanding of tropical weather and climate phenomena will be further evaluated. This presentation will introduce the intercomparison project, including background, goals, and overview of the proposed experimental design. 
Interested groups will be invited to join (it will not be too late), and preliminary results will be presented.
NASA Astrophysics Data System (ADS)
Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.
2017-12-01
At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity, which is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated, and applying a one-dimensional PBL parameterization to such simulations can result in significant error. For high-resolution mesoscale simulations of flows in complex terrain, we have therefore developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982) and uses a purely algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015 to 2017. We focus on selected cases in which physical phenomena of significance for wind energy applications, such as mountain waves, topographic wakes, and gap flows, were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area; both LES domains were discretized using 6000 x 3000 x 200 grid cells in the zonal, meridional, and vertical directions, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients. 
The presentation will highlight the advantages of the 3D PBL scheme in regions of complex terrain.
Parameterization guidelines and considerations for hydrologic models
USDA-ARS?s Scientific Manuscript database
Imparting knowledge of the physical processes of a system to a model and determining a set of parameter values for a hydrologic or water quality model application (i.e., parameterization) is an important and difficult task. An exponential increase in literature has been devoted to the use and develo...
Parameterizing the Morse Potential for Coarse-Grained Modeling of Blood Plasma
Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan
2014-01-01
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales to reproduce the viscous flow properties of human blood plasma, including density, pressure, viscosity, compressibility and characteristic flow dynamics. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macroscopic physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as counter-Poiseuille and Couette flows. This demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between the macroscopic flow scales and the cellular scales characterizing blood flow, which continuum-based models fail to handle adequately. PMID:24910470
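The unmodified Morse potential that the coarse-grained model starts from is V(r) = D_e (1 − e^{−a(r − r_e)})², with well depth D_e, width parameter a, and equilibrium distance r_e. A minimal implementation with its analytic force (parameter values illustrative, not the paper's fitted values):

```python
import math

def morse(r, d_e, a, r_e):
    """Classic Morse pair potential V(r) = d_e * (1 - exp(-a*(r - r_e)))**2,
    the form the coarse-grained model above modifies."""
    return d_e * (1.0 - math.exp(-a * (r - r_e))) ** 2

def morse_force(r, d_e, a, r_e):
    """Analytic radial force -dV/dr for the same potential: zero at r_e,
    attractive beyond it, strongly repulsive inside it."""
    e = math.exp(-a * (r - r_e))
    return -2.0 * d_e * a * e * (1.0 - e)
```

The potential has its minimum (V = 0, zero force) at r = r_e and saturates at D_e as r grows, which is why it is convenient for bounded, spring-like CG interactions.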
Climate Process Team "Representing calving and iceberg dynamics in global climate models"
NASA Astrophysics Data System (ADS)
Sergienko, O. V.; Adcroft, A.; Amundson, J. M.; Bassis, J. N.; Hallberg, R.; Pollard, D.; Stearns, L. A.; Stern, A. A.
2016-12-01
Iceberg calving accounts for approximately 50% of the ice mass loss from the Greenland and Antarctic ice sheets. By changing a glacier's geometry, calving can also significantly perturb the glacier's stress regime far upstream of the grounding line, a process that can enhance the discharge of ice across the grounding line. Once calved, icebergs drift into the open ocean where they melt, injecting freshwater into the ocean and affecting the large-scale ocean circulation. The spatial redistribution of this freshwater flux has a strong impact on sea-ice formation and its spatial variability. A Climate Process Team (CPT), "Representing calving and iceberg dynamics in global climate models", was established in the fall of 2014. The major objectives of the CPT are to: (1) develop parameterizations of calving processes suitable for continental-scale ice-sheet models that simulate the evolution of the Antarctic and Greenland ice sheets; (2) compile the data sets of glaciological and oceanographic observations necessary to test, validate, and constrain the developed parameterizations and models; and (3) develop a physically based iceberg component for inclusion in large-scale ocean circulation models. Several calving parameterizations suitable for various glaciological settings have been developed and implemented in a continental-scale ice sheet model. Simulations of the present-day Antarctic and Greenland ice sheets show that the ice-sheet geometric configurations (thickness and extent) are sensitive to the calving process. In order to guide the development as well as to test calving parameterizations, available observations of various kinds have been compiled and organized into a database. Monthly estimates of iceberg distribution around the coast of Greenland have been produced, with the goal of constructing iceberg size distributions and probability functions for iceberg occurrence in particular regions.
A physically based iceberg model component was used in a GFDL global climate model. The simulation results show that the Antarctic iceberg calving-size distribution affects iceberg trajectories and determines where iceberg meltwater enters the ocean, and that the increased iceberg freshwater transport leads to increased sea-ice growth around much of the East Antarctic coastline.
2013-09-30
Parameterizations and Tripolar Wave Model Grid: NAVGEM / WaveWatch III / HYCOM
W. Erick Rogers, Naval Research Laboratory, Code 7322, Stennis Space Center, MS 39529
A Novel Continuation Power Flow Method Based on a Line Voltage Stability Index
NASA Astrophysics Data System (ADS)
Zhou, Jianfang; He, Yuqing; He, Hongbin; Jiang, Zhuohan
2018-01-01
A novel continuation power flow method based on a line voltage stability index is proposed in this paper. The line voltage stability index is used to select the parameterized lines, and the selection is continually updated as the load changes. The calculation stages of the continuation power flow are determined by the angle changes of the direction vector of the prediction equation, and an adaptive step-length control strategy computes the next prediction direction and step size according to the current stage. The proposed method has a clear physical interpretation and high computing speed, and it accounts for the local character of voltage instability, identifying the weak nodes and weak areas in a power system. Because the PV curves are traced more completely, the proposed method offers advantages for analysing the voltage stability margin of large-scale power grids.
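The PV ("nose") curve that continuation power flow traces has a closed form for an idealized lossless two-bus system: the receiving-end voltage satisfies V⁴ + (2QX − E²)V² + X²(P² + Q²) = 0. The sketch below solves this textbook case (an illustration of the nose curve, not the paper's method) and returns both the upper and lower voltage branches:

```python
import math

def receiving_voltages(p, q, e=1.0, x=0.1):
    """Both real solutions of the two-bus (lossless line, reactance x,
    source voltage e) power-flow quartic for load p + jq, i.e. the upper
    (stable) and lower branches of the PV nose curve that continuation
    methods trace. Returns None past the nose, where no solution exists."""
    b = 2.0 * q * x - e * e
    disc = b * b - 4.0 * x * x * (p * p + q * q)
    if disc < 0.0:
        return None            # past the loadability limit (nose point)
    hi = (-b + math.sqrt(disc)) / 2.0
    lo = (-b - math.sqrt(disc)) / 2.0
    return math.sqrt(hi), math.sqrt(lo)
```

Sweeping `p` upward until the function returns None locates the nose point; continuation methods exist precisely to follow the curve around that point in larger networks, where no closed form is available.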
Update of global TC simulations using a variable resolution non-hydrostatic model
NASA Astrophysics Data System (ADS)
Park, S. H.
2017-12-01
Tropical cyclone (TC) forecasts are simulated using variable-resolution meshes in MPAS during the summer of 2017. Two physics suites are tested to explore the performance and biases of each suite for TC forecasting: a WRF physics suite selected from weather forecasting experience, and CAM (Community Atmosphere Model) physics taken from an AMIP-type climate simulation. Building on last year's results with the CAM5 physical parameterization package, and comparing with the WRF physics, we investigated an intensity bias using an updated version of the CAM physics (CAM6). We also compared these results with coupled TC simulations. In this talk, the TC structure will be compared, especially around the boundary layer, to investigate the relationship between TC intensity and the different physics packages.
NASA Astrophysics Data System (ADS)
Astitha, M.; Abdel Kader, M.; Pozzer, A.; Lelieveld, J.
2012-04-01
Atmospheric particulate matter, and more specifically desert dust, has been the topic of numerous research studies in the past due to its wide range of impacts on the environment and climate and the uncertainty in characterizing and quantifying these impacts on a global scale. In this work we present two physical parameterizations of desert dust production that have been incorporated in the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). The scope of this work is to assess the impact of the two physical parameterizations on the global distribution of desert dust and highlight the advantages and disadvantages of using either technique. The dust concentration and deposition have been evaluated using the AEROCOM dust dataset for the year 2000, and data from the MODIS and MISR satellites as well as sun-photometer data from the AERONET network were used to compare the modelled aerosol optical depth with observations. The implementation of the two parameterizations and the simulations using relatively high spatial resolution (T106, ~1.1 deg) have highlighted the large spatial heterogeneity of the dust emission sources as well as the importance of the input parameters (soil size and texture, vegetation, surface wind speed). Sensitivity simulations with and without the nudging option, using reanalysis data from ECMWF, have also shown remarkable differences for some areas. Both parameterizations have revealed the difficulty of simulating all arid regions with the same assumptions and mechanisms. Depending on the arid region, each emission scheme performs more or less satisfactorily, which leads to the necessity of treating each desert differently. Even though this is a difficult task to accomplish in a global model, some recommendations and ideas for future improvements are given.
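Dust emission schemes of the kind compared here typically key the emitted flux to a threshold friction velocity, below which no saltation occurs. A schematic, Marticorena-Bergametti-style cubic form with placeholder constants (the two EMAC schemes are more elaborate, folding in soil texture and vegetation):

```python
def dust_flux(ustar, ustar_t=0.25, c=1.0):
    """Schematic saltation-type dust flux: zero below the threshold friction
    velocity ustar_t, growing roughly as ustar**3 above it. The constant c
    and the threshold value are placeholders, not tuned scheme parameters."""
    if ustar <= ustar_t:
        return 0.0
    return c * ustar**3 * (1.0 - ustar_t**2 / ustar**2)
```

The strong (cubic) sensitivity to surface wind speed is why the abstract emphasizes both the input wind fields (nudged vs. free-running) and the soil/vegetation inputs that set the threshold.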
NASA Technical Reports Server (NTRS)
Johnson, Kevin D.; Entekhabi, Dara; Eagleson, Peter S.
1993-01-01
New land-surface hydrologic parameterizations are implemented into the NASA Goddard Institute for Space Studies (GISS) General Circulation Model (GCM). These parameterizations are: 1) runoff and evapotranspiration functions that include the effects of subgrid-scale spatial variability and use physically based equations of hydrologic flux at the soil surface and 2) a realistic soil moisture diffusion scheme for the movement of water and root sink in the soil column. A one-dimensional climate model with a complete hydrologic cycle is used to screen the basic sensitivities of the hydrological parameterizations before implementation into the full three-dimensional GCM. Results of the final simulation with the GISS GCM and the new land-surface hydrology indicate that the runoff rate, especially in the tropics, is significantly improved. As a result, the remaining components of the heat and moisture balance show similar improvements when compared to observations. The validation of model results is carried out from the large global (ocean and land-surface) scale down to the zonal, continental, and finally the regional river basin scales.
Analytic expressions for the black-sky and white-sky albedos of the cosine lobe model.
Goodin, Christopher
2013-05-01
The cosine lobe model is a bidirectional reflectance distribution function (BRDF) that is commonly used in computer graphics to model specular reflections. The model is both simple and physically plausible, but physical quantities such as albedo have not been related to the parameterization of the model. In this paper, analytic expressions for calculating the black-sky and white-sky albedos from the cosine lobe BRDF model with integer exponents will be derived, to the author's knowledge for the first time. These expressions for albedo can be used to place constraints on physics-based simulations of radiative transfer such as high-fidelity ray-tracing simulations.
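Analytic albedo expressions of this kind can be cross-checked by brute-force integration of the BRDF over the outgoing hemisphere. The sketch below assumes a cosine-lobe form f = ρ (R·ω)ⁿ with R the mirror direction of the incident ray; it is a numerical consistency check under that assumed form, not the paper's derivation:

```python
import math

def black_sky_albedo(theta_i, n=4, rho=1.0, steps=200):
    """Directional-hemispherical (black-sky) albedo of an assumed cosine-lobe
    BRDF f = rho * max(0, R.w)**n, computed by midpoint integration of
    f * cos(theta_o) over the outgoing hemisphere."""
    rx, rz = math.sin(theta_i), math.cos(theta_i)   # mirror direction
    total = 0.0
    dth = (math.pi / 2) / steps
    dph = (2 * math.pi) / (2 * steps)
    for i in range(steps):
        theta_o = (i + 0.5) * dth
        st, ct = math.sin(theta_o), math.cos(theta_o)
        for j in range(2 * steps):
            phi = (j + 0.5) * dph
            dot = max(0.0, st * math.cos(phi) * rx + ct * rz)
            total += rho * dot**n * ct * st   # cos weighting + solid angle
    return total * dth * dph
```

At normal incidence the lobe integral reduces analytically to 2πρ/(n + 2), which the numerical result should reproduce; off-normal, part of the lobe falls below the horizon and the albedo drops, the behavior the closed-form expressions must capture.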
mrpy: Renormalized generalized gamma distribution for HMF and galaxy ensemble properties comparisons
NASA Astrophysics Data System (ADS)
Murray, Steven G.; Robotham, Aaron S. G.; Power, Chris
2018-02-01
mrpy calculates the MRP parameterization of the Halo Mass Function. It computes basic statistics of the truncated generalized gamma distribution (TGGD) with the TGGD class, including mean, mode, variance, skewness, pdf, and cdf. It generates MRP quantities with the MRP class, such as differential and cumulative number counts, and offers various methods for generating normalizations. It can generate the MRP-based halo mass function as a function of physical parameters via the mrp_b13 function, and fit MRP parameters to data in the form of arbitrary curves and in the form of a sample of variates with the SimFit class. mrpy also calculates analytic hessians and jacobians at any point, and allows the user to switch between alternative parameterizations of the same form via the reparameterize module.
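The truncated generalized gamma density underlying the MRP form can be sketched with a purely numerical normalization. The parameter values below are MRP-like placeholders, not fitted ones, and the normalization here is numeric where mrpy's TGGD class works analytically:

```python
import math

def tggd_pdf(m, hs=1.0, alpha=-1.8, beta=0.7, mmin=0.5, steps=2000):
    """Truncated generalized gamma density p(m) ~ (m/hs)**alpha *
    exp(-(m/hs)**beta) for m >= mmin, normalized by trapezoidal
    integration on a logarithmic mass grid."""
    def kernel(x):
        return (x / hs) ** alpha * math.exp(-((x / hs) ** beta))
    lo, hi = math.log(mmin), math.log(hs) + 12.0   # kernel is ~0 by 'hi'
    h = (hi - lo) / steps
    xs = [math.exp(lo + k * h) for k in range(steps + 1)]
    vals = [kernel(x) * x for x in xs]              # integrand in log-mass
    norm = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return kernel(m) / norm if m >= mmin else 0.0
```

The truncation mass `mmin` plays the role of the sample's completeness limit; everything below it carries zero density by construction.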
NASA Astrophysics Data System (ADS)
Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2009-04-01
An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (southern Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields, so the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, 4 of which applied the re-initialization technique. Surface temperature and accumulated precipitation (at daily and monthly scales) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some particularly problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. 
Regarding parameterization scheme performance, every set provides very similar results for both temperature and precipitation, and no configuration seems to outperform the others, either for the whole region or for any season. Nevertheless, some marked differences between areas within the domain appear when analyzing certain physics options, particularly for precipitation. Some of the physics options, such as radiation, have little impact on model performance with respect to precipitation, and results do not vary when the scheme is modified. On the other hand, cumulus and boundary layer parameterizations are responsible for most of the differences obtained between configurations. Acknowledgements: The Spanish Ministry of Science and Innovation, with additional support from the European Community Funds (FEDER), project CGL2007-61151/CLI, and the Regional Government of Andalusia, project P06-RNM-01622, have financed this study. The "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), Universidad de Granada, has provided the computing time. Key words: MM5 mesoscale model, parameterization schemes, temperature and precipitation, southern Spain.
Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs
NASA Astrophysics Data System (ADS)
Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Pincus, R.
2016-12-01
A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecast System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation, and cloudiness. Unlike other similar methods, only one new prognostic variable, turbulent kinetic energy (TKE), needs to be introduced, making the technique computationally efficient. SHOC is now incorporated into a version of GFS, as well as into the next generation of the NCEP global model, the NOAA Environmental Modeling System (NEMS). Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or large-scale condensation/deposition; instead, SHOC provides these variables. The radiative transfer parameterization uses cloudiness computed by SHOC. Outstanding problems include high-level tropical cloud fraction being too high in SHOC runs, possibly related to the interaction of SHOC with condensate detrained from deep convection. Future work will consist of evaluating model performance and tuning the physics if necessary, by performing medium-range NWP forecasts with prescribed initial conditions, and AMIP-type climate tests with prescribed SSTs. Depending on the results, the model will be tuned or parameterizations modified. Next, SHOC will be implemented in the NCEP CFS, and tuned and evaluated for climate applications, i.e., seasonal prediction and long coupled climate runs. The impact of the new physics on ENSO, MJO, ISO, monsoon variability, etc. will be examined.
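As a toy illustration of PDF-based condensation diagnosis (a single-Gaussian sketch, far simpler than SHOC's assumed joint PDF), cloud fraction can be diagnosed as the probability that subgrid total water exceeds saturation:

```python
import math

def cloud_fraction(q_mean, q_sat, sigma):
    """Fraction of the grid box where total water exceeds saturation,
    assuming a Gaussian subgrid distribution with standard deviation sigma
    (which a scheme like SHOC would relate to predicted TKE)."""
    s = (q_mean - q_sat) / (math.sqrt(2.0) * sigma)
    return 0.5 * math.erfc(-s)

# Moist, dry, and marginal grid boxes (mixing ratios in kg/kg)
print(cloud_fraction(0.020, 0.015, 0.001))  # well above saturation -> ~1
print(cloud_fraction(0.010, 0.015, 0.001))  # well below saturation -> ~0
print(cloud_fraction(0.015, 0.015, 0.001))  # grid mean at saturation -> 0.5
```

The same integral over the supersaturated part of the PDF also yields the mean condensate, which is how such schemes replace a separate large-scale condensation calculation.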
V and V Efforts of Auroral Precipitation Models: Preliminary Results
NASA Technical Reports Server (NTRS)
Zheng, Yihua; Kuznetsova, Masha; Rastaetter, Lutz; Hesse, Michael
2011-01-01
Auroral precipitation models have been valuable both for space weather applications and for space science research. Yet very limited testing has been performed regarding model performance. A variety of auroral models are available, including empirical models that are parameterized by geomagnetic indices or upstream solar wind conditions, nowcasting models that are based on satellite observations, and those derived from physics-based, coupled global models. In this presentation, we will show our preliminary results regarding V&V efforts for some of these models.
Dynamic Biological Functioning Important for Simulating and Stabilizing Ocean Biogeochemistry
NASA Astrophysics Data System (ADS)
Buchanan, P. J.; Matear, R. J.; Chase, Z.; Phipps, S. J.; Bindoff, N. L.
2018-04-01
The biogeochemistry of the ocean exerts a strong influence on the climate by modulating atmospheric greenhouse gases. In turn, ocean biogeochemistry depends on numerous physical and biological processes that change over space and time. Accurately simulating these processes is fundamental for accurately simulating the ocean's role within the climate. However, our simulation of these processes is often simplistic, despite a growing understanding of underlying biological dynamics. Here we explore how new parameterizations of biological processes affect simulated biogeochemical properties in a global ocean model. We combine 6 different physical realizations with 6 different biogeochemical parameterizations (36 unique ocean states). The biogeochemical parameterizations, all previously published, aim to more accurately represent the response of ocean biology to changing physical conditions. We make three major findings. First, oxygen, carbon, alkalinity, and phosphate fields are more sensitive to changes in the ocean's physical state than to changes in biological processes. Only nitrate is more sensitive to changes in biological processes, and we suggest that assessment protocols for ocean biogeochemical models formally include the marine nitrogen cycle to assess their performance. Second, we show that dynamic variations in the production, remineralization, and stoichiometry of organic matter in response to changing environmental conditions benefit the simulation of ocean biogeochemistry. Third, dynamic biological functioning reduces the sensitivity of biogeochemical properties to physical change. Carbon and nitrogen inventories were 50% and 20% less sensitive to physical changes, respectively, in simulations that incorporated dynamic biological functioning. These results highlight the importance of a dynamic biology for ocean properties and climate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, William I.; Ma, Po-Lun; Xiao, Heng
2013-08-29
The ability to use multi-resolution dynamical cores for weather and climate modeling is pushing the atmospheric community towards developing scale-aware or, more specifically, resolution-aware parameterizations that will function properly across a range of grid spacings. Determining the resolution dependence of specific model parameterizations is difficult due to strong resolution dependencies in many pieces of the model. This study presents the Separate Physics and Dynamics Experiment (SPADE) framework that can be used to isolate the resolution-dependent behavior of specific parameterizations without conflating resolution dependencies from other portions of the model. To demonstrate the SPADE framework, the resolution dependence of the Morrison microphysics from the Weather Research and Forecasting model and the Morrison-Gettelman microphysics from the Community Atmosphere Model are compared for grid spacings spanning the cloud modeling gray zone. It is shown that the Morrison scheme has stronger resolution dependence than Morrison-Gettelman, and that the ability of Morrison-Gettelman to use partial cloud fractions is not the primary reason for this difference. This study also discusses how to frame the issue of resolution dependence, the meaning of which has often been assumed, but not clearly expressed, in the atmospheric modeling community. It is proposed that parameterization resolution dependence can be expressed in terms of "resolution dependence of the first type," RA1, which implies that the parameterization behavior converges towards observations with increasing resolution, or "resolution dependence of the second type," RA2, which requires that the parameterization reproduce the same behavior across a range of grid spacings when compared at a given coarser resolution. RA2 behavior is considered the ideal, but brings with it serious implications due to limitations of parameterizations in accurately estimating reality with coarse grid spacing. The type of resolution awareness developers should target depends upon the particular modeler's application.
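RA2-style comparisons require mapping output from different grids onto a common coarser resolution before comparing. A minimal conservative block-average (a hypothetical helper for illustration, not part of SPADE) looks like:

```python
import numpy as np

def block_average(field, factor):
    """Conservatively coarse-grain a 2-D field by averaging factor x factor
    blocks. Field dimensions must be divisible by factor."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine = np.arange(16.0).reshape(4, 4)   # a 4x4 "high-resolution" field
coarse = block_average(fine, 2)        # its 2x2 coarse-grained counterpart
print(coarse)
print(fine.mean(), coarse.mean())      # the domain mean is conserved
```

Because the operation is a plain mean over blocks, domain integrals of the field are preserved, which is what makes like-to-like comparison of, say, precipitation at a common coarse resolution meaningful.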
Numerical Study of the Role of Shallow Convection in Moisture Transport and Climate
NASA Technical Reports Server (NTRS)
Seaman, Nelson L.; Stauffer, David R.; Munoz, Ricardo C.
2001-01-01
The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins of the Southern Great Plains (SGP) using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. At the beginning of the study, it was hypothesized that an improved treatment of the regional water cycle could be achieved by using a 3-D mesoscale numerical model having high-quality parameterizations for the key physical processes controlling the water cycle. These included a detailed land-surface parameterization (the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) sub-model of Wetzel and Boone), an advanced boundary-layer parameterization (the 1.5-order turbulent kinetic energy (TKE) predictive scheme of Shafran et al.), and a more complete shallow convection parameterization (the hybrid-closure scheme of Deng et al.) than are available in most current models. PLACE is a product of researchers working at NASA's Goddard Space Flight Center in Greenbelt, MD. The TKE and shallow-convection schemes are the result of model development at Penn State. The long-range goal is to develop an integrated suite of physical sub-models that can be used for regional and perhaps global climate studies of the water budget. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the SGP. These schemes have been tested extensively through the course of this study, and the latter two have been improved significantly as a consequence.
Tuning a physically-based model of the air-sea gas transfer velocity
NASA Astrophysics Data System (ADS)
Jeffery, C. D.; Robinson, I. S.; Woolf, D. K.
Air-sea gas transfer velocities are estimated for one year using a 1-D upper-ocean model (GOTM) and a modified version of the NOAA-COARE transfer velocity parameterization. Tuning parameters are evaluated with the aim of bringing the physically based NOAA-COARE parameterization in line with current estimates, based on simple wind-speed dependent models derived from bomb-radiocarbon inventories and deliberate tracer release experiments. We suggest that A = 1.3 and B = 1.0, for the sub-layer scaling parameter and the bubble-mediated exchange, respectively, are consistent with the global average CO2 transfer velocity k. Using these parameters and a simple second-order polynomial approximation with respect to wind speed, we estimate a global annual average k for CO2 of 16.4 ± 5.6 cm h⁻¹ when using global mean winds of 6.89 m s⁻¹ from the NCEP/NCAR Reanalysis 1 (1954-2000). The tuned model can be used to predict the transfer velocity of any gas, with appropriate treatment of the dependence on molecular properties, including the strong solubility dependence of bubble-mediated transfer. For example, an initial estimate of the global average transfer velocity of DMS (a relatively soluble gas) is only 11.9 cm h⁻¹, whilst for less soluble methane the estimate is 18.0 cm h⁻¹.
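For comparison, the simple wind-speed-dependent models mentioned above often take a quadratic form with Schmidt-number scaling. The sketch below uses a Wanninkhof (1992)-type relation, k = 0.31 u² (Sc/660)^(-1/2) in cm h⁻¹, which is a different and much simpler parameterization than the tuned NOAA-COARE scheme, but gives a value of the same order as the estimate quoted above:

```python
def transfer_velocity(u10, schmidt, a=0.31):
    """Quadratic wind-speed gas transfer velocity (cm/h), scaled to a
    reference Schmidt number of 660 (CO2 in seawater at 20 C).
    Wanninkhof (1992)-type form, used here only for illustration."""
    return a * u10**2 * (schmidt / 660.0) ** -0.5

# Global mean wind of 6.89 m/s, CO2 at the reference Schmidt number:
k_co2 = transfer_velocity(6.89, 660.0)
print(round(k_co2, 1))  # ~14.7 cm/h, the same order as the 16.4 cm/h above
```

The Schmidt-number factor carries the molecular-property dependence for interfacial exchange; it does not capture the solubility dependence of bubble-mediated transfer that distinguishes DMS from methane in the abstract.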
NASA Astrophysics Data System (ADS)
Pincus, R.; Mlawer, E. J.
2017-12-01
Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.
Obtaining sub-daily new snow density from automated measurements in high mountain regions
NASA Astrophysics Data System (ADS)
Helfricht, Kay; Hartl, Lea; Koch, Roland; Marty, Christoph; Olefs, Marc
2018-05-01
The density of new snow is operationally monitored by meteorological or hydrological services at daily time intervals, or occasionally measured in local field studies. However, meteorological conditions, and thus settling of the freshly deposited snow, rapidly alter the new snow density until measurement. Physically based snow models and nowcasting applications make use of hourly weather data to determine the water equivalent of the snowfall and snow depth. In previous studies, a number of empirical parameterizations were developed to approximate the new snow density from meteorological parameters. These parameterizations are largely based on local in situ measurements of new snow. In this study, a data set of automated snow measurements at four stations located in the European Alps is analysed for several winter seasons. Hourly new snow densities are calculated from the height of new snow and the water equivalent of snowfall. Considering the settling of the new snow and the old snowpack, the average hourly new snow density is 68 kg m⁻³, with a standard deviation of 9 kg m⁻³. Seven existing parameterizations for estimating new snow densities were tested against these data, and most overestimate the hourly automated measurements. Two of the tested parameterizations were capable of simulating the low new snow densities observed at sheltered inner-alpine stations. The observed variability in new snow density from the automated measurements could not be described with satisfactory statistical significance by any of the investigated parameterizations. Applying simple linear regressions between new snow density and wet-bulb temperature based on the measured data resulted in significant relationships (r² > 0.5 and p ≤ 0.05) for single periods at individual stations only. Higher new snow density was calculated for the highest and most wind-exposed station. 
Whereas snow measurements using ultrasonic devices and snow pillows are appropriate for calculating station-mean new snow densities, we recommend instruments with higher accuracy, e.g., optical devices, for more reliable investigations of the variability of new snow densities at sub-daily intervals.
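The kind of simple linear regression used here (new snow density against wet-bulb temperature, with r² as the significance measure) can be sketched on synthetic data; the coefficients below are invented for illustration, not the station values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "observations": density increases with wet-bulb temperature.
t_wb = rng.uniform(-15.0, 1.0, 200)             # wet-bulb temperature (deg C)
rho_true = 70.0 + 2.0 * t_wb                    # invented linear relation (kg/m3)
rho_obs = rho_true + rng.normal(0.0, 5.0, 200)  # measurement scatter

# Least-squares fit and coefficient of determination r^2
slope, intercept = np.polyfit(t_wb, rho_obs, 1)
pred = slope * t_wb + intercept
r2 = 1.0 - np.sum((rho_obs - pred) ** 2) / np.sum((rho_obs - rho_obs.mean()) ** 2)
print(slope, intercept, r2)
```

With scatter of this size the fit easily clears the r² > 0.5 threshold; the point of the study is that real hourly station data are far noisier, so the threshold was met only for single periods at individual stations.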
NASA Technical Reports Server (NTRS)
Pawson, S.; Stolarski, R.S.; Nielsen, J.E.; Perlwitz, J.; Oman, L.; Waugh, D.
2009-01-01
This study will document the behavior of the polar vortices in two versions of the GEOS CCM. Both versions of the model include the same stratospheric chemistry; they differ in the underlying circulation model. Version 1 of the GEOS CCM is based on the Goddard Earth Observing System, Version 4, general circulation model, which includes the finite-volume (Lin-Rood) dynamical core and physical parameterizations from the Community Climate Model, Version 3. GEOS CCM Version 2 is based on the GEOS-5 GCM, which includes a different tropospheric physics package. Baseline simulations of both models, performed at two-degree spatial resolution, show some improvements in Version 2, but also some degradation. In the Antarctic, both models show an over-persistent stratospheric polar vortex with late breakdown, but the year-to-year variations that are overestimated in Version 1 are more realistic in Version 2. The implications of this for the interactions with tropospheric climate, the Southern Annular Mode, will be discussed. In the Arctic, both model versions show dominant dynamically forced variability, but Version 2 has a persistent warm bias in the lower stratosphere, and there are seasonal differences in the simulations. These differences will be quantified in terms of climate change and ozone loss. Impacts of model resolution, using simulations at one-degree and half-degree resolution, and of changes in physical parameterizations (especially the gravity wave drag) will be discussed.
NASA Astrophysics Data System (ADS)
Xue, L.; Firl, G.; Zhang, M.; Jimenez, P. A.; Gill, D.; Carson, L.; Bernardet, L.; Brown, T.; Dudhia, J.; Nance, L. B.; Stark, D. R.
2017-12-01
The Global Model Test Bed (GMTB) has been established to support the evolution of atmospheric physical parameterizations in NCEP global modeling applications. To accelerate the transition to the Next Generation Global Prediction System (NGGPS), a collaborative model development framework known as the Common Community Physics Package (CCPP) has been created within the GMTB to facilitate engagement from the broad community on physics experimentation and development. A key component of this Research to Operations (R2O) software framework is the Interoperable Physics Driver (IPD), which connects the physics parameterizations on one end to the dynamical cores on the other with minimal implementation effort. To initiate the CCPP, scientists and engineers from the GMTB separated and refactored the GFS physics. This exercise demonstrated the process of creating IPD-compliant code and can serve as an example for other physics schemes to do the same and be considered for inclusion in the CCPP. Further benefits of this process include run-time physics suite configuration and considerably reduced effort for testing modifications to physics suites through GMTB's physics test harness. The implementation will be described and preliminary results will be presented at the conference.
Importance of Physico-Chemical Properties of Aerosols in the Formation of Arctic Ice Clouds
NASA Astrophysics Data System (ADS)
Keita, S. A.; Girard, E.
2014-12-01
Ice clouds play an important role in the Arctic weather and climate system, but interactions between aerosols, clouds, and radiation are poorly understood. Consequently, it is essential to fully understand their properties and especially their formation process. Extensive measurements from ground-based sites and satellite remote sensing reveal the existence of two types of ice clouds (TICs) in the Arctic during the polar night and early spring. TIC-1 are composed of non-precipitating, very small (radar-unseen) ice crystals, whereas TIC-2 are detected by both sensors and are characterized by a low concentration of large precipitating ice crystals. It is hypothesized that TIC-2 formation is linked to the acidification of aerosols, which inhibits the ice-nucleating properties of ice nuclei (IN). As a result, the IN concentration is reduced in these regions, resulting in a smaller concentration of larger ice crystals. Over the past 10 years, several parameterizations of homogeneous and heterogeneous ice nucleation have been developed to reflect the various physical and chemical properties of aerosols. These parameterizations are derived from laboratory studies on aerosols of different chemical compositions. The parameterizations also follow two main approaches: stochastic (nucleation is a probabilistic, time-dependent process) and singular (nucleation occurs at fixed conditions of temperature and humidity and is time-independent). This research aims to better understand the formation process of TICs using newly developed ice nucleation parameterizations. For this purpose, we implement several parameterizations (covering both approaches) in the Limited Area version of the Global Environmental Multiscale Model (GEM-LAM) and use them to simulate ice clouds observed during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) in Alaska. We use both approaches, but special attention is focused on the new parameterizations of the singular approach. 
Simulation results for the TIC-2 observed on April 15th and 25th (polluted or acidic cases) and the TIC-1 observed on April 5th (non-polluted cases) will be presented.
Djae, Tanalou; Bravin, Matthieu N; Garnier, Cédric; Doelsch, Emmanuel
2017-04-01
Parameterizing speciation models by setting the percentage of dissolved organic matter (DOM) that is reactive (% r-DOM) toward metal cations at a single 65% default value is very common in predictive ecotoxicology. The authors tested this practice by comparing the free copper activity (pCu2+ = -log10[Cu2+]) measured in 55 soil sample solutions with pCu2+ predicted with the Windermere humic aqueous model (WHAM) parameterized by default. Predictions of Cu toxicity to soil organisms based on measured or predicted pCu2+ were also compared. Default WHAM parameterization substantially skewed the prediction of measured pCu2+ by up to 2.7 pCu2+ units (root mean square residual = 0.75-1.3) and subsequently the prediction of Cu toxicity for microbial functions, invertebrates, and plants by up to 36%, 45%, and 59% (root mean square residuals ≤9%, 11%, and 17%), respectively. Reparameterizing WHAM by optimizing the 2 DOM binding properties (i.e., % r-DOM and the Cu complexation constant) within a physically realistic value range markedly improved the prediction of measured pCu2+ (root mean square residual = 0.14-0.25). Accordingly, this WHAM parameterization successfully predicted Cu toxicity for microbial functions, invertebrates, and plants (root mean square residuals ≤3.4%, 4.4%, and 5.8%, respectively). Thus, it is essential to account for the real heterogeneity in DOM binding properties for relatively accurate prediction of Cu speciation in soil solution and Cu toxic effects on soil organisms. Environ Toxicol Chem 2017;36:898-905. © 2016 SETAC.
Mirus, Benjamin B.
2015-01-01
Incorporating the influence of soil structure and horizons into parameterizations of distributed surface water/groundwater models remains a challenge. Often, only a single soil unit is employed, and soil-hydraulic properties are assigned based on textural classification, without evaluating the potential impact of these simplifications. This study uses a distributed physics-based model to assess the influence of soil horizons and structure on effective parameterization. This paper tests the viability of two established and widely used hydrogeologic methods for simulating runoff and variably saturated flow through layered soils: (1) accounting for vertical heterogeneity by combining hydrostratigraphic units with contrasting hydraulic properties into homogeneous, anisotropic units and (2) use of established pedotransfer functions based on soil texture alone to estimate water retention and conductivity, without accounting for the influence of pedon structures and hysteresis. The viability of this latter method for capturing the seasonal transition from runoff-dominated to evapotranspiration-dominated regimes is also tested here. For cases tested here, event-based simulations using simplified vertical heterogeneity did not capture the state-dependent anisotropy and complex combinations of runoff generation mechanisms resulting from permeability contrasts in layered hillslopes with complex topography. Continuous simulations using pedotransfer functions that do not account for the influence of soil structure and hysteresis generally over-predicted runoff, leading to propagation of substantial water balance errors. Analysis suggests that identifying a dominant hydropedological unit provides the most acceptable simplification of subsurface layering and that modified pedotransfer functions with steeper soil-water retention curves might adequately capture the influence of soil structure and hysteresis on hydrologic response in headwater catchments.
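The pedotransfer functions discussed here typically supply parameters for a van Genuchten water-retention curve; the "steeper soil-water retention curves" suggested above correspond to a larger shape parameter n. A minimal sketch of the textbook form, with invented loam-like parameter values:

```python
def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of pressure head h (negative in
    unsaturated soil), van Genuchten (1980) form with m = 1 - 1/n.
    Parameter values passed below are illustrative, not fitted to any soil."""
    if h >= 0.0:  # saturated
        return theta_s
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** -m  # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Invented parameters: theta_r=0.05, theta_s=0.45, alpha=2 m^-1, n=1.6
for h in (0.0, -0.5, -5.0, -50.0):
    print(h, van_genuchten(h, 0.05, 0.45, 2.0, 1.6))
```

Increasing n at fixed alpha sharpens the transition from near-saturation to residual water content, which is one way a modified pedotransfer function could mimic the influence of soil structure on drainage.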
Short‐term time step convergence in a climate model
Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane
2015-01-01
This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities. PMID:27660669
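The convergence rate quoted (0.4 versus an expected 1.0) is the slope of solution error against time step on a log-log plot. A self-contained illustration of the diagnostic, using forward Euler (which should converge at first order) on a linear ODE rather than an atmospheric model:

```python
import math
import numpy as np

def euler_error(dt):
    """Integrate y' = -y, y(0) = 1 to t = 1 with forward Euler and return
    the absolute error against the exact solution exp(-1)."""
    steps = round(1.0 / dt)
    y = 1.0
    for _ in range(steps):
        y += dt * (-y)
    return abs(y - math.exp(-1.0))

dts = np.array([0.1, 0.05, 0.025, 0.0125])
errs = np.array([euler_error(dt) for dt in dts])
rate = np.polyfit(np.log(dts), np.log(errs), 1)[0]  # slope = observed order
print(rate)  # close to 1.0 for a first-order scheme
```

A full-physics model whose observed slope is 0.4 rather than 1.0 is, by this diagnostic, dominated by a component (here, the stratiform cloud schemes) whose time-stepping error does not shrink linearly with the step.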
New Concepts for Refinement of Cumulus Parameterization in GCMs: The Arakawa-Schubert Framework
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.; Lau, William (Technical Monitor)
2002-01-01
Several state-of-the-art models, including the one employed in this study, use the Arakawa-Schubert framework for moist convection and the Sundqvist formulation of stratiform clouds for moist physics, in-cloud condensation, and precipitation. Despite a variety of cloud parameterization methodologies developed by several modelers, including the authors, most of the parameterized cloud models have similar deficiencies. These consist of: (a) not enough shallow clouds; (b) too many deep clouds; (c) several layers of clouds in a vertically discretized model as opposed to only a few levels of observed clouds; and (d) higher than normal incidence of double ITCZ (Inter-tropical Convergence Zone). Even after several upgrades, consisting of sophisticated cloud microphysics and sub-grid scale orographic precipitation, to the Data Assimilation Office (DAO)'s atmospheric model (called GEOS-2 GCM) at two different resolutions, we found that the above deficiencies remained persistent. The two empirical solutions often used to counter the aforementioned deficiencies consist of (a) diffusion of moisture and heat within the lower troposphere to artificially force shallow clouds, and (b) arbitrarily invoked evaporation of in-cloud water for low-level clouds. Even though helpful, these implementations lack a strong physical rationale. Our research shows that two missing physical conditions can ameliorate the aforementioned cloud-parameterization deficiencies. First, requiring an ascending cloud airmass to be saturated at its starting point will not only make the cloud instantly buoyant all through its ascent, but also provide the essential work function (buoyancy energy) that would promote more shallow clouds. Second, we argue that entraining clouds that are unstable to a finite vertical displacement, even if neutrally buoyant in their ambient environment, must continue to rise and entrain, causing evaporation of in-cloud water. 
These concepts have not been invoked in any of the cloud parameterization schemes so far. We introduced them into the DAO-GEOS-2 GCM with McRAS (Microphysics of Clouds with Relaxed Arakawa-Schubert Scheme).
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing, as it deals with a distribution of variables in subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among those, the wavelet is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that the subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
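The proper orthogonal decomposition mentioned above is, in practice, a singular value decomposition of the snapshot matrix, with the leading modes capturing the most variance. A generic sketch on synthetic data (not tied to any particular atmospheric model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: rows are time snapshots, columns are spatial points.
# Construct data dominated by two coherent spatial patterns plus noise.
x = np.linspace(0.0, 2.0 * np.pi, 64)
t = np.linspace(0.0, 10.0, 100)
data = (np.outer(np.sin(t), np.sin(x))                   # dominant mode
        + 0.3 * np.outer(np.cos(2 * t), np.sin(2 * x))   # weaker mode
        + 0.01 * rng.standard_normal((100, 64)))         # noise

# POD/EOF: SVD of the mean-removed snapshot matrix
anomalies = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
energy = s**2 / np.sum(s**2)    # fraction of variance carried by each mode
print(energy[:4])               # the first two modes carry nearly all variance
```

Truncating to the leading modes and projecting the governing equations onto them is exactly the low-dimensional dynamical system construction of Aubry et al; replacing this empirical basis with wavelets or segmentally constant functions recovers the other decompositions discussed in the review.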
Physical initialization using SSM/I rain rates
NASA Technical Reports Server (NTRS)
Krishnamurti, T. N.; Bedi, H. S.; Ingles, Kevin
1993-01-01
Following our recent study on physical initialization for tropical prediction using rain rates based on outgoing long-wave radiation, the present study demonstrates a major improvement from the use of microwave radiance-based rain rates. A rain rate algorithm is used on the data from a special sensor microwave instrument (SSM/I). The initialization, as before, uses a reverse surface similarity theory, a reverse cumulus parameterization algorithm, and a bisection method to minimize the difference between satellite-based and the model-based outgoing long-wave radiation. These are invoked within a preforecast Newtonian relaxation phase of the initialization. These tests are carried out with a high-resolution global spectral model. The impact of the initialization on forecast is tested for a complex triple typhoon scenario over the Western Pacific Ocean during September 1987. A major impact from the inclusion of the SSM/I is demonstrated. Also addressed are the spin-up issues related to the typhoon structure and the improved water budget from the physical initialization.
A test harness for accelerating physics parameterization advancements into operations
NASA Astrophysics Data System (ADS)
Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.
2017-12-01
The process of transitioning advances in the parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, limited availability of HPC resources, and the "tuning problem," whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to the other schemes within a suite. To address this shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity, from single-column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose the level of complexity at which to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily evolve to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise grants access to the testbed tools and removes a technical hurdle to potential inclusion in the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development, as well as among multiple developments.
GMTB staff have demonstrated use of the testbed through a comparison between the 2017 operational GFS suite and one containing the Grell-Freitas convective parameterization. An overview of the physics test harness and its early use will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Sarah
2015-12-01
The dual objectives of this project were improving our basic understanding of processes that control cirrus microphysical properties and improvement of the representation of these processes in the parameterizations. A major effort in the proposed research was to integrate, calibrate, and better understand the uncertainties in all of these measurements.
A new parameterization of the post-fire snow albedo effect
NASA Astrophysics Data System (ADS)
Gleason, K. E.; Nolin, A. W.
2013-12-01
Mountain snowpack serves as an important natural reservoir of water: recharging aquifers, sustaining streams, and providing important ecosystem services. Reduced snowpacks and earlier snowmelt have been shown to affect fire size, frequency, and severity in the western United States. In turn, wildfire disturbance affects patterns of snow accumulation and ablation by reducing canopy interception, increasing turbulent fluxes, and modifying the surface radiation balance. Recent work shows that after a high-severity forest fire, approximately 60% more solar radiation reaches the snow surface due to the reduction in canopy density. Also, significant amounts of pyrogenic carbon particles and larger burned woody debris (BWD) are shed from standing charred trees, which concentrate on the snowpack, darken its surface, and reduce snow albedo by 50% during ablation. Although the post-fire forest environment drives a substantial increase in net shortwave radiation at the snowpack surface, forcing earlier and more rapid melt, hydrologic models do not explicitly incorporate forest fire disturbance effects on snowpack dynamics. The objective of this study was to parameterize the post-fire snow albedo effect due to BWD deposition on snow to better represent forest fire disturbance in modeling of snow-dominated hydrologic regimes. Based on empirical results from winter experiments, in-situ snow monitoring, and remote sensing data from a recent forest fire in the Oregon High Cascades, we characterized the post-fire snow albedo effect, and developed a simple parameterization of snowpack albedo decay in the post-fire forest environment. We modified the recession coefficient in the algorithm: α = α0 + K exp(-nr), where α = snowpack albedo, α0 = minimum snowpack albedo (≈0.4), K = constant (≈0.44), n = number of days since the last major snowfall, and r = recession coefficient [Rohrer and Braun, 1994].
Our parameterization quantified BWD deposition and snow albedo decay rates and related these forest disturbance effects to radiative heating and snow melt rates. We validated our parameterization of the post-fire snow albedo effect at the plot scale using a physically-based, spatially-distributed snow accumulation and melt model, and in-situ eddy covariance and snow monitoring data. This research quantified wildfire impacts to snow dynamics in the Oregon High Cascades, and provided a new parameterization of post-fire drivers to changes in high elevation winter water storage.
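The albedo recession algorithm quoted above, α = α0 + K exp(-nr), can be sketched directly. The two values of r below are invented for illustration (the published work fits its own coefficients); the burned-forest case is represented simply as a faster recession.

```python
import numpy as np

# Sketch of the snow-albedo recession algorithm of Rohrer and Braun,
# alpha = alpha0 + K * exp(-n * r). The r values used here are
# hypothetical placeholders, not the study's fitted coefficients.

def snow_albedo(days_since_snowfall, r, alpha0=0.4, K=0.44):
    """Snowpack albedo n days after the last major snowfall."""
    n = np.asarray(days_since_snowfall, dtype=float)
    return alpha0 + K * np.exp(-n * r)

n = np.arange(0, 15)
unburned = snow_albedo(n, r=0.12)    # slower decay (illustrative)
post_fire = snow_albedo(n, r=0.35)   # faster decay after BWD deposition
print("day-10 albedo, unburned vs post-fire:",
      round(float(unburned[10]), 3), round(float(post_fire[10]), 3))
```

Fresh snow starts at α0 + K and relaxes toward the minimum albedo α0; a larger r moves the snowpack toward that dark limit sooner, which is the radiative-heating effect the parameterization targets.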
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiaoqing Wu; Xin-Zhong Liang; Sunwook Park
2007-01-23
The work supported by this ARM project lays a solid foundation for improving the parameterization of subgrid cloud-radiation interactions in the NCAR CCSM and in climate simulations. We have made significant use of CRM simulations and concurrent ARM observations to produce long-term, consistent cloud and radiative property datasets at the cloud scale (Wu et al. 2006, 2007). With these datasets, we have investigated the mesoscale enhancement of surface heat fluxes by cloud systems (Wu and Guimond 2006), quantified the effects of cloud horizontal inhomogeneity and vertical overlap on the domain-averaged radiative fluxes (Wu and Liang 2005), and subsequently validated and improved the physically-based mosaic treatment of subgrid cloud-radiation interactions (Liang and Wu 2005). We have implemented the mosaic treatment into the CCM3. The 5-year (1979-1983) AMIP-type simulation showed significant impacts of subgrid cloud-radiation interaction on the climate simulations (Wu and Liang 2005). We have actively participated in CRM intercomparisons that foster the identification and physical understanding of common errors in cloud-scale modeling (Xie et al. 2005; Xu et al. 2005; Grabowski et al. 2005).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, Kuo-Nan
2016-02-09
Under the support of the aforementioned DOE grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) development of an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions, and (2) a novel stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains, providing understanding of and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided the solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and the coupled mountain-mountain flux. "Exact" 3D Monte Carlo photon-tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer programs readily available in climate models. Subsequently, parameterizations of the deviations of the 3D from the PP results for the five flux components were carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for the flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme included in the WRF physics package. Incorporating this 3D parameterization, we conducted WRF and CCSM4 simulations to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transitions, as well as the interannual variability of snowmelt, cloud cover, and precipitation over the Western United States, as presented in the final report.
With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the computation of light absorption and scattering by complex and inhomogeneous particles, for application to aggregates and snow grains with external and internal mixing structures. We demonstrated that a small black carbon (BC) particle on the order of 1 μm internally mixed with snow grains can effectively reduce visible snow albedo by as much as 5-10%. Following this work, and within the context of DOE support, we have made two key accomplishments, presented in the attached final report.
Controllers, observers, and applications thereof
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)
2011-01-01
Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
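A central ingredient of the ADRC family described above is the extended state observer, whose extra state estimates the "total disturbance" acting on the plant. The sketch below is a minimal linear ESO for a second-order plant with the common bandwidth parameterization of the observer gains; it is an illustration of the concept, not the patented implementation, and the plant, gains, and time step are all invented.

```python
import numpy as np

# Minimal linear extended state observer (ESO) for y'' = f(t) + b0*u,
# with observer gains parameterized by a single bandwidth w:
# l = [3w, 3w^2, w^3]. State z3 estimates the total disturbance f.

def eso_step(z, y, u, dt, w=20.0, b0=1.0):
    z1, z2, z3 = z
    e = y - z1                              # output estimation error
    z1 += dt * (z2 + 3 * w * e)
    z2 += dt * (z3 + b0 * u + 3 * w**2 * e)
    z3 += dt * (w**3 * e)
    return np.array([z1, z2, z3])

# Simulate the plant with a constant unknown disturbance f = 2.0
# and watch the observer recover it (open loop, u = 0).
dt, f_true, b0 = 1e-3, 2.0, 1.0
y, ydot = 0.0, 0.0
z = np.zeros(3)
for _ in range(5000):
    u = 0.0
    y, ydot = y + dt * ydot, ydot + dt * (f_true + b0 * u)
    z = eso_step(z, y, u, dt)
print("estimated disturbance:", round(float(z[2]), 2))
```

The appeal of the bandwidth parameterization is exactly what the abstract emphasizes: tuning collapses to choosing one number, w, instead of three independent observer gains.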
Analysis of sensitivity to different parameterization schemes for a subtropical cyclone
NASA Astrophysics Data System (ADS)
Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.
2018-05-01
A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out over the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage there from strong winds and precipitation. The system began to develop on October 18, and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed with the WRF model to analyze the sensitivity of the development and intensification of the STC to its various parameterization schemes. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus scheme had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained once the STC was fully formed and all convective processes had stabilized. Furthermore, to identify the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space was performed. The combination of parameterizations including the Tiedtke cumulus scheme was again the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.
Betatron motion with coupling of horizontal and vertical degrees of freedom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebedev, V.A.; /Fermilab; Bogacz, S.A.
Presently, there are two parameterizations of linear x-y coupled motion most frequently used in accelerator physics: the Edwards-Teng and Mais-Ripken parameterizations. This article is devoted to an analysis of the close relationship between the two representations, adding clarity to their physical meaning. It also discusses the relationships among the eigenvectors, the beta-functions, the second-order moments, and the bilinear form representing the particle ellipsoid in 4D phase space. It then considers a further development of the Mais-Ripken parameterization in which the particle motion is described by 10 parameters: four beta-functions, four alpha-functions, and two betatron phase advances. In comparison with the Edwards-Teng parameterization, the chosen parameterization has the advantage that it works equally well for the analysis of coupled betatron motion in circular accelerators and in transfer lines. The relationships considered among second-order moments, eigenvectors, and beta-functions can be useful in interpreting tracking results and experimental data. As an example, the developed formalism is applied to the FNAL electron cooler and Derbenev's vertex-to-plane adapter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrows, Susannah M.; Ogunro, O.; Frossard, Amanda
2014-12-19
The presence of a large fraction of organic matter in primary sea spray aerosol (SSA) can strongly affect its cloud condensation nuclei activity and interactions with marine clouds. Global climate models require new parameterizations of the SSA composition in order to improve the representation of these processes. Existing proposals for such a parameterization use remotely-sensed chlorophyll-a concentrations as a proxy for the biogenic contribution to the aerosol. However, both observations and theoretical considerations suggest that existing relationships with chlorophyll-a, derived from observations at only a few locations, may not be representative for all ocean regions. We introduce a novel framework for parameterizing the fractionation of marine organic matter into SSA based on a competitive Langmuir adsorption equilibrium at bubble surfaces. Marine organic matter is partitioned into classes with differing molecular weights, surface excesses, and Langmuir adsorption parameters. The classes include a lipid-like mixture associated with labile dissolved organic carbon (DOC), a polysaccharide-like mixture associated primarily with semi-labile DOC, a protein-like mixture with concentrations intermediate between lipids and polysaccharides, a processed mixture associated with recalcitrant surface DOC, and a deep abyssal humic-like mixture. Box model calculations have been performed for several cases of organic adsorption to illustrate the underlying concepts. We then apply the framework to output from a global marine biogeochemistry model, by partitioning total dissolved organic carbon into several classes of macromolecule. Each class is represented by model compounds with physical and chemical properties based on existing laboratory data. This allows us to globally map the predicted organic mass fraction of the nascent submicron sea spray aerosol. 
Predicted relationships between chlorophyll-a and organic fraction are similar to existing empirical parameterizations, but can vary between biologically productive and non-productive regions, and seasonally within a given region. Major uncertainties include the bubble film thickness at bursting and the variability of organic surfactant activity in the ocean, which is poorly constrained. In addition, marine colloids and cooperative adsorption of polysaccharides may make important contributions to the aerosol, but are not included here. This organic fractionation framework is an initial step towards a closer linking of ocean biogeochemistry and aerosol chemical composition in Earth system models. Future work should focus on improving constraints on model parameters through new laboratory experiments or through empirical fitting to observed relationships in the real ocean and atmosphere, as well as on atmospheric implications of the variable composition of organic matter in sea spray.
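The competitive Langmuir equilibrium at the heart of the framework above has a compact closed form: the coverage of class i is theta_i = K_i*C_i / (1 + sum_j K_j*C_j). The class names, concentrations, and adsorption constants below are hypothetical placeholders, chosen only to show how a surface-active class can dominate the film despite a low bulk concentration.

```python
# Illustrative sketch of competitive Langmuir adsorption at a bubble
# surface. All class names and parameter values are invented.

def langmuir_coverage(conc, K):
    """Fractional surface coverage of each organic class."""
    denom = 1.0 + sum(K[c] * conc[c] for c in conc)
    return {c: K[c] * conc[c] / denom for c in conc}

# Bulk concentrations (arbitrary units) and adsorption constants.
conc = {"lipid": 0.05, "polysaccharide": 0.5, "protein": 0.2}
K = {"lipid": 50.0, "polysaccharide": 1.0, "protein": 5.0}

theta = langmuir_coverage(conc, K)
for c, th in sorted(theta.items(), key=lambda kv: -kv[1]):
    print(f"{c:15s} coverage fraction {th:.3f}")
```

With these numbers the lipid-like class, though ten times less concentrated than the polysaccharide-like class, takes the largest share of the surface film, which is the fractionation effect the parameterization encodes.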
NASA Astrophysics Data System (ADS)
Popova, E. E.; Coward, A. C.; Nurser, G. A.; de Cuevas, B.; Fasham, M. J. R.; Anderson, T. R.
2006-12-01
A global general circulation model coupled to a simple six-compartment ecosystem model is used to study the extent to which global variability in primary and export production can be realistically predicted on the basis of advanced parameterizations of upper mixed layer physics, without recourse to introducing extra complexity in model biology. The "K profile parameterization" (KPP) scheme employed, combined with 6-hourly external forcing, is able to capture short-term periodic and episodic events such as diurnal cycling and storm-induced deepening. The model realistically reproduces various features of global ecosystem dynamics that have been problematic in previous global modelling studies, using a single generic parameter set. The realistic simulation of deep convection in the North Atlantic, and lack of it in the North Pacific and Southern Oceans, leads to good predictions of chlorophyll and primary production in these contrasting areas. Realistic levels of primary production are predicted in the oligotrophic gyres due to high frequency external forcing of the upper mixed layer (accompanying paper Popova et al., 2006) and novel parameterizations of zooplankton excretion. Good agreement is shown between model and observations at various JGOFS time series sites: BATS, KERFIX, Papa and HOT. One exception is the northern North Atlantic where lower grazing rates are needed, perhaps related to the dominance of mesozooplankton there. The model is therefore not globally robust in the sense that additional parameterizations are needed to realistically simulate ecosystem dynamics in the North Atlantic. Nevertheless, the work emphasises the need to pay particular attention to the parameterization of mixed layer physics in global ocean ecosystem modelling as a prerequisite to increasing the complexity of ecosystem models.
Field-Scale Evaluation of Infiltration Parameters From Soil Texture for Hydrologic Analysis
NASA Astrophysics Data System (ADS)
Springer, Everett P.; Cundy, Terrance W.
1987-02-01
Recent interest in predicting soil hydraulic properties from simple physical properties such as texture has major implications in the parameterization of physically based models of surface runoff. This study was undertaken to (1) compare, on a field scale, soil hydraulic parameters predicted from texture to those derived from field measurements and (2) compare simulated overland flow response using these two parameter sets. The parameters for the Green-Ampt infiltration equation were obtained from field measurements and using texture-based predictors for two agricultural fields, which were mapped as single soil units. Results of the analyses were that (1) the mean and variance of the field-based parameters were not preserved by the texture-based estimates, (2) spatial and cross correlations between parameters were induced by the texture-based estimation procedures, (3) the overland flow simulations using texture-based parameters were significantly different than those from field-based parameters, and (4) simulations using field-measured hydraulic conductivities and texture-based storage parameters were very close to simulations using only field-based parameters.
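The Green-Ampt equation referenced above admits a compact numerical sketch: under ponded conditions, cumulative infiltration F satisfies the implicit relation K*t = F - psi*dtheta*ln(1 + F/(psi*dtheta)), solved here by fixed-point iteration. The parameter values are generic sandy-loam-like numbers for illustration, not the study's field or texture-based estimates.

```python
import math

# Green-Ampt cumulative infiltration under ponding, solved from the
# implicit relation K*t = F - s*ln(1 + F/s), with s = psi * dtheta.
# Fixed-point iteration converges because dg/dF = s/(s+F) < 1.

def green_ampt_F(K, psi, dtheta, t, iters=200):
    """Cumulative infiltration F (cm) after t hours of ponding."""
    s = psi * dtheta                   # suction-storage term (cm)
    F = max(K * t, 1e-9)               # initial guess
    for _ in range(iters):
        F = K * t + s * math.log(1.0 + F / s)
    return F

K, psi, dtheta = 0.65, 16.7, 0.34      # illustrative values (cm/h, cm, -)
t = 2.0                                # hours
F = green_ampt_F(K, psi, dtheta, t)
f = K * (1.0 + psi * dtheta / F)       # instantaneous infiltration rate
print(f"F({t} h) = {F:.2f} cm, rate = {f:.2f} cm/h")
```

The study's comparison amounts to feeding this same equation two different parameter sets, field-measured versus texture-predicted, and propagating the difference through an overland-flow model.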
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Na; Zhang, Peng; Kang, Wei
Multiscale simulations of fluids such as blood represent a major computational challenge: coupling the disparate spatiotemporal scales between the molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales to reproduce the viscous flow properties of human blood plasma, including density, pressure, viscosity, compressibility, and characteristic flow dynamics. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as counter-Poiseuille and Couette flows. This demonstrates that such a coarse-grained particle model can replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between the macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately.
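The Morse potential that the paper modifies has the standard pair form U(r) = De*[(1 - exp(-beta*(r - r0)))^2 - 1]. The sketch below shows that form and its force; the well depth, width, and equilibrium distance are illustrative values, not the fitted coarse-grained parameters.

```python
import math

# Standard Morse pair potential and its force. Parameter values here
# are placeholders, not the paper's fitted coarse-grained parameters.

def morse_potential(r, De=1.0, beta=2.0, r0=1.0):
    """Morse potential; minimum of -De at the equilibrium distance r0."""
    return De * ((1.0 - math.exp(-beta * (r - r0)))**2 - 1.0)

def morse_force(r, De=1.0, beta=2.0, r0=1.0):
    """F = -dU/dr; vanishes at r = r0, repulsive for r < r0."""
    e = math.exp(-beta * (r - r0))
    return -2.0 * De * beta * e * (1.0 - e)

print("U(r0) =", morse_potential(1.0))   # bottom of the well
print("F(r0) =", morse_force(1.0))       # zero force at equilibrium
```

The inverse-problem step described in the abstract then tunes parameters of this kind until bulk observables (density, viscosity, compressibility) of the particle ensemble match the target fluid.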
Haiduke, Roberto Luiz A; Bartlett, Rodney J
2018-05-14
Some of the exact conditions provided by the correlated orbital theory are employed to propose new non-empirical parameterizations for exchange-correlation functionals from Density Functional Theory (DFT). This reparameterization process is based on range-separated functionals with 100% exact exchange for long-range interelectronic interactions. The functionals developed here, CAM-QTP-02 and LC-QTP, show mitigated self-interaction error, correctly predict vertical ionization potentials as the negative of eigenvalues for occupied orbitals, and provide nice excitation energies, even for challenging charge-transfer excited states. Moreover, some improvements are observed for reaction barrier heights with respect to the other functionals belonging to the quantum theory project (QTP) family. Finally, the most important achievement of these new functionals is an excellent description of vertical electron affinities (EAs) of atoms and molecules as the negative of appropriate virtual orbital eigenvalues. In this case, the mean absolute deviations for EAs in molecules are smaller than 0.10 eV, showing that physical interpretation can indeed be ascribed to some unoccupied orbitals from DFT.
Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs.
NASA Astrophysics Data System (ADS)
Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Cheng, A.
2017-12-01
A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecast System (GFS) general circulation model. The approach, known as the Simplified Higher-Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity, and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation, and cloudiness. Unlike other similar methods, comparatively few new prognostic variables need to be introduced, making the technique computationally efficient. In the base version of SHOC the only new prognostic variable is SGS turbulent kinetic energy (TKE); the developmental version adds the variances of total water and moist static energy (MSE). SHOC is now incorporated into a version of the GFS that will become part of the NOAA Next Generation Global Prediction System, built around NOAA GFDL's FV3 dynamical core, the NOAA Environmental Modeling System (NEMS) coupled modeling infrastructure, and a set of novel physical parameterizations. Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary-layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or large-scale condensation/deposition; instead, SHOC provides these quantities. The radiative transfer parameterization uses the cloudiness computed by SHOC. An outstanding problem with the implementation of SHOC in the NCEP global models is excessively large high-level tropical cloudiness. Comparison of the moments of the SGS PDF diagnosed by SHOC with the moments calculated in a GigaLES simulation of a tropical deep convection case (GATE) shows that SHOC diagnoses overly narrow PDFs of total cloud water and MSE in areas of deep convective detrainment. 
A subsequent sensitivity study of SHOC's diagnosed cloud fraction (CF) to higher order input moments of the SGS PDF demonstrated that CF is improved if SHOC is provided with correct variances of total water and MSE. Consequently, SHOC was modified to include two new prognostic equations for variances of total water and MSE, and coupled with the Chikira-Sugiyama parameterization of deep convection to include effects of detrainment on the prognostic variances.
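The sensitivity described above, cloud fraction depending strongly on the diagnosed variance, can be illustrated with a toy single-Gaussian closure (SHOC itself uses a more elaborate joint PDF): cloud fraction is the probability that total water exceeds saturation under the assumed PDF.

```python
import math

# Toy PDF-based cloud-fraction diagnosis: assume total water qt is
# Gaussian with the given mean and variance, and define cloud fraction
# as P(qt > q_sat). A single-Gaussian stand-in, not SHOC's actual PDF.

def cloud_fraction(qt_mean, qt_var, q_sat):
    sigma = math.sqrt(qt_var)
    return 0.5 * math.erfc((q_sat - qt_mean) / (math.sqrt(2.0) * sigma))

q_sat = 8.0                                  # g/kg, assumed value
print(cloud_fraction(7.0, 1.0, q_sat))       # subsaturated mean, narrow PDF
print(cloud_fraction(7.0, 4.0, q_sat))       # same mean, wider PDF
print(cloud_fraction(8.0, 1.0, q_sat))       # mean at saturation -> 0.5
```

With the mean below saturation, widening the PDF raises the cloud fraction, which is why an underestimated total-water variance (the overly narrow PDFs noted in the abstract) translates directly into a cloud-fraction error.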
NASA Astrophysics Data System (ADS)
Mölg, Thomas; Cullen, Nicolas J.; Kaser, Georg
Broadband radiation schemes (parameterizations) are commonly used tools in glacier mass-balance modelling, but their performance at high altitude in the tropics has not been evaluated in detail. Here we take advantage of a high-quality 2 year record of global radiation (G) and incoming longwave radiation (L↓) measured on Kersten Glacier, Kilimanjaro, East Africa, at 5873 m a.s.l., to optimize parameterizations of G and L↓. We show that the two radiation terms can be related by an effective cloud-cover fraction neff, so G or L↓ can be modelled based on neff derived from measured L↓ or G, respectively. At neff = 1, G is reduced to 35% of clear-sky G, and L↓ increases by 45-65% (depending on altitude) relative to clear-sky L↓. Validation for a 1 year dataset of G and L↓ obtained at 4850 m on Glaciar Artesonraju, Peruvian Andes, yields a satisfactory performance of the radiation scheme. Whether this performance is acceptable for mass-balance studies of tropical glaciers is explored by applying the data from Glaciar Artesonraju to a physically based mass-balance model, which requires, among others, G and L↓ as forcing variables. Uncertainties in modelled mass balance introduced by the radiation parameterizations do not exceed those that can be caused by errors in the radiation measurements. Hence, this paper provides a tool for inclusion in spatially distributed mass-balance modelling of tropical glaciers and/or extension of radiation data when only G or L↓ is measured.
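The coupling described above, deriving an effective cloud-cover fraction n_eff from one measured radiation term and modelling the other from it, can be sketched with simple linear forms. The linear shape and the exact coefficients below are assumptions for illustration; the published scheme fits its own altitude-dependent relations (G reduced to 35% of clear-sky at n_eff = 1, L↓ enhanced by 45-65%).

```python
# Hypothetical sketch of relating G and L-down via an effective
# cloud-cover fraction n_eff. Linear forms and coefficients are
# illustrative assumptions, not the paper's fitted scheme.

def neff_from_G(G, G_clear, reduction_at_overcast=0.65):
    """Effective cloud cover from measured global radiation G."""
    n = (1.0 - G / G_clear) / reduction_at_overcast
    return min(max(n, 0.0), 1.0)           # clamp to [0, 1]

def longwave_from_neff(L_clear, neff, enhancement_at_overcast=0.55):
    """Incoming longwave modelled from n_eff."""
    return L_clear * (1.0 + enhancement_at_overcast * neff)

G_clear, L_clear = 1000.0, 220.0           # W m^-2, illustrative clear-sky
n = neff_from_G(650.0, G_clear)
print(f"n_eff = {n:.2f}, modelled L down = "
      f"{longwave_from_neff(L_clear, n):.0f} W m^-2")
```

The practical payoff noted in the abstract is data extension: on a glacier where only one of G or L↓ is measured, the other can be reconstructed through n_eff for mass-balance modelling.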
NASA Astrophysics Data System (ADS)
Ecker, Madeleine; Gerschler, Jochen B.; Vogel, Jan; Käbitz, Stefan; Hust, Friedrich; Dechent, Philipp; Sauer, Dirk Uwe
2012-10-01
Battery lifetime prognosis is a key requirement for the successful market introduction of electric and hybrid vehicles. This work aims at the development of a lifetime prediction approach based on an aging model for lithium-ion batteries. A multivariable analysis of a detailed series of accelerated lifetime experiments representing typical operating conditions in hybrid electric vehicles is presented. The impact of temperature and state of charge on impedance rise and capacity loss is quantified. The investigations are based on a high-power NMC/graphite lithium-ion battery with good cycle lifetime. The resulting mathematical functions are physically motivated by the underlying aging effects and are used for the parameterization of a semi-empirical aging model. An impedance-based electric-thermal model is coupled to the aging model to simulate the dynamic interaction between aging of the battery and its thermal and electric behavior. Based on these models, different drive cycles and management strategies can be analyzed with regard to their impact on lifetime, making this an important tool for vehicle designers and for the implementation of business models. A key contribution of the paper is the parameterization of the aging model by experimental data, whereas aging simulations in the literature usually lack a robust empirical foundation.
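Semi-empirical aging models of the kind described above often combine an Arrhenius temperature dependence with a square-root time law and a state-of-charge stress factor. The sketch below uses that generic structure; every coefficient is invented for illustration, since the study fits its own functions to its accelerated-aging data.

```python
import math

# Generic semi-empirical calendar-aging sketch: Arrhenius in temperature,
# linear stress in state of charge, sqrt-of-time fade. All coefficients
# are hypothetical, not the paper's fitted values.

def relative_capacity(t_days, T_kelvin, soc,
                      A=2.5e5, Ea=50000.0, R=8.314, b=0.2):
    """Remaining capacity fraction after t_days of storage."""
    stress = A * math.exp(-Ea / (R * T_kelvin)) * (1.0 + b * soc)
    return 1.0 - stress * math.sqrt(t_days)

# Higher storage temperature accelerates fade, as quantified in the study.
print(relative_capacity(365, 298.15, 0.5))   # one year at 25 C
print(relative_capacity(365, 318.15, 0.5))   # one year at 45 C
```

Coupling such a fade function to an electric-thermal model, as the paper does, lets a simulated drive cycle feed realistic temperature and SOC histories into the aging terms.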
Active Subspaces of Airfoil Shape Parameterizations
NASA Astrophysics Data System (ADS)
Grey, Zachary J.; Constantine, Paul G.
2018-05-01
Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
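The standard recipe for finding an active subspace is to estimate C = E[∇f ∇fᵀ] from sampled gradients and take its leading eigenvectors. The sketch below applies it to an invented quadratic test function that varies along a single hidden direction, a stand-in for a lift or drag coefficient over shape parameters.

```python
import numpy as np

# Active-subspace discovery on a toy objective f(x) = 0.5*(w.x)^2,
# which varies only along the hidden direction w. The eigenvectors of
# C = E[grad f grad f^T] should recover that direction.

rng = np.random.default_rng(1)
m = 10                                   # number of "shape parameters"
w = rng.standard_normal(m)
w /= np.linalg.norm(w)

def grad_f(x):
    return (w @ x) * w                   # gradient of 0.5*(w.x)^2

X = rng.uniform(-1.0, 1.0, size=(500, m))
grads = np.array([grad_f(x) for x in X])
C = grads.T @ grads / len(X)             # Monte Carlo estimate of C
eigvals, eigvecs = np.linalg.eigh(C)     # ascending eigenvalues

w_hat = eigvecs[:, -1]                   # dominant eigenvector
print("alignment |w . w_hat| =", round(abs(float(w @ w_hat)), 4))
```

In the airfoil setting the gradients come from (adjoint) CFD rather than a closed form, but the eigendecomposition step and the interpretation of the leading directions are the same.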
NASA Technical Reports Server (NTRS)
Entekhabi, D.; Eagleson, P. S.
1989-01-01
Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
Importance of parametrizing constraints in quantum-mechanical variational calculations
NASA Technical Reports Server (NTRS)
Chung, Kwong T.; Bhatia, A. K.
1992-01-01
In variational calculations of quantum mechanics, constraints are sometimes imposed explicitly on the wave function. These constraints, which are deduced by physical arguments, are often not uniquely defined. In this work, the advantage of parametrizing constraints and letting the variational principle determine the best possible constraint for the problem is pointed out. Examples are carried out to show the surprising effectiveness of the variational method if constraints are parameterized. It is also shown that misleading results may be obtained if a constraint is not parameterized.
NASA Technical Reports Server (NTRS)
Sud, Y.; Molod, A.
1988-01-01
The Goddard Laboratory for Atmospheres GCM is used to study the sensitivity of the simulated July circulation to modifications in the parameterization of dry and moist convection, evaporation from falling raindrops, and cloud-radiation interaction. It is shown that the Arakawa-Schubert (1974) cumulus parameterization and a more realistic dry convective mixing calculation yielded a better intertropical convergence zone over North Africa than the previous convection scheme. It is found that the physical mechanism for the improvement was the upward mixing of PBL moisture by vigorous dry convective mixing. A modified rain-evaporation parameterization which accounts for raindrop size distribution, the atmospheric relative humidity, and a typical spatial rainfall intensity distribution for convective rain was developed and implemented. This scheme led to major improvements in the monthly mean vertical profiles of relative humidity and temperature, convective and large-scale cloudiness, rainfall distributions, and mean relative humidity in the PBL.
NASA Astrophysics Data System (ADS)
Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter
2016-10-01
This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.
NASA Astrophysics Data System (ADS)
Soloviev, Alexander; Schluessel, Peter
The model presented contains interfacial, bubble-mediated, ocean mixed layer, and remote sensing components. The interfacial (direct) gas transfer dominates under conditions of low and—for quite soluble gases like CO2—moderate wind speeds. Due to the similarity between the gas and heat transfer, the temperature difference, ΔT, across the thermal molecular boundary layer (cool skin of the ocean) and the interfacial gas transfer coefficient, Kint, are presumably interrelated. A coupled parameterization for ΔT and Kint has been derived in the context of a surface renewal model [Soloviev and Schluessel, 1994]. In addition to the Schmidt, Sc, and Prandtl, Pr, numbers, the important parameters are the surface Richardson number, Rf0, and the Keulegan number, Ke. The more readily available cool skin data are used to determine the coefficients that enter into both parameterizations. At high wind speeds, the Ke-number dependence is further verified with the formula for transformation of the surface wind stress to form drag and white capping, which follows from the renewal model. A further extension of the renewal model includes effects of solar radiation and rainfall. The bubble-mediated component incorporates the Merlivat et al. [1993] parameterization with the empirical coefficients estimated by Asher and Wanninkhof [1998]. The oceanic mixed layer component accounts for stratification effects on the air-sea gas exchange. Based on the example of GasEx-98, we demonstrate how the results of parameterization and modeling of the air-sea gas exchange can be extended to the global scale, using remote sensing techniques.
Non-traditional Physics-based Inverse Approaches for Determining a Buried Object’s Location
2008-09-01
parameterization of its time-decay curve) in dipole models (Pasion and Oldenburg, 2001) or the amplitudes of responding magnetic sources in the NSMS… commonly in use. According to the simple dipole model (Pasion and Oldenburg, 2001), the secondary magnetic field due to the dipole m is B = (μ0 / 4πr³)[3r̂(r̂·m) − m]… Forum, St. Louis, MO. L. R. Pasion and D. W. Oldenburg (2001), "A discrimination algorithm for UXO using time domain electromagnetics," J. Environ…
Testing general relativity in space-borne and astronomical laboratories
NASA Technical Reports Server (NTRS)
Will, Clifford M.
1989-01-01
The current status of space-based experiments and astronomical observations designed to test the theory of general relativity is surveyed. Consideration is given to tests of post-Newtonian gravity, searches for feeble short-range forces and gravitomagnetism, improved measurements of parameterized post-Newtonian parameter values, explorations of post-Newtonian physics, tests of the Einstein equivalence principle, observational tests of post-Newtonian orbital effects, and efforts to detect quadrupole and dipole radiation damping. Recent numerical results are presented in tables.
NASA Astrophysics Data System (ADS)
Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur
2015-03-01
Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies with a specific aim at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments, hereafter called elements) and distributed (1 × 1 km2 grid) setup. We evaluated representation of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods of up to 0.84 and 0.86, respectively, and similarly for the log-transformed streamflow of up to 0.85 and 0.90. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or subgrid heterogeneities provided simulation performance equivalent to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from denser precipitation stations than what is required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in the identification of parameterizations based only on calibration to catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data.
In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.
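The Nash-Sutcliffe efficiency quoted above is the standard hydrological skill score, NSE = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))². A value of 1.0 is a perfect fit, and 0.0 means the simulation is no better than predicting the observed mean. The streamflow values below are made-up illustrations:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency of a simulated series vs observations."""
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var

obs = [2.0, 3.5, 5.0, 4.0, 2.5]          # hypothetical hourly flows
assert nse(obs, obs) == 1.0               # perfect simulation
mean_sim = [sum(obs) / len(obs)] * len(obs)
assert nse(obs, mean_sim) == 0.0          # mean-flow benchmark
```

Applying `nse` to log-transformed flows, as in the abstract, weights low-flow periods more heavily than the raw-flow score does.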
Performance Assessment of New Land-Surface and Planetary Boundary Layer Physics in the WRF-ARW
The Pleim-Xiu land surface model, Pleim surface layer scheme, and Asymmetric Convective Model (version 2) are now options in version 3.0 of the Weather Research and Forecasting model (WRF) Advanced Research WRF (ARW) core. These physics parameterizations were developed for the f...
Stochastic and Historical Resonances of the Unit in Physics and Psychometrics
ERIC Educational Resources Information Center
Fisher, William P., Jr.
2011-01-01
Humphry's article, "The Role of the Unit in Physics and Psychometrics," offers fundamental clarifications of measurement concepts that Fisher hopes will find a wide audience. In particular, parameterizing discrimination while preserving statistical sufficiency will indeed provide greater flexibility in accounting "for the effects of empirical…
NASA Astrophysics Data System (ADS)
Vihma, T.; Pirazzini, R.; Fer, I.; Renfrew, I. A.; Sedlar, J.; Tjernström, M.; Lüpkes, C.; Nygård, T.; Notz, D.; Weiss, J.; Marsan, D.; Cheng, B.; Birnbaum, G.; Gerland, S.; Chechin, D.; Gascard, J. C.
2014-09-01
The Arctic climate system includes numerous highly interactive small-scale physical processes in the atmosphere, sea ice, and ocean. During and since the International Polar Year 2007-2009, significant advances have been made in understanding these processes. Here, these recent advances are reviewed, synthesized, and discussed. In atmospheric physics, the primary advances have been in cloud physics, radiative transfer, mesoscale cyclones, coastal, and fjordic processes as well as in boundary layer processes and surface fluxes. In sea ice and its snow cover, advances have been made in understanding of the surface albedo and its relationships with snow properties, the internal structure of sea ice, the heat and salt transfer in ice, the formation of superimposed ice and snow ice, and the small-scale dynamics of sea ice. For the ocean, significant advances have been related to exchange processes at the ice-ocean interface, diapycnal mixing, double-diffusive convection, tidal currents and diurnal resonance. Despite this recent progress, some of these small-scale physical processes are still not sufficiently understood: these include wave-turbulence interactions in the atmosphere and ocean, the exchange of heat and salt at the ice-ocean interface, and the mechanical weakening of sea ice. Many other processes are reasonably well understood as stand-alone processes but the challenge is to understand their interactions with and impacts and feedbacks on other processes. Uncertainty in the parameterization of small-scale processes continues to be among the greatest challenges facing climate modelling, particularly in high latitudes. Further improvements in parameterization require new year-round field campaigns on the Arctic sea ice, closely combined with satellite remote sensing studies and numerical model experiments.
NASA Astrophysics Data System (ADS)
Vihma, T.; Pirazzini, R.; Renfrew, I. A.; Sedlar, J.; Tjernström, M.; Nygård, T.; Fer, I.; Lüpkes, C.; Notz, D.; Weiss, J.; Marsan, D.; Cheng, B.; Birnbaum, G.; Gerland, S.; Chechin, D.; Gascard, J. C.
2013-12-01
The Arctic climate system includes numerous highly interactive small-scale physical processes in the atmosphere, sea ice, and ocean. During and since the International Polar Year 2007-2008, significant advances have been made in understanding these processes. Here these advances are reviewed, synthesized and discussed. In atmospheric physics, the primary advances have been in cloud physics, radiative transfer, mesoscale cyclones, coastal and fjordic processes, as well as in boundary-layer processes and surface fluxes. In sea ice and its snow cover, advances have been made in understanding of the surface albedo and its relationships with snow properties, the internal structure of sea ice, the heat and salt transfer in ice, the formation of super-imposed ice and snow ice, and the small-scale dynamics of sea ice. In the ocean, significant advances have been related to exchange processes at the ice-ocean interface, diapycnal mixing, tidal currents and diurnal resonance. Despite this recent progress, some of these small-scale physical processes are still not sufficiently understood: these include wave-turbulence interactions in the atmosphere and ocean, the exchange of heat and salt at the ice-ocean interface, and the mechanical weakening of sea ice. Many other processes are reasonably well understood as stand-alone processes but the challenge is to understand their interactions with, and impacts and feedbacks on, other processes. Uncertainty in the parameterization of small-scale processes continues to be among the largest challenges facing climate modeling, and nowhere is this more true than in the Arctic. Further improvements in parameterization require new year-round field campaigns on the Arctic sea ice, closely combined with satellite remote sensing studies and numerical model experiments.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and first-order effects dominate compared to the interaction effects.
Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
NASA Astrophysics Data System (ADS)
Zhang, G.; Chen, F.; Gan, Y.
2017-12-01
Assessing and mitigating uncertainties in the Noah-MP land-model simulations over the Tibet Plateau region. Uncertainties in the Noah with multiparameterization (Noah-MP) land surface model were assessed through physics ensemble simulations for four sparsely vegetated sites located in the Tibetan Plateau region. Those simulations were evaluated using observations at the four sites during the third Tibetan Plateau Experiment (TIPEX III). The impacts of uncertainties in precipitation data used as forcing conditions and in parameterizations of sub-processes such as soil organic matter and the rhizosphere on physics-ensemble simulations are identified using two different methods: natural selection and Tukey's test. This study attempts to answer the following questions: (1) what is the relative contribution of precipitation-forcing uncertainty to the overall uncertainty range of Noah-MP simulations at those sites as compared to that at a more moist and densely vegetated site; (2) what are the most sensitive physical parameterizations for those sites; and (3) can we identify the parameterizations that need to be improved? The investigation was conducted by evaluating the simulated seasonal evolution of soil temperature, soil moisture, and surface heat fluxes through a number of Noah-MP ensemble simulations.
NASA Astrophysics Data System (ADS)
Reed, K. A.; Jablonowski, C.
2011-02-01
This paper explores the impact of the physical parameterization suite on the evolution of an idealized tropical cyclone within the National Center for Atmospheric Research's (NCAR) Community Atmosphere Model (CAM). The CAM versions 3.1 and 4 are used to study the development of an initially weak vortex in an idealized environment over a 10-day simulation period within an aqua-planet setup. The main distinction between CAM 3.1 and CAM 4 lies within the physical parameterization of deep convection. CAM 4 now includes a dilute plume Convective Available Potential Energy (CAPE) calculation and Convective Momentum Transport (CMT). The finite-volume dynamical core with 26 vertical levels in aqua-planet mode is used at horizontal grid spacings of 1.0°, 0.5° and 0.25°. It is revealed that CAM 4 produces stronger and larger tropical cyclones by day 10 at all resolutions, with a much earlier onset of intensification when compared to CAM 3.1. At the highest resolution, CAM 4 also exhibits changes in the storm's vertical structure, such as an increased outward slope of the wind contours with height, when compared to CAM 3.1. An investigation concludes that the new dilute CAPE calculation in CAM 4 is largely responsible for the changes observed in the development, strength and structure of the tropical cyclone.
Merged data models for multi-parameterized querying: Spectral data base meets GIS-based map archive
NASA Astrophysics Data System (ADS)
Naß, A.; D'Amore, M.; Helbert, J.
2017-09-01
Current and upcoming planetary missions deliver a huge amount of different data (remote sensing data, in-situ data, and derived products). In this contribution we present how different databases can be managed and merged to enable multi-parameterized querying based on a common spatial context.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Heng; Gustafson, Jr., William I.; Hagos, Samson M.
2015-04-18
In this study, to better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution-dependence of subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km².
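The coarsening step described above amounts to block-averaging high-resolution CRM fields onto larger subdomains. The grid sizes below are illustrative (a 256 × 256 field, read as 1 km spacing, coarsened onto 8 km subdomains), and the random field stands in for an actual CRM output variable:

```python
import numpy as np

def coarsen(field, block):
    """Block-average a 2-D field over block x block tiles.

    The field's dimensions must divide evenly by the block size.
    """
    ny, nx = field.shape
    assert ny % block == 0 and nx % block == 0
    return field.reshape(ny // block, block,
                         nx // block, block).mean(axis=(1, 3))

crm = np.random.default_rng(1).standard_normal((256, 256))
coarse = coarsen(crm, 8)   # -> 32 x 32 grid of subdomain means
```

Running the same field through increasingly large blocks (8, 16, ..., 256) yields the hierarchy of grid-scale inputs needed to diagnose how the parameterized transport depends on resolution.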
NASA Astrophysics Data System (ADS)
Toepfer, F.; Cortinas, J. V., Jr.; Kuo, W.; Tallapragada, V.; Stajner, I.; Nance, L. B.; Kelleher, K. E.; Firl, G.; Bernardet, L.
2017-12-01
NOAA develops, operates, and maintains an operational global modeling capability for weather, subseasonal, and seasonal prediction for the protection of life and property and the fostering of the US economy. In order to substantially improve the overall performance and accelerate advancements of the operational modeling suite, NOAA is partnering with NCAR to design and build the Global Modeling Test Bed (GMTB). The GMTB has been established to provide a platform and a capability for researchers to contribute to this advancement, primarily through the development of physical parameterizations needed to improve operational NWP. The strategy to achieve this goal relies on effectively leveraging global expertise through a modern collaborative software development framework. This framework consists of a repository of vetted and supported physical parameterizations known as the Common Community Physics Package (CCPP), a common well-documented interface known as the Interoperable Physics Driver (IPD) for combining schemes into suites and for their configuration and connection to dynamic cores, and an open evidence-based governance process for managing the development and evolution of CCPP. In addition, a physics test harness designed to work within this framework has been established in order to facilitate easier like-to-like comparison of physics advancements. This paper will present an overview of the design of the CCPP and test platform. Additionally, an overview of potential new opportunities for how physics developers can engage in the process, from implementing code for CCPP/IPD compliance to testing their development within an operational-like software environment, will be presented. In addition, insight will be given as to how development gets elevated to CCPP-supported status, the precursor to broad availability and use within operational NWP. An overview of how the GMTB can be expanded to support other global or regional modeling capabilities will also be presented.
Antarctic sub-shelf melt rates via PICO
NASA Astrophysics Data System (ADS)
Reese, Ronja; Albrecht, Torsten; Mengel, Matthias; Asay-Davis, Xylar; Winkelmann, Ricarda
2018-06-01
Ocean-induced melting below ice shelves is one of the dominant drivers for mass loss from the Antarctic Ice Sheet at present. An appropriate representation of sub-shelf melt rates is therefore essential for model simulations of marine-based ice sheet evolution. Continental-scale ice sheet models often rely on simple melt parameterizations, in particular for long-term simulations, when fully coupled ice-ocean interaction becomes computationally too expensive. Such parameterizations can account for the influence of the local depth of the ice-shelf draft or its slope on melting. However, they do not capture the effect of ocean circulation underneath the ice shelf. Here we present the Potsdam Ice-shelf Cavity mOdel (PICO), which simulates the vertical overturning circulation in ice-shelf cavities and thus enables the computation of sub-shelf melt rates consistent with this circulation. PICO is based on an ocean box model that coarsely resolves ice shelf cavities and uses a boundary layer melt formulation. We implement it as a module of the Parallel Ice Sheet Model (PISM) and evaluate its performance under present-day conditions of the Southern Ocean. We identify a set of parameters that yield two-dimensional melt rate fields that qualitatively reproduce the typical pattern of comparably high melting near the grounding line and lower melting or refreezing towards the calving front. PICO captures the wide range of melt rates observed for Antarctic ice shelves, from an average of about 0.1 m a-1 for cold sub-shelf cavities, for example underneath the Ross or Ronne ice shelves, to 16 m a-1 for warm cavities such as in the Amundsen Sea region. This makes PICO a computationally feasible and more physical alternative to melt parameterizations purely based on ice draft geometry.
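For contrast with PICO, the "ice draft geometry only" parameterizations mentioned above can be sketched in a few lines: melt is taken proportional to the thermal driving, i.e. the difference between the ambient ocean temperature and the local, pressure-dependent freezing point at the ice draft. The linearized freezing-point coefficients are of the commonly used magnitude, but the exchange coefficient `gamma` and the exact constants here are illustrative, not PICO's:

```python
def freezing_point(depth_m, salinity_psu=34.5):
    """Linearized in-situ freezing point (deg C) at the ice draft.

    Illustrative coefficients for T_f = a*S + b + c*depth; the
    freezing point decreases with depth (pressure).
    """
    return -0.0572 * salinity_psu + 0.0788 - 7.61e-4 * depth_m

def melt_rate(depth_m, t_ocean_c, gamma=10.0):
    """Sub-shelf melt rate (m/yr) from thermal driving alone.

    gamma is a hypothetical heat-exchange coefficient; no refreezing
    branch is modeled here (negative driving is clipped to zero).
    """
    thermal_driving = t_ocean_c - freezing_point(depth_m)
    return gamma * max(thermal_driving, 0.0)
```

Because the freezing point drops with depth, such schemes melt more near deep grounding lines, but they cannot represent the cavity-wide overturning circulation that PICO's box model adds.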
Structure and Dynamics of the Quasi-Biennial Oscillation in MERRA-2.
Coy, Lawrence; Wargan, Krzysztof; Molod, Andrea M; McCarty, William R; Pawson, Steven
2016-07-01
The structure, dynamics, and ozone signal of the Quasi-Biennial Oscillation produced by the 35-year NASA MERRA-2 (Modern-Era Retrospective Analysis for Research and Applications) reanalysis are examined based on monthly mean output. Along with the analysis of the QBO in assimilation winds and ozone, the QBO forcings created by assimilated observations, dynamics, parameterized gravity wave drag, and ozone chemistry parameterization are examined and compared with the original MERRA system. Results show that the MERRA-2 reanalysis produces a realistic QBO in the zonal winds, mean meridional circulation, and ozone over the 1980-2015 time period. In particular, the MERRA-2 zonal winds show improved representation of the QBO 50 hPa westerly phase amplitude at Singapore when compared to MERRA. The use of limb ozone observations creates improved vertical structure and realistic downward propagation of the ozone QBO signal during times when the MLS ozone limb observations are available (October 2004 to present). The increased equatorial GWD in MERRA-2 has reduced the zonal wind data analysis contribution compared to MERRA so that the QBO mean meridional circulation can be expected to be more physically forced and therefore more physically consistent. This can be important for applications in which MERRA-2 winds are used to drive transport experiments.
Structure and Dynamics of the Quasi-Biennial Oscillation in MERRA-2
Coy, Lawrence; Wargan, Krzysztof; Molod, Andrea M.; McCarty, William R.; Pawson, Steven
2018-01-01
The structure, dynamics, and ozone signal of the Quasi-Biennial Oscillation produced by the 35-year NASA MERRA-2 (Modern-Era Retrospective Analysis for Research and Applications) reanalysis are examined based on monthly mean output. Along with the analysis of the QBO in assimilation winds and ozone, the QBO forcings created by assimilated observations, dynamics, parameterized gravity wave drag, and ozone chemistry parameterization are examined and compared with the original MERRA system. Results show that the MERRA-2 reanalysis produces a realistic QBO in the zonal winds, mean meridional circulation, and ozone over the 1980–2015 time period. In particular, the MERRA-2 zonal winds show improved representation of the QBO 50 hPa westerly phase amplitude at Singapore when compared to MERRA. The use of limb ozone observations creates improved vertical structure and realistic downward propagation of the ozone QBO signal during times when the MLS ozone limb observations are available (October 2004 to present). The increased equatorial GWD in MERRA-2 has reduced the zonal wind data analysis contribution compared to MERRA so that the QBO mean meridional circulation can be expected to be more physically forced and therefore more physically consistent. This can be important for applications in which MERRA-2 winds are used to drive transport experiments. PMID:29551854
NASA Technical Reports Server (NTRS)
Mocko, David M.; Sud, Y. C.
2000-01-01
Refinements to the snow-physics scheme of SSiB (Simplified Simple Biosphere Model) are described and evaluated. The upgrades include a partial redesign of the conceptual architecture to better simulate the diurnal temperature of the snow surface. For a deep snowpack, there are two separate prognostic temperature snow layers - the top layer responds to diurnal fluctuations in the surface forcing, while the deep layer exhibits a slowly varying response. In addition, the use of a very deep soil temperature and a treatment of snow aging with its influence on snow density is parameterized and evaluated. The upgraded snow scheme produces better timing of snow melt in GSWP-style simulations using ISLSCP Initiative I data for 1987-1988 in the Russian Wheat Belt region. To simulate more realistic runoff in regions with high orographic variability, additional improvements are made to SSiB's soil hydrology. These improvements include an orography-based surface runoff scheme as well as interaction with a water table below SSiB's three soil layers. The addition of these parameterizations further helps to simulate more realistic runoff and accompanying prognostic soil moisture fields in the GSWP-style simulations. In intercomparisons of the performance of the new snow-physics SSiB with its earlier versions using an 18-year single-site dataset from Valdai, Russia, the version of SSiB described in this paper again produces the earliest onset of snow melt. Soil moisture and deep soil temperatures also compare favorably with observations.
NASA Astrophysics Data System (ADS)
Neggers, Roel
2016-04-01
Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolated mode from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models.
This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.
Cosmological applications of Padé approximant
NASA Astrophysics Data System (ADS)
Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan
2014-01-01
In mathematics, a function can often be approximated well by a Padé approximant, the best approximation of the function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two applications. First, we obtain an analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from both the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. Through these applications, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
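The advantage over a truncated Taylor series is easy to demonstrate. The sketch below is a generic illustration using ln(1+x), not the paper's luminosity-distance formula: it builds a [3/2] Padé approximant from Taylor coefficients and evaluates both approximations outside the series' radius of convergence.

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of ln(1+x) about x = 0: 0, 1, -1/2, 1/3, -1/4, 1/5
coeffs = [0.0, 1.0, -0.5, 1.0 / 3.0, -0.25, 0.2]

# [3/2] Pade approximant: p and q are numpy.poly1d (numerator, denominator)
p, q = pade(coeffs, 2)

def taylor(x):
    """Evaluate the truncated Taylor series directly."""
    return sum(c * x**k for k, c in enumerate(coeffs))

x = 2.0  # outside the Taylor series' radius of convergence (|x| < 1)
exact = np.log(1.0 + x)
print(abs(taylor(x) - exact))    # large: the truncated series diverges here
print(abs(p(x) / q(x) - exact))  # small: the rational approximant still works
```

At x = 2 the truncated series is off by several units while the rational approximant is accurate to about 0.3%, which is the behavior the abstract exploits for the luminosity distance.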
Brain Surface Conformal Parameterization Using Riemann Surface Structure
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung
2011-01-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
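The computational payoff mentioned above, that PDEs become straightforward on the flat parameter domain, can be illustrated with a minimal Dirichlet solve on a rectangular patch. This is a generic Jacobi iteration assumed for illustration, not the authors' implementation.

```python
import numpy as np

def solve_laplace(boundary, n_iter=5000):
    """Jacobi iteration for the Laplace equation with Dirichlet boundary
    values on a rectangular parameter domain. The boundary entries of the
    input array are held fixed; only interior values are updated."""
    u = boundary.astype(float).copy()
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u

# Unit square with u = 1 on the left edge, u = 0 elsewhere on the boundary
grid = np.zeros((21, 21))
grid[:, 0] = 1.0
u = solve_laplace(grid)
print(u[10, 10])  # center value, close to 0.25 by symmetry
```

For a square held at 1 on one edge and 0 on the others, summing the four rotated problems gives the all-ones solution, so the center value is 1/4, a handy convergence check.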
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.
2017-07-01
The diversity in hydrologic models has historically led to great controversy on the correct
approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
A one-dimensional interactive soil-atmosphere model for testing formulations of surface hydrology
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Eagleson, Peter S.
1990-01-01
A model representing a soil-atmosphere column in a GCM is developed for off-line testing of GCM soil hydrology parameterizations. Repeating three representative GCM sensitivity experiments with this one-dimensional model demonstrates that, to first order, the model reproduces a GCM's sensitivity to imposed changes in parameterization and therefore captures the essential physics of the GCM. The experiments also show that by allowing feedback between the soil and atmosphere, the model improves on off-line tests that rely on prescribed precipitation, radiation, and other surface forcing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marjanovic, Nikola; Mirocha, Jeffrey D.; Kosović, Branko
A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near-wake physics, including vorticity shedding and wake expansion.
Impact of Physics Parameterization Ordering in a Global Atmosphere Model
Donahue, Aaron S.; Caldwell, Peter M.
2018-02-02
Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a big impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role on the model solution.
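The cited noncommutativity of sequential splitting is easy to reproduce in miniature. The toy below uses hypothetical relaxation "processes" with made-up rates and equilibria, not E3SM physics: each process relaxes a temperature toward a different target, and swapping the call order changes the steady state the toy model settles into.

```python
def step(T, order, dt=0.1):
    """One sequentially split time step: each process sees the state
    already updated by the processes called before it."""
    processes = {
        "radiation":  lambda T: T + dt * (-0.5) * (T - 200.0),  # relax toward 200 K
        "convection": lambda T: T + dt * (-1.0) * (T - 300.0),  # relax toward 300 K
    }
    for name in order:
        T = processes[name](T)
    return T

def run(order, T=250.0, nsteps=200):
    """Integrate to (near) steady state with a fixed process ordering."""
    for _ in range(nsteps):
        T = step(T, order)
    return T

print(run(["radiation", "convection"]))  # ~269.0 K
print(run(["convection", "radiation"]))  # ~265.5 K
```

The two orderings settle roughly 3.4 K apart even though the "physics" is identical, mirroring the paper's finding that call order alone shifts the simulated climate.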
Clein, Joy S.; McGuire, A.D.; Zhang, X.; Kicklighter, D.W.; Melillo, J.M.; Wofsy, S.C.; Jarvis, P.G.; Massheder, J.M.
2002-01-01
The role of carbon (C) and nitrogen (N) interactions in the sequestration of atmospheric CO2 in black spruce ecosystems across North America was evaluated with the Terrestrial Ecosystem Model (TEM) by applying parameterizations of the model in which C-N dynamics were either coupled or uncoupled. First, the performance of the parameterizations, which were developed for the dynamics of black spruce ecosystems at the Bonanza Creek Long-Term Ecological Research site in Alaska, was evaluated by simulating C dynamics at eddy correlation tower sites in the Boreal Ecosystem Atmosphere Study (BOREAS) for black spruce ecosystems in the northern study area (northern site) and the southern study area (southern site) with local climate data. We compared simulated monthly growing season (May to September) estimates of gross primary production (GPP), total ecosystem respiration (RESP), and net ecosystem production (NEP) from 1994 to 1997 to available field-based estimates at both sites. At the northern site, monthly growing season estimates of GPP and RESP for the coupled and uncoupled simulations were highly correlated with the field-based estimates (coupled: R2 = 0.77, 0.88 for GPP and RESP; uncoupled: R2 = 0.67, 0.92 for GPP and RESP). Although the simulated seasonal pattern of NEP generally matched the field-based data, the correlations between field-based and simulated monthly growing season NEP were lower (R2 = 0.40, 0.00 for the coupled and uncoupled simulations, respectively) than the correlations for GPP and RESP. The annual NEP simulated by the coupled parameterization fell within the uncertainty of field-based estimates in two of three years. On the other hand, annual NEP simulated by the uncoupled parameterization fell within the field-based uncertainty in only one of three years.
At the southern site, simulated NEP generally matched field-based NEP estimates, and the correlation between monthly growing season field-based and simulated NEP (R2 = 0.36, 0.20 for coupled and uncoupled simulations, respectively) was similar to the correlations at the northern site. To evaluate the role of N dynamics in the C balance of black spruce ecosystems across North America, we simulated historical and projected C dynamics from 1900 to 2100 with a global-based climatology at 0.5° resolution (latitude × longitude) with both the coupled and uncoupled parameterizations of TEM. From analyses at the northern site, several consistent patterns emerge. There was greater inter-annual variability in net primary production (NPP) simulated by the uncoupled parameterization as compared to the coupled parameterization, which led to substantial differences in inter-annual variability in NEP between the parameterizations. The divergence between NPP and heterotrophic respiration was greater in the uncoupled simulation, resulting in more C sequestration during the projected period. These responses were the result of fundamentally different responses of the coupled and uncoupled parameterizations to changes in CO2 and climate. Across North American black spruce ecosystems, the range of simulated decadal changes in C storage was substantially greater for the uncoupled parameterization than for the coupled parameterization. Analysis of the spatial variability in decadal responses of C dynamics revealed that C fluxes simulated by the coupled and uncoupled parameterizations have different sensitivities to climate and that the climate sensitivities of the fluxes change over the temporal scope of the simulations.
The results of this study suggest that uncertainties can be reduced through (1) factorial studies focused on elucidating the role of C and N interactions in the response of mature black spruce ecosystems to manipulations of atmospheric CO2 and climate, (2) establishment of a network of continuous, long-term measurements of C dynamics across the range of mature black spruce ecosystems in North America, and (3) ancillary measurements.
NASA Astrophysics Data System (ADS)
Salamanca, Francisco; Zhang, Yizhou; Barlage, Michael; Chen, Fei; Mahalov, Alex; Miao, Shiguang
2018-03-01
We have augmented the existing capabilities of the integrated Weather Research and Forecasting (WRF)-urban modeling system by coupling three urban canopy models (UCMs) available in the WRF model with the new community Noah with multiparameterization options (Noah-MP) land surface model (LSM). The WRF-urban modeling system's performance has been evaluated by conducting six numerical experiments at high spatial resolution (1 km horizontal grid spacing) during a 15 day clear-sky summertime period for a semiarid urban environment. To assess the relative importance of representing urban surfaces, three different urban parameterizations are used with the Noah and Noah-MP LSMs, respectively, over the two major cities of Arizona: the Phoenix and Tucson metropolitan areas. Our results demonstrate that Noah-MP reproduces the daily evolution of surface skin temperature, near-surface air temperature (especially nighttime temperature), and wind speed somewhat better than Noah. Concerning the urban areas, the bulk urban parameterization overestimates nighttime 2 m air temperature compared to the single-layer and multilayer UCMs, which reproduce the daily evolution of near-surface air temperature more accurately. Regarding near-surface wind speed, only the multilayer UCM was able to reproduce realistically the daily evolution of wind speed, although maximum winds were slightly overestimated, while both the single-layer and bulk urban parameterizations overestimated wind speed considerably. Based on these results, this paper demonstrates that the new community Noah-MP LSM coupled to a UCM is a promising physics-based predictive modeling tool for urban applications.
Wave modeling for the Beaufort and Chukchi Seas
NASA Astrophysics Data System (ADS)
Rogers, W.; Thomson, J.; Shen, H. H.; Posey, P. G.; Hebert, D. A.
2016-02-01
Authors: W. Erick Rogers (1), Jim Thomson (2), Hayley Shen (3), Pamela Posey (1), David Hebert (1); (1) Naval Research Laboratory, Stennis Space Center, Mississippi, USA; (2) Applied Physics Laboratory, University of Washington, Seattle, Washington, USA; (3) Clarkson University, Potsdam, New York, USA. Abstract: In this presentation, we will discuss the development and application of numerical models for prediction of wind-generated surface gravity waves in the Arctic Ocean, and specifically the Beaufort and Chukchi Seas, for which the Office of Naval Research (ONR) has supported two major field campaigns in 2014 and 2015. The modeling platform is the spectral wave model WAVEWATCH III (R) (WW3). We will begin by reviewing progress with the model numerics in 2007 and 2008 which permits efficient application at high latitudes. Then, we will discuss more recent progress (2012 to 2015) adding new physics to WW3 for ice effects. The latter includes two parameterizations for dissipation by turbulence at the ice/water interface, and a more complex parameterization which treats the ice as a viscoelastic fluid. With these new physics, the primary challenge is to find observational data suitable for calibration of the parameterizations, and there are concerns about the validity of applying any calibration to the wide variety of ice types that exist in the Arctic (or Southern Ocean). Quality of input is another major challenge, for which some recent progress has been made (at least in the context of ice concentration and ice edge) with data-assimilative ice modeling at NRL. We will discuss our recent work to invert for dissipation rate using data from a 2012 mooring in the Beaufort Sea, how the results vary by season (ice retreat vs. advance), and what this tells us in the context of the complex physical parameterizations used by the model.
We will summarize plans for further development of the model, such as adding scattering by floes, through collaboration with IFREMER (France), and improving on the simple "proportional scaling" treatment of the open water source functions in presence of partial ice cover. Finally, we will discuss lessons learned for wave modeling from the autumn 2015 R/V Sikuliaq cruise supported by ONR.
The Super Tuesday Outbreak: Forecast Sensitivities to Single-Moment Microphysics Schemes
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.; Lapenta, William M.
2008-01-01
Forecast precipitation and radar characteristics are used by operational centers to guide the issuance of advisory products. As operational numerical weather prediction is performed at increasingly finer spatial resolution, convective precipitation traditionally represented by sub-grid scale parameterization schemes is now being determined explicitly through single- or multi-moment bulk water microphysics routines. Gains in forecasting skill are expected through improved simulation of clouds and their microphysical processes. High resolution model grids and advanced parameterizations are now available through steady increases in computer resources. As with any parameterization, their reliability must be measured through performance metrics, with errors noted and targeted for improvement. Furthermore, the use of these schemes within an operational framework requires an understanding of limitations and an estimate of biases so that forecasters and model development teams can be aware of potential errors. The National Severe Storms Laboratory (NSSL) Spring Experiments have produced daily, high resolution forecasts used to evaluate forecast skill among an ensemble with varied physical parameterizations and data assimilation techniques. In this research, high resolution forecasts of the 5-6 February 2008 Super Tuesday Outbreak are replicated using the NSSL configuration in order to evaluate two components of simulated convection on a large domain: sensitivities of quantitative precipitation forecasts to assumptions within a single-moment bulk water microphysics scheme, and to determine if these schemes accurately depict the reflectivity characteristics of well-simulated, organized, cold frontal convection. As radar returns are sensitive to the amount of hydrometeor mass and the distribution of mass among variably sized targets, radar comparisons may guide potential improvements to a single-moment scheme. 
In addition, object-based verification metrics are evaluated for their utility in gauging model performance and QPF variability.
NASA Astrophysics Data System (ADS)
Gloege, Lucas; McKinley, Galen A.; Mouw, Colleen B.; Ciochetto, Audrey B.
2017-07-01
The shunt of photosynthetically derived particulate organic carbon (POC) from the euphotic zone and deep remineralization comprises the basic mechanism of the "biological carbon pump." POC raining through the "twilight zone" (euphotic depth to 1 km) and "midnight zone" (1 km to 4 km) is remineralized back to inorganic form through respiration. Accurately modeling POC flux is critical for understanding the "biological pump" and its impacts on air-sea CO2 exchange and, ultimately, long-term ocean carbon sequestration. Yet commonly used parameterizations have not been tested quantitatively against global data sets using identical modeling frameworks. Here we use a single one-dimensional physical-biogeochemical modeling framework to assess three common POC flux parameterizations in capturing POC flux observations from moored sediment traps and thorium-234 depletion. The exponential decay, Martin curve, and ballast model are compared to data from 11 biogeochemical provinces distributed across the globe. In each province, the model captures satellite-based estimates of surface primary production within uncertainties. Goodness of fit is measured by how well the simulation captures the observations, quantified by bias and the root-mean-square error and displayed using "target diagrams." Comparisons are presented separately for the twilight zone and midnight zone. We find that the ballast hypothesis shows no improvement over a globally or regionally parameterized Martin curve. For all provinces taken together, Martin's b that best fits the data is [0.70, 0.98]; this finding reduces by at least a factor of 3 previous estimates of potential impacts on atmospheric pCO2 of uncertainty in POC export to a more modest range [-16 ppm, +12 ppm].
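The profiles being compared have simple closed forms, sketched below. Flux and depth values are placeholders; only the Martin exponent range [0.70, 0.98] is taken from the text.

```python
import numpy as np

def martin_flux(z, f0=10.0, z0=100.0, b=0.84):
    """Martin curve: F(z) = F(z0) * (z / z0)**(-b), with z in meters."""
    return f0 * (z / z0) ** (-b)

def exp_flux(z, f0=10.0, z0=100.0, scale=300.0):
    """Exponential decay with a fixed remineralization length scale (m)."""
    return f0 * np.exp(-(z - z0) / scale)

# Sensitivity of the "midnight zone" (4 km) flux to the fitted exponent range
deep_lo = martin_flux(4000.0, b=0.70)
deep_hi = martin_flux(4000.0, b=0.98)
print(deep_lo, deep_hi)  # about 0.76 vs 0.27 in these arbitrary units
```

Across that exponent range the flux reaching 4 km varies by nearly a factor of 3, which is why constraining b tightens the estimated impact on atmospheric pCO2.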
A physically-based approach of treating dust-water cloud interactions in climate models
NASA Astrophysics Data System (ADS)
Kumar, P.; Karydis, V.; Barahona, D.; Sokolik, I. N.; Nenes, A.
2011-12-01
All aerosol-cloud-climate assessment studies to date assume that the ability of dust (and other insoluble species) to act as a Cloud Condensation Nuclei (CCN) is determined solely by their dry size and amount of soluble material. Recent evidence however clearly shows that dust can act as efficient CCN (even if lacking appreciable amounts of soluble material) through adsorption of water vapor onto the surface of the particle. This "inherent" CCN activity is augmented as the dust accumulates soluble material through atmospheric aging. A comprehensive treatment of dust-cloud interactions therefore requires including both of these sources of CCN activity in atmospheric models. This study presents a "unified" theory of CCN activity that considers both effects of adsorption and solute. The theory is corroborated and constrained with experiments of CCN activity of mineral aerosols generated from clays, calcite, quartz, dry lake beds and desert soil samples from Northern Africa, East Asia/China, and Northern America. The unified activation theory then is included within the mechanistic droplet activation parameterization of Kumar et al. (2009) (including the giant CCN correction of Barahona et al., 2010), for a comprehensive treatment of dust impacts on global CCN and cloud droplet number. The parameterization is demonstrated with the NASA Global Modeling Initiative (GMI) Chemical Transport Model using wind fields computed with the Goddard Institute for Space Studies (GISS) general circulation model. References Barahona, D. et al. (2010) Comprehensively Accounting for the Effect of Giant CCN in Cloud Activation Parameterizations, Atmos.Chem.Phys., 10, 2467-2473 Kumar, P., I.N. Sokolik, and A. Nenes (2009), Parameterization of cloud droplet formation for global and regional models: including adsorption activation from insoluble CCN, Atmos.Chem.Phys., 9, 2517- 2532
Lu, Chunsong; Liu, Yangang; Zhang, Guang J.; ...
2016-02-01
This work examines the relationships of entrainment rate to vertical velocity, buoyancy, and turbulent dissipation rate by applying stepwise principal component regression to observational data from shallow cumulus clouds collected during the Routine AAF [Atmospheric Radiation Measurement (ARM) Aerial Facility] Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) field campaign over the ARM Southern Great Plains (SGP) site near Lamont, Oklahoma. The cumulus clouds during the RACORO campaign simulated using a large eddy simulation (LES) model are also examined with the same approach. The analysis shows that a combination of multiple variables can better represent entrainment rate in both the observations and LES than any single-variable fitting. Three commonly used parameterizations are also tested on the individual cloud scale. A new parameterization is therefore presented that relates entrainment rate to vertical velocity, buoyancy, and dissipation rate; the effects of treating clouds as ensembles and of humid shells surrounding cumulus clouds on the new parameterization are discussed. Physical mechanisms underlying the relationships of entrainment rate to vertical velocity, buoyancy, and dissipation rate are also explored.
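The headline result, that a combination of variables beats any single-variable fit, can be sketched with ordinary least squares on synthetic data. All values here are invented for illustration; the study itself uses stepwise principal component regression on RACORO observations.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
w    = rng.uniform(0.5, 5.0, n)       # vertical velocity, m/s
buoy = rng.uniform(-0.02, 0.05, n)    # buoyancy, m/s^2
eps  = rng.uniform(1e-4, 1e-2, n)     # dissipation rate, m^2/s^3

# Hypothetical "true" entrainment rate depending on all three predictors
lam = 1e-3 - 1.5e-4 * w + 5e-3 * buoy + 0.05 * eps + rng.normal(0.0, 5e-5, n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid.var() / y.var()

r2_single = r_squared(w[:, None], lam)                        # w alone
r2_multi  = r_squared(np.column_stack([w, buoy, eps]), lam)   # all three
print(r2_single, r2_multi)  # the multivariable fit explains more variance
```

Because the single-variable model is nested in the multivariable one, the multivariable R^2 can only be higher; the interesting question, addressed in the paper, is by how much for real clouds.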
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the effective particle size of a mixture of ice habits, the ice water amount, and the spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of individual ice habits.
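Such a scheme reduces, per spectral band, to cheap closed-form fits in ice water path and effective size. The sketch below is schematic: the functional forms and all coefficients are invented placeholders, not the fits derived in the paper.

```python
def ice_cloud_optics(iwp, d_eff, coeffs):
    """Band-averaged bulk optical properties of an ice cloud layer.

    iwp    : ice water path (g/m^2)
    d_eff  : effective particle size of the habit mixture (micrometers)
    coeffs : per-band fit coefficients (placeholder values below)
    """
    a0, a1 = coeffs["ext"]   # extinction: tau = IWP * (a0 + a1 / De)
    b0, b1 = coeffs["ssa"]   # single-scattering albedo, linear in De
    c0, c1 = coeffs["asy"]   # asymmetry factor, linear in De
    tau = iwp * (a0 + a1 / d_eff)
    omega = b0 + b1 * d_eff
    g = c0 + c1 * d_eff
    return tau, omega, g

# Placeholder coefficients for one shortwave band
band = {"ext": (0.002, 2.5), "ssa": (0.999, -1.0e-3), "asy": (0.74, 1.0e-3)}
tau, omega, g = ice_cloud_optics(iwp=50.0, d_eff=60.0, coeffs=band)
print(tau, omega, g)
```

The key design point echoed from the abstract is that a single effective size stands in for the whole habit mixture, so only one lookup per band and layer is needed.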
Cross section parameterizations for cosmic ray nuclei. 1: Single nucleon removal
NASA Technical Reports Server (NTRS)
Norbury, John W.; Townsend, Lawrence W.
1992-01-01
Parameterizations of single nucleon removal from electromagnetic and strong interactions of cosmic rays with nuclei are presented. These parameterizations are based upon the most accurate theoretical calculations available to date. They should be very suitable for use in studies of cosmic ray propagation through interstellar space, the Earth's atmosphere, lunar samples, meteorites, spacecraft walls, and lunar and Martian habitats.
NASA Technical Reports Server (NTRS)
Choi, Hyun-Joo; Chun, Hye-Yeong; Gong, Jie; Wu, Dong L.
2012-01-01
The realism of ray-based spectral parameterization of convective gravity wave drag, which considers the updated moving speed of the convective source and multiple wave propagation directions, is tested against the Atmospheric Infrared Sounder (AIRS) onboard the Aqua satellite. Offline parameterization calculations are performed using the global reanalysis data for January and July 2005, and gravity wave temperature variances (GWTVs) are calculated at z = 2.5 hPa (unfiltered GWTV). AIRS-filtered GWTV, which is directly compared with AIRS, is calculated by applying the AIRS visibility function to the unfiltered GWTV. A comparison between the parameterization calculations and AIRS observations shows that the spatial distribution of the AIRS-filtered GWTV agrees well with that of the AIRS GWTV. However, the magnitude of the AIRS-filtered GWTV is smaller than that of the AIRS GWTV. When an additional cloud top gravity wave momentum flux spectrum with longer horizontal wavelength components that were obtained from the mesoscale simulations is included in the parameterization, both the magnitude and spatial distribution of the AIRS-filtered GWTVs from the parameterization are in good agreement with those of the AIRS GWTVs. The AIRS GWTV can be reproduced reasonably well by the parameterization not only with multiple wave propagation directions but also with two wave propagation directions of 45 degrees (northeast-southwest) and 135 degrees (northwest-southeast), which are optimally chosen for computational efficiency.
Effects of Planetary Boundary Layer Parameterizations on CWRF Regional Climate Simulation
NASA Astrophysics Data System (ADS)
Liu, S.; Liang, X.
2011-12-01
Planetary Boundary Layer (PBL) parameterizations incorporated in CWRF (Climate extension of the Weather Research and Forecasting model) are first evaluated by comparing simulated PBL heights with observations. Among the 10 evaluated PBL schemes, 2 (CAM, UW) are new in CWRF while the other 8 are original WRF schemes. MYJ, QNSE and UW determine the PBL heights from turbulent kinetic energy (TKE) profiles, while others (YSU, ACM, GFS, CAM, TEMF) diagnose them from bulk Richardson criteria. All TKE-based schemes (MYJ, MYNN, QNSE, UW, BouLac) substantially underestimate convective or residual PBL heights from noon toward evening, while others (ACM, CAM, YSU) capture the observed diurnal cycle well, except for GFS, which systematically overestimates it. These differences among the schemes are representative over most areas of the simulation domain, suggesting systematic behaviors of the parameterizations. Lower PBL heights simulated by QNSE and MYJ are consistent with their smaller Bowen ratios and heavier rainfall, while higher PBL tops in GFS correspond to warmer surface temperatures. Effects of PBL parameterizations on the CWRF regional climate simulation are then compared. The QNSE PBL scheme yields systematically heavier rainfall almost everywhere and throughout the year; this is identified with a much greater surface Bowen ratio (smaller sensible versus larger latent heating) and wetter soil than in other PBL schemes. Its predecessor, the MYJ scheme, shares the same deficiency to a lesser degree. For temperature, the performance of the QNSE and MYJ schemes remains poor, with substantially larger rms errors in all seasons. The GFS PBL scheme also produces large warm biases. Pronounced sensitivities to the PBL schemes are also found in winter and spring over most areas except the southern U.S. 
(Southeast, Gulf States, NAM); excluding the outliers (QNSE, MYJ, GFS) that cause extreme biases of -6 to +3°C, the differences among the schemes remain visible (±2°C), with CAM generally more realistic. The QNSE, MYJ, GFS and BouLac PBL parameterizations are identified as clear outliers in overall performance in representing precipitation, surface air temperature or PBL height variations. Their poor performance may result from deficiencies in physical formulations, limits on their applicable scales, or problematic numerical implementations, and detailed future investigation is required to isolate the actual cause.
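As a hedged illustration of the bulk Richardson criterion that several of the non-TKE schemes above use, the PBL height can be diagnosed as the lowest level where the bulk Richardson number, computed from the surface, exceeds a critical value. The 0.25 threshold, the function name, and the idealized profile below are illustrative assumptions; each scheme's actual formulation differs in detail.

```python
import numpy as np

G = 9.81          # gravitational acceleration (m s^-2)
RI_CRIT = 0.25    # critical bulk Richardson number (scheme-dependent choice)

def bulk_richardson_pbl_height(z, theta_v, u, v):
    """Diagnose PBL height as the lowest level where the bulk Richardson
    number, computed relative to the surface, exceeds RI_CRIT.

    z       : heights above ground (m), ascending
    theta_v : virtual potential temperature profile (K)
    u, v    : horizontal wind components (m s^-1)
    """
    theta_s = theta_v[0]
    for k in range(1, len(z)):
        wind_sq = max(u[k] ** 2 + v[k] ** 2, 1e-6)  # avoid division by zero
        ri_b = G * z[k] * (theta_v[k] - theta_s) / (theta_s * wind_sq)
        if ri_b > RI_CRIT:
            # a real scheme would interpolate between levels k-1 and k;
            # returning the first exceeding level keeps the sketch simple
            return z[k]
    return z[-1]

# Idealized convective profile: well-mixed layer capped by an inversion near 1500 m
z = np.array([10., 250., 500., 1000., 1400., 1600., 2000.])
theta_v = np.array([305., 304.5, 304.5, 304.6, 304.7, 308., 310.])
u = np.full_like(z, 5.0)
v = np.zeros_like(z)
print(bulk_richardson_pbl_height(z, theta_v, u, v))
```

TKE-based schemes instead search the turbulent kinetic energy profile for the height where TKE falls below a threshold, which is one reason the two families diverge in the convective afternoon.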
Quality Assessment of the Cobel-Isba Numerical Forecast System of Fog and Low Clouds
NASA Astrophysics Data System (ADS)
Bergot, Thierry
2007-06-01
Short-term forecasting of fog is a difficult problem that can have a large societal impact. Fog appears in the surface boundary layer and is driven by interactions between the land surface and the lower layers of the atmosphere. These interactions are still not well parameterized in current operational NWP models, so a new methodology based on local observations, an adaptive assimilation scheme and a local numerical model is tested. The proposed numerical forecast method for foggy conditions was run for three years at Paris-Charles de Gaulle international airport. This long test period allows an in-depth evaluation of forecast quality. The study demonstrates that detailed 1-D models, including detailed physical parameterizations and high vertical resolution, can reasonably represent the major features of the life cycle of fog (onset, development and dissipation) up to +6 h. The error in the forecast onset and burn-off times is typically 1 h. The major weakness of the methodology is related to the evolution of low clouds (stratus lowering). Even when the occurrence of fog is well forecast, the value of the horizontal visibility is only crudely forecast. Improvements in the microphysical parameterization and in the translation algorithm converting NWP prognostic variables into a corresponding horizontal visibility seem necessary to forecast the visibility accurately.
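A translation algorithm of the kind mentioned above converts a prognostic variable such as cloud liquid water content into a horizontal visibility. A minimal sketch, assuming the widely used Kunkel (1984) extinction fit combined with the Koschmieder relation; the actual COBEL-ISBA algorithm and coefficients may differ.

```python
import math

def visibility_km(lwc_gm3, contrast=0.02):
    """Horizontal visibility (km) from cloud liquid water content (g m^-3).

    Sketch of a translation algorithm: the Kunkel (1984) extinction fit
    beta = 144.7 * LWC**0.88 (km^-1) combined with the Koschmieder relation
    vis = -ln(contrast) / beta (~3.912 / beta for the usual 2% threshold).
    """
    if lwc_gm3 <= 0.0:
        return float("inf")  # no cloud water -> unrestricted visibility
    beta = 144.7 * lwc_gm3 ** 0.88   # extinction coefficient, km^-1
    return -math.log(contrast) / beta

# Dense fog: a typical fog LWC of 0.1 g m^-3 yields visibility of a few
# hundred metres, consistent with observed fog climatology.
print(visibility_km(0.1))
```

The steep power law is why visibility forecasts are so sensitive to errors in the microphysics: a modest error in predicted LWC maps into a large relative error in visibility.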
NASA Astrophysics Data System (ADS)
Subramanian, Aneesh C.; Palmer, Tim N.
2017-06-01
Stochastic schemes to represent model uncertainty in the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system have helped improve its probabilistic forecast skill over the past decade, both by improving its reliability and by reducing the ensemble mean error. The largest uncertainties in the model arise from the physics parameterizations. In the tropics, the parameterization of moist convection presents a major challenge for the accurate prediction of weather and climate. Superparameterization is a promising alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud-resolving model (CRM) embedded within a global climate model (GCM). In this paper, we compare the impact of initial random perturbations in embedded CRMs, within the ECMWF ensemble prediction system, with the stochastically perturbed physical tendency (SPPT) scheme as a way to represent model uncertainty in medium-range tropical weather forecasts. We focus especially on forecasts of tropical convection and dynamics during the MJO events of October-November 2011. These are well-studied events for MJO dynamics, as they were also heavily observed during the DYNAMO field campaign. We show that a multiscale ensemble modeling approach helps improve forecasts of certain aspects of tropical convection during the MJO events, while it also tends to degrade certain large-scale dynamic fields relative to the SPPT approach used operationally at ECMWF.
NASA Astrophysics Data System (ADS)
Raju, P. V. S.; Potty, Jayaraman; Mohanty, U. C.
2011-09-01
Comprehensive sensitivity analyses of the physical parameterization schemes of the Weather Research and Forecasting (WRF-ARW) model have been carried out for the prediction of tropical cyclone track and intensity, taking the example of cyclone Nargis, which formed over the Bay of Bengal and hit Myanmar on 02 May 2008, causing widespread human and economic losses. Model performance is also evaluated with different initial conditions at 12 h intervals, from cyclogenesis to near landfall. The initial and boundary conditions for all model simulations are drawn from the global operational analysis and forecast products of the National Centers for Environmental Prediction (NCEP-GFS), available to the public at 1° lon/lat resolution. The results of the sensitivity analyses indicate that the combination of the non-local parabolic-type exchange coefficient PBL scheme of Yonsei University (YSU), the deep and shallow mass-flux cumulus parameterization (Kain-Fritsch), and the NCEP operational cloud microphysics scheme with diagnostic mixed-phase processes (Ferrier) predicts track and intensity best when compared against the Joint Typhoon Warning Center (JTWC) estimates. The physical parameterization schemes selected from these sensitivity experiments are then used for model integrations with different initial conditions. The results reveal that the cyclone track, intensity and time of landfall are well simulated by the model, with an average intensity error of about 8 hPa, a maximum wind error of 12 m s-1 and a track error of 77 km. The simulations also show that the landfall time and intensity errors decrease with later initial conditions, suggesting that the model forecast is more dependable as the cyclone approaches the coast. The distribution and intensity of rainfall are also well simulated by the model and comparable with TRMM estimates.
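The track-error statistic quoted above is typically the great-circle distance between the forecast and best-track cyclone positions, averaged over verification times. A minimal sketch using the haversine formula; the exact verification procedure against the JTWC estimates is an assumption here.

```python
import math

def track_error_km(fcst_lat, fcst_lon, best_lat, best_lon):
    """Great-circle distance (km) between a forecast cyclone position and
    the best-track position, via the haversine formula."""
    r_earth = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(fcst_lat), math.radians(best_lat)
    dphi = math.radians(best_lat - fcst_lat)
    dlam = math.radians(best_lon - fcst_lon)
    a = (math.sin(dphi / 2.0) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2.0) ** 2)
    return 2.0 * r_earth * math.asin(math.sqrt(a))

# One degree of latitude is roughly 111 km, so a 77 km mean track error
# corresponds to well under a degree of positional offset.
print(track_error_km(16.0, 94.0, 16.5, 94.3))
```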
NASA Technical Reports Server (NTRS)
Steffen, K.; Abdalati, W.; Stroeve, J.; Nolin, A.; Box, J.; Key, J.; Zwally, J.; Stober, M.; Kreuter, J.
1996-01-01
The proposed research involves the application of multispectral satellite data in combination with ground-truth measurements to monitor surface properties of the Greenland Ice Sheet that are essential for describing the energy and mass balance of the ice sheet. Several key components of the energy balance are parameterized using satellite data and in situ measurements. The analysis covers a 6- to 17-year period in order to analyze the seasonal and interannual variations of the surface processes and the climatology. Our goal was to investigate to what accuracy and over what geographic areas large-scale snow properties and radiative fluxes can be derived from a combination of available remote sensing and meteorological data sets. To understand the surface processes, a field program was designed to collect information on spectral albedo, specular reflectance, soot content, grain size and the physical properties of different snow types. Further, the radiative and turbulent fluxes at the ice/snow surface were monitored for the parameterization and interpretation of the satellite data. Highlights include AVHRR time series and surface-based radiation measurements, passive microwave time series, and geodetic results from the ETH/CU camp.
NASA Astrophysics Data System (ADS)
Lee, S.-H.; Kim, S.-W.; Angevine, W. M.; Bianco, L.; McKeen, S. A.; Senff, C. J.; Trainer, M.; Tucker, S. C.; Zamora, R. J.
2011-03-01
The performance of different urban surface parameterizations in WRF (Weather Research and Forecasting) in simulating the urban boundary layer (UBL) was investigated using extensive measurements from the Texas Air Quality Study 2006 field campaign. The extensive field measurements collected at surface sites (meteorological, wind profiler, energy balance flux), from a research aircraft, and from a research vessel characterized the 3-dimensional atmospheric boundary layer structure over the Houston-Galveston Bay area, providing a unique opportunity to evaluate the physical parameterizations. The model simulations were performed over the Houston metropolitan area for a summertime period (12-17 August) using a bulk urban parameterization in the Noah land surface model (original LSM), a modified LSM, and a single-layer urban canopy model (UCM). The UCM simulation compared quite well with the observations over the Houston urban areas, reducing the systematic model biases of the original LSM simulation by 1-2 °C in near-surface air temperature and by 200-400 m in UBL height, on average. A more realistic partitioning of the turbulent (sensible and latent heat) energy contributed to the improvements in the UCM simulation. The original LSM significantly overestimated the sensible heat flux (~200 W m-2) over the urban areas, resulting in a warmer and deeper UBL. The modified LSM slightly reduced the warm and high biases in near-surface air temperature (0.5-1 °C) and UBL height (~100 m) as a result of the effects of urban vegetation. The relatively strong thermal contrast between the Houston area and the water bodies (Galveston Bay and the Gulf of Mexico) in the LSM simulations enhanced the sea/bay breezes, but model performance in predicting local wind fields was similar among the simulations in terms of statistical evaluations. These results suggest that a proper surface representation (e.g. urban vegetation, surface morphology) and explicit parameterizations of urban physical processes are required for accurate urban atmospheric numerical modeling.
Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change
NASA Astrophysics Data System (ADS)
Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.
2017-12-01
Climate is vital: Earth is habitable only because the atmosphere and oceans distribute energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions is stored in the ocean versus entering the atmosphere to cause warming, and how the extra heat is distributed, depend on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability. These affect us all in many ways, including extreme weather, floods, droughts, sea-level rise and ecosystem disruption, and citizens must be informed to make decisions such as "business as usual" versus mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal-wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process the output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex-system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. 
The PI is both a member of the turbulence group at NASA-GISS and an associate professor at Medgar Evers College of CUNY, an urban minority serving institution in central Brooklyn. Supported by NSF Award AGS-1359293 And NASA Award NNX17AC81G.
Radiation: Physical Characterization and Environmental Measurements
NASA Technical Reports Server (NTRS)
1997-01-01
In this session, Session WP4, the discussion focuses on the following topics: Production of Neutrons from Interactions of GCR-Like Particles; Solar Particle Event Dose Distributions, Parameterization of Dose-Time Profiles; Assessment of Nuclear Events in the Body Produced by Neutrons and High-Energy Charged Particles; Ground-Based Simulations of Cosmic Ray Heavy Ion Interactions in Spacecraft and Planetary Habitat Shielding Materials; Radiation Measurements in Space Missions; Radiation Measurements in Civil Aircraft; Analysis of the Pre-Flight and Post-Flight Calibration Procedures Performed on the Liulin Space Radiation Dosimeter; and Radiation Environment Monitoring for Astronauts.
NASA Astrophysics Data System (ADS)
Borsányi, Sz.; Endrődi, G.; Fodor, Z.; Katz, S. D.; Krieg, S.; Ratti, C.; Szabó, K. K.
2012-08-01
We determine the equation of state of QCD for nonzero chemical potentials via a Taylor expansion of the pressure. The results are obtained for N_f = 2 + 1 flavors of quarks with physical masses, on various lattice spacings. We present results for the pressure, interaction measure, energy density, entropy density, and the speed of sound for small chemical potentials. At low temperatures we compare our results with the Hadron Resonance Gas model. We also express our observables along trajectories of constant entropy over particle number. A simple parameterization is given (the Matlab/Octave script parameterization.m, submitted to the arXiv along with the paper), which can be used to reconstruct the observables as functions of T and μ, or as functions of T and S/N.
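The Taylor-expansion bookkeeping can be sketched as follows. The coefficient functions here are illustrative placeholders, since the actual temperature-dependent fits are supplied by the paper's parameterization.m script.

```python
def pressure_over_T4(T, mu, c0, c2, c4):
    """Truncated Taylor expansion of the normalized pressure in mu/T:

        p/T^4 ~ c0(T) + c2(T) * (mu/T)**2 + c4(T) * (mu/T)**4

    Odd powers vanish by charge-conjugation symmetry at mu = 0.
    c0, c2, c4 are callables returning the (lattice-determined) Taylor
    coefficients at temperature T; the constants below are toy stand-ins.
    """
    x = mu / T
    return c0(T) + c2(T) * x ** 2 + c4(T) * x ** 4

# Toy coefficients, constant in T purely for illustration.
c0 = lambda T: 1.0
c2 = lambda T: 0.1
c4 = lambda T: 0.01
print(pressure_over_T4(200.0, 100.0, c0, c2, c4))
```

Other observables (energy density, entropy density) follow from T- and mu-derivatives of the same expansion, which is why a single coefficient parameterization suffices to reconstruct them as functions of T and μ.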
NASA Astrophysics Data System (ADS)
Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.
2017-12-01
Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a-1. 
The different schemes also impact the radiative budgets over the glacierized areas. Our results show that glacier MB estimates can differ by up to 45% depending on the chosen cloud microphysics scheme. These findings highlight the need to better account for uncertainties in meteorological inputs into glacier energy and mass balance models.
James, A.L.; McDonnell, Jeffery J.; Tromp-Van Meerveld, I.; Peters, N.E.
2010-01-01
As a fundamental unit of the landscape, hillslopes are studied for their retention and release of water and nutrients across a wide range of ecosystems. The understanding of these near-surface processes is relevant to issues of runoff generation, groundwater-surface water interactions, catchment export of nutrients, dissolved organic carbon, contaminants (e.g. mercury) and ultimately surface water health. We develop a 3-D physics-based representation of the Panola Mountain Research Watershed experimental hillslope using the TOUGH2 sub-surface flow and transport simulator. A recent investigation of sub-surface flow within this experimental hillslope has generated important knowledge of threshold rainfall-runoff response and its relation to patterns of transient water table development. This work has identified components of the 3-D sub-surface, such as bedrock topography, that contribute to changing connectivity in saturated zones and the generation of sub-surface stormflow. Here, we test the ability of a 3-D hillslope model (both calibrated and uncalibrated) to simulate forested hillslope rainfall-runoff response and internal transient sub-surface stormflow dynamics. We also provide a transparent illustration of physics-based model development, issues of parameterization, examples of model rejection and usefulness of data types (e.g. runoff, mean soil moisture and transient water table depth) to the model enterprise. Our simulations show the inability of an uncalibrated model based on laboratory and field characterization of soil properties and topography to successfully simulate the integrated hydrological response or the distributed water table within the soil profile. Although not an uncommon result, the failure of the field-based characterized model to represent system behaviour is an important challenge that continues to vex scientists at many scales. 
We focus our attention particularly on examining the influence of bedrock permeability, soil anisotropy and drainable porosity on the development of patterns of transient groundwater and sub-surface flow. Internal dynamics of transient water table development prove to be essential in determining appropriate model parameterization. © 2010 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2008-01-01
This study presents an approach that converts the vertical profiles of grid-averaged cloud properties from large-scale models into probability density functions (pdfs) of subgrid-cell cloud physical properties measured at satellite footprints. Cloud physical and radiative properties, rather than just cloud and precipitation occurrences, of cloud systems assimilated by the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analysis (EOA) and ECMWF Re-Analyses (ERA-40 and ERA Interim) are validated against those obtained from Earth Observing System satellite cloud-object data for the January-August 1998 and March 2000 periods. These properties include ice water path (IWP), cloud-top height and temperature, cloud optical depth, and solar and infrared radiative fluxes. Each cloud object, a contiguous region with similar cloud physical properties, is temporally and spatially matched with EOA and ERA-40 data. Results indicate that most pdfs of EOA and ERA-40 cloud physical and radiative properties agree with those of satellite observations for the tropical deep convective cloud-object type during the January-August 1998 period. There are, however, significant discrepancies in selected ranges of the cloud property pdfs, such as the upper range of EOA cloud-top height. A major discrepancy is that the dependence of the pdfs on cloud object size is not as strong in either EOA or ERA-40 as in the observations. Modifications to the ECMWF cloud parameterization in October 1999 eliminate the clouds near the tropopause but shift power of the pdf to lower cloud-top heights and greatly reduce the ranges of the IWP and cloud optical depth pdfs. These features persist in ERA-40 due to the use of the same cloud parameterizations. 
The downgraded data assimilation technique and the lack of snow water content information in ERA-40, not the coarser horizontal grid resolution, are also responsible for the disagreements with observed pdfs of cloud physical properties, although the detection rates of cloud object occurrence are improved for small size categories. A possible improvement to the convective parameterization is to introduce a stronger dependence of updraft penetration heights on grid-cell dynamics. These conclusions will be rechecked using the ERA Interim data, given recent changes in the ECMWF convective parameterization (Bechtold et al. 2004, 2008). Results from the ERA Interim will be presented at the meeting.
Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions
NASA Astrophysics Data System (ADS)
Nelson, K.; Mechem, D. B.
2014-12-01
Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes, including stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement this parameterization in a regional forecast model (NRL COAMPS) and test it. Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus both on the relative performance of the three parameterizations and on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
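For context, the KK2000 scheme mentioned above is usually quoted as a pair of power-law rates fit to large-eddy simulation data. A minimal sketch of the commonly cited form; coefficient conventions vary slightly between implementations, so treat this as an assumption rather than the exact COAMPS code.

```python
def kk2000_autoconversion(qc, nc):
    """Cloud-to-rain autoconversion rate (kg kg^-1 s^-1) in the commonly
    cited Khairoutdinov-Kogan (2000) form:

        dqr/dt = 1350 * qc**2.47 * Nc**-1.79

    qc : cloud-water mixing ratio (kg kg^-1)
    nc : cloud droplet number concentration (cm^-3)
    """
    return 1350.0 * qc ** 2.47 * nc ** -1.79

def kk2000_accretion(qc, qr):
    """Accretion of cloud water by rain, KK2000 form:
    dqr/dt = 67 * (qc * qr)**1.15, mixing ratios in kg kg^-1."""
    return 67.0 * (qc * qr) ** 1.15

# The negative Nc exponent encodes the drizzle-suppression effect of high
# droplet concentrations: more, smaller droplets convert to rain more slowly.
print(kk2000_autoconversion(1e-3, 50.0), kk2000_autoconversion(1e-3, 200.0))
```

The contrast with Kessler (1969), which triggers autoconversion above a fixed qc threshold with no droplet-number dependence, is one reason KK2000-type schemes behave so differently across cloud regimes.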
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...
2017-09-14
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. 
Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.
Mechanisms and Model Diversity of Trade-Wind Shallow Cumulus Cloud Feedbacks: A Review.
Vial, Jessica; Bony, Sandrine; Stevens, Bjorn; Vogel, Raphaela
2017-01-01
Shallow cumulus clouds in the trade-wind regions are at the heart of the long-standing uncertainty in climate sensitivity estimates. In current climate models, cloud feedbacks are strongly influenced by cloud-base cloud amount in the trades. Therefore, understanding the key factors controlling cloudiness near cloud base in shallow convective regimes has emerged as an important topic of investigation. We review physical understanding of these key controlling factors and discuss the value of the different approaches developed so far, based on global and high-resolution model experimentation and process-oriented analyses across a range of models and observations. The trade-wind cloud feedbacks appear to depend on two important aspects: (1) how cloudiness near cloud base is controlled by the local interplay between turbulent, convective and radiative processes; (2) how these processes interact with their surrounding environment and are influenced by mesoscale organization. Our synthesis of studies that have explored these aspects suggests that the large diversity of model responses is related to fundamental differences in how the processes controlling trade cumulus operate in models, notably, whether they are parameterized or resolved. In models with parameterized convection, cloudiness near cloud base is very sensitive to the vigor of convective mixing in response to changes in environmental conditions. This is in contrast with results from high-resolution models, which suggest that cloudiness near cloud base is nearly invariant with warming and independent of large-scale environmental changes. Uncertainties are difficult to narrow using current observations, as trade cumulus variability and its relation to large-scale environmental factors strongly depend on the time and/or spatial scales at which the mechanisms are evaluated. 
New opportunities for testing physical understanding of the factors controlling shallow cumulus cloud responses using observations and high-resolution modeling on large domains are discussed.
Mechanisms and Model Diversity of Trade-Wind Shallow Cumulus Cloud Feedbacks: A Review
NASA Astrophysics Data System (ADS)
Vial, Jessica; Bony, Sandrine; Stevens, Bjorn; Vogel, Raphaela
2017-11-01
Shallow cumulus clouds in the trade-wind regions are at the heart of the long standing uncertainty in climate sensitivity estimates. In current climate models, cloud feedbacks are strongly influenced by cloud-base cloud amount in the trades. Therefore, understanding the key factors controlling cloudiness near cloud-base in shallow convective regimes has emerged as an important topic of investigation. We review physical understanding of these key controlling factors and discuss the value of the different approaches that have been developed so far, based on global and high-resolution model experimentations and process-oriented analyses across a range of models and for observations. The trade-wind cloud feedbacks appear to depend on two important aspects: (1) how cloudiness near cloud-base is controlled by the local interplay between turbulent, convective and radiative processes; (2) how these processes interact with their surrounding environment and are influenced by mesoscale organization. Our synthesis of studies that have explored these aspects suggests that the large diversity of model responses is related to fundamental differences in how the processes controlling trade cumulus operate in models, notably, whether they are parameterized or resolved. In models with parameterized convection, cloudiness near cloud-base is very sensitive to the vigor of convective mixing in response to changes in environmental conditions. This is in contrast with results from high-resolution models, which suggest that cloudiness near cloud-base is nearly invariant with warming and independent of large-scale environmental changes. Uncertainties are difficult to narrow using current observations, as the trade cumulus variability and its relation to large-scale environmental factors strongly depend on the time and/or spatial scales at which the mechanisms are evaluated. 
New opportunities for testing physical understanding of the factors controlling shallow cumulus cloud responses using observations and high-resolution modeling on large domains are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis
2017-07-11
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Using Virtual Reality to Dynamically Set Up an Electrical Wheelchair
NASA Astrophysics Data System (ADS)
Dir, S.; Habert, O.; Pruski, A.
2008-06-01
This work uses virtual reality to find, or iteratively refine, the best match between a person with a physical disability and his or her electrical wheelchair. A system architecture based on "Experiment→Analyze and decide→Modify the wheelchair" cycles is proposed. This architecture uses a decision-making module based on a fuzzy inference system, which must be parameterized so that the system converges quickly towards the optimal solution. The first challenge is to compute criteria that represent, as faithfully as possible, the particular situations the user meets during each navigation experiment. The second challenge is to transform these criteria into relevant changes to the active or inactive functionalities, or into adjustments of the wheelchair's intrinsic settings. These modifications must remain as stable as possible across successive experiments. The objective is to find the wheelchair configuration that best provides initial mobility to a given person with a physical disability.
NASA Astrophysics Data System (ADS)
Schneider, F. D.; Leiterer, R.; Morsdorf, F.; Gastellu-Etchegorry, J.; Lauret, N.; Pfeifer, N.; Schaepman, M. E.
2013-12-01
Remote sensing offers unique potential to study forest ecosystems by providing spatially and temporally distributed information that can be linked with key biophysical and biochemical variables. The estimation of biochemical constituents of leaves from remotely sensed data is of high interest, as it reveals insight into photosynthetic processes, plant health, plant functional types, and speciation. However, the scaling of observations at the canopy level to the leaf level, or vice versa, is not trivial due to the structural complexity of forests. Thus, a common solution for scaling spectral information is the use of physically based radiative transfer models. The discrete anisotropic radiative transfer model (DART), one of the most complete coupled canopy-atmosphere 3D radiative transfer models, was parameterized based on airborne and in-situ measurements. At-sensor radiances were simulated and compared with measurements from an airborne imaging spectrometer. The study was performed on the Laegern site, a temperate mixed forest characterized by steep slopes, a heterogeneous spectral background, and deciduous and coniferous trees at different development stages (dominated by beech trees; 47°28'42.0" N, 8°21'51.8" E, 682 m asl, Switzerland). It is one of the few such studies conducted on an old-growth forest. The 3D modeling of the complex canopy architecture is particularly crucial for modeling the interaction of photons with the vegetation canopy and its background. Thus, we developed two forest reconstruction approaches: 1) based on a voxel grid, and 2) based on individual tree detection. Both methods are transferable to various forest ecosystems and applicable at scales between plot and landscape. Our results show that the newly developed voxel grid approach is preferable to a parameterization based on individual trees.
In comparison to the actual imaging spectrometer data, the simulated images exhibit very similar spatial patterns, whereas absolute radiance values are partially differing depending on the respective wavelength. We conclude that our proposed method provides a representation of the 3D radiative regime within old-growth forests that is suitable for simulating most spectral and spatial features of imaging spectrometer data. It indicates the potential of simulating future Earth observation missions, such as ESA's Sentinel-2. However, the high spectral variability of leaf optical properties among species has to be addressed in future radiative transfer modeling. The results further reveal that research emphasis has to be put on the accurate parameterization of small-scale structures, such as the clumping of needles into shoots or the distribution of leaf angles.
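As a rough illustration of the voxel-grid reconstruction step described above, the sketch below bins a point cloud of hypothetical lidar returns into a regular 3-D grid with NumPy. The per-voxel return count is only a crude, uncalibrated proxy for the vegetation density a radiative transfer model would ingest, and every name and value here is invented for the example:

```python
import numpy as np

def voxelize_returns(points, origin, size, resolution):
    """Bin lidar return coordinates (N x 3 array, metres) into a regular
    voxel grid; returns the per-voxel return count."""
    edges = [np.arange(origin[d], origin[d] + size[d] + resolution, resolution)
             for d in range(3)]
    counts, _ = np.histogramdd(points, bins=edges)
    return counts

# Toy cloud of 1000 returns inside a 10 m cube, binned into 1 m voxels
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(1000, 3))
grid = voxelize_returns(pts, origin=(0, 0, 0), size=(10, 10, 10), resolution=1.0)
print(grid.shape)        # (10, 10, 10)
print(int(grid.sum()))   # 1000: every return lands in exactly one voxel
```

In practice the counts would be converted to plant-area density with a sensor-specific calibration before being handed to a model such as DART.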
The effects of atmospheric cloud radiative forcing on climate
NASA Technical Reports Server (NTRS)
Randall, David A.
1989-01-01
In order to isolate the effects of atmospheric cloud radiative forcing (ACRF) on climate, the general circulation of an ocean-covered earth called 'Seaworld' was simulated using the Colorado State University GCM. Most current climate models, however, do not include an interactive ocean. The key simplifications in 'Seaworld' are the fixed boundary temperature with no land points, the lack of mountains, and the zonal uniformity of the boundary conditions. Two 90-day 'perpetual July' simulations were performed, and the last 60 days of each were analyzed. The first run included all the model's physical parameterizations, while the second omitted the effects of clouds in both the solar and terrestrial radiation parameterizations. Fixed and identical boundary temperatures were set for the two runs, so the differences between them reveal the direct and indirect effects of the ACRF on the large-scale circulation and the parameterized hydrologic processes.
Romps, David M.
2016-03-01
Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.
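The Poisson-process entrainment picture underlying the SPM can be mimicked with a toy Monte Carlo. The real SPM computes the deterministic (infinite-parcel) limit via purity binning and matrix multiplications; this sketch instead simulates a finite ensemble of discrete parcels, with all rates and mixing fractions invented for illustration:

```python
import numpy as np

def simulate_parcels(n_parcels, z_top, dz, rate, mix_frac, seed=0):
    """Each rising parcel suffers entrainment events as a Poisson process
    with `rate` events per metre; each event dilutes its purity (the
    fraction of undiluted cloud-base air) by a factor (1 - mix_frac)."""
    rng = np.random.default_rng(seed)
    purity = np.ones(n_parcels)
    for _ in range(int(z_top / dz)):
        events = rng.random(n_parcels) < rate * dz  # Poisson in the dz -> 0 limit
        purity[events] *= 1.0 - mix_frac
    return purity

rate, sigma, z = 1e-3, 0.3, 2000.0
p = simulate_parcels(n_parcels=50_000, z_top=z, dz=10.0, rate=rate, mix_frac=sigma)
analytic = np.exp(-rate * z * sigma)  # exact mean purity for Poisson entrainment
print(p.mean(), analytic)
```

Averaging many such parcels approaches the deterministic limit: the ensemble-mean purity converges to exp(-rate·sigma·z) as dz → 0.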
A Family of Poisson Processes for Use in Stochastic Models of Precipitation
NASA Astrophysics Data System (ADS)
Penland, C.
2013-12-01
Both modified Poisson processes and compound Poisson processes can be relevant to stochastic parameterization of precipitation. This presentation compares the dynamical properties of these systems and discusses the physical situations in which each might be appropriate. If the parameters describing either class of systems originate in hydrodynamics, then proper consideration of stochastic calculus is required during numerical implementation of the parameterization. It is shown here that an improper numerical treatment can have severe implications for estimating rainfall distributions, particularly in the tails of the distributions and, thus, on the frequency of extreme events.
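As a concrete, deliberately simplistic example of the compound Poisson processes mentioned above, the sketch below draws accumulation totals as a Poisson number of rain events, each with an exponentially distributed depth. The rate and depth values are invented for illustration:

```python
import numpy as np

def compound_poisson_rain(rate, mean_depth, duration, n_samples, seed=0):
    """Sample accumulation totals from a compound Poisson process:
    event count ~ Poisson(rate * duration); each event contributes an
    independent Exponential(mean_depth) rainfall amount."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(rate * duration, size=n_samples)
    return np.array([rng.exponential(mean_depth, n).sum() for n in counts])

totals = compound_poisson_rain(rate=0.2, mean_depth=5.0, duration=30.0,
                               n_samples=20_000)
# Theoretical mean accumulation: rate * duration * mean_depth = 30
print(totals.mean(), (totals == 0.0).mean())
```

The point mass at zero (dry periods) and the heavy upper tail are exactly the distributional features that make careless numerical treatment costly in the tails, i.e., for extreme events.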
New Parameterization of Neutron Absorption Cross Sections
NASA Technical Reports Server (NTRS)
Tripathi, Ram K.; Wilson, John W.; Cucinotta, Francis A.
1997-01-01
A recent parameterization of absorption cross sections for any system of charged-ion collisions, including proton-nucleus collisions, is extended to neutron-nucleus collisions, valid from approximately 1 MeV to a few GeV, thus providing a comprehensive picture of absorption cross sections for any system of collision pairs (charged or uncharged). The parameters are associated with the physics of the problem. At lower energies, the optical potential at the surface is important, and the Pauli operator plays an increasingly important role at intermediate energies. The agreement between the calculated and experimental data is better than in earlier published results.
Inducing Tropical Cyclones to Undergo Brownian Motion
NASA Astrophysics Data System (ADS)
Hodyss, D.; McLay, J.; Moskaitis, J.; Serra, E.
2014-12-01
Stochastic parameterization has become commonplace in numerical weather prediction (NWP) models used for probabilistic prediction. Here, a specific stochastic parameterization will be related to the theory of stochastic differential equations and shown to be affected strongly by the choice of stochastic calculus. From an NWP perspective, our focus will be on ameliorating a common trait of the ensemble distributions of tropical cyclone (TC) tracks (or positions), namely that they generally contain a bias and an underestimate of the variance. With this trait in mind, we present a stochastic track variance inflation parameterization. This parameterization makes use of a properly constructed stochastic advection term that follows a TC and induces its position to undergo Brownian motion. A central characteristic of Brownian motion is that its variance increases with time, which allows for an effective inflation of an ensemble's TC track variance. Using this stochastic parameterization, we present a comparison of the behavior of TCs from the perspectives of the stochastic calculi of Itô and Stratonovich within an operational NWP model. The central difference between these two perspectives as it pertains to TCs is shown to be properly predicted by the stochastic calculus and the Itô correction. In the cases presented here these differences manifest as overly intense TCs, which, depending on the strength of the forcing, could lead to problems with numerical stability and physical realism.
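A minimal Euler-Maruyama sketch of the core idea, a position undergoing Brownian motion so that ensemble track variance grows linearly in time, is below. The diffusivity and step sizes are invented, and the Itô-vs-Stratonovich subtleties of a full NWP implementation are not represented (for additive noise the two calculi coincide):

```python
import numpy as np

def brownian_tracks(n_members, n_steps, dt, diffusivity, seed=0):
    """Euler-Maruyama integration of dX = sqrt(2 D) dW for an ensemble of
    one-dimensional TC positions relative to the unperturbed track."""
    rng = np.random.default_rng(seed)
    dW = np.sqrt(dt) * rng.standard_normal((n_steps, n_members))
    return np.sqrt(2.0 * diffusivity) * dW.cumsum(axis=0)

D, dt, n_steps = 0.5, 0.1, 200
x = brownian_tracks(n_members=20_000, n_steps=n_steps, dt=dt, diffusivity=D)
# Ensemble variance should match the Brownian-motion law var(t) = 2*D*t
print(x[-1].var(), 2.0 * D * n_steps * dt)
```

The linear variance growth is the property exploited for track-variance inflation: choosing D sets how quickly the ensemble spread increases along the forecast.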
NASA Astrophysics Data System (ADS)
Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien
2006-03-01
Subdivision surfaces and parameterization are desirable for many algorithms that are commonly used in Medical Image Analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology-preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including: shape-based interpolation, subdivision remeshing and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.
NASA Astrophysics Data System (ADS)
Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.
2016-12-01
Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice-nucleating ability of different aerosol species (e.g., desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.
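Active-site-density parameterizations of the kind described above are commonly fit as an exponential in temperature, n_s(T) = exp(a·(T − T0) + b) per m². The sketch below uses desert-dust-like placeholder coefficients, not the values of the scheme in the abstract, to turn n_s into a frozen fraction for a monodisperse particle population:

```python
import math

def active_sites_per_particle(temp_k, surface_area_m2, a=-0.517, b=8.934):
    """Expected number of ice-nucleating active sites on one particle:
    n_s(T) [m^-2] times the particle surface area.
    Coefficients a, b are illustrative placeholders."""
    n_s = math.exp(a * (temp_k - 273.15) + b)
    return n_s * surface_area_m2

# Frozen fraction of a population of 1-micron-diameter spheres at -25 C,
# assuming active sites are Poisson-distributed over particles
area = math.pi * (1e-6) ** 2          # sphere surface area ~ pi * d**2
mu = active_sites_per_particle(248.15, area)
frozen_fraction = 1.0 - math.exp(-mu)
print(frozen_fraction)
```

The strong exponential temperature dependence is the key behavior: cooling by a few degrees raises the frozen fraction by an order of magnitude.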
Flexible climate modeling systems: Lessons from Snowball Earth, Titan and Mars
NASA Astrophysics Data System (ADS)
Pierrehumbert, R. T.
2007-12-01
Climate models are only useful to the extent that real understanding can be extracted from them. Most leading-edge problems in climate change, paleoclimate and planetary climate require a high degree of flexibility in terms of incorporating model physics -- for example, in allowing methane or CO2 to be a condensible substance instead of water vapor. This puts a premium on model design that allows easy modification, and on physical parameterizations that are close to fundamentals, with as little empirical ad-hoc formulation as possible. I will provide examples from two approaches to this problem we have been using at the University of Chicago. The first is the FOAM general circulation model, which is a clean single-executable Fortran-77/C code supported by auxiliary applications in Python and Java. The second is a new approach based on using Python as a shell for assembling compiled-code building blocks into full models. Applications to Snowball Earth, Titan and Mars, as well as pedagogical uses, will be discussed. One painful lesson we have learned is that Fortran-95 is a major impediment to portability and cross-language interoperability; in this light, the trend toward Fortran-95 in major modelling groups is seen as a significant step backwards. In this talk, I will focus on modeling projects employing a full representation of atmospheric fluid dynamics, rather than "intermediate complexity" models in which the associated transports are parameterized.
Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model
NASA Astrophysics Data System (ADS)
O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.
2015-12-01
Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
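The "focus resources where needed" logic of such an adaptive solver rests on a refinement criterion. The toy criterion below, an assumed undivided-gradient threshold rather than the actual tagging rule of the model described above, flags cells of a 2-D field for refinement:

```python
import numpy as np

def tag_for_refinement(field, dx, threshold):
    """Flag cells whose gradient magnitude exceeds a threshold -- the kind
    of criterion a block-structured AMR solver uses to decide where to
    generate finer grid levels."""
    gy, gx = np.gradient(field, dx)
    return np.hypot(gx, gy) > threshold

# Synthetic vorticity-like blob: only cells near its steep edge get tagged
x = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, x)
blob = np.tanh(10.0 * (0.5 - np.hypot(X, Y)))   # sharp ring at radius 0.5
tags = tag_for_refinement(blob, dx=x[1] - x[0], threshold=2.0)
print(tags.mean())   # fraction of cells flagged: a small minority
```

In a real AMR code the tagged cells are clustered into rectangular patches, and the same logic is reapplied recursively on each finer level.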
Radiatively driven stratosphere-troposphere interactions near the tops of tropical cloud clusters
NASA Technical Reports Server (NTRS)
Churchill, Dean D.; Houze, Robert A., Jr.
1990-01-01
Results are presented from two numerical simulations of the mechanisms involved in the dehydration of air, using the model of Churchill (1988) and Churchill and Houze (1990), which combines water and ice physics parameterizations and IR and solar-radiation parameterizations with a convective adjustment scheme in a kinematic, nondynamic framework. One simulation, of a thin cirrus cloud, tested the Danielsen (1982) hypothesis of a dehydration mechanism for the stratosphere; the other simulated the mesoscale updraft in order to test an alternative mechanism for 'freeze-drying' the air. The results show that the physical processes simulated in the mesoscale updraft differ from those in the thin-cirrus simulation. In the thin-cirrus case, eddy fluxes occur in response to IR radiative destabilization and, hence, no net transfer occurs between troposphere and stratosphere, whereas the mesoscale updraft case has net upward mass transport into the lower stratosphere.
The GIS weasel - An interface for the development of spatial information in modeling
Viger, R.J.; Markstrom, S.M.; Leavesley, G.H.; ,
2005-01-01
The GIS Weasel is a map- and graphical user interface (GUI)-driven tool developed to aid modelers in the delineation and characterization of geographic features, and in their parameterization for use in distributed or lumped-parameter physical process models. The interface does not require user expertise in geographic information systems (GIS); the user does, however, need to know how the model will use the output from the GIS Weasel. The GIS Weasel uses Workstation ArcInfo and its Grid extension, and runs on all platforms that Workstation ArcInfo supports (i.e., numerous flavors of Unix and Microsoft Windows). The GIS Weasel requires an input ArcInfo grid containing some topographical description of the area of interest (AOI). This is normally a digital elevation model, but it can be the surface of a groundwater table or any other data from which flow direction can be resolved. The user may define the AOI as a custom drainage area based on an interactively specified watershed outlet point, or use a previously created map. The user can then apply any combination of the GIS Weasel's tools to create one or more maps depicting different kinds of geographic features. Once the spatial feature maps have been prepared, the GIS Weasel's many parameterization routines can be used to create descriptions of each element in each of the user's maps. Over 200 parameterization routines currently exist, generating information about shape, area, and topological association with other features of the same or different maps, as well as many types of information based on ancillary data layers such as soil and vegetation properties. These tools easily integrate other similarly formatted data sets.
NASA Technical Reports Server (NTRS)
Steffen, K.; Abdalati, W.; Stroeve, J.; Key, J.
1994-01-01
The proposed research involves the application of multispectral satellite data in combination with ground truth measurements to monitor surface properties of the Greenland Ice Sheet which are essential for describing the energy and mass of the ice sheet. Several key components of the energy balance are parameterized using satellite data and in situ measurements. The analysis will be done for a ten-year time period in order to obtain statistics on the seasonal and interannual variations of the surface processes and the climatology. Our goal is to investigate to what accuracy and over what geographic areas large-scale snow properties and radiative fluxes can be derived based upon a combination of available remote sensing and meteorological data sets. Operational satellite sensors are calibrated based on ground measurements and atmospheric modeling prior to large-scale analysis to ensure the quality of the satellite data. Further, several satellite sensors of different spatial and spectral resolution are intercompared to assess the parameter accuracy. Proposed parameterization schemes to derive key components of the energy balance from satellite data are validated. For the understanding of the surface processes, a field program was designed to collect information on spectral albedo, specular reflectance, soot content, grain size and the physical properties of different snow types. Further, the radiative and turbulent fluxes at the ice/snow surface are monitored for the parameterization and interpretation of the satellite data. The expected results include several baseline data sets of albedo, surface temperature, radiative fluxes, and different snow types for the entire Greenland Ice Sheet. These climatological data sets will be of potential use for climate sensitivity studies in the context of future climate change.
NASA Astrophysics Data System (ADS)
Rusli, Stephanie P.; Donovan, David P.; Russchenberg, Herman W. J.
2017-12-01
Despite the importance of radar reflectivity (Z) measurements in the retrieval of liquid water cloud properties, it remains nontrivial to interpret Z due to the possible presence of drizzle droplets within the clouds. So far, there has been no published work that utilizes Z to identify the presence of drizzle above the cloud base in an optimized and physically consistent manner. In this work, we develop a retrieval technique that exploits the synergy of different remote sensing systems to carry out this task and to subsequently profile the microphysical properties of the cloud and drizzle in a unified framework. This is accomplished by using ground-based measurements of Z, lidar attenuated backscatter below as well as above the cloud base, and microwave brightness temperatures. Fast physical forward models coupled to cloud and drizzle structure parameterizations are used in an optimal-estimation-type framework in order to retrieve the best estimate for the cloud and drizzle property profiles. The cloud retrieval is first evaluated using synthetic signals generated from large-eddy simulation (LES) output to verify the forward models used in the retrieval procedure and the vertical parameterization of the liquid water content (LWC). From this exercise it is found that, on average, the cloud properties can be retrieved within 5 % of the mean truth. The full cloud-drizzle retrieval method is then applied to a selected ACCEPT (Analysis of the Composition of Clouds with Extended Polarization Techniques) campaign dataset collected in Cabauw, the Netherlands. An assessment of the retrieval products is performed using three independent methods from the literature; each was specifically developed to retrieve only the cloud properties, the drizzle properties below the cloud base, or the drizzle fraction within the cloud.
One-to-one comparisons, taking into account the uncertainties or limitations of each retrieval, show that our results are consistent with what is derived using the three independent methods.
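The optimal-estimation machinery referred to above boils down to Gauss-Newton updates that balance observation misfit against a prior. A generic, self-contained sketch with an invented linear forward model, not the actual radar/lidar/radiometer operators of the study, looks like this:

```python
import numpy as np

def optimal_estimation_step(x, y, forward, jacobian, x_a, S_a_inv, S_e_inv):
    """One Gauss-Newton update of a Rodgers-style optimal-estimation
    retrieval: weigh observation misfit (S_e_inv) against departure
    from the prior x_a (S_a_inv)."""
    K = jacobian(x)
    A = S_a_inv + K.T @ S_e_inv @ K
    b = K.T @ S_e_inv @ (y - forward(x)) - S_a_inv @ (x - x_a)
    return x + np.linalg.solve(A, b)

# Toy linear "multi-sensor" forward model y = H x
H = np.array([[1.0, 2.0], [0.5, 1.0], [2.0, 0.0]])
x_true = np.array([1.0, 3.0])
y = H @ x_true
x_a = np.zeros(2)
S_a_inv = np.eye(2) * 1e-4          # weak prior
S_e_inv = np.eye(3) * 1e4           # accurate observations
x = optimal_estimation_step(x_a, y, lambda s: H @ s, lambda s: H,
                            x_a, S_a_inv, S_e_inv)
print(np.round(x, 3))               # close to x_true = [1, 3]
```

With a linear forward model a single step essentially recovers the truth; real retrievals iterate because the physical forward models are nonlinear in the state.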
NASA Astrophysics Data System (ADS)
Zhang, Lei; Dong, Xiquan; Kennedy, Aaron; Xi, Baike; Li, Zhanqing
2017-03-01
The planetary boundary layer turbulence and moist convection parameterizations have been modified recently in the NASA Goddard Institute for Space Studies (GISS) Model E2 atmospheric general circulation model (GCM; post-CMIP5, hereafter P5). In this study, single-column model (SCM P5) simulated cloud fractions (CFs), cloud liquid water paths (LWPs) and precipitation were compared with Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) ground-based observations made during the period 2002-08. CMIP5 SCM simulations and GCM outputs over the ARM SGP region were also used in the comparison to identify whether the causes of cloud and precipitation biases resulted from the physical parameterization or the dynamic scheme. The comparison showed that the CMIP5 SCM has difficulties in simulating the vertical structure and seasonal variation of low-level clouds. The new scheme implemented in the turbulence parameterization led to significantly improved cloud simulations in P5. It was found that the SCM is sensitive to the relaxation time scale. When the relaxation time increased from 3 to 24 h, SCM P5-simulated CFs and LWPs showed a moderate increase (10%-20%) but precipitation increased significantly (56%), which agreed better with observations despite the less accurate atmospheric state. Annual averages among the GCM and SCM simulations were almost the same, but their respective seasonal variations were out of phase. This suggests that the same physical cloud parameterization can generate similar statistical results over a long time period, but different dynamics drive the differences in seasonal variations. This study can potentially provide guidance for the further development of the GISS model.
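The relaxation (nudging) of an SCM toward observed profiles mentioned above has a simple form, dX/dt = (X_obs − X)/τ. The toy integration below (invented temperatures and profile size, not GISS model output) shows why a 24-h timescale leaves the state much farther from the analysis than a 3-h timescale:

```python
import numpy as np

def nudge(state, state_obs, dt, tau):
    """Relax an SCM profile toward the observed/analysis state with
    timescale tau: dX/dt = (X_obs - X) / tau, forward-Euler step."""
    return state + dt * (state_obs - state) / tau

# Temperature profile relaxed with tau = 3 h vs tau = 24 h over one day
obs = np.full(4, 290.0)
x3, x24 = np.full(4, 280.0), np.full(4, 280.0)
dt, steps = 600.0, 144                     # 10-minute steps, 24 h total
for _ in range(steps):
    x3 = nudge(x3, obs, dt, 3 * 3600.0)
    x24 = nudge(x24, obs, dt, 24 * 3600.0)
# Short tau pins the state to the observations; long tau leaves a residual
print(float(x3[0]), float(x24[0]))
```

The residual decays like exp(−t/τ), so after 24 h the τ = 3 h run is essentially at the observed state while the τ = 24 h run retains roughly e⁻¹ of the initial error, leaving the parameterized physics more freedom to act.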
An Overview of Numerical Weather Prediction on Various Scales
NASA Astrophysics Data System (ADS)
Bao, J.-W.
2009-04-01
The increasing public need for detailed weather forecasts, along with advances in computer technology, has motivated many research institutes and national weather forecasting centers to develop and run global as well as regional numerical weather prediction (NWP) models at high resolutions (i.e., with horizontal resolutions of ~10 km or finer for global models and 1 km or finer for regional models, and with ~60 vertical levels or more). The need for running NWP models at high horizontal and vertical resolutions requires the implementation of a non-hydrostatic dynamical core with a choice of horizontal grid configurations and vertical coordinates appropriate for high resolutions. Development of advanced numerics will also be needed for high-resolution global and regional models, in particular when the models are applied to transport problems and air quality applications. In addition to the challenges in numerics, the NWP community is also facing the challenge of developing physics parameterizations that are well suited for high-resolution NWP models. For example, when NWP models are run at resolutions of ~5 km or finer, the use of much more detailed microphysics parameterizations than those currently used in NWP models will become important. Another example is that regional NWP models at ~1 km or finer only partially resolve the convective energy-containing eddies in the lower troposphere. Parameterizations to account for the subgrid diffusion associated with unresolved turbulence still need to be developed. Further, physically sound parameterizations for air-sea interaction will be a critical component for tropical NWP models, particularly for hurricane prediction models. In this review presentation, the above issues will be elaborated on and the approaches to address them will be discussed.
Strategy for long-term 3D cloud-resolving simulations over the ARM SGP site and preliminary results
NASA Astrophysics Data System (ADS)
Lin, W.; Liu, Y.; Song, H.; Endo, S.
2011-12-01
Parametric representations of cloud/precipitation processes must still be adopted in climate simulations with increasingly high spatial resolution or with emerging adaptive mesh frameworks, and it is becoming ever more critical that such parameterizations be scale-aware. Continuous cloud measurements at DOE's ARM sites have provided a strong observational basis for novel cloud parameterization research at various scales. Despite significant progress in our observational ability, there are important cloud-scale physical and dynamical quantities that are either not currently observable or insufficiently sampled. To complement the long-term ARM measurements, we have explored an optimal strategy to carry out long-term 3-D cloud-resolving simulations over the ARM SGP site using the Weather Research and Forecasting (WRF) model with multi-domain nesting. The factors considered to have important influences on the simulated cloud fields include domain size, spatial resolution, model top, forcing data set, model physics and the growth of model errors. Hydrometeor advection, which may play a significant role in hydrological processes within the observational domain but is often lacking, and the limitations due to the constraint of domain-wide uniform forcing in conventional cloud-system-resolving model simulations are at least partly accounted for in our approach. Conventional and probabilistic verification approaches are employed first for selected cases to optimize the model's capability of faithfully reproducing the observed mean and statistical distributions of cloud-scale quantities. This then forms the basis of our setup for long-term cloud-resolving simulations over the ARM SGP site. The model results will facilitate parameterization research, as well as understanding and dissecting parameterization deficiencies in climate models.
An energy balance climate model with cloud feedbacks
NASA Technical Reports Server (NTRS)
Roads, J. O.; Vallis, G. K.
1984-01-01
The present two-level global climate model, which is based on the atmosphere-surface energy balance, includes physically based parameterizations for the exchange of heat and moisture across latitude belts and between the surface and the atmosphere, precipitation and cloud formation, and solar and IR radiation. The model field predictions obtained encompass surface and atmospheric temperature, precipitation, relative humidity, and cloudiness. In the model integrations presented, it is noted that cloudiness is generally constant with changing temperature at low latitudes. High altitude cloudiness increases with temperature, although the cloud feedback effect on the radiation field remains small because of compensating effects on thermal and solar radiation. The net global feedback by the cloud field is negative, but small.
Examining Chaotic Convection with Super-Parameterization Ensembles
NASA Astrophysics Data System (ADS)
Jones, Todd R.
This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant, SP-CAM 2.0. The new configuration (multiple-parameterization-CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems, such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, allowing examination of those weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large-scale flow.
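The key structural change described above, feeding back the mean tendency of an ensemble of CPM curtains rather than a single curtain's tendency, can be sketched in a few lines (a minimal illustration; the function name and list-of-lists layout are assumptions, not SP-CAM code):

```python
import statistics

def mp_tendency(cpm_tendencies):
    """MP-CAM-style feedback sketch: given one tendency profile per CPM
    curtain (one inner list per curtain), return the ensemble-mean
    tendency profile applied to the global model column."""
    return [statistics.fmean(level) for level in zip(*cpm_tendencies)]
```

Averaging several independent realizations damps the chaotic component of each curtain's convection, which is why the applied tendencies become more deterministic in character.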
Evaluation of snow modeling with Noah and Noah-MP land surface models in NCEP GFS/CFS system
NASA Astrophysics Data System (ADS)
Dong, J.; Ek, M. B.; Wei, H.; Meng, J.
2017-12-01
The land surface serves as the lower boundary forcing in the global forecast system (GFS) and climate forecast system (CFS), mediating interactions between the land and the atmosphere. Understanding the underlying land model physics is key to improving weather and seasonal prediction skill. Whenever the land model physics is upgraded (e.g., a newer version of a land model is released), the land initialization changes, the parameterization schemes used in the land model change (e.g., land physical parameterization options), or the treatment of land impacts changes (e.g., a physics ensemble approach), climate prediction experiments need to be re-run to examine the impact. The current NASA LIS (version 7) integrates NOAA operational land surface and hydrological models (NCEP's Noah, versions 2.7.1 to 3.6, and the future Noah-MP), high-resolution satellite and observational data, and land DA tools. The newer versions of the Noah LSM used in operational models have a variety of enhancements over older versions, and Noah-MP allows for different physics parameterization options whose choice can have a large impact on the physical processes underlying seasonal predictions. These impacts need to be re-examined before implementation into NCEP operational systems. A set of offline numerical experiments driven by the GFS forecast forcing has been conducted to evaluate the impact of snow modeling with daily Global Historical Climatology Network (GHCN) data.
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess reliability and uncertainty of obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain first-order rupture complexity of large earthquakes robustly. Additionally, relatively few parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of MHS method on real earthquakes show that our method can capture major features of large earthquake rupture process, and provide information for more detailed rupture history analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, X.; Klein, S. A.; Ma, H. -Y.
The Community Atmosphere Model (CAM) adopts the Cloud Layers Unified By Binormals (CLUBB) scheme and an updated microphysics (MG2) scheme for a more unified treatment of cloud processes. This makes interactions between parameterizations tighter and more explicit. In this study, a cloudy planetary boundary layer (PBL) oscillation related to the interaction between CLUBB and MG2 is identified in CAM, highlighting the need for consistency between coupled subgrid processes in climate model development. The oscillation occurs most often in the marine cumulus cloud regime, and only if the modeled PBL is strongly decoupled and precipitation evaporates below the cloud. Two aspects of the parameterized coupling assumptions between the CLUBB and MG2 schemes cause the oscillation: (1) a parameterized relationship between rain evaporation and CLUBB's subgrid spatial variance of moisture and heat that induces extra cooling in the lower PBL, and (2) rain evaporation that occurs at too low an altitude because of the precipitation fraction parameterization in MG2. Either condition can overly stabilize the PBL and reduce the upward moisture transport to the cloud layer so that the PBL collapses. Global simulations show that turning off the evaporation-variance coupling and improving the precipitation fraction parameterization effectively reduces the cloudy PBL oscillation in marine cumulus clouds. By evaluating the causes of the oscillation in CAM, we have identified the PBL processes that should be examined in models exhibiting similar oscillations. This study may draw the attention of the modeling and observational communities to the issue of coupling between parameterized physical processes.
NASA Astrophysics Data System (ADS)
Bonan, Gordon B.; Patton, Edward G.; Harman, Ian N.; Oleson, Keith W.; Finnigan, John J.; Lu, Yaqiong; Burakowski, Elizabeth A.
2018-04-01
Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0) to test if this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5) at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different compared with profiles obtained using Monin-Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but the improvements are related less systematically to the roughness sublayer parameterization in these canopies. The multilayer canopy with the roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.
NASA Astrophysics Data System (ADS)
Stevens, R. G.; Lonsdale, C. L.; Brock, C. A.; Reed, M. K.; Crawford, J. H.; Holloway, J. S.; Ryerson, T. B.; Huey, L. G.; Nowak, J. B.; Pierce, J. R.
2012-04-01
New-particle formation in the plumes of coal-fired power plants and other anthropogenic sulphur sources may be an important source of particles in the atmosphere. It remains unclear, however, how best to reproduce this formation in global and regional aerosol models with grid-box lengths that are tens of kilometres and larger. The predictive power of these models is thus limited by the resultant uncertainties in aerosol size distributions. In this presentation, we focus on sub-grid sulphate aerosol processes within coal-fired power plant plumes: the sub-grid oxidation of SO2 with condensation of H2SO4 onto newly-formed and pre-existing particles. Based on the results of the System for Atmospheric Modelling (SAM), a Large-Eddy Simulation/Cloud-Resolving Model (LES/CRM) with online TwO Moment Aerosol Sectional (TOMAS) microphysics, we develop a computationally efficient, but physically based, parameterization that predicts the characteristics of aerosol formed within coal-fired power plant plumes based on parameters commonly available in global and regional-scale models. Given large-scale mean meteorological parameters, emissions from the power plant, mean background condensation sink, and the desired distance from the source, the parameterization will predict the fraction of the emitted SO2 that is oxidized to H2SO4, the fraction of that H2SO4 that forms new particles instead of condensing onto preexisting particles, the median diameter of the newly-formed particles, and the number of newly-formed particles per kilogram SO2 emitted. We perform a sensitivity analysis of these characteristics of the aerosol size distribution to the meteorological parameters, the condensation sink, and the emissions. In general, new-particle formation and growth are greatly reduced during polluted conditions due to the large preexisting aerosol surface area for H2SO4 condensation and particle coagulation.
The new-particle formation and growth rates are also a strong function of the amount of sunlight and NOx since both control OH concentrations. Decreases in NOx emissions without simultaneous decreases in SO2 emissions increase new-particle formation and growth due to increased oxidation of SO2. The parameterization we describe here should allow for more accurate predictions of aerosol size distributions and a greater confidence in the effects of aerosols in climate and health studies.
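The input/output contract of such a plume parameterization, large-scale conditions in and plume aerosol properties out, might look like the sketch below. All coefficient values and the first-order-oxidation form are illustrative assumptions, not the fitted SAM/TOMAS scheme, which additionally returns the median diameter and the number of new particles per kilogram of SO2:

```python
import math

def plume_aerosol_sketch(distance_km, wind_m_s, cond_sink_s=5e-3, k_ox_s=2e-6):
    """Illustrative sub-grid plume parameterization.

    Returns (fraction of emitted SO2 oxidized to H2SO4 by `distance_km`,
    fraction of that H2SO4 forming new particles rather than condensing
    onto pre-existing aerosol)."""
    transit_s = distance_km * 1e3 / wind_m_s
    # First-order oxidation of SO2 over the plume transit time.
    frac_oxidized = 1.0 - math.exp(-k_ox_s * transit_s)
    # New-particle fraction shrinks as the background condensation sink grows,
    # mimicking the suppression under polluted conditions described above.
    frac_new_particles = 1.0 / (1.0 + cond_sink_s / 1e-3)
    return frac_oxidized, frac_new_particles
```

A host model would call such a function once per grid box containing a point source, rather than resolving the plume explicitly.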
NASA Astrophysics Data System (ADS)
Basarab, B.; Fuchs, B.; Rutledge, S. A.
2013-12-01
Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. 
We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare to observed flash rates. For the 6 June storm, a preliminary analysis of aircraft observations of storm inflow and outflow is presented in order to place flash rates (and other lightning statistics) in the context of storm chemistry. An approach to a possibly improved LNOx parameterization scheme using different lightning metrics such as flash area will be discussed.
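As a concrete example of the kind of empirical scheme being evaluated, flash rate is often tied to a steep power of the maximum updraft speed, in the style of Price and Rind. The coefficient values below are the commonly quoted continental ones, reproduced here purely for illustration and worth checking against the original reference:

```python
def flash_rate_from_wmax(w_max_m_s, a=5.0e-6, b=4.54):
    """Empirical flash rate (flashes per minute) as a power law of the
    storm's maximum vertical velocity: F = a * w_max**b."""
    return a * w_max_m_s ** b
```

Such steep power laws make predicted flash rates very sensitive to updraft errors, which is one motivation for testing alternative metrics such as flash area.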
Mixing parametrizations for ocean climate modelling
NASA Astrophysics Data System (ADS)
Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir
2016-04-01
An algorithm is presented for splitting the evolution equations for turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, three schemes are implemented: an explicit-implicit numerical scheme, an analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations for modelling decadal climate variability of the Arctic and the Atlantic with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar in its physical formulation to contemporary differential turbulence models, while its algorithm is computationally efficient. Parameterizations using the split turbulence model yield a more adequate temperature and salinity structure at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step represents the ocean climate better than the faster parameterization based on the asymptotic behavior of the analytical solution, while the computational cost remains almost unchanged relative to the simple PP parameterization. Using the PP parameterization in the circulation model yields a realistic simulation of density and circulation but violates the T-S relationships; this error is largely avoided by using the proposed parameterizations containing the split turbulence model.
The eddy-permitting circulation model proves highly sensitive to the treatment of mixing, with significant changes in the density fields of the upper baroclinic ocean layer over the whole considered area. For instance, using the turbulence parameterization instead of the PP algorithm increases circulation velocities in the Gulf Stream and North Atlantic Current, and the subpolar cyclonic gyre in the North Atlantic and the Beaufort Gyre in the Arctic basin are reproduced more realistically. Treating the Prandtl number as a function of the Richardson number significantly improves the modelling quality. The research was supported by the Russian Foundation for Basic Research (grant № 16-05-00534) and the Council on the Russian Federation President Grants (grant № MK-3241.2015.5).
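The generation-dissipation stage of the split reduces, at each grid point, to an ordinary differential equation for the turbulence quantities. A minimal explicit-implicit step of the kind the authors mention might look like this (the coefficient and the exact source terms are placeholders, not the INMOM formulation):

```python
def gen_diss_step(tke, omega, shear_prod, buoy_prod, dt, c_diss=1.0):
    """One generation-dissipation sub-step for TKE: production
    (shear + buoyancy) is treated explicitly, while the dissipation sink
    (c_diss * omega * tke) is treated implicitly for stability."""
    tke_new = (tke + dt * (shear_prod + buoy_prod)) / (1.0 + dt * c_diss * omega)
    return max(tke_new, 1e-12)  # floor keeps TKE non-negative
```

The implicit treatment of the sink keeps the step stable even for large dt * omega, which is part of what makes the split scheme cheap relative to a fully coupled integration.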
Perspective: Sloppiness and emergent theories in physics, biology, and beyond.
Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P
2015-07-07
Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to the same phenomenon: they likewise emerge from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
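The sloppiness described above is easy to demonstrate numerically: for a model with nearly redundant parameters, the Fisher information matrix has eigenvalues spread over orders of magnitude. A self-contained example with the classic two-exponential model (the specific decay rates and sampling times are illustrative):

```python
import numpy as np

def fisher_information(decay_rates, times):
    """Fisher information matrix J.T @ J for the model
    y(t) = sum_i exp(-theta_i * t), assuming unit-variance Gaussian noise."""
    t = np.asarray(times, dtype=float)
    # Jacobian columns: dy/dtheta_i = -t * exp(-theta_i * t)
    J = np.stack([-t * np.exp(-th * t) for th in decay_rates], axis=1)
    return J.T @ J

fim = fisher_information([1.0, 1.1], np.linspace(0.0, 5.0, 50))
eigs = np.sort(np.linalg.eigvalsh(fim))[::-1]
# Nearly degenerate decay rates give one stiff and one sloppy direction:
# the eigenvalues differ by orders of magnitude.
```

The large eigenvalue corresponds to the well-constrained combination (roughly the sum of the rates); the tiny one is the sloppy direction (their difference), which the data barely constrain.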
Universal Parameterization of Absorption Cross Sections
NASA Technical Reports Server (NTRS)
Tripathi, R. K.; Cucinotta, Francis A.; Wilson, John W.
1997-01-01
This paper presents a simple universal parameterization of total reaction cross sections for any system of colliding nuclei that is valid for the entire energy range from a few AMeV to a few AGeV. The universal picture presented here treats proton-nucleus collision as a special case of nucleus-nucleus collision, where the projectile has charge and mass number of one. The parameters are associated with the physics of the collision system. In general terms, Coulomb interaction modifies cross sections at lower energies, and the effects of Pauli blocking are important at higher energies. The agreement between the calculated and experimental data is better than all earlier published results.
Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Hälg, Roger A.; Baiocco, Giorgio; Lomax, Tony
2016-08-01
The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate, besides the dose, also the energy spectra, in order to obtain quantities which could be a measure of the biological effectiveness and test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was Monte Carlo simulated using GEANT. Based on the simulated neutron spectra for three different proton beam energies a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with a satisfying precision up to 85 cm away from the proton pencil beam when compared to the results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factors and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy. 
The pencil beam algorithm has been extended using the developed parameterizations in order to calculate the neutron energy, quality factor and RBE.
Parameterizing Coefficients of a POD-Based Dynamical System
NASA Technical Reports Server (NTRS)
Kalb, Virginia L.
2010-01-01
A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation.
Parameter-continuation software can be used on the parameterized dynamical system to derive a bifurcation diagram that accurately predicts the temporal flow behavior.
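The object being parameterized is a quadratic Galerkin system for the mode amplitudes. The sketch below shows the idea of evaluating its coefficients as functions of Reynolds number; plain linear interpolation stands in for the energy-transfer-based parameterization, and all names are hypothetical:

```python
import numpy as np

def pod_rom_rhs(a, L, Q):
    """Right-hand side of a Galerkin POD model da/dt = L a + a.Q.a,
    where L and Q hold the linear and quadratic coefficients."""
    return L @ a + np.einsum('ijk,j,k->i', Q, a, a)

def coeffs_at_reynolds(re, re_grid, L_grid, Q_grid):
    """Sketch of the parameterization idea: interpolate model coefficients
    between Reynolds numbers where they were identified (linear
    interpolation replaces the kinetic-energy-transfer-based fit)."""
    w = np.interp(re, re_grid, np.arange(len(re_grid), dtype=float))
    i, frac = int(np.floor(w)), float(w - np.floor(w))
    i2 = min(i + 1, len(re_grid) - 1)
    L = (1 - frac) * L_grid[i] + frac * L_grid[i2]
    Q = (1 - frac) * Q_grid[i] + frac * Q_grid[i2]
    return L, Q
```

With coefficients available at any Reynolds number in range, parameter-continuation tools can then trace the bifurcation diagram from the resulting family of systems.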
Testing a common ice-ocean parameterization with laboratory experiments
NASA Astrophysics Data System (ADS)
McConnochie, C. D.; Kerr, R. C.
2017-07-01
Numerical models of ice-ocean interactions typically rely upon a parameterization for the transport of heat and salt to the ice face that has not been satisfactorily validated by observational or experimental data. We compare laboratory experiments of ice-saltwater interactions to a common numerical parameterization and find a significant disagreement in the dependence of the melt rate on the fluid velocity. We suggest a resolution to this disagreement based on a theoretical analysis of the boundary layer next to a vertical heated plate, which results in a threshold fluid velocity of approximately 4 cm/s at driving temperatures between 0.5 and 4°C, above which the form of the parameterization should be valid.
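The proposed resolution amounts to a piecewise parameterization: shear-controlled transfer above the threshold velocity, and buoyancy-controlled (velocity-independent) transfer below it. A schematic version, with purely illustrative transfer coefficients rather than the fitted values:

```python
def melt_rate_sketch(u_m_s, dT_degC, u_threshold=0.04,
                     gamma_turb=6e-6, gamma_conv=4e-7):
    """Schematic melt rate (m/s) vs. far-field velocity u and driving
    temperature dT. Above ~4 cm/s the standard shear-driven transfer
    (melt ~ u * dT) applies; below it, the free-convective boundary
    layer gives a velocity-independent rate."""
    if u_m_s >= u_threshold:
        return gamma_turb * u_m_s * dT_degC        # shear-driven regime
    return gamma_conv * dT_degC ** (4.0 / 3.0)     # free-convection regime
```

The 4/3-power dependence on driving temperature below threshold follows the usual free-convection scaling for a vertical heated wall; the experiments cited above are what motivate cutting over between the two regimes.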
Dmitriev, Egor V; Khomenko, Georges; Chami, Malik; Sokolov, Anton A; Churilova, Tatyana Y; Korotaev, Gennady K
2009-03-01
The absorption of sunlight by oceanic constituents significantly contributes to the spectral distribution of the water-leaving radiance. Here it is shown that current parameterizations of absorption coefficients do not apply to the optically complex waters of the Crimea Peninsula. Based on in situ measurements, parameterizations of phytoplankton, nonalgal, and total particulate absorption coefficients are proposed. Their performance is evaluated using a log-log regression combined with a low-pass filter and the nonlinear least-square method. Statistical significance of the estimated parameters is verified using the bootstrap method. The parameterizations are relevant for chlorophyll a concentrations ranging from 0.45 up to 2 mg m⁻³.
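The regression step behind such absorption parameterizations is a power-law fit, a = A * Chl^B, performed in log-log space. A minimal sketch of that step (without the low-pass filtering, nonlinear refinement, or bootstrap significance testing the authors apply; the synthetic data below are illustrative):

```python
import numpy as np

def fit_powerlaw(chl, a_ph):
    """Fit a_ph = A * chl**B by least squares in log-log space, the
    standard form for particulate absorption parameterizations."""
    slope, intercept = np.polyfit(np.log(chl), np.log(a_ph), 1)
    return np.exp(intercept), slope  # A, B

# Synthetic example spanning the concentration range quoted above.
chl = np.array([0.45, 0.8, 1.2, 2.0])
A, B = fit_powerlaw(chl, 0.05 * chl ** 0.7)
```

Fitting in log space weights relative (rather than absolute) errors, which suits absorption data spanning a wide dynamic range; the nonlinear least-square step then refines the coefficients in linear space.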
A second-order Budyko-type parameterization of land-surface hydrology
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1982-01-01
A simple, second-order parameterization of the water fluxes at a land surface was developed for use as the boundary condition in general circulation models of the global atmosphere. The derived parameterization incorporates the strong nonlinearities in the relationship between near-surface soil moisture and the evaporation, runoff, and percolation fluxes. Based on the one-dimensional statistical-dynamical derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. The suggested parameterization is compared with other existing techniques and with available measurements. A thermodynamic coupling is applied in order to obtain estimates of the ground surface temperature.
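The core of the second-order approach is a Taylor expansion of each flux about the average annual soil moisture; in code (a direct transcription of the stated idea, with hypothetical argument names):

```python
def flux_second_order(f_bar, f1, f2, s_bar, s):
    """Second-order Taylor estimate of a moisture flux F(s) about the
    average annual soil moisture s_bar:
        F(s) ~ F(s_bar) + F'(s_bar)*(s - s_bar) + 0.5*F''(s_bar)*(s - s_bar)**2
    f_bar, f1, f2 are the flux and its first two derivatives at s_bar."""
    ds = s - s_bar
    return f_bar + f1 * ds + 0.5 * f2 * ds ** 2
```

Retaining the second-order term is what lets the scheme track the strongly nonlinear flux-moisture relationships the abstract emphasizes, which a first-order (linear) closure would miss.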
NASA Technical Reports Server (NTRS)
Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean
1990-01-01
A Charney-Branscome-based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.
A scheme for parameterizing ice cloud water content in general circulation models
NASA Technical Reports Server (NTRS)
Heymsfield, Andrew J.; Donner, Leo J.
1989-01-01
A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.
NASA Astrophysics Data System (ADS)
Schwartz, M. Christian
2017-08-01
This paper addresses two straightforward questions. First, how similar are the statistics of cirrus particle size distribution (PSD) datasets collected using the Two-Dimensional Stereo (2D-S) probe to cirrus PSD datasets collected using older Particle Measuring Systems (PMS) 2-D Cloud (2DC) and 2-D Precipitation (2DP) probes? Second, how similar are the datasets when shatter-correcting post-processing is applied to the 2DC datasets? To answer these questions, a database of measured and parameterized cirrus PSDs - constructed from measurements taken during the Small Particles in Cirrus (SPARTICUS); Mid-latitude Airborne Cirrus Properties Experiment (MACPEX); and Tropical Composition, Cloud, and Climate Coupling (TC4) flight campaigns - is used. Bulk cloud quantities are computed from the 2D-S database in three ways: first, directly from the 2D-S data; second, by applying the 2D-S data to ice PSD parameterizations developed using sets of cirrus measurements collected using the older PMS probes; and third, by applying the 2D-S data to a similar parameterization developed using the 2D-S data themselves. This is done so that measurements of the same cloud volumes by parameterized versions of the 2DC and 2D-S can be compared with one another. It is thereby seen - given the same cloud field and given the same assumptions concerning ice crystal cross-sectional area, density, and radar cross section - that the parameterized 2D-S and the parameterized 2DC predict similar distributions of inferred shortwave extinction coefficient, ice water content, and 94 GHz radar reflectivity. However, the parameterization of the 2DC based on uncorrected data predicts a statistically significantly higher number of total ice crystals and a larger ratio of small ice crystals to large ice crystals than does the parameterized 2D-S.
The 2DC parameterization based on shatter-corrected data also predicts statistically different numbers of ice crystals than does the parameterized 2D-S, but the comparison between the two is nevertheless more favorable. It is concluded that the older datasets continue to be useful for scientific purposes, with certain caveats, and that continuing field investigations of cirrus with more modern probes is desirable.
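The bulk quantities compared in the study above are moments of the binned PSD weighted by assumed mass- and area-dimension relations. A minimal sketch of that computation follows; the bin values and the power-law coefficients (a, b, gamma, sigma) are invented placeholders, not the relations used in the paper.

```python
import numpy as np

# Deriving bulk cloud quantities from a binned ice PSD N(D).
# All numbers below are illustrative, not measured values.

D = np.array([20e-6, 50e-6, 100e-6, 300e-6, 800e-6])   # bin centers [m]
dD = np.array([20e-6, 40e-6, 60e-6, 300e-6, 700e-6])   # bin widths [m]
N = np.array([5e9, 1e9, 2e8, 1e7, 1e5])                # number density [m^-4]

a, b = 0.005, 2.1        # hypothetical mass-dimension relation m = a*D**b [kg]
gamma, sigma = 0.2, 1.9  # hypothetical area relation A = gamma*D**sigma [m^2]

n_total = np.sum(N * dD)                         # total number conc. [m^-3]
iwc = np.sum(a * D ** b * N * dD)                # ice water content [kg m^-3]
ext = 2.0 * np.sum(gamma * D ** sigma * N * dD)  # geometric-optics extinction [m^-1]

print(n_total, iwc, ext)
```

Because the small-particle bins dominate n_total while the large bins dominate IWC and reflectivity, shattering artifacts in the 2DC data mainly bias the total number and the small-to-large crystal ratio, as the abstract reports.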
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land-use changes, which make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso- and global-scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address these issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol global variance decomposition method. The analysis showed that parameters related to roads, roofs, and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to simulations using the default parameter set.
The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in model physics.
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and the western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for its good management. This project was initiated to fill that need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and the modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations tested were indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criterion and the estimated variance. Model parameterization had more influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting a model for use, as the best-fitting methods have the most barriers to application in terms of data and software requirements. PMID:25853472
Potentialities of ensemble strategies for flood forecasting over the Milano urban area
NASA Astrophysics Data System (ADS)
Ravazzani, Giovanni; Amengual, Arnau; Ceppi, Alessandro; Homar, Víctor; Romero, Romu; Lombardi, Gabriele; Mancini, Marco
2016-08-01
Analysis of ensemble forecasting strategies, which can provide a tangible backing for flood early warning procedures and mitigation measures over the Mediterranean region, is one of the fundamental motivations of the international HyMeX programme. Here, we examine two severe hydrometeorological episodes that affected the Milano urban area and for which the complex flood protection system of the city did not completely succeed. Indeed, flood damage has increased exponentially during the last 60 years, due to industrial and urban development. Thus, improvement of the Milano flood control system requires a synergy between structural and non-structural approaches. First, we examine how land-use changes due to urban development have altered the hydrological response to intense rainfall. Second, we test a flood forecasting system which comprises the Flash-flood Event-based Spatially distributed rainfall-runoff Transformation, including Water Balance (FEST-WB) and the Weather Research and Forecasting (WRF) models. Deep moist convection and extreme precipitation are difficult to forecast accurately owing to uncertainties arising from the numerical weather prediction (NWP) physical parameterizations and high sensitivity to misrepresentation of the atmospheric state; however, two hydrological ensemble prediction systems (HEPS) have been designed to explicitly cope with uncertainties in the initial and lateral boundary conditions (IC/LBCs) and physical parameterizations of the NWP model. No substantial differences in skill have been found between the two ensemble strategies when considering an enhanced diversity of IC/LBCs for the perturbed initial conditions ensemble. Furthermore, no additional benefits have been found by considering more frequent LBCs in a mixed-physics ensemble, as the ensemble spread seems to be reduced.
These findings could help to design the most appropriate ensemble strategies before these hydrometeorological extremes, given the computational cost of running such advanced HEPSs for operational purposes.
NASA Astrophysics Data System (ADS)
Kuppel, S.; Soulsby, C.; Maneta, M. P.; Tetzlaff, D.
2017-12-01
The utility of field measurements to help constrain the model solution space and identify feasible model configurations has been an increasingly central issue in hydrological model calibration. Sufficiently informative observations are necessary to ensure that the goodness of model-data fit attained effectively translates into more physically sound information for the internal model parameters, as a basis for model structure evaluation. Here we assess to what extent the diversity of information content can inform on the suitability of a complex, process-based ecohydrological model to simulate key water flux and storage dynamics at a long-term research catchment in the Scottish Highlands. We use the fully distributed ecohydrological model EcH2O, calibrated against long-term datasets that encompass hydrologic and energy exchanges and ecological measurements: stream discharge, soil moisture, net radiation above canopy, and pine stand transpiration. Diverse combinations of these constraints were applied using a multi-objective cost function specifically designed to avoid compensatory effects between model-data metrics. Results revealed that calibration against virtually all datasets enabled the model to reproduce streamflow reasonably well. However, parameterizing the model to adequately capture local flux and storage dynamics, such as soil moisture or transpiration, required calibration with specific observations. This indicates that the footprint of the information contained in observations varies for each type of dataset, and that a diverse database, informing on the different compartments of the domain, is critical to test hypotheses of catchment function and identify a consistent model parameterization. The results foster confidence in using EcH2O to help understand current and future ecohydrological couplings in northern catchments.
Physics Parameterization for Seasonal Prediction
2012-09-30
comparison Project, a joint effort between the Year of Tropical Convection (YOTC) Program and the Global Energy and Water Cycle Experiment (GEWEX) Cloud...unified” representation of the water cycle in the model. One such area is the correspondence between diagnosed cloud cover and prognostic cloud
Modeling particle nucleation and growth over northern California during the 2010 CARES campaign
NASA Astrophysics Data System (ADS)
Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.
2015-11-01
Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem), a regional model, using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4, while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated the total particle number concentration for particle diameters greater than 10 nm by up to a factor of 2.5. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapor parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based on only H2SO4.
At the CARES urban ground site, peak nucleation rates are predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times as well as lower boundary-layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. We found that newly formed particles could explain up to 20-30 % of predicted cloud condensation nuclei at 0.5 % supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20 size bin simulation, while reducing the associated computational cost by ~ 36 %.
NASA Astrophysics Data System (ADS)
Fangohr, Susanne; Woolf, David K.
2007-06-01
One of the dominant sources of uncertainty in the calculation of air-sea flux of carbon dioxide on a global scale originates from the various parameterizations of the gas transfer velocity, k, that are in use. Whilst it is undisputed that most of these parameterizations have shortcomings, neglecting processes that influence air-sea gas exchange but do not scale with wind speed alone, there is no general agreement about their relative accuracy. The most widely used parameterizations are based on non-linear functions of wind speed and, to a lesser extent, on sea surface temperature and salinity. Processes such as surface film damping and whitecapping are known to have an effect on air-sea exchange. More recently published parameterizations use friction velocity, sea surface roughness, and significant wave height. These new parameters can account to some extent for processes such as film damping and whitecapping and could potentially explain the spread of wind-speed-based transfer velocities published in the literature. We combine some of the principles of two recently published k parameterizations [Glover, D.M., Frew, N.M., McCue, S.J. and Bock, E.J., 2002. A multiyear time series of global gas transfer velocity from the TOPEX dual frequency, normalized radar backscatter algorithm. In: Donelan, M.A., Drennan, W.M., Saltzman, E.S., and Wanninkhof, R. (Eds.), Gas Transfer at Water Surfaces, Geophys. Monograph 127. AGU, Washington, DC, 325-331; Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. Tellus, 57B: 87-94] to calculate k as the sum of a linear function of total mean square slope of the sea surface and a wave breaking parameter. This separates contributions from direct and bubble-mediated gas transfer as suggested by Woolf [Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking.
Tellus, 57B: 87-94] and allows us to quantify contributions from these two processes independently. We then apply our parameterization to a monthly TOPEX altimeter gridded 1.5° × 1.5° data set and compare our results to transfer velocities calculated using the popular wind-based k parameterizations by Wanninkhof [Wanninkhof, R., 1992. Relationship between wind speed and gas exchange over the ocean. J. Geophys. Res., 97: 7373-7382.] and Wanninkhof and McGillis [Wanninkhof, R. and McGillis, W., 1999. A cubic relationship between air-sea CO2 exchange and wind speed. Geophys. Res. Lett., 26(13): 1889-1892]. We show that despite good agreement of the globally averaged transfer velocities, global and regional fluxes differ by up to 100%. These discrepancies are a result of different spatio-temporal distributions of the processes involved in the parameterizations of k, indicating the importance of wave field parameters and a need for further validation.
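The contrast drawn above, wind-speed-only versus sea-state-based transfer velocities, can be sketched numerically. The Wanninkhof (1992) and Wanninkhof and McGillis (1999) coefficients below are the commonly cited published values; the sea-state coefficients are illustrative placeholders only, not the ones derived in this paper.

```python
# Gas transfer velocity k [cm/h] from wind speed alone versus a schematic
# sea-state form (direct term linear in mean square slope plus a
# bubble-mediated wave-breaking term). Sea-state coefficients a, b are
# invented for illustration.

def k_wanninkhof92(u10, sc=660.0):
    """Quadratic wind-speed parameterization, Wanninkhof (1992)."""
    return 0.31 * u10 ** 2 * (sc / 660.0) ** -0.5

def k_wanninkhof_mcgillis99(u10, sc=660.0):
    """Cubic wind-speed parameterization, Wanninkhof & McGillis (1999)."""
    return 0.0283 * u10 ** 3 * (sc / 660.0) ** -0.5

def k_seastate(mss, whitecap_frac, a=1500.0, b=850.0):
    """Hypothetical sea-state form: direct (slope) + bubble (breaking) terms."""
    return a * mss + b * whitecap_frac

for u in (5.0, 10.0, 15.0):
    print(u, round(k_wanninkhof92(u), 1), round(k_wanninkhof_mcgillis99(u), 1))
```

Because mean square slope and whitecap coverage are not distributed in space and time the same way as wind speed, globally averaged k can agree between the forms while regional fluxes differ substantially, which is the discrepancy the abstract quantifies.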
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stull, R.B.; Tripoli, G.
1996-01-08
The authors developed single-column parameterizations for subgrid boundary-layer cumulus clouds. These give cloud onset time, cloud coverage, and ensemble distributions of cloud-base altitudes, cloud-top altitudes, cloud thickness, and the characteristics of cloudy and clear updrafts. They tested and refined the parameterizations against archived data from Spring and Summer 1994 and 1995 intensive operation periods (IOPs) at the Southern Great Plains (SGP) ARM CART site near Lamont, Oklahoma. The authors also found that: cloud-base altitudes are not uniform over a heterogeneous surface; tops of some cumulus clouds can be below the base altitudes of other cumulus clouds; there is an overlap region near cloud base where clear and cloudy updrafts exist simultaneously; and the lognormal distribution of cloud sizes scales to the JFD of surface layer air and to the shape of the temperature profile above the boundary layer.
Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav
2007-01-01
The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for quality of the parameters. We have improved EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically: C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that were not parameterized yet, specifically for Br, I, Fe and Zn. We have also performed crossover validation of all obtained parameters using all training sets that included relevant elements and confirmed that calculated parameters provide accurate charges.
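The EEM described above amounts to solving a small linear system: each atom's effective electronegativity is equalized to a common value subject to a total-charge constraint. The sketch below shows that structure; the A (electronegativity), B (hardness), and kappa values are invented for illustration and are not the fitted parameters reported in the paper.

```python
import numpy as np

# Minimal EEM sketch: equalize chi_i = A_i + B_i*q_i + kappa*sum_j q_j/R_ij
# across atoms, subject to sum(q) = Q. Unknowns: n charges plus the common
# electronegativity chi_bar. Parameter values are hypothetical.

def eem_charges(A, B, R, Q=0.0, kappa=0.529):
    n = len(A)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if j != i:
                M[i, j] = kappa / R[i, j]
        M[i, n] = -1.0          # common electronegativity chi_bar (unknown)
        rhs[i] = -A[i]
    M[n, :n] = 1.0              # total-charge constraint row
    rhs[n] = Q
    sol = np.linalg.solve(M, rhs)
    return sol[:n]              # atomic charges

# Toy diatomic: two unlike atoms 1.5 units apart, neutral molecule.
A = np.array([8.5, 5.0])        # hypothetical electronegativity parameters
B = np.array([12.0, 9.0])       # hypothetical hardness parameters
R = np.array([[0.0, 1.5], [1.5, 0.0]])
q = eem_charges(A, B, R)
print(q)   # charges sum to zero; the more electronegative atom goes negative
```

The speed of the method comes from this structure: one dense linear solve per molecule, with the quality of the charges governed entirely by how A, B, and kappa were parameterized against ab initio reference charges.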
Stochastic parameterization of shallow cumulus convection estimated from high-resolution model data
NASA Astrophysics Data System (ADS)
Dorrestijn, Jesse; Crommelin, Daan T.; Siebesma, A. Pier; Jonker, Harm J. J.
2013-02-01
In this paper, we report on the development of a methodology for stochastic parameterization of convective transport by shallow cumulus convection in weather and climate models. We construct a parameterization based on Large-Eddy Simulation (LES) data. These simulations resolve the turbulent fluxes of heat and moisture and are based on a typical case of non-precipitating shallow cumulus convection above sea in the trade-wind region. Using clustering, we determine a finite number of turbulent flux pairs for heat and moisture that are representative for the pairs of flux profiles observed in these simulations. In the stochastic parameterization scheme proposed here, the convection scheme jumps randomly between these pre-computed pairs of turbulent flux profiles. The transition probabilities are estimated from the LES data, and they are conditioned on the resolved-scale state in the model column. Hence, the stochastic parameterization is formulated as a data-inferred conditional Markov chain (CMC), where each state of the Markov chain corresponds to a pair of turbulent heat and moisture fluxes. The CMC parameterization is designed to emulate, in a statistical sense, the convective behaviour observed in the LES data. The CMC is tested in single-column model (SCM) experiments. The SCM is able to reproduce the ensemble spread of the temperature and humidity that was observed in the LES data. Furthermore, there is a good similarity between time series of the fractions of the discretized fluxes produced by SCM and observed in LES.
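The conditional Markov chain described above can be sketched in a few lines: the scheme holds a finite set of pre-computed flux pairs and jumps between them with transition probabilities that depend on the discretized resolved-scale state. The flux values and transition matrices below are invented for illustration, not values estimated from the LES data.

```python
import numpy as np

# Sketch of a data-inferred conditional Markov chain (CMC) for convective
# fluxes: states are pre-computed (heat, moisture) flux pairs; the
# transition matrix is conditioned on the resolved-scale regime.
# All numbers are illustrative placeholders.

rng = np.random.default_rng(0)

# Three representative flux pairs (e.g., cluster centroids of LES profiles).
flux_states = [(0.0, 0.0), (0.05, 0.02), (0.15, 0.08)]  # (w'theta', w'q')

# One transition matrix per discretized resolved-scale regime.
P = {
    "stable":   np.array([[0.9, 0.1, 0.0],
                          [0.6, 0.3, 0.1],
                          [0.3, 0.4, 0.3]]),
    "unstable": np.array([[0.3, 0.5, 0.2],
                          [0.1, 0.5, 0.4],
                          [0.0, 0.3, 0.7]]),
}

def step(state_idx, regime):
    """Draw the next chain state conditioned on the resolved regime."""
    return rng.choice(len(flux_states), p=P[regime][state_idx])

# Emulate a short single-column run with a regime change halfway through.
idx, trajectory = 0, []
for t in range(10):
    regime = "unstable" if t >= 5 else "stable"
    idx = step(idx, regime)
    trajectory.append(flux_states[idx])
print(trajectory)
```

In the paper's setup the transition probabilities are estimated by counting transitions in the LES data within each resolved-state bin, so the chain reproduces the LES flux statistics rather than prescribing them deterministically.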
Empirical parameterization of setup, swash, and runup
Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.
2006-01-01
Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a -17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
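The setup-plus-swash structure described above is usually quoted as a closed-form expression. The sketch below reproduces that widely quoted form from memory; the coefficients should be verified against the original paper before any practical use.

```python
import math

# Widely quoted form of the 2% exceedence runup parameterization:
#   R2% = 1.1 * (<eta> + S/2), with
#   <eta> = 0.35 * beta_f * sqrt(H0*L0)        (setup)
#   S     = sqrt(Sinc**2 + Sig**2)             (total swash)
#   Sinc  = 0.75 * beta_f * sqrt(H0*L0)        (incident band)
#   Sig   = 0.06 * sqrt(H0*L0)                 (infragravity band)
# Coefficients reproduced from memory; verify before use.

def runup_2pct(H0, T0, beta_f, g=9.81):
    """2% exceedence runup [m] from deep-water wave height H0 [m],
    peak period T0 [s], and foreshore beach slope beta_f."""
    L0 = g * T0 ** 2 / (2.0 * math.pi)   # deep-water wavelength
    hl = math.sqrt(H0 * L0)
    setup = 0.35 * beta_f * hl
    s_inc = 0.75 * beta_f * hl
    s_ig = 0.06 * hl                     # no slope dependence (infragravity)
    swash = math.sqrt(s_inc ** 2 + s_ig ** 2)
    return 1.1 * (setup + swash / 2.0)

print(round(runup_2pct(H0=2.0, T0=10.0, beta_f=0.08), 2))
```

Note how the infragravity term carries no beach-slope dependence, consistent with the abstract's finding that infragravity swash scales only with offshore height and wavelength.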
Remote Sensing Protocols for Parameterizing an Individual, Tree-Based, Forest Growth and Yield Model
2014-09-01
Leaf-Off Tree Crowns in Small Footprint, High Sampling Density LIDAR Data from Eastern Deciduous Forests in North America." Remote Sensing of...William A. 2003. "Crown-Diameter Prediction Models for 87 Species of Stand-Grown Trees in the Eastern United States." Southern Journal of Applied...ERDC/CERL TR-14-18 Base Facilities Environmental Quality Remote Sensing Protocols for Parameterizing an Individual, Tree-Based
NASA Technical Reports Server (NTRS)
Freitas, Saulo R.; Grell, Georg; Molod, Andrea; Thompson, Matthew A.
2017-01-01
We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, mid, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over the land. Here, we briefly introduce the recent developments, implementation, and preliminary results of this parameterization in the NASA GEOS modeling system.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For dynamic decoupling of polynomial linear parameter-varying (PLPV) systems, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is obtained that satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
Use of machine learning methods to reduce predictive error of groundwater models.
Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal
2014-01-01
Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameters, and data lead to both random and systematic errors even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterizations, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of structure in the error of the physically-based model. © 2013, National GroundWater Association.
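The error-correction idea above, learn the residual of the physical model as a function of input features and add it back to the model's prediction, can be sketched minimally. A distance-weighted nearest-neighbour regressor stands in here for the paper's instance-based weighting and support vector regression; the toy data are invented.

```python
import numpy as np

# Complementary data-driven model (DDM) sketch: learn residuals
# (observed minus simulated) and use them to correct new predictions.
# The "physical model" and data below are illustrative only.

def knn_residual(X_train, r_train, x, k=3, eps=1e-9):
    """Distance-weighted average of the k nearest training residuals."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    return np.sum(w * r_train[idx]) / np.sum(w)

# Toy setup: the physical model has a systematic bias (misses a +1 offset).
X_train = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
truth = 2.0 * X_train[:, 0] + 1.0
physical = 2.0 * X_train[:, 0]          # biased physical model output
residuals = truth - physical            # systematic error = 1.0 everywhere

x_new = np.array([2.5])
corrected = 2.0 * x_new[0] + knn_residual(X_train, residuals, x_new)
print(corrected)   # bias-corrected prediction matches the truth (6.0)
```

This also illustrates the paper's closing point: the correction only works because the residual is structured (systematic); purely random error has no learnable pattern for the DDM to exploit.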
NASA Astrophysics Data System (ADS)
Singh Pradhan, Ananta Man; Kang, Hyo-Sub; Kim, Yun-Tae
2016-04-01
This study uses a physically based approach to evaluate the factor of safety of hillslopes under different hydrological conditions on Mt. Umyeon, south of Seoul. The hydrological conditions were determined using the rainfall intensity and duration associated with a known landslide inventory covering all of Korea. A quantile regression method was used to establish probability-based warning levels from rainfall thresholds. Physically based models are easily interpreted and have high predictive capability but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical probabilistic methods can include other causative factors that influence slope stability, such as forest, soil, and geology, but rely on good landslide inventories for the site. This study describes a hybrid approach that combines physically based landslide susceptibility estimates for different hydrological conditions. A presence-only maximum entropy model was used to build the hybrid and to analyze the relation of landslides to the conditioning factors. About 80% of the landslides were listed among the unstable sites identified by the proposed model, demonstrating its effectiveness and accuracy in determining unstable areas and areas that require evacuation. These cumulative rainfall thresholds provide a valuable reference to guide disaster prevention authorities in the issuance of warning levels, with the potential to reduce losses and save lives.
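Rainfall thresholds of the kind described above are commonly fitted as a power law I = alpha * D**beta at a low quantile of triggering events, so the curve bounds most observed events from below. The sketch below fits such a curve by minimizing the pinball (quantile) loss with a simple grid search; the synthetic events and the use of grid search are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Quantile-regression sketch for an intensity-duration (I-D) rainfall
# threshold I = alpha * D**beta. Synthetic events; illustrative only.

rng = np.random.default_rng(1)
D = rng.uniform(1.0, 48.0, 200)                       # duration [h]
I = 20.0 * D ** -0.6 * rng.lognormal(0.0, 0.3, 200)   # intensity [mm/h]

def pinball(resid, tau):
    """Quantile (pinball) loss for residuals at quantile tau."""
    return np.mean(np.maximum(tau * resid, (tau - 1.0) * resid))

def fit_quantile_threshold(D, I, tau=0.05):
    """Grid-search (alpha, beta) minimizing pinball loss in log space."""
    logD, logI = np.log(D), np.log(I)
    best = None
    for b in np.linspace(-1.2, 0.0, 61):
        for a in np.linspace(logI.min(), logI.max(), 121):
            loss = pinball(logI - (a + b * logD), tau)
            if best is None or loss < best[0]:
                best = (loss, np.exp(a), b)
    return best[1], best[2]   # alpha, beta

alpha, beta = fit_quantile_threshold(D, I)
frac_above = np.mean(I > alpha * D ** beta)
print(round(alpha, 2), round(beta, 2), frac_above)
```

Fitting at several tau values (e.g., 0.05, 0.20, 0.50) yields a family of curves that can serve as graduated warning levels, which is the role quantile regression plays in the study.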
Modelling the pelagic nitrogen cycle and vertical particle flux in the Norwegian sea
NASA Astrophysics Data System (ADS)
Haupt, Olaf J.; Wolf, Uli; v. Bodungen, Bodo
1999-02-01
A 1D Eulerian ecosystem model (BIological Ocean Model, BIOM) for the Norwegian Sea was developed to investigate the dynamics of pelagic ecosystems. The BIOM combines six biochemical compartments and simulates the annual nitrogen cycle with a specific focus on the production, modification, and sedimentation of particles in the water column. The external forcing and physical framework are based on a simulated annual cycle of global radiation and an annual mixed-layer cycle derived from field data. The vertical resolution of the model is given by an exponential grid with 200 depth layers, allowing specific parameterization of various sinking velocities, the breakdown of particles, and the remineralization processes. The aim of the numerical experiments is the simulation of ecosystem dynamics considering the specific biogeochemical properties of the Norwegian Sea, for example the life cycle of the dominant copepod Calanus finmarchicus. The results of the simulations were validated with field data. Model results are in good agreement with field data for the lower trophic levels of the food web. With increasing complexity of the organisms, the differences between simulated processes and field data increase. Results of the numerical simulations suggest that the BIOM is well adapted to investigate a physically controlled ecosystem. The simulation of grazing-controlled pelagic ecosystems, like the Norwegian Sea, requires adaptation of the parameterization to the specific ecosystem features. By seasonally adapting the most sensitive processes, such as the utilization of light by phytoplankton and grazing by zooplankton, results were greatly improved.
NASA Technical Reports Server (NTRS)
Olson, William S.
1990-01-01
A physical retrieval method for estimating precipitating water distributions and other geophysical parameters based upon measurements from the DMSP-F8 SSM/I is developed. Three unique features of the retrieval method are (1) sensor antenna patterns are explicitly included to accommodate varying channel resolution; (2) precipitation-brightness temperature relationships are quantified using the cloud ensemble/radiative parameterization; and (3) spatial constraints are imposed for certain background parameters, such as humidity, which vary more slowly in the horizontal than the cloud and precipitation water contents. The general framework of the method will facilitate the incorporation of measurements from the SSM/T, SSM/T-2, and geostationary infrared instruments, as well as information from conventional sources (e.g., radiosondes) or numerical forecast model fields.
A multiscale strength model for tantalum over an extended range of strain rates
NASA Astrophysics Data System (ADS)
Barton, N. R.; Rhee, M.
2013-09-01
A strength model for tantalum is developed and exercised across a range of conditions relevant to various types of experimental observations. The model is based on previous multiscale modeling work combined with experimental observations. As such, the model's parameterization includes a hybrid of quantities that arise directly from predictive sub-scale physics models and quantities that are adjusted to align the model with experimental observations. Given current computing and experimental limitations, the response regions for sub-scale physics simulations and detailed experimental observations have been largely disjoint. In formulating the new model and presenting results here, attention is paid to integrated experimental observations that probe strength response at the elevated strain rates where a previous version of the model has generally been successful in predicting experimental data [Barton et al., J. Appl. Phys. 109(7), 073501 (2011)].
A Dynamically Computed Convective Time Scale for the Kain–Fritsch Convective Parameterization Scheme
Many convective parameterization schemes define a convective adjustment time scale τ as the time allowed for dissipation of convective available potential energy (CAPE). The Kain–Fritsch scheme defines τ based on an estimate of the advective time period for deep con...
Physically-Derived Dynamical Cores in Atmospheric General Circulation Models
NASA Technical Reports Server (NTRS)
Rood, Richard B.; Lin, Shian-Jiann
1999-01-01
The algorithm chosen to represent the advection in atmospheric models is often used as the primary attribute to classify the model. Meteorological models are generally classified as spectral or grid point, with the term grid point implying discretization using finite differences. These traditional approaches have a number of shortcomings that render them non-physical. That is, they provide approximate solutions to the conservation equations that do not obey the fundamental laws of physics. The most commonly discussed shortcomings are overshoots and undershoots which manifest themselves most overtly in the constituent continuity equation. For this reason many climate models have special algorithms to model water vapor advection. This talk focuses on the development of an atmospheric general circulation model which uses a consistent physically-based advection algorithm in all aspects of the model formulation. The shallow-water model of Lin and Rood (QJRMS, 1997) is generalized to three dimensions and combined with the physics parameterizations of NCAR's Community Climate Model. The scientific motivation for the development is to increase the integrity of the underlying fluid dynamics so that the physics terms can be more effectively isolated, examined, and improved. The expected benefits of the new model are discussed and results from the initial integrations will be presented.
Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; ...
2016-06-14
In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects, while the pressure-dependent yield is obtained through the pressure-dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.
Physics Parameterization for Seasonal Prediction
2013-09-30
...particularly the Madden-Julian Oscillation (MJO). We are continuing our participation in the project "Vertical Structure and Diabatic Processes of the MJO"... Results are shown for: a) TRMM rainfall, b) NAVGEM 20-year run submitted for the YOTC/GEWEX project "Vertical Structure and Diabatic Processes of the MJO".
NASA Technical Reports Server (NTRS)
Arnold, Nathan; Barahona, Donifan; Achuthavarier, Deepthi
2017-01-01
Weather and climate models have long struggled to realistically simulate the Madden-Julian Oscillation (MJO). Here we present a significant improvement in MJO simulation in NASA's GEOS atmospheric model with the implementation of 2-moment microphysics and the UW shallow cumulus parameterization. Comparing ten-year runs (2007-2016) with the old (1mom) and updated (2mom+shlw) model physics, the updated model has increased intra-seasonal variance with increased coherence. Surface fluxes and OLR are found to vary more realistically with precipitation, and a moisture budget suggests that changes in rain reevaporation and the cloud longwave feedback help support heavy precipitation. Preliminary results also show improved MJO hindcast skill.
NASA Astrophysics Data System (ADS)
de Lavenne, Alban; Andréassian, Vazken
2018-03-01
This paper examines the hydrological impact of the seasonality of precipitation and maximum evaporation: seasonality is, after aridity, a second-order determinant of catchment water yield. Based on a data set of 171 French catchments (where aridity ranged between 0.2 and 1.2), we present a parameterization of three commonly-used water balance formulas (namely, Turc-Mezentsev, Tixeront-Fu and Oldekop formulas) to account for seasonality effects. We quantify the improvement of seasonality-based parameterization in terms of the reconstitution of both catchment streamflow and water yield. The significant improvement obtained (reduction of RMSE between 9 and 14% depending on the formula) demonstrates the importance of climate seasonality in the determination of long-term catchment water balance.
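The three water-balance formulas named above have standard Budyko-type forms; the sketch below uses those textbook forms with illustrative parameter values, not the seasonality-based parameterization the paper introduces:

```python
import numpy as np

def turc_mezentsev(P, E0, n=2.0):
    # Mean annual actual evaporation; n is a shape parameter (n=2 illustrative).
    return P * E0 / (P**n + E0**n) ** (1.0 / n)

def tixeront_fu(P, E0, w=2.6):
    # Fu's equation; omega (w=2.6 illustrative) is the single free parameter.
    return P + E0 - (P**w + E0**w) ** (1.0 / w)

def oldekop(P, E0):
    # Parameter-free hyperbolic-tangent form.
    return E0 * np.tanh(P / E0)

# Long-term streamflow (water yield) follows from the balance Q = P - E.
P, E0 = 900.0, 700.0                 # mm/yr precipitation and maximum evaporation
Q = P - turc_mezentsev(P, E0)
```

Each formula keeps actual evaporation below both its supply (P) and demand (E0) limits, which is the defining Budyko constraint.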
NASA Astrophysics Data System (ADS)
Sommer, Philipp; Kaplan, Jed
2016-04-01
Accurate modelling of large-scale vegetation dynamics, hydrology, and other environmental processes requires meteorological forcing on daily timescales. While meteorological data with high temporal resolution is becoming increasingly available, simulations for the future or distant past are limited by lack of data and poor performance of climate models, e.g., in simulating daily precipitation. To overcome these limitations, we may temporally downscale monthly summary data to a daily time step using a weather generator. Parameterization of such statistical models has traditionally been based on a limited number of observations. Recent developments in the archiving, distribution, and analysis of "big data" datasets provide new opportunities for the parameterization of a temporal downscaling model that is applicable over a wide range of climates. Here we parameterize a WGEN-type weather generator using more than 50 million individual daily meteorological observations, from over 10'000 stations covering all continents, based on the Global Historical Climatology Network (GHCN) and Synoptic Cloud Reports (EECRA) databases. Using the resulting "universal" parameterization and driven by monthly summaries, we downscale mean temperature (minimum and maximum), cloud cover, and total precipitation, to daily estimates. We apply a hybrid gamma-generalized Pareto distribution to calculate daily precipitation amounts, which overcomes much of the inability of earlier weather generators to simulate high amounts of daily precipitation. Our globally parameterized weather generator has numerous applications, including vegetation and crop modelling for paleoenvironmental studies.
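A hybrid gamma/generalized-Pareto wet-day model of the kind described can be sketched as follows; all distribution parameters here are illustrative placeholders, not the GHCN/EECRA-fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_daily_precip(n, wet_frac=0.4, shape=0.7, scale=4.0,
                        threshold=20.0, gpd_shape=0.1, gpd_scale=8.0):
    """Hedged sketch of a hybrid gamma / generalized-Pareto precipitation model.

    Wet-day amounts (mm) below `threshold` come from a gamma body;
    amounts exceeding it are redrawn from a GPD tail, which produces
    heavier extremes than the gamma alone.
    """
    wet = rng.random(n) < wet_frac
    amounts = rng.gamma(shape, scale, size=n)
    heavy = amounts > threshold
    # GPD tail via inverse-CDF sampling: x = threshold + s/k * ((1-u)^-k - 1)
    u = rng.random(heavy.sum())
    amounts[heavy] = threshold + gpd_scale / gpd_shape * ((1 - u) ** -gpd_shape - 1)
    return np.where(wet, amounts, 0.0)

p = sample_daily_precip(365)   # one illustrative year of daily totals
```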
NASA Astrophysics Data System (ADS)
Sahyoun, Maher; Wex, Heike; Gosewinkel, Ulrich; Šantl-Temkiv, Tina; Nielsen, Niels W.; Finster, Kai; Sørensen, Jens H.; Stratmann, Frank; Korsholm, Ulrik S.
2016-08-01
Bacterial ice-nucleating particles (INP) are present in the atmosphere and are efficient in heterogeneous ice nucleation at temperatures up to -2 °C in mixed-phase clouds. However, due to their low emission rates, their climatic impact was considered insignificant in previous modeling studies. In view of uncertainties about the actual atmospheric emission rates and concentrations of bacterial INP, it is important to re-investigate the threshold fraction of cloud droplets containing bacterial INP needed for a pronounced effect on ice nucleation, using a suitable parameterization that properly describes the ice-nucleation process by bacterial INP. Therefore, we compared two heterogeneous ice-nucleation rate parameterizations, denoted CH08 and HOO10 herein, both of which are based on classical nucleation theory and measurements and use similar equations but different parameters, to an empirical parameterization, denoted HAR13 herein, which implicitly considers the number of bacterial INP. All parameterizations were used to calculate the ice-nucleation probability offline. HAR13 and HOO10 were implemented and tested in a one-dimensional version of a weather-forecast model in two meteorological cases. Ice-nucleation probabilities based on HAR13 and CH08 were similar, in spite of their different derivation, and were higher than those based on HOO10. This study shows the importance of the method of parameterization and of the input variable, the number of bacterial INP, for accurately assessing their role in meteorological and climatic processes.
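Offline ice-nucleation probability calculations of the kind mentioned above typically follow the generic rate-to-probability form P = 1 - exp(-J A Δt); a sketch of that form is below (the specific CH08, HOO10 and HAR13 coefficients are not reproduced here):

```python
import math

def freezing_probability(j_het, area, dt):
    """Probability that a droplet freezes within time dt.

    j_het : heterogeneous nucleation rate coefficient (per unit INP
            surface area per unit time); `area` is the INP surface
    area immersed in the droplet. This is the generic Poisson /
    classical-nucleation-theory form, not any specific scheme.
    """
    return 1.0 - math.exp(-j_het * area * dt)

p_short = freezing_probability(1.0e6, 1.0e-8, 1.0)    # illustrative values
p_long = freezing_probability(1.0e6, 1.0e-8, 100.0)   # longer exposure
```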
Estimating Longwave Atmospheric Emissivity in the Canadian Rocky Mountains
NASA Astrophysics Data System (ADS)
Ebrahimi, S.; Marshall, S. J.
2014-12-01
Incoming longwave radiation is an important source of energy contributing to snow and glacier melt. However, estimating incoming longwave radiation from the atmosphere is challenging due to highly varying atmospheric conditions, especially cloudiness. We analyze the performance of several existing models, including a physically-based clear-sky model by Brutsaert (1987) and two empirical models for all-sky conditions (Lhomme and others, 2007; Herrero and Polo, 2012), at Haig Glacier in the Canadian Rocky Mountains. The models are based on relations between readily observed near-surface meteorological data, including temperature, vapor pressure, relative humidity, and estimates of shortwave radiation transmissivity (i.e., clear-sky or cloud-cover indices). This class of models generally requires solar radiation data in order to obtain a proxy for cloud conditions. Such data are not always available for distributed models of glacier melt, and can have high spatial variations in regions of complex topography, which likely do not reflect the more homogeneous atmospheric longwave emissions. We therefore test longwave radiation parameterizations as a function of near-surface humidity and temperature variables, based on automatic weather station data (half-hourly and mean daily values) from 2004 to 2012. Comparative analysis of different incoming longwave radiation parameterizations shows that the locally-calibrated model based on relative humidity and vapour pressure performs better than other published models. Performance is degraded, but still better than standard cloud-index-based models, when we transfer the model to another site, Kwadacha Glacier in the northern Canadian Rockies, roughly 900 km away.
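For concreteness, a clear-sky scheme of the Brutsaert type can be sketched as below; the 1.24 (e/T)^(1/7) expression is the widely cited Brutsaert clear-sky form (vapour pressure e in hPa, temperature T in K), and the example inputs are illustrative:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def brutsaert_emissivity(e_hpa, t_k):
    # Clear-sky atmospheric emissivity from near-surface vapour pressure
    # (hPa) and air temperature (K), Brutsaert's power-law form.
    return 1.24 * (e_hpa / t_k) ** (1.0 / 7.0)

def incoming_longwave(e_hpa, t_k):
    # L_down = eps * sigma * T^4, in W m^-2.
    return brutsaert_emissivity(e_hpa, t_k) * SIGMA * t_k**4

eps = brutsaert_emissivity(10.0, 280.0)     # illustrative glacier-site values
l_down = incoming_longwave(10.0, 280.0)
```

All-sky variants typically scale this clear-sky emissivity upward with a cloud-cover or cloud-index factor, which is where the solar-radiation proxy discussed above enters.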
Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.
Kamesh, Reddi; Rani, K Yamuna
2016-09-01
A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single-rate as well as multi-rate systems. A simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach captures the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately. The results of operating trajectory optimization using the proposed model are comparable to those obtained using the exact first-principles model, and are comparable to or better than parameterized data-driven artificial neural network model based optimization results.
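The "orthonormally parameterized input trajectories" can be illustrated with a generic discrete Fourier basis; the exact basis, scaling, and harmonic count used in the PDDF model are assumptions here:

```python
import numpy as np

def fourier_coeffs(u, n_harmonics=3):
    """Project a sampled trajectory u onto an orthonormal discrete
    Fourier basis, returning the coefficient vector and the basis.

    A generic sketch of orthonormal trajectory parameterization; the
    PDDF paper's specific basis and normalization are not reproduced.
    """
    n = len(u)
    t = np.arange(n) / n
    basis = [np.ones(n) / np.sqrt(n)]                      # constant mode
    for k in range(1, n_harmonics + 1):
        basis.append(np.cos(2 * np.pi * k * t) * np.sqrt(2.0 / n))
        basis.append(np.sin(2 * np.pi * k * t) * np.sqrt(2.0 / n))
    B = np.stack(basis, axis=1)        # columns are orthonormal on this grid
    return B.T @ u, B

u = np.linspace(0.0, 1.0, 64) ** 2     # example input trajectory
c, B = fourier_coeffs(u)               # 7 coefficients describe the trajectory
u_hat = B @ c                          # low-order reconstruction
```

Representing each trajectory by a handful of coefficients is what makes the downstream optimal control problem finite-dimensional.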
Newton Algorithms for Analytic Rotation: An Implicit Function Approach
ERIC Educational Resources Information Center
Boik, Robert J.
2008-01-01
In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…
Verilog-A Device Models for Cryogenic Temperature Operation of Bulk Silicon CMOS Devices
NASA Technical Reports Server (NTRS)
Akturk, Akin; Potbhare, Siddharth; Goldsman, Neil; Holloway, Michael
2012-01-01
Verilog-A based cryogenic bulk CMOS (complementary metal oxide semiconductor) compact models are built for state-of-the-art silicon CMOS processes. These models accurately predict device operation at cryogenic temperatures down to 4 K. The models are compatible with commercial circuit simulators. The models extend the standard BSIM4 [Berkeley Short-channel IGFET (insulated-gate field-effect transistor ) Model] type compact models by re-parameterizing existing equations, as well as adding new equations that capture the physics of device operation at cryogenic temperatures. These models will allow circuit designers to create optimized, reliable, and robust circuits operating at cryogenic temperatures.
New class of control laws for robotic manipulators. I - Nonadaptive case. II - Adaptive case
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1988-01-01
A new class of exponentially stabilizing control laws for joint level control of robot arms is discussed. Closed-loop exponential stability has been demonstrated for both the set point and tracking control problems by a slight modification of the energy Lyapunov function and the use of a lemma which handles third-order terms in the Lyapunov function derivatives. In the second part, these control laws are adapted in a simple fashion to achieve asymptotically stable adaptive control. The analysis addresses the nonlinear dynamics directly without approximation, linearization, or ad hoc assumptions, and uses a parameterization based on physical (time-invariant) quantities.
Process-oriented Observational Metrics for CMIP6 Climate Model Assessments
NASA Astrophysics Data System (ADS)
Jiang, J. H.; Su, H.
2016-12-01
Observational metrics based on satellite observations have been developed and effectively applied during post-CMIP5 model evaluation and improvement projects. As new physics and parameterizations continue to be included in models for the upcoming CMIP6, it is important to continue objective comparisons between observations and model results. This talk will summarize the process-oriented observational metrics and methodologies for constraining climate models with A-Train satellite observations in support of CMIP6 model assessments. We target parameters and processes related to atmospheric clouds and water vapor, which are critically important for Earth's radiative budget, climate feedbacks, and water and energy cycles, and whose improved representation would reduce uncertainties in climate models.
Anisotropic shear dispersion parameterization for ocean eddy transport
NASA Astrophysics Data System (ADS)
Reckinger, Scott; Fox-Kemper, Baylor
2015-11-01
The effects of mesoscale eddies are universally treated isotropically in global ocean general circulation models. However, observations and simulations demonstrate that the mesoscale processes that the parameterization is intended to represent, such as shear dispersion, are typified by strong anisotropy. We extend the Gent-McWilliams/Redi mesoscale eddy parameterization to include anisotropy and test the effects of varying levels of anisotropy in 1-degree Community Earth System Model (CESM) simulations. Anisotropy has many effects on the simulated climate, including a reduction of temperature and salinity biases, a deepening of the southern ocean mixed-layer depth, impacts on the meridional overturning circulation and ocean energy and tracer uptake, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. A process-based parameterization to approximate the effects of unresolved shear dispersion is also used to set the strength and direction of anisotropy. The shear dispersion parameterization is similar to drifter observations in spatial distribution of diffusivity and high-resolution model diagnosis in the distribution of eddy flux orientation.
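An anisotropic eddy diffusivity can be represented as a rotated 2x2 tensor whose principal axis follows the local flow; this sketch is illustrative only and is not the CESM implementation:

```python
import numpy as np

def anisotropic_diffusivity(kappa_major, aspect, angle):
    """2x2 horizontal eddy diffusivity tensor.

    kappa_major : diffusivity along the principal (e.g. shear-aligned)
                  axis, m^2/s; `aspect` = minor/major ratio (1 recovers
    the usual isotropic case); `angle` rotates the principal axis
    (radians). Values below are illustrative.
    """
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    D = np.diag([kappa_major, aspect * kappa_major])
    return R @ D @ R.T    # symmetric positive-definite by construction

K = anisotropic_diffusivity(1000.0, 0.2, np.pi / 6)   # flow-aligned example
```

Setting the angle from a resolved shear field and the aspect ratio from a shear-dispersion scaling is, in spirit, what the process-based parameterization described above does.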
NASA Technical Reports Server (NTRS)
Plumb, R. A.
1985-01-01
Two dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three dimensional general circulation models, while avoiding some of the gross simplifications of one dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of these GCM-derived coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.
NASA Astrophysics Data System (ADS)
Silvers, L. G.; Stevens, B. B.; Mauritsen, T.; Marco, G. A.
2015-12-01
The characteristics of clouds in General Circulation Models (GCMs) need to be constrained in a manner consistent with theory, observations, and high resolution models (HRMs). One way forward is to base improvements of parameterizations on high resolution studies, which resolve more of the important dynamical motions and require fewer parameterizations. This is difficult because of the numerous differences between GCMs and HRMs, both technical and theoretical. Century-long simulations at resolutions of 20-250 km on a global domain are typical of GCMs, while HRMs often simulate hours at resolutions of 0.1-5 km on domains the size of a single GCM grid cell. The recently developed model ICON provides a flexible framework which allows many of these difficulties to be overcome. This study uses the ICON model to compute SST perturbation simulations on multiple domains in a state of Radiative Convective Equilibrium (RCE) with parameterized convection. The domains used range from roughly the size of Texas to nearly half of Earth's surface area. All simulations use a doubly periodic domain with an effective distance between cell centers of 13 km and are integrated to a state of statistical stationarity. The primary analysis examines the mean characteristics of the cloud related fields and the feedback parameter of the simulations. It is shown that the simulated atmosphere of a GCM in RCE is sufficiently similar across a range of domain sizes to justify the use of RCE to study both a GCM and a HRM on the same domain with the goal of improved constraints on the parameterized clouds. The simulated atmospheres are comparable to what could be expected at midday in a typical region of Earth's tropics under calm conditions. In particular, the differences between the domains are smaller than differences which result from choosing different physics schemes. Significant convective organization is present on all domain sizes with a relatively high subsidence fraction.
Notwithstanding the overall qualitative similarities of the simulations, quantitative differences lead to a surprisingly large sensitivity of the feedback parameter. This range of the feedback parameter is more than a factor of two and is similar to the range of feedbacks which were obtained by the CMIP5 models.
The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)
2000-01-01
The microphysical parameterization of clouds and rain cells plays a central role in atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the rain drop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that in general all but two parameterizations produce calculated T(sub B)'s that fall within the observed clear-air minima and maxima. The exceptions are parameterizations that enhance the scattering characteristics of frozen hydrometeors.
Modeling particle nucleation and growth over northern California during the 2010 CARES campaign
NASA Astrophysics Data System (ADS)
Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.
2015-07-01
Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the Weather Research and Forecast coupled with chemistry (WRF-Chem) regional model using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4 while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated by up to a factor of 2.5 the total particle number concentration for particle diameters greater than 10 nm. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapors parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based on only H2SO4. 
At the CARES urban ground site, peak nucleation rates were predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times as well as lower boundary layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. Differences among the three simulations for the 40-100 nm particle diameter range are mostly associated with the timing of the peak total tendencies that shift the morning increase and afternoon decrease in particle number concentration by up to two hours. We found that newly formed particles could explain up to 20-30 % of predicted cloud condensation nuclei at 0.5 % supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20 size bin simulation, while reducing the associated computational cost by ∼ 36 %.
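Empirical nucleation parameterizations of the kinds compared in this study are commonly written as power laws in the precursor concentrations; the functional forms below are the standard activation, kinetic, and combined H2SO4-organic expressions, with illustrative rate coefficients rather than the study's fitted values:

```python
def j_activation(h2so4, A=2e-6):
    # Activation-type nucleation: J = A * [H2SO4], A in s^-1 (illustrative).
    return A * h2so4

def j_kinetic(h2so4, K=5e-14):
    # Kinetic-type nucleation: J = K * [H2SO4]^2, K in cm^3 s^-1 (illustrative).
    return K * h2so4**2

def j_org(h2so4, org, k=5e-13):
    # Combined form: J = k * [H2SO4] * [low-volatility organics] (illustrative k).
    return k * h2so4 * org

# Concentrations in molecules cm^-3; rates J in particles cm^-3 s^-1.
j1 = j_activation(1e7)
j2 = j_kinetic(1e7)
j3 = j_org(1e7, 1e8)
```

The dependence of j_org on organic vapor concentration is what shifts predicted nucleation peaks toward times of high anthropogenic organic emissions, as described above.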
2014-10-26
...From the parameterization results, we extract adaptive and anisotropic T-meshes for the further T-spline surface construction. Finally, a gradient flow field-based method [7, 12] is applied to generate adaptive and anisotropic quadrilateral meshes, which can be used as the control mesh for high-order T-spline construction.
Stellar Atmospheric Parameterization Based on Deep Learning
NASA Astrophysics Data System (ADS)
Pan, Ru-yang; Li, Xiang-ru
2017-07-01
Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, with 3821-500-100-50-1 nodes per layer respectively. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model, to make an automatic estimation of three physical parameters: the effective temperature (Teff), surface gravitational acceleration (lg g), and metal abundance ([Fe/H]). The results show that the stacked autoencoder deep neural network has a better accuracy for the estimation. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for Teff/K, 0.0058 for lg (Teff/K), 0.1706 for lg (g/(cm·s-2)), and 0.1294 dex for [Fe/H]; on the theoretical spectra, the MAEs are 15.34 for Teff/K, 0.0011 for lg (Teff/K), 0.0214 for lg (g/(cm·s-2)), and 0.0121 dex for [Fe/H].
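The quoted architecture (3821-500-100-50-1 nodes) can be sketched as a plain feed-forward pass; the random weights and the choice of sigmoid hidden activations below are assumptions standing in for the trained stacked autoencoder, which the abstract does not specify:

```python
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [3821, 500, 100, 50, 1]   # node counts quoted in the abstract

# Random weights stand in for the trained stacked-autoencoder weights.
weights = [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Forward pass: sigmoid hidden layers, linear scalar output
    (assumed activations; only the layer sizes come from the paper)."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = 1.0 / (1.0 + np.exp(-(x @ W + b)))
    return x @ weights[-1] + biases[-1]

spectrum = rng.standard_normal(3821)    # stands in for one 3821-pixel spectrum
estimate = forward(spectrum)            # one scalar parameter, e.g. Teff
```

In practice one such network would be trained per output parameter (Teff, lg g, [Fe/H]), each regressing a scalar from the 3821-pixel input spectrum.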
Modeling post-fire hydro-geomorphic recovery in the Waldo Canyon Fire
NASA Astrophysics Data System (ADS)
Kinoshita, Alicia; Nourbakhshbeidokhti, Samira; Chin, Anne
2016-04-01
Wildfire can have significant impacts on watershed hydrology and geomorphology by changing soil properties and removing vegetation, often increasing runoff, soil erosion and deposition, debris flows, and flooding. Watershed systems may take several years or longer to recover. During this time, post-fire channel changes have the potential to alter hydraulics that influence characteristics such as time of concentration, time to peak flow, flow capacity, and velocity. Using the case of the 2012 Waldo Canyon Fire in Colorado (USA), this research will leverage field-based surveys and terrestrial Light Detection and Ranging (LiDAR) data to parameterize KINEROS2 (KINematic runoff and EROSion), an event-oriented, physically-based watershed runoff and erosion model. We will use the Automated Geospatial Watershed Assessment (AGWA) tool, a GIS-based hydrologic modeling tool that uses commonly available GIS data layers to parameterize, execute, and spatially visualize runoff and sediment yield for watersheds impacted by the Waldo Canyon Fire. Specifically, two models are developed: an unburned (Bear Creek) and a burned (Williams) watershed. The models will simulate burn severity and treatment conditions. Field data will be used to validate the burned watershed for pre- and post-fire changes in infiltration, runoff, peak flow, sediment yield, and sediment discharge. Spatial modeling will provide insight into post-fire patterns for varying treatment, burn severity, and climate scenarios. Results will also provide post-fire managers with improved hydro-geomorphic modeling and prediction tools for water resources management and mitigation efforts.
Parameterizing time in electronic health record studies.
Hripcsak, George; Albers, David J; Perotte, Adler
2015-07-01
Fields like nonlinear physics offer methods for analyzing time series, but many methods require that the time series be stationary, i.e., show no change in properties over time. Medicine is far from stationary, but the challenge may be ameliorated by reparameterizing time, because clinicians tend to measure patients more frequently when they are ill and their values are more likely to vary. We compared time parameterizations, measuring variability of rate of change and magnitude of change, and looking for homogeneity of bins of temporal separation between pairs of time points. We studied four common laboratory tests drawn from 25 years of electronic health records on 4 million patients. We found that sequence time, that is, simply counting the number of measurements from some start, produced more stationary time series, better explained the variation in values, and had more homogeneous bins than either traditional clock time or a recently proposed intermediate parameterization. Sequence time also produced more accurate predictions in a single Gaussian process model experiment. Of the three parameterizations, sequence time appeared to produce the most stationary series, possibly because clinicians adjust their sampling to the acuity of the patient. Parameterizing by sequence time may be applicable to association and clustering experiments on electronic health record data. A limitation of this study is that laboratory data were derived from only one institution. Sequence time appears to be an important potential parameterization. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
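The idea of sequence time is simple enough to sketch: replace each measurement's timestamp with its rank in the record. The timestamps and values below are invented for illustration, not drawn from the study's data.

```python
# Irregularly sampled lab values: dense sampling while the patient is ill,
# sparse sampling otherwise (times in days since the first draw; made up).
clock_times = [0.0, 0.5, 1.0, 30.0, 90.0, 90.5]
values      = [1.2, 1.4, 1.9, 1.1, 1.0, 1.05]

# Sequence time: simply the measurement index from the start of the record.
sequence_times = list(range(len(clock_times)))

# Gaps between consecutive measurements are wildly uneven in clock time,
# but uniform by construction in sequence time.
clock_gaps = [b - a for a, b in zip(clock_times, clock_times[1:])]
seq_gaps = [b - a for a, b in zip(sequence_times, sequence_times[1:])]
print(clock_gaps)  # -> [0.5, 0.5, 29.0, 60.0, 0.5]
print(seq_gaps)    # -> [1, 1, 1, 1, 1]
```

Analyses (e.g., the Gaussian process experiment mentioned above) would then be run against `sequence_times` rather than `clock_times`.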
NASA Astrophysics Data System (ADS)
Prein, A. F.; Langhans, W.; Fosser, G.; Ferrone, A.; Ban, N.; Goergen, K.; Keller, M.; Tölle, M.; Gutjahr, O.; Feser, F.; Brisson, E.; Kollet, S. J.; Schmidli, J.; Van Lipzig, N. P. M.; Leung, L. R.
2015-12-01
Regional climate modeling using convection-permitting models (CPMs; horizontal grid spacing <4 km) emerges as a promising framework to provide more reliable climate information on regional to local scales compared to traditionally used large-scale models (LSMs; horizontal grid spacing >10 km). CPMs no longer rely on convection parameterization schemes, which had been identified as a major source of errors and uncertainties in LSMs. Moreover, CPMs allow for a more accurate representation of surface and orography fields. The drawback of CPMs is the high demand on computational resources. For this reason, first CPM climate simulations only appeared a decade ago. We aim to provide a common basis for CPM climate simulations by giving a holistic review of the topic. The most important components in CPMs such as physical parameterizations and dynamical formulations are discussed critically. An overview of weaknesses and an outlook on required future developments is provided. Most importantly, this review presents the consolidated outcome of studies that addressed the added value of CPM climate simulations compared to LSMs. Improvements are evident mostly for climate statistics related to deep convection, mountainous regions, or extreme events. The climate change signals of CPM simulations suggest an increase in flash floods, changes in hail storm characteristics, and reductions in the snowpack over mountains. In conclusion, CPMs are a very promising tool for future climate research. However, coordinated modeling programs are crucially needed to advance parameterizations of unresolved physics and to assess the full potential of CPMs.
Prein, Andreas F; Langhans, Wolfgang; Fosser, Giorgia; Ferrone, Andrew; Ban, Nikolina; Goergen, Klaus; Keller, Michael; Tölle, Merja; Gutjahr, Oliver; Feser, Frauke; Brisson, Erwan; Kollet, Stefan; Schmidli, Juerg; van Lipzig, Nicole P M; Leung, Ruby
2015-06-01
Regional climate modeling using convection-permitting models (CPMs; horizontal grid spacing <4 km) emerges as a promising framework to provide more reliable climate information on regional to local scales compared to traditionally used large-scale models (LSMs; horizontal grid spacing >10 km). CPMs no longer rely on convection parameterization schemes, which had been identified as a major source of errors and uncertainties in LSMs. Moreover, CPMs allow for a more accurate representation of surface and orography fields. The drawback of CPMs is the high demand on computational resources. For this reason, first CPM climate simulations only appeared a decade ago. In this study, we aim to provide a common basis for CPM climate simulations by giving a holistic review of the topic. The most important components in CPMs such as physical parameterizations and dynamical formulations are discussed critically. An overview of weaknesses and an outlook on required future developments is provided. Most importantly, this review presents the consolidated outcome of studies that addressed the added value of CPM climate simulations compared to LSMs. Improvements are evident mostly for climate statistics related to deep convection, mountainous regions, or extreme events. The climate change signals of CPM simulations suggest an increase in flash floods, changes in hail storm characteristics, and reductions in the snowpack over mountains. In conclusion, CPMs are a very promising tool for future climate research. However, coordinated modeling programs are crucially needed to advance parameterizations of unresolved physics and to assess the full potential of CPMs.
NASA Astrophysics Data System (ADS)
Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna
2018-01-01
We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used; this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.
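A parameterization like this is typically wrapped in a validity guard so it is never evaluated outside the conditions it was fit to. The sketch below encodes only the quoted neutral-formation ranges; the function name and interface are illustrative, not from the paper.

```python
# Hedged sketch: enforce the stated validity ranges of the neutral
# particle-formation parameterization before evaluating it.
def neutral_ranges_ok(temp_k, h2so4_cm3, rh_percent):
    """True if inputs fall inside the quoted neutral-formation ranges:
    T 165-400 K, [H2SO4] 1e4-1e13 cm^-3, RH 0.001-100 %."""
    return (165.0 <= temp_k <= 400.0
            and 1e4 <= h2so4_cm3 <= 1e13
            and 1e-3 <= rh_percent <= 100.0)

print(neutral_ranges_ok(298.0, 1e7, 50.0))   # typical boundary-layer case -> True
print(neutral_ranges_ok(298.0, 1e16, 50.0))  # concentration outside range -> False
```

The ion-induced branch would use its own, wider concentration and humidity limits (10^4-10^16 cm^-3, 10^-5-100%).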
NASA Technical Reports Server (NTRS)
Schwemmer, Geary K.; Miller, David O.
2005-01-01
Clouds have a powerful influence on atmospheric radiative transfer and hence are crucial to understanding and interpreting the exchange of radiation between the Earth's surface, the atmosphere, and space. Because clouds are highly variable in space, time, and physical makeup, it is important to be able to observe them in three dimensions (3-D) with sufficient resolution that the data can be used to generate and validate parameterizations of cloud fields at the resolution scale of global climate models (GCMs). Simulations of photon transport in three-dimensionally inhomogeneous cloud fields show that spatial inhomogeneities tend to decrease cloud reflection and absorption and increase direct and diffuse transmission. Therefore it is an important task to characterize cloud spatial structures in three dimensions on the scale of GCM grid elements. In order to validate cloud parameterizations that represent the ensemble, or mean and variance, of cloud properties within a GCM grid element, measurements of the parameters must be obtained on a much finer scale so that the statistics on those measurements are truly representative. High spatial sampling resolution is required, on the order of 1 km or less. Since the radiation fields respond almost instantaneously to changes in the cloud field, and cloud changes occur on scales of seconds and less when viewed on scales of approximately 100 m, the temporal resolution of cloud properties should be measured and characterized on second time scales. GCM time steps are typically on the order of an hour, but in order to obtain sufficient statistical representations of cloud properties in the parameterizations that are used as model inputs, averaged values of cloud properties should be calculated on time scales on the order of 10-100 s.
The Holographic Airborne Rotating Lidar Instrument Experiment (HARLIE) provides exceptional temporal (100 ms) and spatial (30 m) resolution measurements of aerosol and cloud backscatter in three dimensions. HARLIE was used in a ground-based configuration in several recent field campaigns. Principal data products include aerosol backscatter profiles, boundary layer heights, entrainment zone thickness, cloud fraction as a function of altitude and horizontal wind vector profiles based on correlating the motions of clouds and aerosol structures across portions of the scan. Comparisons will be made between various cloud detecting instruments to develop a baseline performance metric.
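The averaging scales above (100 ms native resolution aggregated to 10-100 s statistics) amount to simple block averaging. The sketch below demonstrates this on a synthetic series; the signal and window choice are illustrative only.

```python
import numpy as np

# Average a 100 ms resolution time series (as from HARLIE) into 10 s blocks.
dt = 0.1                         # 100 ms sampling interval, in seconds
window = 10.0                    # averaging window, in seconds
n_per_block = int(window / dt)   # 100 samples per 10 s block

rng = np.random.default_rng(1)
signal = rng.random(6000)        # 10 minutes of synthetic backscatter samples

n_blocks = len(signal) // n_per_block
block_means = (signal[:n_blocks * n_per_block]
               .reshape(n_blocks, n_per_block)
               .mean(axis=1))
print(n_blocks)                  # -> 60 ten-second averages
```

The same reshape-and-reduce pattern yields block variances (`.var(axis=1)`) for the mean-and-variance statistics a GCM parameterization needs.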
NASA Astrophysics Data System (ADS)
Swenson, S. C.; Lawrence, D. M.
2011-11-01
One function of the Community Land Model (CLM4) is the determination of surface albedo in the Community Earth System Model (CESM1). Because the typical spatial scales of CESM1 simulations are large compared to the scales of variability of surface properties such as snow cover and vegetation, unresolved surface heterogeneity is parameterized. Fractional snow-covered area, or snow-covered fraction (SCF), within a CLM4 grid cell is parameterized as a function of grid cell mean snow depth and snow density. This parameterization is based on an analysis of monthly averaged SCF and snow depth that showed a seasonal shift in the snow depth-SCF relationship. In this paper, we show that this shift is an artifact of the monthly sampling and that the current parameterization does not reflect the relationship observed between snow depth and SCF at the daily time scale. We demonstrate that the snow depth analysis used in the original study exhibits a bias toward early melt when compared to satellite-observed SCF. This bias results in a tendency to overestimate SCF as a function of snow depth. Using a more consistent, higher spatial and temporal resolution snow depth analysis reveals a clear hysteresis between snow accumulation and melt seasons. Here, a new SCF parameterization based on snow water equivalent is developed to capture the observed seasonal snow depth-SCF evolution. The effects of the new SCF parameterization on the surface energy budget are described. In CLM4, surface energy fluxes are calculated assuming a uniform snow cover. To more realistically simulate environments having patchy snow cover, we modify the model by computing the surface fluxes separately for snow-free and snow-covered fractions of a grid cell. In this configuration, the form of the parameterized snow depth-SCF relationship is shown to greatly affect the surface energy budget. 
The direct exposure of the snow-free surfaces to the atmosphere leads to greater heat loss from the ground during autumn and greater heat gain during spring. The net effect is to reduce annual mean soil temperatures by up to 3°C in snow-affected regions.
Changes in physiological attributes of ponderosa pine from seedling to mature tree
Nancy E. Grulke; William A. Retzlaff
2001-01-01
Plant physiological models are generally parameterized from many different sources of data, including chamber experiments and plantations, from seedlings to mature trees. We obtained a comprehensive data set for a natural stand of ponderosa pine (Pinus ponderosa Laws.) and used these data to parameterize the physiologically based model, TREGRO....
High Resolution Electro-Optical Aerosol Phase Function Database PFNDAT2006
2006-08-01
… snow models use the gamma distribution (equation 12) with m = 0. … The most widely used analytical parameterization for raindrop size distributions … Uijlenhoet and Stricker (22), as the result of an analytical derivation based on a theoretical parameterization for the raindrop size distribution … (from the report sections "Rain Model" and "Particle Size Distribution Models")
Current state of aerosol nucleation parameterizations for air-quality and climate modeling
NASA Astrophysics Data System (ADS)
Semeniuk, Kirill; Dastoor, Ashu
2018-04-01
Aerosol nucleation parameterization models commonly used in 3-D air quality and climate models have serious limitations. These include variants based on classical nucleation theory, empirical models, and other formulations. Recent work based on detailed and extensive laboratory measurements and improved quantum chemistry computation has substantially advanced the state of nucleation parameterizations. For inorganic nucleation involving BHN and THN, including ion effects, these new models should be considered worthwhile replacements for the old ones. However, the contribution of organic species to nucleation remains poorly quantified. New particle formation includes a distinct post-nucleation growth regime that is characterized by a strong Kelvin curvature effect and is thus dependent on the availability of very low volatility organic species or sulfuric acid. There have been advances in the understanding of the multiphase chemistry of biogenic and anthropogenic organic compounds that help overcome the initial aerosol growth barrier. Implementation of processes influencing new particle formation is challenging in 3-D models, and comprehensive parameterizations are lacking. This review considers the existing models and recent innovations.
Molecular Modeling of High-Temperature Oxidation of Refractory Borides
2008-02-01
… to generate the classical potential, we adopt the van Beest, Kramer, and van Santen (BKS) parameterization for Si-O interactions, but fit B-O and Si-B …
This study considers the performance of 7 of the Weather Research and Forecasting (WRF) model boundary-layer (BL) parameterization schemes in a complex … schemes performed best. The surface parameters, planetary BL structure, and vertical profiles are important for … (US Army Research Laboratory)
Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data
Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the f...
R-parametrization and its role in classification of linear multivariable feedback systems
NASA Technical Reports Server (NTRS)
Chen, Robert T. N.
1988-01-01
A classification of all the compensators that stabilize a given general plant in a linear, time-invariant, multi-input, multi-output feedback system is developed. This classification, along with the associated necessary and sufficient conditions for stability of the feedback system, is achieved through the introduction of a new parameterization, referred to as R-parameterization, which is a dual of the familiar Q-parameterization. The classification is tied to the stability conditions of the compensators and the plant by themselves, and the necessary and sufficient conditions are based on the stability of Q and R themselves.
He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min
2013-01-01
Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be potentially representative of the performance of species-level simulations, while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
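The cover-weighted aggregation described above can be sketched in a few lines. Species names, parameter values, and cover fractions below are invented purely for illustration.

```python
# Hedged sketch: PFT-level parameter as the percent-cover-weighted average
# of species-level parameter values (all numbers are made up).
species_params = {"white spruce": 0.42, "black spruce": 0.35, "larch": 0.50}
percent_cover  = {"white spruce": 60.0, "black spruce": 30.0, "larch": 10.0}

def pft_parameter(params, cover):
    """Weighted average of species parameters, weights = percent cover."""
    total = sum(cover.values())
    return sum(params[s] * cover[s] for s in params) / total

print(round(pft_parameter(species_params, percent_cover), 4))  # -> 0.407
```

The result is dominated by the highest-cover species, which is exactly the information-loss effect the study reports: minority species' functional behavior barely registers in the PFT-level value.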
NASA Astrophysics Data System (ADS)
Endalamaw, A. M.; Bolton, W. R.; Young, J. M.; Morton, D.; Hinzman, L. D.
2013-12-01
The sub-arctic environment can be characterized as being located in the zone of discontinuous permafrost. Although the distribution of permafrost is site specific, it dominates many of the hydrologic and ecologic responses and functions, including vegetation distribution, stream flow, soil moisture, and storage processes. In this region, the boundaries that separate the major ecosystem types (deciduous-dominated and coniferous-dominated ecosystems) as well as permafrost (permafrost versus non-permafrost) occur over very short spatial scales. One of the goals of this research project is to improve parameterizations of meso-scale hydrologic models in this environment. Using the Caribou-Poker Creeks Research Watershed (CPCRW) as the test area, simulations of the headwater catchments of varying permafrost and vegetation distributions were performed. CPCRW, located approximately 50 km northeast of Fairbanks, Alaska, lies within the zone of discontinuous permafrost and the boreal forest ecosystem. The Variable Infiltration Capacity (VIC) model was selected as the hydrologic model. In CPCRW, permafrost and coniferous vegetation are generally found on north-facing slopes and valley bottoms, while permafrost-free soils and deciduous vegetation are generally found on south-facing slopes. In this study, hydrologic simulations using fine-scale vegetation and soil parameterizations, based upon slope and aspect analysis at a 50 meter resolution, were conducted. Simulations were also conducted using downscaled vegetation from the Scenarios Network for Alaska and Arctic Planning (SNAP) (1 km resolution) and soil data sets from the Food and Agriculture Organization (FAO) (approximately 9 km resolution).
Preliminary simulation results show that soil and vegetation parameterizations based upon fine scale slope/aspect analysis increases the R2 values (0.5 to 0.65 in the high permafrost (53%) basin; 0.43 to 0.56 in the low permafrost (2%) basin) relative to parameterization based on coarse scale data. These results suggest that using fine resolution parameterizations can be used to improve meso-scale hydrological modeling in this region.
Aircraft Observations for Improved Physical Parameterization for Seasonal Prediction
2013-09-30
… platform is ready for use in air-sea interaction research projects. RELATED PROJECTS: None. PUBLICATIONS: Gerber, H., G. Frick, S. Malinowski … Malinowski, S. P., H. Gerber, I. Jen-LaPlante, M. K. Kopec, W. Kumala, K. Nurowska, P. Y. Chuang, K. E. Haman, D. D. Khelif, S. K. Krueger, and H. H. Jonsson … Haman, K. E., Kopec, M. K., Khelif, D., and Malinowski, S. P.: Modified ultrafast thermometer UFT-M and temperature measurements during Physics of …
New Approaches to Parameterizing Convection
NASA Technical Reports Server (NTRS)
Randall, David A.; Lappen, Cara-Lyn
1999-01-01
Many general circulation models (GCMs) currently use separate schemes for planetary boundary layer (PBL) processes, shallow and deep cumulus (Cu) convection, and stratiform clouds. The conventional distinctions among these processes are somewhat arbitrary. For example, in the stratocumulus-to-cumulus transition region, stratocumulus clouds break up into a combination of shallow cumulus and broken stratocumulus. Shallow cumulus clouds may be considered to reside completely within the PBL, or they may be regarded as starting in the PBL but terminating above it. Deeper cumulus clouds often originate within the PBL but can also originate aloft. To the extent that our models separately parameterize physical processes which interact strongly on small space and time scales, the currently fashionable practice of modularization may be doing more harm than good.
NASA Technical Reports Server (NTRS)
Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew
2014-01-01
The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional-atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of September simulations. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. The 64 combinations combine WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in the second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year, while a parallel benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations but less realistic spatiotemporal variability. The largest favorable impact on WRF vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, resulting in higher correlations against reanalysis than simulations using the Kain-Fritsch convection. Other parameterizations have less obvious impact, although WRF configurations incorporating one surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.
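Enumerating a physics ensemble like the 64 combinations above is a Cartesian product over the scheme categories. The abstract does not list the schemes tested, so the option lists below are placeholders chosen only so that the counts multiply to 64 (4 x 4 x 2 x 2).

```python
from itertools import product

# Hypothetical option lists, one per parameterization category; the real
# scheme names and counts used in the study may differ.
cumulus   = ["cu_1", "cu_2", "cu_3", "cu_4"]
radiation = ["rad_1", "rad_2", "rad_3", "rad_4"]
surface   = ["sfc_1", "sfc_2"]
pbl       = ["pbl_1", "pbl_2"]

configs = list(product(cumulus, radiation, surface, pbl))
print(len(configs))   # -> 64 candidate parameterization combinations
```

Each tuple in `configs` would then map to one WRF namelist for a 12-day simulation.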
Relativistic three-dimensional Lippmann-Schwinger cross sections for space radiation applications
NASA Astrophysics Data System (ADS)
Werneth, C. M.; Xu, X.; Norman, R. B.; Maung, K. M.
2017-12-01
Radiation transport codes require accurate nuclear cross sections to compute particle fluences inside shielding materials. The Tripathi semi-empirical reaction cross section, which includes over 60 parameters tuned to nucleon-nucleus (NA) and nucleus-nucleus (AA) data, has been used in many of the world's best-known transport codes. Although this parameterization fits well to reaction cross section data, the predictive capability of any parameterization is questionable when it is used beyond the range of the data to which it was tuned. Using uncertainty analysis, it is shown that a relativistic three-dimensional Lippmann-Schwinger (LS3D) equation model based on Multiple Scattering Theory (MST) that uses 5 parameterizations (3 fundamental parameterizations to nucleon-nucleon (NN) data and 2 nuclear charge density parameterizations) predicts NA and AA reaction cross sections as well as the Tripathi cross section parameterization for reactions in which the kinetic energy of the projectile in the laboratory frame (TLab) is greater than 220 MeV/n. The relativistic LS3D model has the additional advantage of being able to predict highly accurate total and elastic cross sections. Consequently, it is recommended that the relativistic LS3D model be used for space radiation applications in which TLab > 220 MeV/n.
Parameterization of single-scattering properties of snow
NASA Astrophysics Data System (ADS)
Räisänen, P.; Kokhanovsky, A.; Guyot, G.; Jourdan, O.; Nousiainen, T.
2015-02-01
Snow consists of non-spherical grains of various shapes and sizes. Still, in many radiative transfer applications, single-scattering properties of snow have been based on the assumption of spherical grains. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat phase function typical of deformed non-spherical particles, this is still a rather ad-hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ = 0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function P11 as functions of the size parameter and the real and imaginary parts of the refractive index. The parameterizations are analytic and simple to use in radiative transfer models. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons to spheres and distorted Koch fractals.
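The parameterizations above are expressed as functions of the size parameter and the refractive index. As a minimal worked example, the size parameter itself is x = 2*pi*r/lambda; the function name and the sample values below are illustrative only.

```python
import math

def size_parameter(r_vp_um, wavelength_um):
    """Dimensionless size parameter x = 2*pi*r/lambda for a grain of
    volume-to-projected-area equivalent radius r_vp (same units as lambda)."""
    return 2.0 * math.pi * r_vp_um / wavelength_um

x = size_parameter(100.0, 0.5)   # 100 um grain at a visible wavelength
print(round(x, 1))               # -> 1256.6 (geometric-optics regime)
```

Values of x this large are why geometric-optics-based habit mixtures (droxtals, plates, Koch fractals) are appropriate for snow grains across the quoted range rvp = 10-2000 um.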
Parameterized reduced order models from a single mesh using hyper-dual numbers
NASA Astrophysics Data System (ADS)
Brake, M. R. W.; Fike, J. A.; Topping, S. D.
2016-06-01
In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, potentially could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is two-fold: accuracy of the derivatives to machine precision, and the need to generate only a single mesh of the system of interest. The theory is applied to a stepped beam system in order to demonstrate proof of concept. The results demonstrate that the hyper-dual number multivariate parameterization of geometric variations, which is largely neglected in the literature, is accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the foundation to create a parameterized reduced order model based on a single mesh is expected to reduce dramatically the time needed to analyze multiple realizations of a component's possible geometry.
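The key property exploited above, derivatives exact to machine precision from a single evaluation, can be sketched with a minimal hyper-dual arithmetic (a toy illustration of the technique, not the authors' implementation):

```python
class HyperDual:
    """Minimal hyper-dual number v + eps1*d1 + eps2*d2 + eps1*eps2*d12,
    where eps1**2 == eps2**2 == 0 but eps1*eps2 != 0.  Seeding d1 = d2 = 1
    makes f(x).d1 the exact first derivative and f(x).d12 the exact second,
    with no truncation error (unlike finite differences)."""

    def __init__(self, v, d1=0.0, d2=0.0, d12=0.0):
        self.v, self.d1, self.d2, self.d12 = v, d1, d2, d12

    def __add__(self, o):
        return HyperDual(self.v + o.v, self.d1 + o.d1,
                         self.d2 + o.d2, self.d12 + o.d12)

    def __mul__(self, o):
        # Product rule falls out of expanding (v + eps1*d1 + ...) terms.
        return HyperDual(self.v * o.v,
                         self.v * o.d1 + self.d1 * o.v,
                         self.v * o.d2 + self.d2 * o.v,
                         self.v * o.d12 + self.d1 * o.d2
                         + self.d2 * o.d1 + self.d12 * o.v)

x = HyperDual(2.0, 1.0, 1.0)  # seed both dual parts at x = 2
y = x * x * x                 # f(x) = x**3
# y.v is f(2) = 8, y.d1 is f'(2) = 12, y.d12 is f''(2) = 12, all exact
```

Because the derivative information propagates algebraically rather than by differencing nearby evaluations, no step-size tuning is needed, which is why the approach pairs naturally with a single mesh.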
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2006-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve understanding of the physical processes responsible for variations in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, while the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellite and field-campaign cloud datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite-volume general circulation model (fvGCM), and it has started production runs, with two years of results (1998 and 1999). In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes); (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (comparison with traditional GCMs); and (3) a discussion of the Goddard WRF version (its developments and applications).
NASA Astrophysics Data System (ADS)
Khouider, B.; Majda, A.; Deng, Q.; Ravindran, A. M.
2015-12-01
Global climate models (GCMs) are large computer codes based on the discretization of the equations of atmospheric and oceanic motions coupled to various processes of transfer of heat, moisture and other constituents between land, atmosphere, and oceans. Because of computing power limitations, typical GCM grid resolution is on the order of 100 km, and the effects on the climate system of many physical processes occurring on smaller scales are represented through various closure recipes known as parameterizations. The parameterization of convective motions and of many processes associated with cumulus clouds, such as the exchange of latent heat and cloud radiative forcing, is believed to be behind much of the uncertainty in GCMs. Based on a lattice interacting particle system, the stochastic multicloud model (SMCM) provides a novel and efficient representation of the unresolved variability in GCMs due to organized tropical convection and cloud cover. It is widely recognized that stratiform heating contributes significantly to tropical rainfall and to the dynamics of tropical convective systems by inducing a front-to-rear tilt in the heating profile. Stratiform anvils forming in the wake of deep convection play a central role in the dynamics of tropical mesoscale convective systems. Here, aquaplanet simulations with a warm-pool-like surface forcing, based on a coarse-resolution GCM of ~170 km grid mesh coupled with the SMCM, are used to demonstrate the importance of stratiform heating for the organization of convection on planetary and intraseasonal scales. When some key model parameters are set to produce higher stratiform heating fractions, the model produces low-frequency, planetary-scale Madden-Julian oscillation (MJO)-like wave disturbances, while lower to moderate stratiform heating fractions yield mainly synoptic-scale convectively coupled Kelvin-like waves.
Rooted in the stratiform instability, it is conjectured here that the strength and extent of stratiform downdrafts are key contributors to the scale selection of convective organization, perhaps through mechanisms essentially similar to those of mesoscale convective systems.
Holistic versus monomeric strategies for hydrological modelling of human-modified hydrosystems
NASA Astrophysics Data System (ADS)
Nalbantis, I.; Efstratiadis, A.; Rozos, E.; Kopsiafti, M.; Koutsoyiannis, D.
2011-03-01
The modelling of human-modified basins that are inadequately measured constitutes a challenge for hydrological science. Often, models for such systems are detailed and hydraulics-based for only one part of the system, while for other parts oversimplified models or rough assumptions are used. This is typically a bottom-up approach, which seeks to exploit knowledge of hydrological processes at the micro-scale at some components of the system. It is also a monomeric approach in two ways: first, essential interactions among system components may be poorly represented or even omitted; second, differences in the level of detail of process representation can lead to uncontrolled errors. Additionally, the calibration procedure merely accounts for the reproduction of the observed responses using typical fitting criteria. The paper aims to raise some critical issues regarding the entire modelling approach for such hydrosystems. To this end, two alternative modelling strategies are examined that reflect two modelling philosophies: a dominant bottom-up approach, which is also monomeric and, very often, based on output information, and a top-down, holistic approach based on generalized information. Critical options are examined, which codify the differences between the two strategies: the representation of surface, groundwater and water management processes, the schematization and parameterization concepts, and the parameter estimation methodology. The first strategy is based on stand-alone models for surface and groundwater processes and for water management, which are employed sequentially. For each model, a different (detailed or coarse) parameterization is used, dictated by the hydrosystem schematization. The second strategy involves model integration for all processes, parsimonious parameterization and hybrid manual-automatic parameter optimization based on multiple objectives.
A test case is examined in a hydrosystem in Greece with high complexities, such as extended surface-groundwater interactions, ill-defined boundaries, sinks to the sea and anthropogenic intervention with unmeasured abstractions both from surface water and aquifers. Criteria for comparison are the physical consistency of parameters, the reproduction of runoff hydrographs at multiple sites within the studied basin, the likelihood of uncontrolled model outputs, the required amount of computational effort and the performance within a stochastic simulation setting. Our work allows for investigating the deterioration of model performance in cases where no balanced attention is paid to all components of human-modified hydrosystems and the related information. Also, sources of error are identified and their combined effect is evaluated.
Development of the physics driver in NOAA Environmental Modeling System (NEMS)
NASA Astrophysics Data System (ADS)
Lei, H.; Iredell, M.; Tripp, P.
2016-12-01
As a key component of the Next Generation Global Prediction System (NGGPS), a physics driver has been developed in the NOAA Environmental Modeling System (NEMS) to facilitate the research, development, and transition to operations of innovations in atmospheric physical parameterizations. The physics driver connects the atmospheric dynamic core, the Common Community Physics Package, and the other NEMS-based forecast components (land, ocean, sea ice, wave, and space weather). In the current global forecast system, the physics driver has incorporated the major existing physics packages, including radiation, surface physics, cloud and microphysics, ozone, and stochastic physics, and it is also applicable to external physics packages. Separating the PHYS trunk within NEMS creates an open physics package pool, an open platform that benefits U.S. weather forecasting capability. In addition, with the universal physics driver, NEMS can serve specific functions by connecting external target physics packages through the driver; as a test of this function, a dust-radiation physics model was connected to the system, allowing the modified system to be used for dust storm prediction and forecasting. The physics driver has also been developed in a standalone form to facilitate development work on physics packages: developers can save instantaneous meteorological fields and snapshots from the running system and then use them as offline driving data to test new physics modules or small modifications to current modules, avoiding a full-system run for every test.
Kinematic and Microphysical Control of Lightning Flash Rate over Northern Alabama
NASA Technical Reports Server (NTRS)
Carey, Lawrence D.; Bain, Anthony L.; Matthee, Retha; Schultz, Christopher J.; Schultz, Elise V.; Deierling, Wiebke; Petersen, Walter A.
2015-01-01
The Deep Convective Clouds and Chemistry (DC3) experiment seeks to examine the relationship between deep convection and the production of nitrogen oxides (NO (sub x)) via lightning (LNO (sub x)). A critical step in estimating LNO (sub x) production in a cloud-resolving model (CRM) without explicit lightning is to estimate the flash rate from available model parameters that are statistically and physically correlated. As such, the objective of this study is to develop, improve and evaluate lightning flash rate parameterizations in a variety of meteorological environments and storm types using radar and lightning mapping array (LMA) observations taken over Northern Alabama from 2005-2012, including during DC3. UAH's Advanced Radar for Meteorological and Operational Research (ARMOR) and the Weather Surveillance Radar - 1988 Doppler (WSR-88D) located at Hytop (KHTX) comprise the dual-Doppler and polarimetric radar network, which has been in operation since 2004. The northern Alabama LMA (NA LMA) in conjunction with Vaisala's National Lightning Detection Network (NLDN) allows for a detailed depiction of total lightning during this period. This study will integrate ARMOR-KHTX dual-Doppler/polarimetric radar and NA LMA lightning observations from past and ongoing studies, including the more recent DC3 results, over northern Alabama to form a large data set of 15-20 case days and over 20 individual storms, including both ordinary multicell and supercell convection. Several flash rate parameterizations will be developed and tested, including those based on 1) graupel/small hail volume; 2) graupel/small hail mass, and 3) convective updraft volume. Sensitivity of the flash rate parameterizations to storm intensity, storm morphology and environmental conditions will be explored.
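The simplest flash rate parameterizations of the kind listed above are proportionality laws fit to storm observations. A minimal sketch with purely hypothetical (graupel volume, flash rate) pairs, standing in for the radar- and LMA-derived values the study itself would fit:

```python
# Hypothetical (graupel echo volume [km^3], total flash rate [flashes/min])
# pairs; the actual study fits such laws to ARMOR-KHTX / NA LMA storm data.
data = [(50.0, 4.0), (120.0, 11.0), (200.0, 17.5), (310.0, 28.0), (450.0, 40.5)]

def fit_through_origin(pairs):
    """Least-squares slope k for flash_rate ~= k * graupel_volume
    (regression constrained through the origin: no graupel, no lightning)."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, _ in pairs)
    return num / den

k = fit_through_origin(data)  # flashes per minute per km^3 of graupel echo
```

A CRM without explicit electrification can then diagnose a flash rate from its simulated graupel field via this single slope; the study's point is to test how stable such slopes are across storm types and environments.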
Using field observations to inform thermal hydrology models of permafrost dynamics with ATS (v0.83)
Atchley, Adam L.; Painter, Scott L.; Harp, Dylan R.; ...
2015-09-01
Climate change is profoundly transforming the carbon-rich Arctic tundra landscape, potentially moving it from a carbon sink to a carbon source by increasing the thickness of soil that thaws on a seasonal basis. However, the modeling capability and precise parameterizations of the physical characteristics needed to estimate projected active layer thickness (ALT) are limited in Earth system models (ESMs). In particular, discrepancies in spatial scale between field measurements and Earth system models challenge the validation and parameterization of hydrothermal models. A recently developed surface–subsurface model for permafrost thermal hydrology, the Advanced Terrestrial Simulator (ATS), is used in combination with field measurements to construct a process-rich model based on plausible parameters and to identify fine-scale controls of ALT in ice-wedge polygon tundra in Barrow, Alaska. An iterative model refinement procedure that cycles between borehole temperature and snow cover measurements and simulations serves to evaluate and parameterize the different model processes necessary to simulate freeze–thaw processes and ALT formation. After model refinement and calibration, reasonable matches between simulated and measured soil temperatures are obtained, with the largest errors occurring during early summer above ice wedges (e.g., troughs). The results suggest that properly constructed and calibrated one-dimensional thermal hydrology models can provide a reasonable representation of the subsurface thermal response and can be used to infer model input parameters and process representations. The models for soil thermal conductivity and snow distribution were found to be the most sensitive process representations. However, information on lateral flow and snowpack evolution might be needed to constrain model representations of surface hydrology and snow depth.
Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2014-05-01
The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components, such as dynamic solvers and physics schemes. Numerical models resolve the large-scale flow, while subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation), which have a significant influence on the resolved scales owing to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach that unifies turbulence and moist convection components produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme to optimize for the Intel Many Integrated Core (MIC) architecture, which ushers in a new era of supercomputing speed, performance, and compatibility, allowing developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic in nature: they included vectorization of the code to utilize the vector units inside each CPU, and memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimizations improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
Confronting Models with Data: The GEWEX Cloud Systems Study
NASA Technical Reports Server (NTRS)
Randall, David; Curry, Judith; Duynkerke, Peter; Krueger, Steven; Moncrieff, Mitchell; Ryan, Brian; Starr, David OC.; Miller, Martin; Rossow, William; Tselioudis, George
2002-01-01
The GEWEX Cloud System Study (GCSS; GEWEX is the Global Energy and Water Cycle Experiment) was organized to promote development of improved parameterizations of cloud systems for use in climate and numerical weather prediction models, with an emphasis on the climate applications. The strategy of GCSS is to use two distinct kinds of models to analyze and understand observations of the behavior of several different types of cloud systems. Cloud-system-resolving models (CSRMs) have high enough spatial and temporal resolutions to represent individual cloud elements, but cover a wide enough range of space and time scales to permit statistical analysis of simulated cloud systems. Results from CSRMs are compared with detailed observations, representing specific cases based on field experiments, and also with statistical composites obtained from satellite and meteorological analyses. Single-column models (SCMs) are the surgically extracted column physics of atmospheric general circulation models. SCMs are used to test cloud parameterizations in an uncoupled mode, by comparison with field data and statistical composites. In the original GCSS strategy, data is collected in various field programs and provided to the CSRM community, which uses the data to "certify" the CSRMs as reliable tools for the simulation of particular cloud regimes, and then uses the CSRMs to develop parameterizations, which are provided to the GCM community. We report here the results of a re-thinking of the scientific strategy of GCSS, which takes into account the practical issues that arise in confronting models with data. The main elements of the proposed new strategy are a more active role for the large-scale modeling community, and an explicit recognition of the importance of data integration.
NASA Astrophysics Data System (ADS)
Chen, M.; Lemon, C.; Sazykin, S. Y.; Wolf, R.; Anderson, P. C.
2016-12-01
Sub-Auroral Polarization Streams (SAPS), characterized by large subauroral E x B velocities that span from dusk to the early morning sector during high magnetic activity, result from strong magnetosphere-ionosphere coupling. We investigate how electron and ion precipitation and the ionospheric conductance affect the simulated development of the SAPS electric field for the 17 March 2013 storm. Our approach is to use the magnetically and electrically self-consistent Rice Convection Model - Equilibrium (RCM-E) of the inner magnetosphere to simulate the SAPS. We use parameterized rates of whistler-generated electron pitch-angle scattering from Orlova and Shprits [JGR, 2014] that depend on equatorial radial distance, magnetic activity (Kp), and magnetic local time (MLT) outside the simulated plasmasphere. Inside the plasmasphere, parameterized scattering rates due to hiss [Orlova et al., GRL, 2014] are used. Ions are scattered at a fraction of strong pitch-angle scattering, where the fraction is scaled by epsilon, the ratio of the gyroradius to the field-line radius of curvature, when epsilon is greater than 0.1. The electron and proton contributions to the auroral conductance in the RCM-E are calculated using the empirical Robinson et al. [JGR, 1987] and Galand and Richmond [JGR, 2001] equations, respectively. The "background" ionospheric conductance is based on parameters from the International Reference Ionosphere [Bilitza and Reinisch, JASR, 2008] but modified to include the effect of specified ionospheric troughs. Parameterized simulations will aid in understanding the underlying physical processes. We compare simulated precipitating particle energy flux and E x B velocities with DMSP observations where SAPS are observed during the 17 March 2013 storm. Analysis of discrepancies between the simulation results and the data will help us assess needed improvements in the model.
Arnautova, Yelena A; Abagyan, Ruben A; Totrov, Maxim
2011-02-01
We report the development of the internal coordinate mechanics force field (ICMFF), a new force field parameterized using a combination of experimental data for crystals of small molecules and quantum mechanics calculations. The main features of ICMFF include: (a) parameterization for the dielectric constant relevant to the condensed state (ε = 2) instead of vacuum, (b) an improved description of hydrogen-bond interactions using duplicate sets of van der Waals parameters for heavy atom-hydrogen interactions, and (c) improved backbone covalent geometry and energetics achieved using novel backbone torsional potentials and inclusion of the bond angles at the C(α) atoms into the internal variable set. The performance of ICMFF was evaluated through loop modeling simulations for 4-13 residue loops. ICMFF was combined with a solvent-accessible surface area solvation model optimized using a large set of loop decoys. Conformational sampling was carried out using the biased probability Monte Carlo method. Average/median backbone root-mean-square deviations of the lowest energy conformations from the native structures were 0.25/0.21 Å for four-residue loops, 0.84/0.46 Å for eight-residue loops, and 1.16/0.73 Å for 12-residue loops. To our knowledge, these results are significantly better than or comparable with those reported to date for any loop modeling method that does not take crystal packing into account. Moreover, the accuracy of our method is on par with the best previously reported results obtained considering the crystal environment. We attribute this success to the high accuracy of the new ICM force field achieved by meticulous parameterization, to the optimized solvent model, and to the efficiency of the search method. © 2010 Wiley-Liss, Inc.
A note on: "A Gaussian-product stochastic Gent-McWilliams parameterization"
NASA Astrophysics Data System (ADS)
Jansen, Malte F.
2017-02-01
This note builds on a recent article by Grooms (2016), which introduces a new stochastic parameterization for eddy buoyancy fluxes. The closure proposed by Grooms accounts for the fact that eddy fluxes arise as the product of two approximately Gaussian variables, which in turn leads to a distinctly non-Gaussian distribution. The directionality of the stochastic eddy fluxes, however, remains somewhat ad hoc and depends on the reference frame of the chosen coordinate system. This note presents a modification of the approach proposed by Grooms, which eliminates this shortcoming. Eddy fluxes are computed based on a stochastic mixing length model, which leads to a frame-invariant formulation. As in the original closure proposed by Grooms, eddy fluxes are proportional to the product of two Gaussian variables, and the parameterization reduces to the Gent and McWilliams parameterization for the mean buoyancy fluxes.
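The central statistical point, that the product of two Gaussian variables is distinctly non-Gaussian, is easy to verify numerically (a self-contained sketch of the statistics, not the closure itself):

```python
import random
import statistics

random.seed(42)
N = 200_000
# Eddy-flux proxy: the product of two independent, approximately Gaussian
# factors (e.g., a velocity fluctuation and a buoyancy fluctuation).
flux = [random.gauss(0.0, 1.0) * random.gauss(0.0, 1.0) for _ in range(N)]

mean = statistics.fmean(flux)
std = statistics.pstdev(flux)
# Excess kurtosis: 0 for a Gaussian; 6 in theory for a product of two
# independent standard normals, i.e. the distribution is strongly fat-tailed.
excess_kurtosis = statistics.fmean(((x - mean) / std) ** 4 for x in flux) - 3.0
```

The sample excess kurtosis comes out near 6 rather than 0, which is exactly the heavy-tailed behavior a Gaussian noise model for eddy fluxes would miss.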
Final Technical Report for "Reducing tropical precipitation biases in CESM"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we have created a climate model that contains a unified cloud parameterization (“CLUBB”) and a unified microphysics parameterization (“MG2”). In this model, all cloud types, including marine stratocumulus, shallow cumulus, and deep cumulus, are represented with a single equation set. This model improves the representation of convection in the Tropics. The model has been compared with ARM observations. The chief benefit of the project is to provide a climate model that is based on a more theoretically rigorous formulation.
Atmospheric solar heating rate in the water vapor bands
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah
1986-01-01
The total absorption of solar radiation by water vapor in clear atmospheres is parameterized as a simple function of the scaled water vapor amount. For applications to cloudy and hazy atmospheres, the flux-weighted k-distribution functions are computed for individual absorption bands and for the total near-infrared region. The parameterization is based upon monochromatic calculations and follows essentially the scaling approximation of Chou and Arking, but the effect of temperature variation with height is taken into account in order to enhance the accuracy. Furthermore, the spectral range is extended to cover the two weak bands centered at 0.72 and 0.82 micron. Comparisons with monochromatic calculations show that the atmospheric heating rate and the surface radiation can be accurately computed from the parameterization. Comparisons are also made with other parameterizations. It is found that the absorption of solar radiation can be computed reasonably well using the Goody band model and the Curtis-Godson approximation.
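The flux-weighted k-distribution idea referenced above reduces to a weighted sum of exponential transmissions over a scaled absorber path. The weights and absorption coefficients below are purely illustrative, not Chou's fitted values:

```python
import math

# Illustrative (hypothetical) k-distribution for one water vapor band:
# weight w_i of each absorption-coefficient bin and coefficient k_i
# [cm^2 g^-1]; the weights sum to 1.
weights = [0.6, 0.25, 0.1, 0.05]
k_vals = [0.01, 0.1, 1.0, 10.0]

def band_transmission(u):
    """Band-mean transmission for a scaled water vapor path u [g cm^-2]:
    T(u) = sum_i w_i * exp(-k_i * u)."""
    return sum(w * math.exp(-k * u) for w, k in zip(weights, k_vals))
```

Replacing a line-by-line spectral integral with a handful of exponential terms is what makes such parameterizations cheap enough for cloudy and hazy atmospheres, where monochromatic calculations would be prohibitive.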
A Vertically Resolved Planetary Boundary Layer
NASA Technical Reports Server (NTRS)
Helfand, H. M.
1984-01-01
Increased vertical resolution of the GLAS Fourth Order General Circulation Model (GCM) near the Earth's surface, together with the installation of a new package of parameterization schemes for subgrid-scale physical processes, was sought so that the GLAS GCM would predict the resolved vertical structure of the planetary boundary layer (PBL) at all grid points.
Betatron motion with coupling of horizontal and vertical degrees of freedom
Lebedev, V. A.; Bogacz, S. A.
2010-10-21
Presently, there are two frequently used parameterizations of linear x-y coupled motion in accelerator physics: the Edwards-Teng and Mais-Ripken parameterizations. This article analyzes the close relationship between the two representations, adding clarity to their physical meaning. It also discusses the relationship between the eigen-vectors, the beta-functions, the second-order moments and the bilinear form representing the particle ellipsoid in the 4D phase space. It then considers a further development of the Mais-Ripken parameterization in which the particle motion is described by ten parameters: four beta-functions, four alpha-functions and two betatron phase advances. In comparison with the Edwards-Teng parameterization, the chosen parameterization has the advantage that it works equally well for the analysis of coupled betatron motion in circular accelerators and in transfer lines. In addition, the considered relationship between second-order moments, eigen-vectors and beta-functions can be useful in interpreting tracking results and experimental data. As an example, the developed formalism is applied to the FNAL electron cooler and Derbenev's vertex-to-plane adapter.
A Survey of Shape Parameterization Techniques
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.
NASA Astrophysics Data System (ADS)
Sahyoun, Maher; Woetmann Nielsen, Niels; Havskov Sørensen, Jens; Finster, Kai; Bay Gosewinkel Karlson, Ulrich; Šantl-Temkiv, Tina; Smith Korsholm, Ulrik
2014-05-01
Bacteria, e.g. Pseudomonas syringae, have previously been found to be efficient in nucleating ice heterogeneously at temperatures close to -2°C in laboratory tests. Therefore, ice nucleation active (INA) bacteria may be involved in the formation of precipitation in mixed-phase clouds, and could potentially influence weather and climate. Investigations into the impact of INA bacteria on climate have shown that emissions were too low to significantly impact the climate (Hoose et al., 2010). The goal of this study is to clarify why the impact on climate was marginal when INA bacteria were considered, by investigating the usability of an ice nucleation rate parameterization based on classical nucleation theory (CNT). For this purpose, two parameterizations of heterogeneous ice nucleation were compared. Both parameterizations were implemented and tested in a 1-D version of the operational weather model HIRLAM (Lynch et al., 2000; Unden et al., 2002) in two different meteorological cases. The first parameterization is based on CNT and denoted CH08 (Chen et al., 2008); it is a function of temperature and the size of the IN. The second parameterization, denoted HAR13, was derived from nucleation measurements of Snomax™ (Hartmann et al., 2013); it is a function of temperature and the number of protein complexes on the outer membranes of the cell. The fraction of cloud droplets containing each type of IN, expressed as a percentage of the cloud droplet population, was varied, and the sensitivity of cloud ice production in each parameterization was compared. In this study, HAR13 produces more cloud ice and precipitation than CH08 when the bacterial fraction increases; in CH08, increasing the bacterial fraction instead decreases the cloud ice mixing ratio. Ice production using HAR13 was thus found to be more sensitive to changes in the bacterial fraction than with CH08.
This may explain the marginal impact of INA bacteria in climate models when CH08 was used. The number of cell fragments containing proteins appears to be a more important parameter to consider than the size of the cell when parameterizing the heterogeneous freezing of bacteria.
Walder, J.S.; O'Connor, J. E.; Costa, J.E.; ,
1997-01-01
We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope ?? of the breach, and the parameter ?? = (V.D3)(k/???gD), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape r and ??, but strongly on ??, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(??) takes asymptotically distinct forms depending on whether < ??? 1 or < ??? 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.We analyze a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope ?? of the breach, and the parameter ?? = (V/D3)(k/???gD), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. 
Calculations show that peak discharge Qp depends weakly on lake shape r and ??, but strongly on ??, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(??) takes asymptotically distinct forms depending on whether ?????1 or ?????1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
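The controlling dimensionless parameter of the breach model, η = (V/D³)(k/√(gD)), can be evaluated directly; a minimal sketch with hypothetical round numbers (not values from the paper):

```python
import math

def eta(V, D, k, g=9.81):
    """Dimensionless breach parameter eta = (V/D^3) * (k/sqrt(g*D)):
    a dimensionless lake volume times a dimensionless erosion rate."""
    return (V / D**3) * (k / math.sqrt(g * D))

# hypothetical numbers: a 1e7 m^3 lake, 20 m deep, downcutting at 1 m/h
eta_val = eta(V=1e7, D=20.0, k=1.0 / 3600.0)
print(eta_val)  # small eta: slow erosion relative to lake drainage
```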
NASA Astrophysics Data System (ADS)
Pasquet, Simon; Bouruet-Aubertot, Pascale; Reverdin, Gilles; Turnherr, Andreas; St. Laurent, Lou
2016-06-01
The relevance of finescale parameterizations of the dissipation rate of turbulent kinetic energy is addressed using finescale and microstructure measurements collected in the Lucky Strike segment of the Mid-Atlantic Ridge (MAR). There, high-amplitude internal tides and a strongly sheared mean flow sustain a high level of dissipation rate and turbulent mixing. Two sets of parameterizations are considered: the first (Gregg, 1989; Kunze et al., 2006) were derived to estimate the dissipation rate of turbulent kinetic energy induced by internal wave breaking, while the second aims to estimate the dissipation induced by shear instability of a strongly sheared mean flow and is a function of the Richardson number (Kunze et al., 1990; Polzin, 1996). The latter parameterization has low skill in reproducing the observed dissipation rate when shear-unstable events are resolved, presumably because there is no scale separation between the duration of unstable events and the inverse growth rate of unstable billows. Instead, the GM-based parameterizations were found to be relevant, although slight biases were observed. Part of these biases results from the small value of the upper vertical-wavenumber integration limit in the computation of shear variance in the Kunze et al. (2006) parameterization, which does not take into account the internal wave signal at high vertical wavenumbers. We show that significant improvement is obtained when the upper integration limit is set using a signal-to-noise ratio criterion, and that the spatial structure of dissipation rates is reproduced with this parameterization.
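A minimal sketch of the Richardson-number ingredient of the shear-instability parameterization (illustrative profile values only; this is not the Kunze et al. code): the gradient Richardson number compares stratification N² with shear variance S², with Ri < 1/4 the classical threshold for shear instability.

```python
def richardson(N2, dudz, dvdz):
    """Gradient Richardson number Ri = N^2 / S^2, with shear variance
    S^2 = (du/dz)^2 + (dv/dz)^2."""
    S2 = dudz**2 + dvdz**2
    return N2 / S2

# hypothetical profile values: N^2 in s^-2, shears in s^-1
Ri = richardson(N2=1e-5, dudz=0.005, dvdz=0.003)
print(Ri, Ri < 0.25)  # Ri < 1/4 flags potential shear instability
```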
FastSim: A Fast Simulation for the SuperB Detector
NASA Astrophysics Data System (ADS)
Andreassen, R.; Arnaud, N.; Brown, D. N.; Burmistrov, L.; Carlson, J.; Cheng, C.-h.; Di Simone, A.; Gaponenko, I.; Manoni, E.; Perez, A.; Rama, M.; Roberts, D.; Rotondo, M.; Simi, G.; Sokoloff, M.; Suzuki, A.; Walsh, J.
2011-12-01
We have developed a parameterized (fast) simulation for detector optimization and physics reach studies of the proposed SuperB Flavor Factory in Italy. Detector components are modeled as thin sections of planes, cylinders, disks, or cones. Particle-material interactions are modeled using simplified cross-sections and formulas. Active detectors are modeled using parameterized response functions. Geometry and response parameters are configured using XML files with a custom-designed schema. Reconstruction algorithms adapted from BaBar are used to build tracks and clusters. Multiple sources of background signals can be merged with primary signals. Pattern recognition errors are modeled statistically by randomly misassigning nearby tracking hits. Standard BaBar analysis tuples are used as the event output. Hadronic B meson pair events can be simulated at roughly 10 Hz.
LAMPS software and mesoscale prediction studies
NASA Technical Reports Server (NTRS)
Perkey, D. J.
1985-01-01
The full-physics version of the LAMPS model has been implemented on the Perkin-Elmer computer. In addition, the LAMPS graphics processors have been rewritten to run on the Perkin-Elmer, and they are currently undergoing final testing. Numerical experiments investigating the impact of parameterized convective latent heat release on the evolution of a precipitating storm have been performed, and the results are currently being evaluated. Current efforts include the continued evaluation of the impact of initial conditions on LAMPS model results. This work will help define measurement requirements for future research field projects as well as for observations in support of operational forecasts. Work on the impact of parameterized latent heat on the evolution of precipitating systems is also continuing. This research is in support of NASA's proposed Earth Observation Mission (EOM).
GEWEX Cloud Systems Study (GCSS)
NASA Technical Reports Server (NTRS)
Moncrieff, Mitch
1993-01-01
The Global Energy and Water Cycle Experiment (GEWEX) Cloud Systems Study (GCSS) program seeks to improve the physical understanding of sub-grid scale cloud processes and their representation in parameterization schemes. By improving the description and understanding of key cloud system processes, GCSS aims to develop the necessary parameterizations in climate and numerical weather prediction (NWP) models. GCSS will address these issues mainly through the development and use of cloud-resolving or cumulus ensemble models to generate realizations of a set of archetypal cloud systems. The focus of GCSS is on mesoscale cloud systems, including precipitating convectively-driven cloud systems like MCS's and boundary layer clouds, rather than individual clouds, and on their large-scale effects. Some of the key scientific issues confronting GCSS that particularly relate to research activities in the central U.S. are presented.
New Features in the Computational Infrastructure for Nuclear Astrophysics
NASA Astrophysics Data System (ADS)
Smith, M. S.; Lingerfelt, E. J.; Scott, J. P.; Hix, W. R.; Nesaraja, C. D.; Koura, H.; Roberts, L. F.
2006-04-01
The Computational Infrastructure for Nuclear Astrophysics is a suite of computer codes online at nucastrodata.org that streamlines the incorporation of recent nuclear physics results into astrophysical simulations. The freely available, cross-platform suite enables users to upload cross sections and S-factors, convert them into reaction rates, parameterize the rates, store the rates in customizable libraries, set up and run custom post-processing element synthesis calculations, and visualize the results. New features include the ability for users to comment on rates or libraries using an email-type interface, a nuclear mass model evaluator, enhanced techniques for rate parameterization, better treatment of rate inverses, and the creation and export of custom animations of simulation results. We also have online animations of r-process, rp-process, and neutrino-p process element synthesis occurring in stellar explosions.
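As a hedged sketch of what "parameterizing a rate" can mean in this context, a widely used seven-parameter fit (the REACLIB-style form) expresses a reaction rate as a function of temperature T9 (temperature in GK); the coefficients below are invented purely for illustration:

```python
import math

def reaclib_rate(T9, a):
    """Seven-parameter REACLIB-style rate fit:
    exp(a0 + a1/T9 + a2*T9^(-1/3) + a3*T9^(1/3) + a4*T9
        + a5*T9^(5/3) + a6*ln(T9))."""
    return math.exp(a[0] + a[1] / T9 + a[2] * T9**(-1.0 / 3.0)
                    + a[3] * T9**(1.0 / 3.0) + a[4] * T9
                    + a[5] * T9**(5.0 / 3.0) + a[6] * math.log(T9))

# hypothetical coefficients for illustration only
a = [1.0, -0.1, 0.0, 0.5, -0.01, 0.0, 1.5]
rate = reaclib_rate(1.0, a)
print(rate)
```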
NASA Astrophysics Data System (ADS)
Salimun, Ester; Tangang, Fredolin; Juneng, Liew
2010-06-01
A comparative study was conducted to investigate the skill of four convection parameterization schemes, namely the Anthes-Kuo (AK), the Betts-Miller (BM), the Kain-Fritsch (KF), and the Grell (GR) schemes, in the numerical simulation of an extreme precipitation episode over eastern Peninsular Malaysia using the Pennsylvania State University-National Center for Atmospheric Research (PSU-NCAR) Fifth-Generation Mesoscale Model (MM5). The event is a commonly occurring westward-propagating tropical depression weather system during boreal winter, resulting from an interaction between a cold surge and the quasi-stationary Borneo vortex. The model setup and other physical parameterizations are identical in all experiments, and hence any difference in simulation performance can be associated with the cumulus parameterization scheme used. From the predicted rainfall and structure of the storm, it is clear that the BM scheme has an edge over the other schemes. The rainfall intensity and spatial distribution were reasonably well simulated compared to observations. The BM scheme was also better at resolving the horizontal and vertical structures of the storm. Most of the rainfall simulated by the BM simulation was of the convective type. The failure of the other schemes (AK, GR, and KF) to simulate the event may be attributed to their trigger functions, closure assumptions, and precipitation schemes. On the other hand, the appropriateness of the BM scheme for this episode may not generalize to other episodes or convective environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man
2015-06-01
Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ ≪ 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 - σ in order to unify the parameterization across the full range of model resolutions, so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 - σ built in, although it has gone unrecognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
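The role of the scale-awareness factor can be sketched schematically: in an AW13-type form, the convective eddy flux of a scalar scales as σ(1 - σ), where σ is the convective cloud fraction, so the flux vanishes both when convection is absent and when it fills the grid cell. The numbers below are invented, and this is not code from the note:

```python
def convective_flux(sigma, w_c, w_e, h_c, h_e):
    """Schematic eddy flux of a scalar h carried by convection occupying
    grid fraction sigma: sigma*(1 - sigma)*(w_c - w_e)*(h_c - h_e),
    with updraft (w_c, h_c) and environment (w_e, h_e) values."""
    return sigma * (1.0 - sigma) * (w_c - w_e) * (h_c - h_e)

# the flux vanishes in both limits sigma -> 0 and sigma -> 1:
# that is the scale awareness discussed in the abstract
fluxes = {s: convective_flux(s, w_c=5.0, w_e=-0.1, h_c=3.5e5, h_e=3.4e5)
          for s in (0.05, 0.5, 0.95)}
print(fluxes)
```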
NASA Astrophysics Data System (ADS)
Zhang, Qi; Chang, Ming; Zhou, Shengzhen; Chen, Weihua; Wang, Xuemei; Liao, Wenhui; Dai, Jianing; Wu, ZhiYong
2017-11-01
There has been rapid growth of reactive nitrogen (Nr) deposition over the world in the past decades. The Pearl River Delta region is one of the areas with a high loading of nitrogen deposition, but there are still large uncertainties in the study of dry deposition because of the complex physical, chemical, and plant-physiological processes involved. At present, the forest canopy parameterization scheme used in the WRF-Chem model is a single-layer "big leaf" model, and its simulation of radiation transmission and energy balance in the forest canopy is neither detailed nor accurate. The Noah-MP land surface model is based on the Noah land surface model (Noah LSM) and has multiple parameterization options to simulate the energy, momentum, and material interactions of the vegetation-soil-atmosphere system. Therefore, to investigate the improvement in WRF-Chem simulations of nitrogen deposition in forest areas after coupling with Noah-MP, and to reduce the influence of meteorological simulation biases on the simulated dry deposition velocity, a single-point dry deposition model coupling Noah-MP and the WRF-Chem dry deposition module (WDDM) was used to simulate the deposition velocity (Vd). The model was driven by micro-meteorological observations from the Dinghushan Forest Ecosystem Location Station. A series of numerical experiments was carried out to identify the key processes influencing the calculation of dry deposition velocity, and the effects of various surface physical and plant physiological processes on dry deposition were discussed. The model captured the observed Vd well, but still underestimated it. Deficiencies of the Wesely scheme applied in WDDM, and inaccuracies in WDDM's built-in parameters and in the input data for Noah-MP (e.g. LAI), were the key factors causing the underestimation of Vd. Therefore, future work is needed to improve the model mechanisms and parameterization.
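A minimal sketch of the resistance-in-series form underlying Wesely-type dry deposition schemes: the deposition velocity is the reciprocal of the summed aerodynamic, quasi-laminar, and surface resistances. The resistance values below are hypothetical; the actual WDDM computes each resistance from meteorology and land cover.

```python
def deposition_velocity(Ra, Rb, Rc):
    """Resistance-in-series dry deposition velocity (Wesely-type scheme):
    Vd = 1 / (Ra + Rb + Rc), with aerodynamic (Ra), quasi-laminar (Rb),
    and surface/canopy (Rc) resistances in s/m; Vd in m/s."""
    return 1.0 / (Ra + Rb + Rc)

# hypothetical daytime-forest resistances
Vd = deposition_velocity(Ra=30.0, Rb=20.0, Rc=100.0)
print(Vd)
```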
NASA Astrophysics Data System (ADS)
Molod, A.; Salmun, H.; Collow, A.
2017-12-01
The atmospheric general circulation model (GCM) that underlies the MERRA-2 reanalysis includes a suite of physical parameterizations that describe the processes that occur in the planetary boundary layer (PBL). The data assimilation system ensures that the atmospheric state variables used as input to these parameterizations are constrained to the best fit to all of the available observations. Many studies, however, have shown that the GCM-based estimates of MERRA-2 PBL heights are biased high, and so are not reliable for applications related to constituent transport or the carbon cycle. A new 20-year record of PBL heights was derived from Wind Profiler (WP) backscatter data measured at a wide network of stations throughout the US Great Plains and has been validated against independent estimates. The behavior of these PBL heights shows geographical and temporal variations that are difficult to attribute to particular physical processes without additional information that is not part of the observational record. In the present study, we use information on physical processes from MERRA-2 to understand the behavior of the WP-derived PBL heights. The annual cycle of both MERRA-2 and WP PBL heights shows three classes of behavior: (i) canonical, where the annual cycle follows the annual cycle of the sun; (ii) delayed, where the PBL height reaches its annual maximum after the annual maximum of the solar insolation; and (iii) double maxima, where the PBL height begins to rise with the solar insolation but falls sometime during the summer and then rises again. Although the magnitude of these types of variations is described by the WP PBL record, the explanation for these behaviors and the relationship to local precipitation, temperature, hydrology, and sensible and latent heat fluxes is articulated using information from MERRA-2.
Toward a Physical Characterization of Raindrop Collision Outcome Regimes
NASA Technical Reports Server (NTRS)
Testik, F. Y.; Barros, Ana P.; Bilven, Francis L.
2011-01-01
A comprehensive raindrop collision outcome regime diagram that delineates the physical conditions associated with the outcome regimes (i.e., bounce, coalescence, and different breakup types) of binary raindrop collisions is proposed. The proposed diagram builds on a theoretical regime diagram defined in the phase space of collision Weber number We and drop diameter ratio p, by including critical angle of impact considerations. In this study, the theoretical regime diagram is first evaluated against a comprehensive dataset of drop collision experiments representative of raindrop collisions in nature. Subsequently, the theoretical regime diagram is modified to explicitly describe the dominant regimes of raindrop interactions in the (We, p) phase space by delineating the physical conditions necessary for the occurrence of distinct types of collision-induced breakup (neck/filament, sheet, disk, and crown breakups) based on critical angle of impact considerations. Crown breakup is a subtype of disk breakup for lower collision kinetic energy that presents distinctive morphology. Finally, the experimental results are analyzed in the context of the comprehensive collision regime diagram, and conditional probabilities that can be used in the parameterization of breakup kernels in stochastic models of raindrop dynamics are provided.
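For orientation, the two coordinates of such a regime diagram can be computed as below. Note that conventions for which drop diameter and which velocity enter We vary between studies, and the drop sizes and relative velocity here are hypothetical:

```python
import math

def weber_number(d, v_rel, rho=1000.0, sigma=0.0728):
    """Collision Weber number We = rho * d * v^2 / sigma (one common
    convention, using the smaller drop diameter d in m, relative velocity
    v_rel in m/s, water density rho in kg/m^3, and surface tension
    sigma in N/m)."""
    return rho * d * v_rel**2 / sigma

d_small, d_large = 1.0e-3, 3.0e-3   # drop diameters in m (hypothetical)
We = weber_number(d_small, v_rel=4.0)
p = d_small / d_large               # diameter ratio coordinate
print(We, p)
```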
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Frank; Dennis, John; MacCready, Parker
This project aimed to improve long-term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. It was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing the simulation and predictive capability of climate models through improvements in resolution and physical process representation. The main computational objectives were: 1. To develop computationally efficient, but physically based, parameterizations of estuary and continental shelf mixing processes for use in an Earth System Model (CESM). 2. To develop a two-way nested regional modeling framework in order to dynamically downscale the climate response of particular coastal ocean regions and to upscale the impact of the regional coastal processes to the global climate in an Earth System Model (CESM). 3. To develop computational infrastructure to enhance the efficiency of data transfer between specific sources and destinations, i.e., a point-to-point communication capability (used in objective 1), within POP, the ocean component of CESM.
Analytical tools in accelerator physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litvinenko, V.N.
2010-09-01
This paper is a subset of my lectures presented in the Accelerator Physics course (USPAS, Santa Rosa, California, January 14-25, 2008). It is based on notes I wrote during the period from 1976 to 1979 in Novosibirsk. Only a few copies (in Russian) were distributed to my colleagues in the Novosibirsk Institute of Nuclear Physics. The goal of these notes is a complete description starting from an arbitrary reference orbit, with explicit expressions for the 4-potential and the accelerator Hamiltonian, and finishing with a parameterization in terms of action and angle variables. To a large degree I follow the logic developed in Theory of Cyclic Particle Accelerators by A. A. Kolomensky and A. N. Lebedev [Kolomensky], but go beyond the book in a number of directions. One unusual feature of these notes is the use of matrix functions and the Sylvester formula for calculating matrices of arbitrary elements. Teaching the USPAS course motivated me to translate a significant part of my notes into English. I also included some introductory material following Classical Theory of Fields by L. D. Landau and E. M. Lifshitz [Landau]. A large number of short notes covering various techniques are placed in the Appendices.
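The Sylvester formula mentioned above evaluates a function of a matrix from its eigenvalues; a minimal numerical sketch for the distinct-eigenvalue case (a generic illustration, not code from the lectures):

```python
import numpy as np

def sylvester_matfunc(A, f):
    """Evaluate f(A) via Sylvester's formula, assuming distinct eigenvalues:
    f(A) = sum_i f(lam_i) * prod_{j != i} (A - lam_j I) / (lam_i - lam_j)."""
    lam = np.linalg.eigvals(A)
    n = len(lam)
    F = np.zeros_like(A, dtype=complex)
    for i in range(n):
        P = np.eye(n, dtype=complex)  # Frobenius covariant for lam_i
        for j in range(n):
            if j != i:
                P = P @ (A - lam[j] * np.eye(n)) / (lam[i] - lam[j])
        F += f(lam[i]) * P
    return F

A = np.array([[2.0, 1.0], [0.0, 3.0]])
# for this triangular A, exp(A) = [[e^2, e^3 - e^2], [0, e^3]]
print(sylvester_matfunc(A, np.exp).real)
```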
NASA Astrophysics Data System (ADS)
Anber, U.; Wang, S.; Gentine, P.; Jensen, M. P.
2017-12-01
A framework is introduced to investigate the indirect impact of aerosol loading on tropical deep convection using three-dimensional idealized cloud-system-resolving simulations with coupled large-scale circulation. The large-scale dynamics is parameterized using a spectral weak temperature gradient approximation that utilizes the dominant balance in the tropics between adiabatic cooling and diabatic heating. The aerosol loading effect is examined by varying the number concentration of cloud condensation nuclei (CCN) available to form cloud droplets in the bulk microphysics scheme over a wide range, from 30 to 5000, without including any radiative effect: the radiative cooling is prescribed at a constant rate to isolate the microphysical effect. Increasing the aerosol number concentration causes mean precipitation to decrease monotonically, despite the increase in cloud condensate. This reduction in precipitation efficiency is attributed to a reduction in the surface enthalpy fluxes, and not to the divergent circulation, as the gross moist stability remains unchanged. We derive a simple scaling argument based on the moist static energy budget that enables a direct estimation of changes in precipitation given known changes in surface enthalpy fluxes and the constant gross moist stability. The impact on cloud hydrometeors and microphysical properties is also examined and is consistent with the macrophysical picture.
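A scaling of this kind links precipitation changes to surface enthalpy flux changes; the energy-to-precipitation unit conversion it relies on can be sketched as follows (standard constants, and an illustrative flux change of 29 W/m²):

```python
L_V = 2.5e6      # J/kg, latent heat of vaporization
RHO_W = 1000.0   # kg/m^3, density of liquid water

def flux_to_precip(dF):
    """Convert an energy-flux change dF (W/m^2) into the equivalent
    precipitation-rate change (mm/day): dF * 86400 s/day / (L_V * RHO_W),
    in m/day, then times 1000 for mm/day."""
    return dF * 86400.0 / (L_V * RHO_W) * 1000.0

# roughly 29 W/m^2 of surface enthalpy flux is equivalent to ~1 mm/day
dP = flux_to_precip(29.0)
print(dP)
```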
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Chao-Jun; Li, Xin-Zhou, E-mail: fengcj@shnu.edu.cn, E-mail: kychz@shnu.edu.cn
To probe the late evolution history of the universe, we adopt two kinds of optimal basis systems. One of them is constructed by performing principal component analysis, and the other is built by taking the multidimensional scaling approach. Cosmological observables such as the luminosity distance can be decomposed into these basis systems. These basis systems are optimized for different kinds of cosmological models that are based on different physical assumptions, and even for a mixture model of them. Therefore, the so-called feature space that is projected from the basis systems is cosmological-model independent, and it provides a parameterization for studying and reconstructing the Hubble expansion rate from the supernova luminosity distance and even gamma-ray burst (GRB) data with self-calibration. The circular problem when using GRBs as cosmological candles is naturally eliminated in this procedure. By using the Levenberg-Marquardt technique and the Markov Chain Monte Carlo method, we perform an observational constraint on this kind of parameterization. The data we used include the "joint light-curve analysis" data set that consists of 740 Type Ia supernovae and 109 long GRBs with the well-known Amati relation.
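A minimal sketch of the principal-component step: mock observable-versus-redshift curves stand in for luminosity-distance data, and the leading eigenvectors of their covariance form the basis onto which each curve is projected ("feature space" coordinates). The two-parameter curve family and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
# mock "observable vs redshift" curves from a hypothetical 2-parameter family
z = np.linspace(0.01, 2.0, 50)
curves = np.array([(1 + a) * z + b * z**2
                   for a, b in rng.normal(0, 0.1, size=(200, 2))])

# principal component analysis via SVD of the mean-subtracted data
mean = curves.mean(axis=0)
U, S, Vt = np.linalg.svd(curves - mean, full_matrices=False)
basis = Vt[:2]                        # leading basis functions
coeffs = (curves - mean) @ basis.T    # feature-space coordinates per curve
print(coeffs.shape)                   # (200, 2)
```

Because the mock family has only two free parameters, two components capture essentially all of the variance; real distance data would need the number of retained components chosen from the singular-value spectrum.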
NASA Astrophysics Data System (ADS)
Pritchard, M. S.; Kooperman, G. J.; Zhao, Z.; Wang, M.; Russell, L. M.; Somerville, R. C.; Ghan, S. J.
2011-12-01
Evaluating the fidelity of new aerosol physics in climate models is confounded by uncertainties in source emissions, systematic error in cloud parameterizations, and inadequate sampling of long-range plume concentrations. To explore the degree to which cloud parameterizations distort aerosol processing and scavenging, the Pacific Northwest National Laboratory (PNNL) Aerosol-Enabled Multi-Scale Modeling Framework (AE-MMF), a superparameterized branch of the Community Atmosphere Model Version 5 (CAM5), is applied to represent the unusually active and well sampled North American wildfire season in 2004. In the AE-MMF approach, the evolution of double moment aerosols in the exterior global resolved scale is linked explicitly to convective statistics harvested from an interior cloud resolving scale. The model is configured in retroactive nudged mode to observationally constrain synoptic meteorology, and Arctic wildfire activity is prescribed at high space/time resolution using data from the Global Fire Emissions Database. Comparisons against standard CAM5 bracket the effect of superparameterization to isolate the role of capturing rainfall intermittency on the bulk characteristics of 2004 Arctic plume transport. Ground based lidar and in situ aircraft wildfire plume constraints from the International Consortium for Atmospheric Research on Transport and Transformation field campaign are used as a baseline for model evaluation.
Warren, Jeffrey M; Hanson, Paul J; Iversen, Colleen M; Kumar, Jitendra; Walker, Anthony P; Wullschleger, Stan D
2015-01-01
There is a wide breadth of root function within ecosystems that should be considered when modeling the terrestrial biosphere. Root structure and function are closely associated with control of plant water and nutrient uptake from the soil; plant carbon (C) assimilation, partitioning, and release to the soils; and control of biogeochemical cycles through interactions within the rhizosphere. Root function is extremely dynamic and dependent on internal plant signals, root traits and morphology, and the physical, chemical, and biotic soil environment. While plant roots have significant structural and functional plasticity in response to changing environmental conditions, their dynamics are noticeably absent from the land component of process-based Earth system models used to simulate global biogeochemical cycling. Their dynamic representation in large-scale models should improve model veracity. Here, we describe current root inclusion in models across scales, ranging from mechanistic processes of single roots to parameterized root processes operating at the landscape scale. With this foundation we discuss how existing and future root functional knowledge, new data compilation efforts, and novel modeling platforms can be leveraged to enhance root functionality in large-scale terrestrial biosphere models by improving parameterization within models and introducing new components such as dynamic root distribution and root functional traits linked to resource extraction.
NASA Astrophysics Data System (ADS)
Roningen, J. M.; Eylander, J. B.
2014-12-01
Groundwater use and management is subject to economic, legal, technical, and informational constraints and incentives at a variety of spatial and temporal scales. Planned and de facto management practices influenced by tax structures, legal frameworks, and agricultural and trade policies that vary at the country scale may have medium- and long-term effects on the ability of a region to support current and projected agricultural and industrial development. USACE is working to explore and develop global-scale, physically-based frameworks to serve as a baseline for hydrologic policy comparisons and consequence assessment, and such frameworks must include a reasonable representation of groundwater systems. To this end, we demonstrate the effects of different subsurface parameterizations, scaling, and meteorological forcings on surface and subsurface components of the Catchment Land Surface Model Fortuna v2.5 (Koster et al. 2000). We use the Land Information System 7 (Kumar et al. 2006) to process model runs using meteorological components of the Air Force Weather Agency's AGRMET forcing data from 2006 through 2011. Seasonal patterns and trends are examined in areas of the Upper Nile basin, northern China, and the Mississippi Valley. We also discuss the relevance of the model's representation of the catchment deficit with respect to local hydrogeologic structures.
NASA Technical Reports Server (NTRS)
Kratz, David P.; Chou, Ming-Dah; Yan, Michael M.-H.
1993-01-01
Fast and accurate parameterizations have been developed for the transmission functions of the CO2 9.4- and 10.4-micron bands, as well as the CFC-11, CFC-12, and CFC-22 bands located in the 8-12-micron region. The parameterizations are based on line-by-line calculations of transmission functions for the CO2 bands and on high spectral resolution laboratory measurements of the absorption coefficients for the CFC bands. Also developed are the parameterizations for the H2O transmission functions for the corresponding spectral bands. Compared to the high-resolution calculations, fluxes at the tropopause computed with the parameterizations are accurate to within 10 percent when overlapping of gas absorptions within a band is taken into account. For individual gas absorption, the accuracy is of order 0-2 percent. The climatic effects of these trace gases have been studied using a zonally averaged multilayer energy balance model, which includes seasonal cycles and a simplified deep ocean. With the trace gas abundances taken to follow the Intergovernmental Panel on Climate Change Low Emissions 'B' scenario, the transient response of the surface temperature is simulated for the period 1900-2060.
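The overlapping-absorption accounting described above is often approximated by multiplying band-averaged transmissions of spectrally uncorrelated absorbers (the random-overlap assumption); a schematic with invented absorption coefficients and absorber amounts:

```python
import math

def band_transmission(k, u):
    """Schematic band-averaged transmission T = exp(-k*u) for a gray
    absorption coefficient k and absorber amount u (both hypothetical)."""
    return math.exp(-k * u)

# random-overlap approximation: total transmission of two gases sharing
# a band is the product of their individual transmissions
t_co2 = band_transmission(k=0.1, u=3.0)
t_h2o = band_transmission(k=0.05, u=10.0)
t_total = t_co2 * t_h2o
print(t_total)
```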
Parameterization of planetary wave breaking in the middle atmosphere
NASA Technical Reports Server (NTRS)
Garcia, Rolando R.
1991-01-01
A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate. The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.
NASA Astrophysics Data System (ADS)
Sivandran, G.; Bisht, G.; Ivanov, V. Y.; Bras, R. L.
2008-12-01
A coupled, dynamic vegetation and hydrologic model, tRIBS+VEGGIE, was applied to the semiarid Walnut Gulch Experimental Watershed in Arizona. The physically-based, distributed nature of the coupled model allows for parameterization and simulation of watershed vegetation-water-energy dynamics on timescales varying from hourly to interannual. The model also allows for explicit spatial representation of processes that vary due to complex topography, such as lateral redistribution of moisture and partitioning of radiation with respect to aspect and slope. Model parameterization and forcing were conducted using readily available databases for topography, soil types, and land use cover, as well as data from the network of meteorological stations located within the Walnut Gulch watershed. To test the performance of the model, three sets of simulations were conducted over an 11-year period from 1997 to 2007. Two simulations focus on heavily instrumented nested watersheds within the Walnut Gulch basin: (i) the Kendall watershed, which is dominated by annual grasses, and (ii) the Lucky Hills watershed, which is dominated by a mixture of deciduous and evergreen shrubs. The third set of simulations covers the entire Walnut Gulch watershed. Model validation and performance were evaluated in three broad categories: (i) energy balance components, for which the network of meteorological stations was used to validate the key energy fluxes; (ii) water balance components, for which the network of flumes, rain gauges, and soil moisture stations installed within the watershed was used to validate how the model partitions moisture; and (iii) vegetation dynamics, for which remote sensing products from MODIS were used to validate spatial and temporal vegetation dynamics. 
Model results demonstrate satisfactory spatial and temporal agreement with observed data, giving confidence that key ecohydrological processes can be adequately represented for future applications of tRIBS+VEGGIE in regional modeling of land-atmosphere interactions.
NASA Astrophysics Data System (ADS)
Knippertz, Peter; Marsham, John H.; Cowie, Sophie; Fiedler, Stephanie; Heinold, Bernd; Jemmett-Smith, Bradley; Pantillon, Florian; Schepanski, Kerstin; Roberts, Alexander; Pope, Richard; Gilkeson, Carl; Hubel, Eva
2016-04-01
Mineral dust plays an important role in the Earth system, but a reliable quantification of the global dust budget is still not possible due to a lack of observations and insufficient representation of relevant processes in climate and weather models. Five years ago, the Desert Storms project funded by the European Research Council set out to reduce these uncertainties. Its aims were to (1) improve the understanding of key meteorological mechanisms of peak wind generation in dust emission regions (particularly in northern Africa), (2) assess their relative importance, (3) evaluate their representation in models, (4) determine model sensitivities with respect to resolution and model physics, and (5) explore the usefulness of new approaches for model improvements. Here we give an overview of the most significant findings: (1) The morning breakdown of nocturnal low-level jets is an important emission mechanism, but details depend crucially on nighttime stability, which is often badly handled by models. (2) Convective cold pools are a key control on summertime dust emission over northern Africa, directly and through their influence on the heat low; they are severely misrepresented by models using parameterized convection. A new scheme based on downdraft mass flux has been developed that can mitigate this problem. (3) Mobile cyclones make a relatively unimportant contribution, except for northeastern Africa in spring. (4) A new global climatology of dust devils identifies local hotspots but suggests a minor contribution to the global dust budget in contrast to previous studies. A new dust-devil parameterization based on data from large-eddy simulations will be presented. (5) The lack of sufficient observations and misrepresentation of physical processes lead to a considerable uncertainty and biases in (re)analysis products. (6) Variations in vegetation-related surface roughness create small-scale wind variability and support long-term dust trends in semi-arid areas.
NASA Astrophysics Data System (ADS)
Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Johnson, J. N.; Bouskill, N.; Hug, L. A.; Thomas, B. C.; Castelle, C. J.; Beller, H. R.; Banfield, J. F.; Steefel, C. I.
2014-12-01
In soils and sediments, microorganisms perform essential ecosystem services through their roles in regulating the stability of carbon, the flux of nutrients, and the purification of water. But these are complex systems in which the physical, chemical, and biological components are all intimately connected. Components of this complexity are gradually being uncovered, and our understanding of the extent of microbial functional diversity in particular has been greatly enhanced by the development of cultivation-independent approaches. However, we have not moved far beyond a descriptive and correlative use of this powerful resource. As the ability to reconstruct thousands of genomes from microbial populations using metagenomic techniques gains momentum, the challenge will be to develop an understanding of how these metabolic blueprints serve to influence the fitness of organisms within these complex systems and how populations emerge and impact the physical and chemical properties of their environment. In this presentation we will discuss the development of a trait-based model of microbial activity that simulates coupled guilds of microorganisms parameterized with traits extracted from large-scale metagenomic data. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor and acceptor reactions to predict the energy available for respiration, biomass development, and exo-enzyme production. Each group within a functional guild is parameterized with a unique combination of traits governing organism fitness under dynamic environmental conditions. This presentation will address our latest developments in the estimation of trait values related to growth rate and the identification and linkage of key fitness traits associated with respiratory and fermentative pathways, macromolecule depolymerization enzymes, and nitrogen fixation from metagenomic data. 
We are testing model sensitivity to initial microbial composition and intra-guild trait variability amongst other parameters and are using this model to explore abiotic controls on community emergence and impact on rates of reactions that contribute to the cycling of carbon across biogeochemical gradients from the soil to the subsurface.
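The thermodynamic coupling of donor-acceptor reactions mentioned above is commonly handled with a rate law of the Jin and Bethke type; the sketch below is one such form under stated assumptions (all parameter values hypothetical, and not the specific rate law of this model): a Monod kinetic term is discounted by a factor that vanishes as the reaction's free-energy yield approaches the minimum needed to conserve energy.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def thermodynamic_factor(dG_rxn, dG_stored=20e3, chi=2, T=298.15):
    """Fraction of the kinetic rate realized for free-energy yield dG_rxn (J/mol,
    negative when favorable); dG_stored is energy conserved per reaction turnover
    and chi the average stoichiometric number (both hypothetical here)."""
    f = 1.0 - math.exp((dG_rxn + dG_stored) / (chi * R * T))
    return max(0.0, f)

def respiration_rate(v_max, biomass, S, K_s, dG_rxn):
    """Monod kinetics on substrate S, discounted by the thermodynamic factor."""
    return v_max * biomass * S / (K_s + S) * thermodynamic_factor(dG_rxn)
```

A strongly exergonic reaction (e.g. -60 kJ/mol) runs at nearly the full kinetic rate, while one yielding only the stored energy stalls entirely, which is how energy availability shapes guild fitness in such models.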
NASA Astrophysics Data System (ADS)
Johnson, M. T.
2010-10-01
The ocean-atmosphere flux of a gas can be calculated from its measured or estimated concentration gradient across the air-sea interface and the transfer velocity (a term representing the conductivity of the layers on either side of the interface with respect to the gas of interest). Traditionally, the transfer velocity has been estimated from empirical relationships with wind speed and then scaled by the Schmidt number of the gas being transferred. Complex, physically based models of transfer velocity (based on more physical forcings than wind speed alone), such as the NOAA COARE algorithm, have more recently been applied to well-studied gases such as carbon dioxide and DMS (although many studies still use the simpler approach for these gases), but there is a lack of validation of such schemes for other, more poorly studied gases. The aim of this paper is to provide a flexible numerical scheme that allows the estimation of transfer velocity for any gas as a function of wind speed, temperature, and salinity, given data on the solubility and liquid molar volume of the particular gas. New and existing parameterizations (including a novel empirical parameterization of the salinity dependence of Henry's law solubility) are brought together into a scheme implemented as a modular, extensible program in the R computing environment, which is available in the supplementary online material accompanying this paper, along with input files containing solubility and structural data for ~90 gases of general interest, enabling the calculation of their total transfer velocities and component parameters. Comparison of the scheme presented here with alternative schemes and methods for calculating air-sea flux parameters shows good agreement in general. It is intended that the various components of this numerical scheme should be applied only in the absence of experimental data providing robust values for parameters for a particular gas of interest.
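The traditional wind-speed-only approach the abstract contrasts with can be written in a few lines; the sketch below uses the widely cited Wanninkhof (1992) quadratic form with the usual Schmidt-number exponent of -1/2, as an illustration of the simpler method rather than of this paper's scheme.

```python
def transfer_velocity(u10, Sc, a=0.31):
    """Water-side gas transfer velocity (cm/h), Wanninkhof (1992)-type form.

    u10 -- wind speed at 10 m (m/s)
    Sc  -- Schmidt number of the gas in seawater (660 is CO2 at 20 C)
    a   -- empirical coefficient of the quadratic wind-speed dependence
    """
    return a * u10**2 * (Sc / 660.0) ** -0.5
```

For CO2 at 20 C (Sc = 660) and a 10 m/s wind this gives 31 cm/h; less diffusive gases (larger Sc) transfer more slowly, which is the Schmidt-number scaling step described in the abstract.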
Development of a land surface model with coupled snow and frozen soil physics
NASA Astrophysics Data System (ADS)
Wang, Lei; Zhou, Jing; Qi, Jia; Sun, Litao; Yang, Kun; Tian, Lide; Lin, Yanluan; Liu, Wenbin; Shrestha, Maheswor; Xue, Yongkang; Koike, Toshio; Ma, Yaoming; Li, Xiuping; Chen, Yingying; Chen, Deliang; Piao, Shilong; Lu, Hui
2017-06-01
Snow and frozen soil are important factors that influence terrestrial water and energy balances through snowpack accumulation and melt and soil freeze-thaw. In this study, a new land surface model (LSM) with coupled snow and frozen soil physics was developed based on a hydrologically improved LSM (HydroSiB2). First, an energy-balance-based three-layer snow model was incorporated into HydroSiB2 (hereafter HydroSiB2-S) to provide an improved description of the internal processes of the snowpack. Second, a universal and simplified soil model was coupled with HydroSiB2-S to depict soil water freezing and thawing (hereafter HydroSiB2-SF). To avoid the instability caused by the uncertainty in estimating water phase changes, enthalpy was adopted as a prognostic variable instead of snow/soil temperature in the energy balance equation of the snow/frozen soil module. The newly developed models were then carefully evaluated at two typical sites on the Tibetan Plateau (TP), one snow-covered and the other snow-free, both with underlying frozen soil. At the snow-covered site in the northeastern TP (DY), HydroSiB2-SF demonstrated significant improvements over HydroSiB2-F (the same as HydroSiB2-SF but using the original single-layer snow module of HydroSiB2), showing the importance of snow internal processes in three-layer snow parameterization. At the snow-free site in the southwestern TP (Ngari), HydroSiB2-SF reasonably simulated soil water phase changes while HydroSiB2-S did not, indicating the crucial role of frozen soil parameterization in depicting soil thermal and water dynamics. Finally, HydroSiB2-SF proved capable of simulating upward moisture fluxes toward the freezing front from the underlying soil layers in winter.
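The advantage of an enthalpy prognostic variable can be seen in a minimal sketch (not the HydroSiB2-SF formulation): given enthalpy, temperature and liquid fraction follow from a simple piecewise inversion with no iteration on the phase-change front. Values here are per unit mass of water, with the soil matrix omitted.

```python
C_ICE, C_WATER = 2100.0, 4186.0   # specific heats, J/(kg K)
L_FUSION = 3.34e5                 # latent heat of fusion, J/kg
T_FREEZE = 273.15                 # K

def enthalpy_to_state(h):
    """Invert specific enthalpy h (J/kg; zero = ice at 0 C) to (T, liquid_fraction)."""
    if h < 0.0:                        # all ice, below the freezing point
        return T_FREEZE + h / C_ICE, 0.0
    if h <= L_FUSION:                  # mixed phase, pinned at the freezing point
        return T_FREEZE, h / L_FUSION
    return T_FREEZE + (h - L_FUSION) / C_WATER, 1.0   # all liquid
```

Because the energy equation is advanced in h, partially frozen layers sit exactly at the freezing point with a well-defined liquid fraction, which is the stability benefit the abstract alludes to.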
NASA Technical Reports Server (NTRS)
Cummings, Kristin A.; Pickering, Kenneth E.; Barth, M.; Weinheimer, A.; Bela, M.; Li, Y.; Allen, D.; Bruning, E.; MacGorman, D.; Rutledge, S.;
2014-01-01
The Deep Convective Clouds and Chemistry (DC3) field campaign in 2012 provided a plethora of aircraft and ground-based observations (e.g., trace gases, lightning and radar) to study deep convective storms, their convective transport of trace gases, and associated lightning occurrence and production of nitrogen oxides (NOx). Based on the measurements taken of the 29-30 May 2012 Oklahoma thunderstorm, an analysis against a Weather Research and Forecasting Chemistry (WRF-Chem) model simulation of the same event at 3-km horizontal resolution was performed. One of the main objectives was to include various flash rate parameterization schemes (FRPSs) in the model and identify which scheme(s) best captured the flash rates observed by the National Lightning Detection Network (NLDN) and Oklahoma Lightning Mapping Array (LMA). The comparison indicates how well the schemes predicted the timing, location, and number of lightning flashes. The FRPSs implemented in the model were based on the simulated thunderstorm's physical features, such as maximum vertical velocity, cloud top height, and updraft volume. Adjustment factors were added to each FRPS to best capture the observed flash trend, and a sensitivity study was performed to compare the range in model-simulated lightning-generated nitrogen oxides (LNOx) produced by each FRPS over the storm's lifetime. Based on the best FRPS, model-simulated LNOx was compared against aircraft-measured NOx. The trace gas analysis, along with the increased detail in the model specification of the vertical distribution of lightning flashes as suggested by the LMA data, provide guidance in determining the scenario of NO production per intracloud and cloud-to-ground flash that best matches the NOx mixing ratios observed by the aircraft.
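A representative example of the FRPS family described above is the Price and Rind (1992) continental cloud-top-height scheme, sketched here with the storm-specific adjustment factor the abstract mentions; this is one member of the family, not necessarily the scheme that performed best in the study.

```python
def flash_rate_cth(cloud_top_km, adjustment=1.0):
    """Price and Rind (1992) continental flash rate (flashes per minute):
    F = 3.44e-5 * H^4.9, for cloud-top height H in km, optionally scaled by a
    storm-specific adjustment factor tuned against observed flash trends."""
    return adjustment * 3.44e-5 * cloud_top_km ** 4.9
```

The steep H^4.9 dependence is why small cloud-top-height errors in the simulation translate into large flash-rate (and hence LNOx) differences between schemes.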
Pal, Sandip
2016-06-01
Convective boundary layer (CBL) turbulence is the key process exchanging heat, momentum, moisture, and trace gases between the earth's surface and the lower troposphere. The turbulence parameterization of the CBL is a challenging but important component in numerical models. In particular, correct estimation of CBL turbulence features, their parameterization, and the determination of the contribution of eddy diffusivity are important for simulating convection initiation and the dispersion of health-hazardous air pollutants and greenhouse gases. In general, measurements of higher-order moments of water vapor mixing ratio (q) variability yield unique estimates of turbulence in the CBL. Using high-resolution lidar-derived profiles of q variance, third-order moment, and skewness, and analyzing concurrent profiles of vertical velocity, potential temperature, and horizontal wind together with time series of near-surface measurements of surface flux and meteorological parameters, a conceptual framework based on a bottom-up approach is proposed here for the first time for a robust characterization of the turbulent structure of the CBL over land, so that our understanding of the processes governing CBL q turbulence can be improved. Finally, principal component analyses will be applied to the lidar-derived long-term data sets of q turbulence statistics to identify the meteorological factors and the dominant physical mechanisms governing CBL turbulence features.
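The higher-order moments referred to above are straightforward to compute from a q time series at a single range gate; the sketch below uses synthetic data (real lidar processing would first remove instrument noise and trends).

```python
import numpy as np

rng = np.random.default_rng(0)
q = 8.0 + 0.5 * rng.standard_normal(6000)   # g/kg, hypothetical mixing-ratio series

qp = q - q.mean()                  # fluctuation q' about the mean
variance = np.mean(qp**2)          # second-order moment
third = np.mean(qp**3)             # third-order moment
skewness = third / variance**1.5   # sign indicates moist-updraft vs dry-downdraft asymmetry
```

In the entrainment zone, positive q skewness typically signals narrow moist updrafts embedded in broader dry air, which is the kind of process information the proposed framework extracts from these statistics.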
Sensitivity of CAM5-simulated Arctic clouds and radiation to ice nucleation parameterization
Xie, Shaocheng; Liu, Xiaohong; Zhao, Chuanfeng; ...
2013-08-06
Sensitivity of Arctic clouds and radiation in the Community Atmosphere Model, version 5, to the ice nucleation process is examined by testing a new physically based ice nucleation scheme that links the variation of ice nuclei (IN) number concentration to aerosol properties. The default scheme parameterizes the IN concentration simply as a function of ice supersaturation. The new scheme leads to a significant reduction in simulated IN concentration at all latitudes, while changes in cloud amounts and properties are mainly seen in high- and midlatitude storm tracks. In the Arctic, there is a considerable increase in midlevel clouds and a decrease in low-level clouds, which result from the complex interaction among cloud macrophysics, microphysics, and the large-scale environment. The smaller IN concentrations result in an increase in liquid water path and a decrease in ice water path, caused by the slowdown of the Bergeron-Findeisen process in mixed-phase clouds. Overall, there is an increase in the optical depth of Arctic clouds, which leads to a stronger cloud radiative forcing (net cooling) at the top of the atmosphere. The comparison with satellite data shows that the new scheme slightly improves low-level cloud simulations over most of the Arctic but produces too many midlevel clouds. Considerable improvements are seen in the simulated low-level clouds and their properties when compared with Arctic ground-based measurements. Issues with the observations and the model-observation comparison in the Arctic region are also discussed.
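A supersaturation-only IN scheme of the kind described as the default is typified by the Meyers et al. (1992) deposition/condensation-freezing fit, shown here as an illustration (the coefficients are from the published Meyers fit; whether CAM5's default matches it exactly is not asserted by the abstract).

```python
import math

def meyers_in_concentration(si_percent):
    """Ice nuclei number concentration (per liter of air) as a function of
    supersaturation with respect to ice (in percent), Meyers et al. (1992) fit:
    N = exp(-0.639 + 0.1296 * Si)."""
    return math.exp(-0.639 + 0.1296 * si_percent)
```

Because N depends only on supersaturation, the scheme is blind to aerosol loading, which is precisely the limitation the new aerosol-linked scheme in the abstract addresses.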
NASA Astrophysics Data System (ADS)
Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.
A systematic improvement of parametric quantum methods (PQM) is performed by considering: (a) a new application of the parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPF) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized, and a comparison with MOPAC/2007 (PM6) and MINDO/SR was performed for a set of molecules with C-C, C-H, and H-H bonds. Results showed better accuracy than MINDO/SR and MOPAC/2007 for the selected trial set of molecules.
A general multiscroll Lorenz system family and its realization via digital signal processors.
Yu, Simin; Lü, Jinhu; Tang, Wallace K S; Chen, Guanrong
2006-09-01
This paper proposes a general multiscroll Lorenz system family by introducing a novel parameterized nth-order polynomial transformation. Some basic dynamical behaviors of this general multiscroll Lorenz system family are then investigated, including bifurcations, maximum Lyapunov exponents, and parameter regions. Furthermore, the general multiscroll Lorenz attractors are physically verified by using digital signal processors.
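For orientation, here is the base Lorenz system integrated with RK4; multiscroll families of the kind described are typically obtained by passing a state variable through a stairstep-like polynomial before it enters the cross-coupling terms, but that transformation is paper-specific and not reproduced in this sketch.

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # classic Lorenz parameters

def lorenz(state):
    """Right-hand side of the classic Lorenz system."""
    x, y, z = state
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def rk4_step(state, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])
traj = [state]
for _ in range(5000):
    state = rk4_step(state)
    traj.append(state)
traj = np.array(traj)
```

The trajectory remains bounded on the familiar two-scroll attractor; replacing, say, x with a parameterized polynomial f(x) in the coupling terms is what multiplies the scroll count in the family the paper constructs.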
Shiyuan Zhong; Xiuping Li; Xindi Bian; Warren E. Heilman; L. Ruby Leung; William I. Gustafson Jr.
2012-01-01
The performance of regional climate simulations is evaluated for the Great Lakes region. Three 10-year (1990-1999) current-climate simulations are performed using the MM5 regional climate model (RCM) with 36-km horizontal resolution. The simulations employed identical configuration and physical parameterizations, but different lateral boundary conditions and sea-...
New Layer Thickness Parameterization of Diffusive Convection
NASA Astrophysics Data System (ADS)
Zhou, Sheng-Qi; Lu, Yuan-Zheng; Guo, Shuang-Xi; Song, Xue-Long; Qu, Ling; Cen, Xian-Rong; Fer, Ilker
2017-11-01
Double-diffusive convection is one of the most important non-mechanically driven mixing processes. Its importance has been particularly recognized in oceanography, materials science, geology, and planetary physics. Double diffusion occurs in a fluid in which there are gradients of two (or more) properties with different molecular diffusivities and opposing effects on the vertical density distribution. It has two primary modes: salt fingering and diffusive convection. Recently, diffusive convection has attracted increasing interest due to its impact on diapycnal mixing in the ocean interior and on ice melting in the Arctic and Antarctic Oceans. In our recent work, we constructed a length scale of the energy-containing eddies and proposed a new layer thickness parameterization of diffusive convection using laboratory experiments and in situ observations in lakes and oceans. The new parameterization describes well both the laboratory convecting-layer thicknesses (0.01-0.1 m) and those observed in oceans and lakes (0.1-1000 m). This work was supported by China NSF Grants (41476167, 41406035 and 41176027), NSF of Guangdong Province, China (2016A030311042), and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA11030302).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saenz, Juan A.; Chen, Qingshan; Ringler, Todd
Recent work has shown that taking the thickness-weighted average (TWA) of the Boussinesq equations in buoyancy coordinates results in exact equations governing the prognostic residual mean flow, where eddy-mean flow interactions appear in the horizontal momentum equations as the divergence of the Eliassen-Palm flux tensor (EPFT). It has been proposed that, given the mathematical tractability of the TWA equations, the physical interpretation of the EPFT, and its relation to potential vorticity fluxes, the TWA is an appropriate framework for modeling ocean circulation with parameterized eddies. The authors test the feasibility of this proposition and investigate the connections between the TWA framework and the conventional framework used in models, where Eulerian mean flow prognostic variables are solved for. Using the TWA framework as a starting point, this study explores the well-known connections between vertical transfer of horizontal momentum by eddy form drag and eddy overturning by the bolus velocity, used by Greatbatch and Lamb and by Gent and McWilliams to parameterize eddies. After implementing the TWA framework in an ocean general circulation model, we verify our analysis by comparing the flows in an idealized Southern Ocean configuration simulated using the TWA and conventional frameworks with the same mesoscale eddy parameterization.
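The Gent and McWilliams bolus overturning referenced above can be sketched in two dimensions: the eddy-induced streamfunction is psi = kappa * s, with s the isopycnal slope -b_x / b_z, and the bolus velocities follow by differentiating psi. The grid and buoyancy field below are idealized, not the Southern Ocean configuration of the study.

```python
import numpy as np

nx, nz = 64, 32
x = np.linspace(0.0, 1.0e6, nx)            # zonal coordinate (m)
z = np.linspace(-1.0e3, 0.0, nz)           # depth (m)
X, Z = np.meshgrid(x, z, indexing="ij")

# Idealized buoyancy: stable stratification plus a uniform horizontal tilt.
b = 1e-2 * (Z / 1e3) + 1e-3 * (X / 1e6)
kappa = 1000.0                             # eddy thickness diffusivity (m^2/s)

b_x = np.gradient(b, x, axis=0)
b_z = np.gradient(b, z, axis=1)
slope = -b_x / b_z                         # isopycnal slope
psi = kappa * slope                        # eddy-induced (bolus) streamfunction

u_bolus = -np.gradient(psi, z, axis=1)     # eddy-induced horizontal velocity
w_bolus = np.gradient(psi, x, axis=0)      # eddy-induced vertical velocity
```

For this linear buoyancy field the slope, and hence psi, is spatially uniform, so the bolus velocities vanish; spatially varying slopes are what drive the eddy overturning that the TWA form-drag terms encode instead.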
Simulation of the Atmospheric Boundary Layer for Wind Energy Applications
NASA Astrophysics Data System (ADS)
Marjanovic, Nikola
Energy production from wind is an increasingly important component of overall global power generation, and will likely continue to gain an even greater share of electricity production as world governments attempt to mitigate climate change and wind energy production costs decrease. Wind energy generation depends on wind speed, which is greatly influenced by local and synoptic environmental forcings. Synoptic forcing, such as a cold frontal passage, exists on a large spatial scale, while local forcing manifests itself on a much smaller scale and can result from topographic effects or land-surface heat fluxes. Synoptic forcing, if strong enough, may suppress the effects of generally weaker local forcing. At the even smaller scale of a wind farm, upstream turbines generate wakes that decrease the wind speed and increase the atmospheric turbulence at the downwind turbines, thereby reducing power production and increasing fatigue loading that may damage turbine components. Simulation of atmospheric processes that span a considerable range of spatial and temporal scales is essential to improve wind energy forecasting, wind turbine siting, turbine maintenance scheduling, and wind turbine design. Mesoscale atmospheric models predict atmospheric conditions using observed data, for a wide range of meteorological applications across scales from thousands of kilometers to hundreds of meters. Mesoscale models include parameterizations for the major atmospheric physical processes that modulate wind speed and turbulence dynamics, such as cloud evolution and surface-atmosphere interactions. The Weather Research and Forecasting (WRF) model is used in this dissertation to investigate the effects of model parameters on wind energy forecasting. WRF is used for case-study simulations at two West Coast North American wind farms, one with simple and one with complex terrain, during both synoptically driven and locally driven weather events. 
The model's performance with different grid nesting configurations, turbulence closures, and grid resolutions is evaluated by comparison to observation data. Improvement to simulation results from the use of more computationally expensive high resolution simulations is only found for the complex terrain simulation during the locally-driven event. Physical parameters, such as soil moisture, have a large effect on locally-forced events, and prognostic turbulence kinetic energy (TKE) schemes are found to perform better than non-local eddy viscosity turbulence closure schemes. Mesoscale models, however, do not resolve turbulence directly, which is important at finer grid resolutions capable of resolving wind turbine components and their interactions with atmospheric turbulence. Large-eddy simulation (LES) is a numerical approach that resolves the largest scales of turbulence directly by separating large-scale, energetically important eddies from smaller scales with the application of a spatial filter. LES allows higher fidelity representation of the wind speed and turbulence intensity at the scale of a wind turbine which parameterizations have difficulty representing. Use of high-resolution LES enables the implementation of more sophisticated wind turbine parameterizations to create a robust model for wind energy applications using grid spacing small enough to resolve individual elements of a turbine such as its rotor blades or rotation area. Generalized actuator disk (GAD) and line (GAL) parameterizations are integrated into WRF to complement its real-world weather modeling capabilities and better represent wind turbine airflow interactions, including wake effects. The GAD parameterization represents the wind turbine as a two-dimensional disk resulting from the rotation of the turbine blades. Forces on the atmosphere are computed along each blade and distributed over rotating, annular rings intersecting the disk. 
While typical LES resolution (10-20 m) is normally sufficient to resolve the GAD, the GAL parameterization requires significantly higher resolution (1-3 m), as it does not distribute the forces from the blades over annular elements but applies them along lines representing individual blades. In this dissertation, the GAL is implemented into WRF and evaluated against the GAD parameterization using data from two field campaigns that measured the inflow and near-wake regions of a single turbine. The datasets are chosen to allow validation under the weakly convective and weakly stable conditions characterizing most turbine operations. The parameterizations are evaluated with respect to their ability to represent wake wind speed, variance, and vorticity by comparing fine-resolution GAD and GAL simulations along with coarse-resolution GAD simulations. Coarse-resolution GAD simulations produce aggregated wake characteristics similar to both fine-resolution GAD and GAL simulations (saving on computational cost), while the GAL parameterization enables resolution of near-wake physics (such as vorticity shedding and wake expansion) for high-fidelity applications. (Abstract shortened by ProQuest.)
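The core of the actuator disk idea can be shown in a minimal sketch: the turbine is represented by a thrust force spread over the rotor disk area rather than by resolved blades. The thrust coefficient, inflow speed, and the uniform per-cell distribution below are illustrative assumptions, not the dissertation's GAD implementation, which weights forces along rotating annular rings.

```python
import math

def disk_thrust(u_inf, rotor_diameter, c_t=0.8, rho=1.225):
    """Total thrust (N) exerted on the flow by the rotor disk:
    T = 0.5 * rho * A * C_T * U^2, with A the swept area."""
    area = math.pi * (rotor_diameter / 2.0) ** 2
    return 0.5 * rho * area * c_t * u_inf**2

def body_force_per_cell(u_inf, rotor_diameter, n_cells, **kw):
    """Uniform per-cell body force when the disk intersects n_cells grid cells
    (a simplification of the annular-ring weighting described in the text)."""
    return disk_thrust(u_inf, rotor_diameter, **kw) / n_cells
```

Applying the per-cell force with opposite sign to the momentum equation in each intersected LES cell produces the velocity deficit and added turbulence that constitute the simulated wake.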
Development and evaluation of a physics-based windblown ...
A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of this scheme, however, is the incorporation of a newly developed dynamic relation for the surface roughness length relevant to small-scale dust generation processes. Through this implementation, the effect of nonerodible elements on the local flow acceleration, drag partitioning, and surface coverage protection is modeled in a physically based and consistent manner. Careful attention is paid in integrating the new windblown dust treatment in the CMAQ model to ensure that the required input parameters are correctly configured. To test the performance of the new dust module in CMAQ, the entire year 2011 is simulated for the continental United States, with particular emphasis on the southwestern United States (SWUS) where windblown dust concentrations are relatively large. Overall, the model shows good performance with the daily mean bias of soil concentrations fluctuating in the range of ±1 µg m−3 for the entire year. Springtime soil concentrations are in quite good agreement (normalized mean bias of 8.3%) with observations, while moderate to high underestimation of soil concentration is seen in the summertime. The latter is attributed to the issue of representing the convective dust sto
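The saltation core common to physics-based windblown dust schemes of this kind can be sketched as follows. This is a hedged illustration of a White (1979)-type flux law with illustrative constants; the dynamic roughness-length and drag-partition corrections that are the novel part of the scheme above are not reproduced.

```python
RHO_AIR = 1.225   # air density, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def horizontal_saltation_flux(u_star, u_star_t, c=2.61):
    """Streamwise saltation flux Q (kg/m/s); zero below the threshold friction
    velocity u_star_t, which is where drag partitioning by nonerodible
    elements enters schemes like the one described above."""
    if u_star <= u_star_t:
        return 0.0
    r = u_star_t / u_star
    return c * (RHO_AIR / G) * u_star**3 * (1.0 - r**2) * (1.0 + r)
```

The cubic dependence on friction velocity is why accurate surface roughness, which sets both u_star and the threshold, dominates the emission uncertainty the abstract discusses.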
NASA Astrophysics Data System (ADS)
Mitchell, D. L.
2006-12-01
Sometimes deep physical insights can be gained through the comparison of two theories of light scattering. Comparing van de Hulst's anomalous diffraction approximation (ADA) with Mie theory yielded insights into the behavior of the photon tunneling process that resulted in the modified anomalous diffraction approximation (MADA). (Tunneling is the process by which radiation just beyond a particle's physical cross-section may undergo large-angle diffraction or absorption, contributing up to 40% of the absorption when wavelength and particle size are comparable.) Although this provided a means of parameterizing the tunneling process in terms of the real index of refraction and size parameter, it did not predict the efficiency of the tunneling process, where an efficiency of 100% is predicted for spheres by Mie theory. This tunneling efficiency, Tf, depends on particle shape and ranges from 0 to 1.0, with 1.0 corresponding to spheres. Similarly, by comparing absorption efficiencies predicted by the Finite Difference Time Domain method (FDTD) with efficiencies predicted by MADA, Tf was determined for nine different ice particle shapes, including aggregates. This comparison confirmed that Tf is a strong function of ice crystal shape, including the aspect ratio when applicable. Tf was lowest (< 0.36) for aggregates and plates, and largest (> 0.9) for quasi-spherical shapes. A parameterization of Tf was developed in terms of (1) ice particle shape and (2) mean particle size regarding the large mode (D > 70 μm) of the ice particle size distribution. For the small mode, Tf is only a function of ice particle shape. When this Tf parameterization is used in MADA, absorption and extinction efficiency differences between MADA and FDTD are within 14% over the terrestrial wavelength range 3-100 μm for all size distributions and most crystal shapes likely to be found in cirrus clouds. Using hyperspectral radiances, it is demonstrated that Tf can be retrieved from ice clouds.
Since Tf is a function of ice particle shape, this may provide a means of retrieving qualitative information on ice particle shape.
NASA Astrophysics Data System (ADS)
Hall, Carlton Raden
A major objective of remote sensing is determination of the biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(lambda,m-1), diffuse backscatter b(lambda,m-1), beam attenuation alpha(lambda,m-1), and beam-to-diffuse conversion c(lambda,m-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical, and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization.
The LVCI is based on angle-adjusted leaf thickness Ltadj, LAI, and h (m). Its function is to translate leaf-level estimates of diffuse absorption and backscatter to the canopy scale, allowing the leaf optical properties to directly influence above-canopy estimates of reflectance. The model was successfully modified and parameterized to operate in canopy-scale and leaf-scale modes. Canopy-scale model simulations produced the best results. Simulations based on leaf-derived coefficients produced calculated above-canopy reflectance errors of 15% to 18%. A comprehensive sensitivity analysis indicated the most important parameters were beam-to-diffuse conversion c(lambda, m-1), diffuse absorption a(lambda, m-1), diffuse backscatter b(lambda, m-1), h (m), Q, and direct and diffuse irradiance. Sources of error include the estimation procedure for the direct beam-to-diffuse conversion and attenuation coefficients and other field and laboratory measurement and analysis errors. Applications of the model include creation of synthetic reflectance data sets for remote sensing algorithm development, simulation of stress and drought effects on vegetation reflectance signatures, and the potential to estimate leaf moisture and chemical status.
Leaf chlorophyll constraint on model simulated gross primary productivity in agricultural systems
NASA Astrophysics Data System (ADS)
Houborg, Rasmus; McCabe, Matthew F.; Cescatti, Alessandro; Gitelson, Anatoly A.
2015-12-01
Leaf chlorophyll content (Chll) may serve as an observational proxy for the maximum rate of carboxylation (Vmax), which describes leaf photosynthetic capacity and represents the single most important control on modeled leaf photosynthesis within most Terrestrial Biosphere Models (TBMs). The parameterization of Vmax is associated with great uncertainty as it can vary significantly between plants and in response to changes in leaf nitrogen (N) availability, plant phenology and environmental conditions. Houborg et al. (2013) outlined a semi-mechanistic relationship between Vmax25 (Vmax normalized to 25 °C) and Chll based on inter-linkages between Vmax25, Rubisco enzyme kinetics, N and Chll. Here, these relationships are parameterized for a wider range of important agricultural crops and embedded within the leaf photosynthesis-conductance scheme of the Community Land Model (CLM), bypassing the questionable use of temporally invariant and broadly defined plant functional type (PFT) specific Vmax25 values. In this study, the new Chll-constrained version of CLM is refined with an updated parameterization scheme for specific application to soybean and maize. The benefit of using in-situ measured and satellite-retrieved Chll for constraining model simulations of Gross Primary Productivity (GPP) is evaluated over fields in central Nebraska, U.S.A., between 2001 and 2005. Landsat-based Chll time-series records derived from the Regularized Canopy Reflectance model (REGFLEC) are used as forcing to the CLM. Validation of simulated GPP against 15 site-years of flux tower observations demonstrates the utility of Chll as a model constraint, with the coefficient of efficiency increasing from 0.91 to 0.94 and from 0.87 to 0.91 for maize and soybean, respectively. Model performance particularly improves during the late reproductive and senescence stages, where the largest temporal variations in Chll (averaging 35-55 μg cm-2 for maize and 20-35 μg cm-2 for soybean) are observed.
While prolonged periods of vegetation stress did not occur over the studied fields, given the usefulness of Chll as an indicator of plant health, enhanced GPP predictabilities should be expected in fields exposed to longer periods of moisture and nutrient stress. While the results support the use of Chll as an observational proxy for Vmax25, future work needs to be directed towards improving the Chll retrieval accuracy from space observations and developing consistent and physically realistic modeling schemes that can be parameterized with acceptable accuracy over spatial and temporal domains.
Gas transfer under high wind and its dependence on wave breaking and sea state
NASA Astrophysics Data System (ADS)
Brumer, Sophia; Zappa, Christopher; Fairall, Christopher; Blomquist, Byron; Brooks, Ian; Yang, Mingxi
2016-04-01
Quantifying greenhouse gas fluxes on regional and global scales relies on parameterizations of the gas transfer velocity K. To first order, K is dictated by wind speed (U) and is typically parameterized as a non-linear function of U. There is, however, a large spread in the K predicted by traditional parameterizations at high wind speed. This is because a large variety of environmental forcings and processes (wind, currents, rain, waves, breaking, surfactants, fetch) actually influence K, and wind speed alone cannot capture the variability of air-water gas exchange. At high wind speed especially, breaking waves become a key factor to take into account when estimating gas fluxes. The High Wind Gas exchange Study (HiWinGS) presents a unique opportunity to gain new insights into this poorly understood aspect of air-sea interaction under high winds. The HiWinGS cruise took place in the North Atlantic during October and November 2013. Wind speeds exceeded 15 m s-1 25% of the time, including 48 hrs with U10 > 20 m s-1. Continuous measurements of turbulent fluxes of heat, momentum, and gas (CO2, DMS, acetone and methanol) were taken from the bow of the R/V Knorr. The wave field was sampled by a wave rider buoy, and breaking events were tracked in visible imagery acquired from the port and starboard sides of the flying bridge during daylight hours at 20 Hz. Taking advantage of the range of physical forcing and wave conditions sampled during HiWinGS, we test existing parameterizations and explore ways of better constraining K based on whitecap coverage, sea state and breaking statistics, contrasting pure windseas with swell-dominated periods. We distinguish between windseas and swell based on a separation algorithm applied to directional wave spectra; for mixed seas, system alignment is considered when interpreting the results. The four gases sampled during HiWinGS ranged from being mostly waterside controlled to almost entirely airside controlled.
While bubble-mediated transfer appears to be small for moderately soluble gases like DMS, the importance of wave-breaking turbulence transport has yet to be determined for all gases regardless of their solubility. This will be addressed by correlating measured K to estimates of active whitecap fraction (WA) and turbulent kinetic energy dissipation rate (ɛ). WA and ɛ are estimated from moments of the breaking crest length distribution derived from the imagery, focusing on young seas, when it is likely that large-scale breaking waves (i.e., whitecapping) will dominate ɛ.
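To make the spread concrete: two wind-speed-only forms of the transfer velocity, one quadratic and one cubic in U10, agree to within a factor of about two at moderate winds but diverge strongly above 15 m s-1. A minimal sketch; the coefficients are illustrative textbook-style values (cm hr-1 at Schmidt number 660), not results of this study:

```python
# Two commonly used wind-speed-only parameterizations of the gas
# transfer velocity k (cm/hr) at Schmidt number 660; the exact
# coefficients vary between studies and are illustrative here.
def k_quadratic(u10, a=0.31):
    return a * u10**2           # Wanninkhof (1992)-type quadratic form

def k_cubic(u10, b=0.0283):
    return b * u10**3           # Wanninkhof & McGillis (1999)-type cubic form

for u10 in (5.0, 10.0, 20.0):
    kq, kc = k_quadratic(u10), k_cubic(u10)
    print(f"U10={u10:5.1f} m/s  quadratic={kq:7.1f}  cubic={kc:7.1f}  ratio={kc/kq:4.2f}")
```

At U10 = 20 m s-1 the two forms already differ by roughly 80%, which is exactly the regime HiWinGS targeted.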
Subgrid-scale parameterization and low-frequency variability: a response theory approach
NASA Astrophysics Data System (ADS)
Demaeyer, Jonathan; Vannitsem, Stéphane
2016-04-01
Weather and climate models are limited in the range of spatial and temporal scales they can resolve. However, due to the huge space- and time-scale ranges involved in Earth System dynamics, the effects of many sub-grid processes must be parameterized. These parameterizations have an impact on forecasts and projections. They can also affect the low-frequency variability present in the system (such as that associated with ENSO or NAO). An important question is therefore what impact stochastic parameterizations have on the low-frequency variability generated by the system and its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on Ruelle's response theory, proposed in Wouters and Lucarini (2012). We test this approach in the context of a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), in which a part of the atmospheric modes is considered as unresolved. A natural separation of the phase space into a slow invariant set and its fast complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, fluctuation and long-memory terms. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach to scale separation opens new avenues for subgrid-scale parameterizations in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03003.
Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav
2009-05-01
The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which EEM parameters A(i), B(i), and the adjusting factor kappa are obtained, this approach can be used to calculate the average electronegativity and charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used the methodology that was recently successfully applied to EEM parameterization for HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for already parameterized elements, specifically C, H, N, O, and F. Moreover, we have also developed EEM parameters for S, Br, Cl, and Zn, which had not yet been parameterized for this level of theory and basis set. In the case of HF/6-31G* MK charges, we have developed EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, none of which had been parameterized for this level of theory and basis set so far. The obtained EEM parameters were verified by a previously developed validation procedure and used for charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges.
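For readers unfamiliar with EEM, the charge calculation itself is a small linear solve: all effective atomic electronegativities are set equal, subject to a total-charge constraint. A minimal sketch with invented parameters (the A, B, kappa, and geometry below are illustrative numbers, not the fitted values of this work):

```python
import numpy as np

# Minimal EEM solver: equalize effective electronegativities
#   A_i + B_i*q_i + kappa * sum_{j!=i} q_j / R_ij = chi_bar  for all i,
# subject to sum_i q_i = Q. Unknowns: the charges q and chi_bar.
def eem_charges(A, B, R, kappa=0.5, Q=0.0):
    n = len(A)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if j != i:
                M[i, j] = kappa / R[i, j]
        M[i, n] = -1.0            # coefficient of -chi_bar
        rhs[i] = -A[i]
    M[n, :n] = 1.0                # total-charge constraint row
    rhs[n] = Q
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]        # charges, equalized electronegativity

# Hypothetical two-atom "molecule", atoms 1.5 bohr apart
A = np.array([5.0, 8.0])
B = np.array([9.0, 11.0])
R = np.array([[0.0, 1.5], [1.5, 0.0]])
q, chi = eem_charges(A, B, R)
print(q, chi)   # charges sum to ~0; the atom with larger A ends up negative
```

The fitting task of the paper is then to choose A(i), B(i), and kappa so that the q produced by this solve reproduce reference ab initio charges over a training set.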
Coupled fvGCM-GCE Modeling System, TRMM Latent Heating and Cloud Library
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2004-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. A seed fund is available at NASA Goddard to build an MMF based on the 2D GCE model and the Goddard finite volume general circulation model (fvGCM). A prototype MMF will be developed by the end of 2004 and production runs will be conducted at the beginning of 2005. The purpose of this proposal is to augment the current Goddard MMF and other cloud modeling activities. In this talk, I will present: (1) a summary of the second Cloud Modeling Workshop that took place at NASA Goddard, (2) a summary of the third TRMM Latent Heating Workshop that took place at Nara, Japan, (3) a brief discussion of the Goddard research plan for using the Weather Research and Forecasting (WRF) model, and (4) a brief discussion of using the GCE model to develop a global cloud simulator.
Coupled fvGCM-GCE Modeling System: TRMM Latent Heating and Cloud Library
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2005-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. A seed fund is available at NASA Goddard to build an MMF based on the 2D GCE model and the Goddard finite volume general circulation model (fvGCM). A prototype MMF will be developed by the end of 2004 and production runs will be conducted at the beginning of 2005. The purpose of this proposal is to augment the current Goddard MMF and other cloud modeling activities. In this talk, I will present: (1) a summary of the second Cloud Modeling Workshop that took place at NASA Goddard, (2) a summary of the third TRMM Latent Heating Workshop that took place at Nara, Japan, and (3) a brief discussion of using the GCE model to develop a global cloud simulator.
Coupled fvGCM-GCE Modeling System, 3D Cloud-Resolving Model and Cloud Library
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2005-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. A seed fund is available at NASA Goddard to build an MMF based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM). A prototype MMF is being developed and production runs will be conducted at the beginning of 2005. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes, (2) the Goddard MMF and the major differences between the two existing MMFs (CSU MMF and Goddard MMF), (3) a cloud library generated by the Goddard MMF and the 3D GCE model, and (4) a brief discussion of using the GCE model to develop a global cloud simulator.
NASA Astrophysics Data System (ADS)
Chen, Ying; Wolke, Ralf; Ran, Liang; Birmili, Wolfram; Spindler, Gerald; Schröder, Wolfram; Su, Hang; Cheng, Yafang; Tegen, Ina; Wiedensohler, Alfred
2018-01-01
The heterogeneous hydrolysis of N2O5 on the surface of deliquescent aerosol leads to HNO3 formation and acts as a major sink of NOx in the atmosphere during night-time. The reaction constant of this heterogeneous hydrolysis is determined by temperature (T), relative humidity (RH), aerosol particle composition, and the surface area concentration (S). However, these parameters were not comprehensively considered in the parameterization of the heterogeneous hydrolysis of N2O5 in previous mass-based 3-D aerosol modelling studies. In this investigation, we propose a sophisticated parameterization (NewN2O5) of N2O5 heterogeneous hydrolysis with respect to T, RH, aerosol particle compositions, and S based on laboratory experiments. We evaluated closure between NewN2O5 and a state-of-the-art parameterization based on a sectional aerosol treatment. The comparison showed a good linear relationship (R = 0.91) between these two parameterizations. NewN2O5 was incorporated into a 3-D fully online coupled model, COSMO-MUSCAT, with the mass-based aerosol treatment. As a case study, we used the data from the HOPE Melpitz campaign (10-25 September 2013) to validate model performance. Here, we investigated the improvement of nitrate prediction over western and central Europe. The modelled particulate nitrate mass concentrations ([NO3-]) were validated by filter measurements over Germany (Neuglobsow, Schmücke, Zingst, and Melpitz). The modelled [NO3-] was significantly overestimated for this period by a factor of 5-19, with the corrected NH3 emissions (reduced by 50 %) and the original parameterization of N2O5 heterogeneous hydrolysis. The NewN2O5 significantly reduces the overestimation of [NO3-] by ˜ 35 %. Particularly, the overestimation factor was reduced to approximately 1.4 in our case study (12, 17-18 and 25 September 2013) when [NO3-] was dominated by local chemical formations. 
In our case, the suppression of organic coating was negligible over western and central Europe, with an influence on [NO3-] of less than 2 % on average and 20 % at the most significant moment. To obtain a significant impact of the organic coating effect, N2O5, SOA, and NH3 need to be present when RH is high and T is low. However, those conditions were rarely fulfilled simultaneously over western and central Europe. Hence, the organic coating effect on the reaction probability of N2O5 may not be as significant as expected over western and central Europe.
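For context, once a parameterization supplies the uptake coefficient γ (the quantity that carries the T, RH, and composition dependence discussed above), the first-order N2O5 loss rate follows the standard free-molecular form k = γ·c̄·S/4, with c̄ the mean molecular speed. A sketch with an assumed γ and surface area concentration:

```python
import math

# First-order loss rate of N2O5 by heterogeneous hydrolysis,
#   k = gamma * c_mean * S / 4.
# gamma below is an assumed uptake coefficient; in the parameterizations
# discussed above it depends on T, RH, and particle composition.
R = 8.314          # gas constant, J mol-1 K-1
M_N2O5 = 0.108     # molar mass of N2O5, kg mol-1

def mean_speed(T):
    """Mean molecular speed of N2O5 (m/s) at temperature T (K)."""
    return math.sqrt(8.0 * R * T / (math.pi * M_N2O5))

def k_het(gamma, T, S):
    """gamma: uptake coefficient (-), T: K, S: aerosol surface area (m2/m3)."""
    return 0.25 * gamma * mean_speed(T) * S    # s^-1

k = k_het(gamma=0.02, T=288.0, S=2e-4)         # S of 2e-4 m2/m3 = 200 um2/cm3
print(f"c = {mean_speed(288.0):.0f} m/s, k = {k:.2e} s^-1, "
      f"lifetime ~ {1.0 / k / 60.0:.0f} min")
```

The whole effect of a scheme such as NewN2O5 enters through gamma, so modest changes in the assumed composition dependence translate directly into the nitrate formation rates evaluated above.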
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
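The first concept, parameterizing perturbations of a baseline grid rather than the geometry itself, can be sketched in a few lines. The smooth bump basis below is an illustrative stand-in for the soft-object animation algorithms of the paper; the point is that the deformation applies to any point set, independent of grid topology:

```python
import numpy as np

# Perturbation-based shape parameterization: design variables scale
# smooth basis "deformations" added to a baseline grid, so CFD and FEM
# grids are treated identically. Hicks-Henne-style bumps are used here
# purely for illustration.
def bump(x, loc, width=3.0):
    """Smooth bump on [0, 1] peaking at x = loc, zero at the ends."""
    return np.sin(np.pi * x ** (np.log(0.5) / np.log(loc))) ** width

def deform(points, design_vars, locs):
    """points: (n, 2) baseline grid; perturb the second coordinate only."""
    new = points.copy()
    for a, loc in zip(design_vars, locs):
        new[:, 1] += a * bump(points[:, 0], loc)
    return new

baseline = np.column_stack([np.linspace(0.0, 1.0, 11), np.zeros(11)])
deformed = deform(baseline, design_vars=[0.05, -0.02], locs=[0.3, 0.7])
print(deformed[:, 1])
# Sensitivities d(shape)/d(a_k) are simply bump(x, loc_k), available
# analytically for gradient-based optimization, as the abstract notes.
```

Because only the perturbation is parameterized, the same two design variables deform a CFD surface mesh or a structural FEM grid without any knowledge of their connectivity.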
Actinide electronic structure and atomic forces
NASA Astrophysics Data System (ADS)
Albers, R. C.; Rudin, Sven P.; Trinkle, Dallas R.; Jones, M. D.
2000-07-01
We have developed a new method[1] of fitting tight-binding parameterizations based on functional forms developed at the Naval Research Laboratory.[2] We have applied these methods to actinide metals and report our success using them (see below). The fitting procedure uses first-principles local-density-approximation (LDA) linear augmented plane-wave (LAPW) band-structure techniques[3] to first calculate the electronic band structure and total energy for fcc, bcc, and simple cubic crystal structures for the actinide of interest. The tight-binding parameterization is then chosen to fit the detailed energy eigenvalues of the bands along symmetry directions, and the symmetry of the parameterization is constrained to agree with the correct symmetry of the LDA band structure at each eigenvalue and k-vector that is fitted. By fitting to a range of different volumes and the three different crystal structures, we find that the resulting parameterization is robust and appears to accurately calculate other crystal structures and properties of interest.
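The fitting step can be illustrated with a toy one-band model: choose tight-binding parameters so that model eigenvalues match reference band energies along a symmetry line. Real fits use multi-orbital Slater-Koster forms across several volumes and structures; this sketch only shows the shape of the problem, with synthetic reference data standing in for the LAPW eigenvalues:

```python
import numpy as np

# Toy fit: the one-band model E(k) = eps0 - 2*t*cos(k) is linear in its
# parameters (on-site energy eps0, hopping t), so a least-squares solve
# recovers them from reference band energies along a symmetry line.
kpts = np.linspace(0.0, np.pi, 25)              # k-points along the line
e_ref = 1.3 - 2.0 * 0.45 * np.cos(kpts)         # synthetic "LDA" band
e_ref = e_ref + 0.01 * np.sin(3.0 * kpts)       # small non-TB wiggle

# Design matrix for [eps0, t]: E = eps0 * 1 + t * (-2 cos k)
A = np.column_stack([np.ones_like(kpts), -2.0 * np.cos(kpts)])
(eps0, t), *_ = np.linalg.lstsq(A, e_ref, rcond=None)
print(f"eps0 = {eps0:.3f}, t = {t:.3f}")        # recovers ~1.3 and ~0.45
```

Multi-orbital parameterizations are nonlinear in the Slater-Koster parameters and are fitted iteratively, but the objective is the same: minimize the misfit to the first-principles eigenvalues over many k-vectors, volumes, and structures.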
Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.
2009-01-01
In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top-layer soil moisture content and observed backscatter coefficients, which has mainly been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetic surface profiles, which investigates how errors in roughness parameters are introduced by standard measurement techniques, and how they propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error in the roughness parameterization, and consequently in soil moisture retrieval, are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements, and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with a C-band configuration is generally less sensitive to inaccuracies in the roughness parameterization than retrieval with an L-band configuration. PMID:22399956
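The roughness parameters in question, the RMS height s and the correlation length l, are estimated from a measured height profile after trend removal. A minimal sketch on a synthetic profile; the linear detrend and the e-folding definition of l used below are common conventions, not necessarily those of this study:

```python
import numpy as np

# Estimate the two IEM roughness inputs from a discretized profile:
# RMS height s and correlation length l (e-folding lag of the
# normalized autocorrelation), after removing a linear trend.
def roughness(z, dx):
    x = np.arange(z.size) * dx
    z = z - np.polyval(np.polyfit(x, z, 1), x)        # linear detrend
    s = np.sqrt(np.mean(z ** 2))                      # RMS height
    acf = np.correlate(z, z, mode="full")[z.size - 1:] / (z.size * s ** 2)
    l = np.argmax(acf < np.exp(-1.0)) * dx            # first lag below 1/e
    return s, l

# Synthetic, roughly exponentially correlated profile (illustrative only)
rng = np.random.default_rng(0)
n, dx = 2000, 0.01                                    # 20 m profile, 1 cm spacing
white = rng.normal(size=n)
z = np.convolve(white, np.exp(-np.arange(200) * dx / 0.05), mode="same")
s, l = roughness(z, dx)
print(f"s = {s:.3f} m, correlation length = {l:.2f} m")
```

Re-running this on shorter profiles or with coarser dx shows directly how profile length and measurement spacing bias s and l, which is the error source the study propagates through the IEM.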
Whys and Hows of the Parameterized Interval Analyses: A Guide for the Perplexed
NASA Astrophysics Data System (ADS)
Elishakoff, I.
2013-10-01
Novel elements of the parameterized interval analysis developed in [1, 2] are emphasized in this response to Professor E.D. Popova, or possibly to others who may be perplexed by the parameterized interval analysis. It is also shown that the overwhelming majority of the comments by Popova [3] are based on a misreading of our paper [1]. Partial responsibility for this misreading can be attributed to the fact that the explanations provided in [1] were laconic; they could have been more extensive in view of the novelty of our approach [1, 2]. It is our duty, therefore, to reiterate, in this response, the whys and hows of the parameterization of intervals, introduced in [1] to incorporate possibly available information on dependencies between the various intervals describing the problem at hand. This possibility appears to have been discarded by standard interval analysis, which may, as a result, lead to overdesign and possibly divorce engineers from the otherwise beautiful interval analysis.
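The case for parameterizing intervals can be made in a few lines of code: naive interval arithmetic discards the dependency between occurrences of the same variable, while a shared parameter retains it. A minimal sketch of the idea, not the formalism of [1, 2]:

```python
# Naive interval arithmetic loses dependency information: for
# x in [a, b], the expression x - x evaluates to [a-b, b-a] rather
# than 0. Writing the interval as x(t) = m + t*r with a single shared
# t in [-1, 1] keeps the dependency and removes the spurious width.

def naive_sub(x, y):
    """Standard interval subtraction [a,b] - [c,d] = [a-d, b-c]."""
    (a, b), (c, d) = x, y
    return (a - d, b - c)

x = (2.0, 4.0)
print(naive_sub(x, x))             # (-2.0, 2.0): spurious width of 4

# Same quantity, parameterized: x(t) = 3 + t, one shared parameter t
m, r = 3.0, 1.0
ts = [-1.0, 0.0, 1.0]
vals = [(m + t * r) - (m + t * r) for t in ts]
print(min(vals), max(vals))        # 0.0 0.0: dependency preserved
```

It is precisely this ability to encode known dependencies between intervals that standard interval analysis gives up, leading to the overdesign mentioned above.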
An Aerosol Physical Chemistry Model for the Upper Troposphere
NASA Technical Reports Server (NTRS)
Lin, Jin-Sheng
2001-01-01
This report is the final report for Cooperative Agreement NCC2-1000. The tasks outlined in the various proposals are listed with a brief comment on the research performed. The publication titles are: The effects of particle size and nitric acid uptake on the homogeneous freezing of sulfate aerosols; Parameterization of an aerosol physical chemistry model (APCM) for the NH3/H2SO4/HNO3/H2O system at cold temperatures; and The onset, extent and duration of dehydration in the Southern Hemisphere polar vortex.
NASA Astrophysics Data System (ADS)
Cipriani, L.; Fantini, F.; Bertacchi, S.
2014-06-01
Image-based modelling tools based on SfM algorithms have gained great popularity since several software houses provided applications able to produce 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve a better quality of textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for the achievement of a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without native parameterization have to be "flattened" or "unwrapped" in the (u,v) parameter space, with the main objective of mapping them with a single image. This result can be obtained by using two different strategies: the former automatic and faster, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, producing a sort of "atlas" of the original model in the parameter space that in many instances is not adequate and negatively affects the overall quality of the representation. Using different solutions in synergy, ranging from semantic-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.
On the Relationship between Observed NLDN Lightning ...
Lightning-produced nitrogen oxides (NOX = NO + NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past decade, considerable uncertainties still exist in the quantification of lightning NOX production and distribution in the troposphere. It is even more challenging for regional chemistry and transport models to accurately parameterize lightning NOX production and distribution in time and space. The Community Multiscale Air Quality (CMAQ) model parameterizes lightning NO emissions using local scaling factors adjusted by the convective precipitation rate predicted by the upstream meteorological model; the adjustment is based on observed lightning strikes from the National Lightning Detection Network (NLDN). For this parameterization to be valid, an a priori reasonable relationship between the observed lightning strikes and the modeled convective precipitation rates must exist. In this study, we will present an analysis leveraging the observed NLDN lightning strikes and CMAQ model simulations over the continental United States for a time period spanning over a decade. Based on the analysis, a new parameterization scheme for lightning NOX will be proposed and the results will be evaluated. The proposed scheme will be beneficial to modeling exercises where the obs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabacchi, G; Hutter, J; Mundy, C
2005-04-07
A combined linear response-frozen electron density model has been implemented in a molecular dynamics scheme derived from an extended Lagrangian formalism. This approach is based on a partition of the electronic charge distribution into a frozen region described by Kim-Gordon theory and a response contribution determined by the instantaneous ionic configuration of the system. The method is free from empirical pair potentials, and the parameterization protocol involves only calculations on properly chosen subsystems. The authors apply this method to a series of alkali halides in different physical phases and are able to reproduce experimental structural and thermodynamic properties with an accuracy comparable to Kohn-Sham density functional calculations.
Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology
NASA Astrophysics Data System (ADS)
Jin, Z.; Azzari, G.; Lobell, D. B.
2016-12-01
Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare them to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves the SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM while significantly reducing its uncertainty.
Morozov, Andrew; Petrovskii, Sergei
2013-01-01
Understanding of complex trophic interactions in ecosystems requires correct descriptions of the rate at which predators consume a variety of different prey species. Field and laboratory data on multispecies communities are rarely sufficient and usually cannot provide an unambiguous test for the theory. As a result, the conventional way of constructing a multi-prey functional response is speculative, and often based on assumptions that are difficult to verify. Predator responses allowing for prey selectivity and active switching are thought to be more biologically relevant compared to the standard proportion-based consumption. However, here we argue that the functional responses with switching may not be applicable to communities with a broad spectrum of resource types. We formulate a set of general rules that a biologically sound parameterization of a predator functional response should satisfy, and show that all existing formulations for the multispecies response with prey selectivity and switching fail to do so. Finally, we propose a universal framework for parameterization of a multi-prey functional response by combining patterns of food selectivity and proportion-based feeding. PMID:24086356
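The "proportion-based consumption" the authors build upon can be illustrated with a standard multispecies Holling type II response. This is a generic textbook form, not the universal framework the paper proposes, and all parameter values below are made up:

```python
import numpy as np

def multi_prey_holling2(N, a, h):
    """Proportion-based multispecies Holling type II functional response.

    N -- prey densities, one entry per prey species
    a -- attack rates
    h -- handling times
    Returns per-prey consumption rates f_i = a_i*N_i / (1 + sum_j a_j*h_j*N_j),
    so each prey is consumed in proportion to its weighted abundance.
    """
    N, a, h = (np.asarray(v, dtype=float) for v in (N, a, h))
    return a * N / (1.0 + np.sum(a * h * N))

# Illustrative two-prey community
rates = multi_prey_holling2([10.0, 5.0], [0.5, 0.2], [0.1, 0.1])
```

With a single prey species the expression collapses to the classic Holling type II curve aN/(1 + ahN).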
Physically-based Assessment of Tropical Cyclone Damage and Economic Losses
NASA Astrophysics Data System (ADS)
Lin, N.
2012-12-01
Estimating damage and economic losses caused by tropical cyclones (TC) is a topic of considerable research interest in many scientific fields, including meteorology, structural and coastal engineering, and actuarial sciences. One approach is based on the empirical relationship between TC characteristics and loss data. Another is to model the physical mechanism of TC-induced damage. In this talk we discuss the physically-based approach to predicting TC damage and losses due to extreme wind and storm surge. We first present an integrated vulnerability model, which, for the first time, explicitly models the essential mechanisms causing wind damage to residential areas during storm passage, including windborne-debris impact and the pressure-debris interaction that may lead, in a chain reaction, to structural failures (Lin and Vanmarcke 2010; Lin et al. 2010a). This model can be used to predict the economic losses in a residential neighborhood (with hundreds of buildings) during a specific TC (Yau et al. 2011) or applied jointly with a TC risk model (e.g., Emanuel et al. 2008) to estimate the expected losses over long time periods. Then we present a TC storm surge risk model that has been applied to New York City (Lin et al. 2010b; Lin et al. 2012; Aerts et al. 2012), Miami-Dade County, Florida (Klima et al. 2011), Galveston, Texas (Lickley 2012), and other coastal areas around the world (e.g., Tampa, Florida; Persian Gulf; Darwin, Australia; Shanghai, China). These physically-based models are applicable to various coastal areas and have the capability to account for changes in climate and coastal exposure over time. We also point out that, although made computationally efficient for risk assessment, these models are not suitable for regional or global analysis, which has been a focus of the empirically-based economic analysis (e.g., Hsiang and Narita 2012).
A future research direction is to simplify the physically-based models, possibly through parameterization, and make connections to the global loss data and economic analysis.
NASA Astrophysics Data System (ADS)
Garrett, T. J.; Alva, S.; Glenn, I. B.; Krueger, S. K.
2015-12-01
There are two possible approaches for parameterizing sub-grid cloud dynamics in a coarser grid model. The most common is to use a fine-scale model to explicitly resolve the mechanistic details of clouds to the best extent possible, and then to parameterize the resulting cloud state for the coarser grid. A second is to invoke physical intuition and some very general theoretical principles from equilibrium statistical mechanics. This approach avoids any requirement to resolve time-dependent processes in order to arrive at a suitable solution. The second approach is widely used elsewhere in the atmospheric sciences: for example, the Planck function for blackbody radiation is derived this way, with no mention made of the complexities of modeling a large ensemble of time-dependent radiation-dipole interactions in order to obtain the "grid-scale" spectrum of thermal emission by the blackbody as a whole. We find that this statistical approach may be equally suitable for modeling convective clouds. Specifically, we make the physical argument that the dissipation of buoyant energy in convective clouds occurs through mixing across a cloud perimeter. From thermodynamic reasoning, one might then anticipate that vertically stacked isentropic surfaces are characterized by a power law dlnN/dlnP = -1, where N(P) is the number of clouds with perimeter P. In a Giga-LES simulation of convective clouds within a 100 km square domain we find that such a power law does appear to characterize simulated cloud perimeters along isentropes, provided a sufficiently large sample of clouds. The suggestion is that it may be possible to parameterize certain important aspects of cloud state without appealing to computationally expensive dynamic simulations.
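The claimed scaling dlnN/dlnP = -1 can be checked on synthetic data. The sketch below only illustrates the log-log fitting step, with perimeters drawn from an assumed 1/P density rather than from any cloud simulation:

```python
import numpy as np

# Draw synthetic cloud perimeters from a density n(P) ~ 1/P on [1, 1000]
# via inverse-CDF sampling, then recover the log-log slope of N(P).
rng = np.random.default_rng(42)
pmin, pmax, nsamp = 1.0, 1000.0, 200_000
u = rng.random(nsamp)
perims = pmin * (pmax / pmin) ** u      # inverse CDF of n(P) ~ 1/P

bins = np.logspace(0, 3, 25)            # 24 logarithmic bins over [1, 1000]
counts, edges = np.histogram(perims, bins=bins)
density = counts / np.diff(edges)       # N(P): clouds per unit perimeter
centers = np.sqrt(edges[:-1] * edges[1:])

# Least-squares fit of ln N against ln P; slope estimates dlnN/dlnP
slope = np.polyfit(np.log(centers), np.log(density), 1)[0]
```

For this synthetic sample the fitted slope comes out close to -1, the value the thermodynamic argument predicts.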
Using Ground Measurements to Examine the Surface Layer Parameterization Scheme in NCEP GFS
NASA Astrophysics Data System (ADS)
Zheng, W.; Ek, M. B.; Mitchell, K.
2017-12-01
Understanding the behavior and the limitations of the surface layer parameterization scheme is important for parameterizing surface-atmosphere exchange processes in atmospheric models, for accurate prediction of near-surface temperature, and for identifying the role of different physical processes in contributing to errors. In this study, we examine the surface layer parameterization scheme in the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) using ground flux measurements, including FLUXNET data. The model-simulated surface fluxes, surface temperature, and vertical profiles of temperature and wind speed are compared against observations. The limits of applicability of Monin-Obukhov similarity theory (MOST), which describes the vertical behavior of nondimensionalized mean flow and turbulence properties within the surface layer, are quantified for daytime and nighttime using these data. Results from unstable and stable regimes are discussed.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Berner, J.; Coleman, D.; Palmer, T.
2015-12-01
Stochastic parameterizations have been used for more than a decade in atmospheric models to represent the variability of unresolved sub-grid processes. They have a beneficial effect on the spread and mean state of medium- and extended-range forecasts (Buizza et al. 1999; Palmer et al. 2009). There is also increasing evidence that stochastic parameterization of unresolved processes could be beneficial for the climate of an atmospheric model through noise-enhanced variability, noise-induced drift (Berner et al. 2008), and by enabling the climate simulator to explore other flow regimes (Christensen et al. 2015; Dawson and Palmer 2015). We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. The SPPT scheme accounts for uncertainty in the CAM physical parameterization schemes, including the convection scheme, by perturbing the parameterized temperature, moisture, and wind tendencies with a multiplicative noise term. SPPT results in a large improvement in the variability of the CAM4 modeled climate. In particular, SPPT results in a significant improvement to the representation of the El Niño-Southern Oscillation in CAM4, improving the power spectrum as well as both the inter- and intra-annual variability of tropical Pacific sea surface temperatures. References: Berner, J., Doblas-Reyes, F. J., Palmer, T. N., Shutts, G. J., and Weisheimer, A., 2008. Phil. Trans. R. Soc. A, 366, 2559-2577. Buizza, R., Miller, M., and Palmer, T. N., 1999. Q.J.R. Meteorol. Soc., 125, 2887-2908. Christensen, H. M., Moroz, I. M., and Palmer, T. N., 2015. Clim. Dynam., doi: 10.1007/s00382-014-2239-9. Dawson, A., and Palmer, T. N., 2015. Clim. Dynam., doi: 10.1007/s00382-014-2238-x. Palmer, T. N., Buizza, R., Doblas-Reyes, F., et al., 2009. ECMWF Technical Memorandum 598.
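The multiplicative-noise idea behind SPPT can be sketched as follows. This toy version perturbs a tendency profile with a single clipped Gaussian factor, whereas the operational scheme uses a spatially and temporally correlated random pattern; all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sppt_perturb(tendency, sigma=0.3, clip=0.9):
    """Apply a multiplicative stochastic perturbation X' = (1 + r) * X.

    r is Gaussian with standard deviation `sigma`, clipped to [-clip, clip]
    so the factor (1 + r) stays positive. Real SPPT draws r from a
    correlated pattern shared across variables; this sketch omits that.
    """
    r = np.clip(rng.normal(0.0, sigma, size=np.shape(tendency)), -clip, clip)
    return (1.0 + r) * np.asarray(tendency, dtype=float)

# e.g. a uniform temperature tendency profile of 2 K/day over 10 levels
dT = np.full(10, 2.0)
dT_pert = sppt_perturb(dT)
```

The perturbed tendencies stay within (1 - clip) and (1 + clip) times the original, preserving the sign of the parameterized forcing.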
Structure and covariance of cloud and rain water in marine stratocumulus
NASA Astrophysics Data System (ADS)
Witte, Mikael; Morrison, Hugh; Gettelman, Andrew
2017-04-01
Many state-of-the-art cloud microphysics parameterizations in large-scale models use assumed probability density functions (pdfs) to represent subgrid-scale variability of relevant resolved-scale variables such as vertical velocity and cloud liquid water content (LWC). Integration over the assumed pdfs of small-scale variability results in physically consistent prediction of nonlinear microphysical process rates and obviates the need to apply arbitrary tuning parameters to the calculated rates. In such parameterizations, the covariance of cloud and rain LWC is an important quantity for parameterizing the accretion process by which rain drops grow via collection of cloud droplets. This covariance has been diagnosed by other workers from a variety of observational and model datasets (Boutle et al., 2013; Larson and Griffin, 2013; Lebsock et al., 2013), but agreement across the studies is poor. Two key assumptions that may explain some of the discrepancies among past studies are that 1) LWC (both cloud and rain) distributions are statistically stationary and 2) spatial structure may be neglected. Given the highly intermittent nature of precipitation and the fact that cloud LWC has been found to be poorly represented by stationary pdfs (e.g. Marshak et al., 1997), neither of these assumptions is valid. Therefore covariance must be evaluated as a function of spatial scale without the assumption of stationary statistics (i.e. variability cannot be expressed as a fractional standard deviation, which necessitates well-defined first and second moments of the LWC distribution). This study presents multifractal analyses of both rain and cloud LWC using aircraft data from the VOCALS-REx field campaign to illustrate the importance of spatial structure in microphysical parameterizations, and extends the results of Boutle et al. (2013) to provide a parameterization of rain-cloud water covariance as a function of spatial scale without the assumption of statistical stationarity.
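As a toy illustration of evaluating covariance as a function of spatial scale (not the multifractal formalism of the study), one can block-average two synthetic along-track series and compute their covariance at each averaging scale:

```python
import numpy as np

def covariance_by_scale(x, y, block):
    """Covariance of two along-track series after block-averaging both over
    `block` consecutive samples; sweeping `block` gives covariance as a
    function of averaging scale."""
    n = (len(x) // block) * block
    xb = np.asarray(x[:n], dtype=float).reshape(-1, block).mean(axis=1)
    yb = np.asarray(y[:n], dtype=float).reshape(-1, block).mean(axis=1)
    return np.cov(xb, yb)[0, 1]

# Synthetic correlated "cloud" and "rain" water series (arbitrary units)
rng = np.random.default_rng(3)
cloud = rng.normal(1.0, 0.3, 4096)
rain = 0.5 * cloud + rng.normal(0.0, 0.1, 4096)
covs = [covariance_by_scale(cloud, rain, b) for b in (1, 8, 64)]
```

At block size 1 this reduces to the ordinary sample covariance; coarser blocks probe larger spatial scales.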
NASA Astrophysics Data System (ADS)
Peishu, Zong; Jianping, Tang; Shuyu, Wang; Lingyun, Xie; Jianwei, Yu; Yunqian, Zhu; Xiaorui, Niu; Chao, Li
2017-08-01
The parameterization of physical processes is one of the critical elements in properly simulating the regional climate over eastern China. It is essential to conduct detailed analyses of the effect of physical parameterization schemes on regional climate simulation, to provide more reliable regional climate change information. In this paper, we evaluate the 25-year (1983-2007) summer monsoon climate characteristics of precipitation and surface air temperature using the regional spectral model (RSM) with different physical schemes. Ensemble results obtained using the reliability ensemble averaging (REA) method are also assessed. The results show that the RSM has the capacity to reproduce the spatial patterns, variations, and temporal tendency of surface air temperature and precipitation over eastern China, and that it tends to predict climatological characteristics better over the Yangtze River basin and South China. The impact of different physical schemes on RSM simulations is also investigated. Generally, the CLD3 cloud water prediction scheme tends to produce larger precipitation because of its overestimation of low-level moisture. The systematic biases derived from the KF2 cumulus scheme are larger than those from the RAS scheme. The scale-selective bias correction (SSBC) method improves the simulation of the temporal and spatial characteristics of surface air temperature and precipitation, and improves the simulation of circulation. The REA ensemble results show significant improvement in simulating temperature and precipitation distributions, with much higher correlation coefficients and lower root mean square errors. The REA result of selected experiments is better than that of nonselected experiments, indicating the necessity of choosing better samples for ensemble averaging.
Modelling the Evolution of Sea Spray Droplets on a Global Scale
NASA Astrophysics Data System (ADS)
Staniec, A.; Vlahos, P.; Monahan, E. C.
2017-12-01
Sea spray droplets are an important mechanism for the transport of moisture, heat, and organic material between the ocean and the atmosphere. Spume droplets are the largest of the size spectrum and as such have the potential to transport significant amounts of energy and gases despite their generally short residence time in the atmosphere. A model is developed based on the physical parameterizations of Andreas et al. (1995, 2005) and a range of spume generation functions, coupled with a biogeochemical exchange model for gases developed here, to examine the equilibrium temperature and gas exchange of spume droplets under representative open ocean conditions. The modelling approach uses microphysics to simulate the expected changes to the droplet as it equilibrates with the atmospheric temperature and relative humidity. The effect of temperature differentials and relative humidity variations is explored. A global approach is simulated by using average summer and winter values of SST, salinity, and air temperature throughout the various ocean basins.
Computational discovery of extremal microstructure families
Chen, Desai; Skouras, Mélina; Zhu, Bo; Matusik, Wojciech
2018-01-01
Modern fabrication techniques, such as additive manufacturing, can be used to create materials with complex custom internal structures. These engineered materials exhibit a much broader range of bulk properties than their base materials and are typically referred to as metamaterials or microstructures. Although metamaterials with extraordinary properties have many applications, designing them is very difficult and is generally done by hand. We propose a computational approach to discover families of microstructures with extremal macroscale properties automatically. Using efficient simulation and sampling techniques, we compute the space of mechanical properties covered by physically realizable microstructures. Our system then clusters microstructures with common topologies into families. Parameterized templates are eventually extracted from families to generate new microstructure designs. We demonstrate these capabilities on the computational design of mechanical metamaterials and present five auxetic microstructure families with extremal elastic material properties. Our study opens the way for the completely automated discovery of extremal microstructures across multiple domains of physics, including applications reliant on thermal, electrical, and magnetic properties. PMID:29376124
Upgrades, Current Capabilities and Near-Term Plans of the NASA ARC Mars Climate
NASA Technical Reports Server (NTRS)
Hollingsworth, J. L.; Kahre, Melinda April; Haberle, Robert M.; Schaeffer, James R.
2012-01-01
We describe and review recent upgrades to the ARC Mars climate modeling framework, in particular, with regards to physical parameterizations (i.e., testing, implementation, modularization and documentation); the current climate modeling capabilities; selected research topics regarding current/past climates; and then, our near-term plans related to the NASA ARC Mars general circulation modeling (GCM) project.
Geometry modeling and grid generation using 3D NURBS control volume
NASA Technical Reports Server (NTRS)
Yu, Tzu-Yi; Soni, Bharat K.; Shih, Ming-Hsin
1995-01-01
The algorithms for volume grid generation using NURBS geometric representation are presented. The parameterization algorithm is enhanced to yield a desired physical distribution on the curve, surface and volume. This approach bridges the gap between CAD surface/volume definition and surface/volume grid generation. Computational examples associated with practical configurations have shown the utilization of these algorithms.
National ESPC Committee Support
2015-09-30
to the physical parameterization driver software at Navy, NOAA, NASA, and AFWA. This interoperability capability will allow for more...core from another system. Under NUOPC funding, ESMF development will be completed, maintained and evolved to address DoD and NOAA requirements. In...operational NWP centers; however, it also involves collaboration with other primary NWP development centers such as NASA, NCAR, and DOE and will
WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'
NASA Astrophysics Data System (ADS)
Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne
2015-10-01
Numerical weather modelling has gained considerable attention in the field of hydrology, especially in un-gauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD) and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3) and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be the better cumulus parameterization choice for the study region, giving consistently better performance than the other three CPSs, though there are suggestions of underestimation. WSM3 was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. The study also analysed spatial variability in WRF performance through categorical indices, including POD, FBI, FAR and CSI, during the 1999 York Flood under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on single-scattering optical properties pre-computed using an improved geometric optics method, the bulk mass absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the mean effective particle size of a mixture of ice habits. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. Compared to the parameterization for a single habit of hexagonal columns, the solar heating of clouds computed with the parameterization for a mixture of habits is smaller due to a smaller co-single-scattering albedo, whereas the net downward fluxes at the TOA and surface are larger due to a larger asymmetry factor. The maximum difference in the cloud heating rate is approx. 0.2 C per day, which occurs in clouds with an optical thickness greater than 3 and a solar zenith angle less than 45 degrees. The flux difference is less than 10 W per square meter for optical thicknesses ranging from 0.6 to 10 and the entire range of solar zenith angles. The maximum flux difference is approximately 3%, which occurs around an optical thickness of 1 and at high solar zenith angles.
Non-perturbational surface-wave inversion: A Dix-type relation for surface waves
Haney, Matt; Tsai, Victor C.
2015-01-01
We extend the approach underlying the well-known Dix equation in reflection seismology to surface waves. Within the context of surface wave inversion, the Dix-type relation we derive for surface waves allows accurate depth profiles of shear-wave velocity to be constructed directly from phase velocity data, in contrast to perturbational methods. The depth profiles can subsequently be used as an initial model for nonlinear inversion. We provide examples of the Dix-type relation for under-parameterized and over-parameterized cases. In the under-parameterized case, we use the theory to estimate crustal thickness, crustal shear-wave velocity, and mantle shear-wave velocity across the Western U.S. from phase velocity maps measured at 8-, 20-, and 40-s periods. By adopting a thin-layer formalism and an over-parameterized model, we show how a regularized inversion based on the Dix-type relation yields smooth depth profiles of shear-wave velocity. In the process, we quantitatively demonstrate the depth sensitivity of surface-wave phase velocity as a function of frequency and the accuracy of the Dix-type relation. We apply the over-parameterized approach to a near-surface data set within the frequency band from 5 to 40 Hz and find overall agreement between the inverted model and the result of full nonlinear inversion.
Impacts of Light Use Efficiency and fPAR Parameterization on Gross Primary Production Modeling
NASA Technical Reports Server (NTRS)
Cheng, Yen-Ben; Zhang, Qingyuan; Lyapustin, Alexei I.; Wang, Yujie; Middleton, Elizabeth M.
2014-01-01
This study examines the impact of parameterization of two variables, light use efficiency (LUE) and the fraction of absorbed photosynthetically active radiation (fPAR or fAPAR), on gross primary production (GPP) modeling. Carbon sequestration by terrestrial plants is a key factor in a comprehensive understanding of the carbon budget at global scale. In this context, accurate measurements and estimates of GPP will allow us to achieve improved carbon monitoring and to quantitatively assess impacts from climate changes and human activities. Spaceborne remote sensing observations can provide a variety of land surface parameterizations for modeling photosynthetic activities at various spatial and temporal scales. This study utilizes a simple GPP model based on the LUE concept, together with different land surface parameterizations, to evaluate the model and monitor GPP. Two maize-soybean rotation fields in Nebraska, USA and the Bartlett Experimental Forest in New Hampshire, USA were selected for study. Tower-based eddy-covariance carbon exchange and PAR measurements were collected from the FLUXNET Synthesis Dataset. For the model parameterization, we utilized different values of LUE and the fPAR derived from various algorithms. We adapted the approach and parameters from the MODIS MOD17 Biome Properties Look-Up Table (BPLUT) to derive LUE. We also used a site-specific analytic approach with tower-based Net Ecosystem Exchange (NEE) and PAR to estimate maximum potential LUE (LUEmax) to derive LUE. For the fPAR parameter, the MODIS MOD15A2 fPAR product was used. We also utilized fAPARchl, a parameter accounting for the fAPAR linked to the chlorophyll-containing canopy fraction. fAPARchl was obtained by inversion of a radiative transfer model, which used the MODIS-based reflectances in bands 1-7 produced by the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm.
fAPARchl exhibited seasonal dynamics more similar to the flux-tower-based GPP than MOD15A2 fPAR, especially in the spring and fall at the agricultural sites. When using the MODIS MOD17-based parameters to estimate LUE, fAPARchl produced better agreement with GPP (r2 = 0.79-0.91) than MOD15A2 fPAR (r2 = 0.57-0.84). However, underestimation of GPP was also observed, especially for the crop fields. When applying the site-specific LUEmax value to estimate in situ LUE, the magnitude of the estimated GPP was closer to in situ GPP; this method produced a slight overestimation for the MOD15A2 fPAR at the Bartlett forest. This study highlights the importance of accurate land surface parameterizations for achieving reliable carbon monitoring capabilities from remote sensing information.
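The LUE-based model underlying the study multiplies three terms, GPP = LUE x fPAR x PAR. A minimal sketch with illustrative values (none taken from the study's sites):

```python
def gpp_lue(lue, fpar, par):
    """Gross primary production from the light-use-efficiency model.

    lue  -- light use efficiency (g C per MJ of absorbed PAR)
    fpar -- fraction of absorbed photosynthetically active radiation (0-1)
    par  -- incident PAR (MJ per m2 per day)
    Returns GPP in g C per m2 per day.
    """
    return lue * fpar * par

# e.g. LUE = 1.5 g C/MJ, fPAR = 0.8, PAR = 10 MJ/m2/day
gpp = gpp_lue(1.5, 0.8, 10.0)   # 12 g C/m2/day
```

Swapping the fPAR argument between a MOD15A2-style product and a chlorophyll-fraction estimate is exactly the comparison the study performs.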
NASA Astrophysics Data System (ADS)
Harrington, J. Y.
2017-12-01
Parameterizing the growth of ice particles in numerical models is at an interesting crossroads. Most parameterizations developed in the past, including some that I have developed, parse model ice into numerous categories based primarily on the growth mode of the particle. Models routinely possess small ice, snow, aggregates, graupel, and hail. The snow and ice categories in some models are further split into subcategories to account for the various shapes of ice. There has been a relatively recent shift towards a new class of microphysical models that predict the properties of ice particles instead of using multiple categories and subcategories. Particle property models predict the physical characteristics of ice, such as aspect ratio, maximum dimension, effective density, rime density, effective area, and so forth. These models are attractive in the sense that particle characteristics evolve naturally in time and space without the need for numerous (and somewhat artificial) transitions among pre-defined classes. However, particle property models often require fundamental parameters that are typically derived from laboratory measurements. For instance, the evolution of particle shape during vapor depositional growth requires knowledge of the growth efficiencies for the various axes of the crystals, which in turn depend on surface parameters that can only be determined in the laboratory. The evolution of particle shapes and density during riming, aggregation, and melting requires data on the redistribution of mass across a crystal's axes as it collects water drops, collects ice crystals, or melts. Predicting the evolution of particle properties based on laboratory-determined parameters has a substantial influence on the evolution of some cloud systems. Radiatively-driven cirrus clouds show a broader range of competition between heterogeneous nucleation and homogeneous freezing when ice crystal properties are predicted.
Even strongly convective squall lines are substantially influenced by the predicted particle properties: the more natural evolution of ice crystals during riming produces graupel-like particles with the sizes and fall speeds required for the formation of a classic transition zone and an extended stratiform precipitation region.
Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni
2018-06-01
Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique for dealing with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skills comparable to, and sometimes better than, those of the established propagation-based technique.
Seasonal Parameterizations of the Tau-Omega Model Using the ComRAD Ground-Based SMAP Simulator
NASA Technical Reports Server (NTRS)
O'Neill, P.; Joseph, A.; Srivastava, P.; Cosh, M.; Lang, R.
2014-01-01
NASA's Soil Moisture Active Passive (SMAP) mission is scheduled for launch in November 2014. In the prelaunch time frame, the SMAP team has focused on improving retrieval algorithms for the various SMAP baseline data products. The SMAP passive-only soil moisture product depends on accurate parameterization of the tau-omega model to achieve the required accuracy in soil moisture retrieval. During a field experiment (APEX12) conducted in the summer of 2012 under dry conditions in Maryland, the Combined Radar/Radiometer (ComRAD) truck-based SMAP simulator collected active/passive microwave time series data at the SMAP incident angle of 40 degrees over corn and soybeans throughout the crop growth cycle. A similar experiment was conducted only over corn in 2002 under normal moist conditions. Data from these two experiments will be analyzed and compared to evaluate how changes in vegetation conditions throughout the growing season in both a drought and normal year can affect parameterizations in the tau-omega model for more accurate soil moisture retrieval.
Silva, M M; Lemos, J M; Coito, A; Costa, B A; Wigren, T; Mendonça, T
2014-01-01
This paper addresses the local identifiability and sensitivity properties of two classes of Wiener models for the neuromuscular blockade and depth of hypnosis, when drug dose profiles like the ones commonly administered in clinical practice are used as model inputs. The local parameter identifiability was assessed based on the singular value decomposition of the normalized sensitivity matrix. For the given input signal excitation, the results show an over-parameterization of the standard pharmacokinetic/pharmacodynamic models. The same identifiability assessment was performed on recently proposed minimally parameterized parsimonious models for both the neuromuscular blockade and the depth of hypnosis. The results show that the majority of the model parameters are identifiable from the available input-output data. This indicates that any identification strategy based on the minimally parameterized parsimonious Wiener models for the neuromuscular blockade and for the depth of hypnosis is likely to be more successful than if standard models are used. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
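The identifiability check described here, SVD of a normalized sensitivity matrix, can be sketched as follows. The toy model below (two parameters entering only through their product) is a hypothetical stand-in for the PK/PD Wiener models, chosen because it exhibits exactly the over-parameterization the abstract reports:

```python
import numpy as np

def normalized_sensitivity(model, params, t, rel_step=1e-6):
    """S[i, j] = p_j * d y(t_i) / d p_j, via central finite differences."""
    p = np.asarray(params, dtype=float)
    S = np.empty((t.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = rel_step * max(abs(p[j]), 1.0)
        S[:, j] = p[j] * (model(p + dp, t) - model(p - dp, t)) / (2.0 * dp[j])
    return S

# Toy two-parameter model where only the product a*b is observable.
model = lambda p, t: np.exp(-p[0] * p[1] * t)
t = np.linspace(0.1, 5.0, 50)
sv = np.linalg.svd(normalized_sensitivity(model, [0.5, 2.0], t), compute_uv=False)
print(sv / sv[0])   # second normalized singular value ~ 0 -> locally unidentifiable direction
```

A near-zero singular value flags a parameter combination that the given input excitation cannot resolve, which is the criterion applied to the standard and parsimonious models in the paper.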
Hierarchical atom type definitions and extensible all-atom force fields.
Jin, Zhao; Yang, Chunwei; Cao, Fenglei; Li, Feng; Jing, Zhifeng; Chen, Long; Shen, Zhe; Xin, Liang; Tong, Sijia; Sun, Huai
2016-03-15
The extensibility of a force field is key to solving the missing-parameter problem commonly found in force field applications. The extensibility of conventional force fields is traditionally managed in the parameterization procedure, which becomes impractical as the coverage of the force field increases above a threshold. A hierarchical atom-type definition (HAD) scheme is proposed to make extensible atom type definitions, which ensures that force fields developed based on the definitions are extensible. To demonstrate how HAD works and to prepare a foundation for future developments, two general force fields based on AMBER and DFF functional forms are parameterized for common organic molecules. The force field parameters are derived from the same set of quantum mechanical data and experimental liquid data using an automated parameterization tool, and validated by calculating molecular and liquid properties. The hydration free energies are calculated successfully by introducing a polarization scaling factor to the dispersion term between the solvent and solute molecules. © 2015 Wiley Periodicals, Inc.
Performance of multi-physics ensembles in convective precipitation events over northeastern Spain
NASA Astrophysics Data System (ADS)
García-Ortega, E.; Lorenzana, J.; Merino, A.; Fernández-González, S.; López, L.; Sánchez, J. L.
2017-07-01
Convective precipitation with hail greatly affects southwestern Europe, causing major economic losses. The local character of this meteorological phenomenon is a serious obstacle to forecasting. Therefore, the development of reliable short-term forecasts constitutes an essential challenge to minimizing and managing risks. However, deterministic outcomes are affected by different uncertainty sources, such as physics parameterizations. This study examines the performance of different combinations of physics schemes of the Weather Research and Forecasting model to describe the spatial distribution of precipitation in convective environments with hail falls. Two 30-member multi-physics ensembles, with two and three domains of maximum resolution 9 and 3 km each, were designed using various combinations of cumulus, microphysics and radiation schemes. The experiment was evaluated for 10 convective precipitation days with hail over 2005-2010 in northeastern Spain. Different indexes were used to evaluate the ability of each ensemble member to capture the precipitation patterns, which were compared with observations of a rain-gauge network. A standardized metric was constructed to identify optimal performers. Results show interesting differences between the two ensembles. In two-domain simulations, the selection of cumulus parameterizations was crucial, with the Betts-Miller-Janjic scheme performing best. In contrast, the Kain-Fritsch cumulus scheme gave the poorest results, suggesting that it should not be used in the study area. Nevertheless, in three-domain simulations, the cumulus schemes used in coarser domains were not critical and the best results depended mainly on microphysics schemes. The best performance was shown by the Morrison, New Thompson and Goddard microphysics schemes.
Blanton, Brian; Dresback, Kendra; Colle, Brian; Kolar, Randy; Vergara, Humberto; Hong, Yang; Leonardo, Nicholas; Davidson, Rachel; Nozick, Linda; Wachtendorf, Tricia
2018-04-25
Hurricane track and intensity can change rapidly in unexpected ways, thus making predictions of hurricanes and related hazards uncertain. This inherent uncertainty often translates into suboptimal decision-making outcomes, such as unnecessary evacuation. Representing this uncertainty is thus critical in evacuation planning and related activities. We describe a physics-based hazard modeling approach that (1) dynamically accounts for the physical interactions among hazard components and (2) captures hurricane evolution uncertainty using an ensemble method. This loosely coupled model system provides a framework for probabilistic water inundation and wind speed levels for a new, risk-based approach to evacuation modeling, described in a companion article in this issue. It combines the Weather Research and Forecasting (WRF) meteorological model, the Coupled Routing and Excess STorage (CREST) hydrologic model, and the ADvanced CIRCulation (ADCIRC) storm surge, tide, and wind-wave model to compute inundation levels and wind speeds for an ensemble of hurricane predictions. Perturbations to WRF's initial and boundary conditions and different model physics/parameterizations generate an ensemble of storm solutions, which are then used to drive the coupled hydrologic + hydrodynamic models. Hurricane Isabel (2003) is used as a case study to illustrate the ensemble-based approach. The inundation, river runoff, and wind hazard results are strongly dependent on the accuracy of the mesoscale meteorological simulations, which improves with decreasing lead time to hurricane landfall. The ensemble envelope brackets the observed behavior while providing "best-case" and "worst-case" scenarios for the subsequent risk-based evacuation model. © 2018 Society for Risk Analysis.
An Updated Nuclear Equation of State for Neutron Stars and Supernova Simulations
NASA Astrophysics Data System (ADS)
Meixner, M. A.; Mathews, G. J.; Dalhed, H. E.; Lan, N. Q.
2011-10-01
We present an updated and improved equation of state (EoS) based upon the framework originally developed by Bowers & Wilson. The details of the EoS and its improvements are described, along with a description of how to access this EoS for numerical simulations. Among the improvements are an updated compressibility based upon recent measurements, the possibility of the formation of proton-excess (Ye > 0.5) material, and an improved treatment of the nuclear statistical equilibrium and the transition to pasta nuclei as the density approaches nuclear matter density. The possibility of a QCD chiral phase transition is also included at densities above nuclear matter density. We show comparisons of this EoS with the other two publicly available equations of state used in supernova collapse simulations. The advantages of the present EoS are that it is easily amenable to phenomenological parameterization to fit observed explosion properties and to accommodate new physical parameters.
NASA Astrophysics Data System (ADS)
Brill, Nicolai; Wirtz, Mathias; Merhof, Dorit; Tingart, Markus; Jahr, Holger; Truhn, Daniel; Schmitt, Robert; Nebelung, Sven
2016-07-01
Polarization-sensitive optical coherence tomography (PS-OCT) is a light-based, high-resolution, real-time, noninvasive, and nondestructive imaging modality yielding quasimicroscopic cross-sectional images of cartilage. As yet, comprehensive parameterization and quantification of birefringence and tissue properties have not been performed on human cartilage. PS-OCT and algorithm-based image analysis were used to objectively grade human cartilage degeneration in terms of surface irregularity, tissue homogeneity, signal attenuation, as well as birefringence coefficient and band width, height, depth, and number. Degeneration-dependent changes were noted for the former three parameters exclusively, thereby questioning the diagnostic value of PS-OCT in the assessment of human cartilage degeneration.
NASA Astrophysics Data System (ADS)
Savre, J.; Ekman, A. M. L.
2015-05-01
A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
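A rough sketch of the quasi Monte Carlo integration over a contact-angle PDF described above, using the classical-nucleation-theory compatibility factor; the rate constants, nondimensional energy barrier, and PDF parameters are invented for illustration and do not reproduce the paper's laboratory-constrained values:

```python
import numpy as np

def van_der_corput(n, base=2):
    """Low-discrepancy sequence in (0, 1), a simple quasi Monte Carlo point set."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def cnt_shape_factor(theta):
    """CNT compatibility function f(theta) = (2 + cos t)(1 - cos t)^2 / 4."""
    c = np.cos(theta)
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def frozen_fraction(theta_mean, theta_std, dg_over_kt=50.0, j0_a_t=1e6, n=512):
    """Average per-particle freezing probability 1 - exp(-J(theta) A t) over a
    Gaussian contact-angle PDF, evaluated at QMC nodes spread over (0, pi)."""
    theta = van_der_corput(n) * np.pi
    weight = np.exp(-0.5 * ((theta - theta_mean) / theta_std) ** 2)  # unnormalized PDF
    j_a_t = j0_a_t * np.exp(-dg_over_kt * cnt_shape_factor(theta))   # nucleation rate * area * time
    p_freeze = 1.0 - np.exp(-j_a_t)
    return np.sum(weight * p_freeze) / np.sum(weight)

# Lower mean contact angle -> more efficient ice nuclei -> larger frozen fraction.
print(frozen_fraction(0.8, 0.2), frozen_fraction(1.4, 0.2))
```

The paper's time-evolving θ PDF corresponds to reweighting this distribution each step as low-θ (efficient) particles freeze out, which is what prevents the overestimated nucleation rates seen with a static PDF.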
NASA Astrophysics Data System (ADS)
He, C.; Liou, K. N.; Takano, Y.; Yang, P.; Li, Q.; Chen, F.
2017-12-01
A set of parameterizations is developed for spectral single-scattering properties of clean and black carbon (BC)-contaminated snow based on geometric-optics surface-wave (GOS) computations, which explicitly resolve BC-snow internal mixing and various snow grain shapes. GOS calculations show that, compared with nonspherical grains, volume-equivalent snow spheres show up to 20% larger asymmetry factors and hence stronger forward scattering, particularly at wavelengths <1 μm. In contrast, snow grain sizes have a rather small impact on the asymmetry factor at wavelengths <1 μm, whereas size effects are important at longer wavelengths. The snow asymmetry factor is parameterized as a function of effective size, aspect ratio, and shape factor, and shows excellent agreement with GOS calculations. According to GOS calculations, the single-scattering coalbedo of pure snow is predominantly affected by grain sizes, rather than grain shapes, with higher values for larger grains. The snow single-scattering coalbedo is parameterized in terms of the effective size that combines shape and size effects, with an accuracy of >99%. Based on GOS calculations, BC-snow internal mixing enhances the snow single-scattering coalbedo at wavelengths <1 μm, but it does not alter the snow asymmetry factor. The BC-induced enhancement ratio of snow single-scattering coalbedo, independent of snow grain size and shape, is parameterized as a function of BC concentration with an accuracy of >99%. Overall, in addition to snow grain size, both BC-snow internal mixing and snow grain shape play critical roles in quantifying BC effects on snow optical properties. The present parameterizations can be conveniently applied to snow, land surface, and climate models including snowpack radiative transfer processes.
NASA Astrophysics Data System (ADS)
Hristova-Veleva, S.; Chao, Y.; Vane, D.; Lambrigtsen, B.; Li, P. P.; Knosp, B.; Vu, Q. A.; Su, H.; Dang, V.; Fovell, R.; Tanelli, S.; Garay, M.; Willis, J.; Poulsen, W.; Fishbein, E.; Ao, C. O.; Vazquez, J.; Park, K. J.; Callahan, P.; Marcus, S.; Haddad, Z.; Fetzer, E.; Kahn, R.
2007-12-01
In spite of recent improvements in hurricane track forecast accuracy, currently there are still many unanswered questions about the physical processes that determine hurricane genesis, intensity, track and impact on the large-scale environment. Furthermore, a significant amount of work remains to be done in validating hurricane forecast models, understanding their sensitivities and improving their parameterizations. None of this can be accomplished without a comprehensive set of multiparameter observations that are relevant to both the large-scale and the storm-scale processes in the atmosphere and in the ocean. To address this need, we have developed a prototype of a comprehensive hurricane information system of high-resolution satellite, airborne and in-situ observations and model outputs pertaining to: i) the thermodynamic and microphysical structure of the storms; ii) the air-sea interaction processes; iii) the larger-scale environment as depicted by the SST, ocean heat content and the aerosol loading of the environment. Our goal was to create a one-stop place to provide researchers with an extensive set of observed hurricane data, and their graphical representation, together with large-scale and convection-resolving model output, all organized in an easy way to determine when coincident observations from multiple instruments are available. Analysis tools will be developed in the next step. The analysis tools will be used to determine spatial, temporal and multiparameter covariances that are needed to evaluate model performance, provide information for data assimilation and characterize and compare observations from different platforms. We envision that the developed hurricane information system will help in the validation of hurricane models, in the systematic understanding of their sensitivities and in the improvement of the physical parameterizations employed by the models.
Furthermore, it will help in studying the physical processes that affect hurricane development and impact on the large-scale environment. This talk will describe the developed prototype of the hurricane information system. Furthermore, we will use a set of WRF hurricane simulations and compare simulated to observed structures to illustrate how the information system can be used to discriminate between simulations that employ different physical parameterizations. The work described here was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
A general science-based framework for dynamical spatio-temporal models
Wikle, C.K.; Hooten, M.B.
2010-01-01
Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially-explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal to some extent with this issue by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been in the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case.
We then develop a general nonlinear spatio-temporal framework that we call general quadratic nonlinearity and demonstrate that it accommodates many different classes of science-based parameterizations as special cases. The model is presented in a hierarchical Bayesian framework and is illustrated with examples from ecology and oceanography. © 2010 Sociedad de Estadística e Investigación Operativa.
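As a concrete illustration of the general quadratic nonlinearity (GQN) class, the following sketch simulates one realization of a low-dimensional GQN process: each component evolves through a linear transition plus pairwise quadratic interactions plus noise. The coefficient scales are arbitrary choices made to keep the simulation stable; in the hierarchical Bayesian framework described above, the linear and quadratic coefficients would instead be given science-based priors:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 4, 50

A = 0.8 * np.eye(n) + 0.05 * rng.standard_normal((n, n))   # linear transition operator
B = 0.01 * rng.standard_normal((n, n, n))                   # quadratic interaction coefficients

z = np.zeros((T, n))
z[0] = rng.standard_normal(n)
for t in range(1, T):
    # quad_i = sum_jk B[i, j, k] * z_j * z_k  (the "general quadratic" term)
    quad = np.einsum('ijk,j,k->i', B, z[t - 1], z[t - 1])
    z[t] = A @ z[t - 1] + quad + 0.05 * rng.standard_normal(n)

print(z.shape)  # (50, 4)
```

Classes such as discretized PDEs or IDEs arise as special cases by constraining which entries of A and B are nonzero, which is how scientific structure enters the parameterization.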
Simulation of semi-explicit mechanisms of SOA formation from glyoxal in a 3D model
NASA Astrophysics Data System (ADS)
Knote, C. J.; Hodzic, A.; Jimenez, J. L.; Volkamer, R.; Orlando, J. J.; Baidar, S.; Brioude, J. F.; Fast, J. D.; Gentner, D. R.; Goldstein, A. H.; Hayes, P. L.; Knighton, W. B.; Oetjen, H.; Setyan, A.; Stark, H.; Thalman, R. M.; Tyndall, G. S.; Washenfelder, R. A.; Waxman, E.; Zhang, Q.
2013-12-01
Formation of secondary organic aerosols (SOA) through multi-phase processing of glyoxal has been proposed recently as a relevant contributor to SOA mass. Glyoxal has both anthropogenic and biogenic sources, and readily partitions into the aqueous phase of cloud droplets and aerosols. Both reversible and irreversible chemistry in the liquid phase has been observed. A recent laboratory study indicates that the presence of salts in the liquid phase strongly enhances the Henry's law constant of glyoxal, allowing for much more effective multi-phase processing. In our work we investigate the contribution of glyoxal to SOA formation on the regional scale. We employ the regional chemistry transport model WRF-Chem with MOZART gas-phase chemistry and MOSAIC aerosols, both of which we extended to improve the description of glyoxal formation in the gas phase and its interactions with aerosols. The detailed description of aerosols in our setup allows us to compare very simple (uptake coefficient) parameterizations of SOA formation from glyoxal, as has been used in previous modeling studies, with much more detailed descriptions of the various pathways postulated based on laboratory studies. Measurements taken during the CARES and CalNex campaigns in California in summer 2010 allowed us to constrain the model, including the major direct precursors of glyoxal. Simulations at convection-permitting resolution over a 2-week period in June 2010 have been conducted to assess the effect of the different ways to parameterize SOA formation from glyoxal and investigate its regional variability. We find that depending on the parameterization used the contribution of glyoxal to SOA is between 1 and 15% in the LA basin during this period, and that simple parameterizations based on uptake coefficients derived from box model studies lead to higher contributions (15%) than parameterizations based on lab experiments (1%).
A kinetic limitation found in experiments hinders substantial contribution of volume-based pathways to total SOA formation from glyoxal. Once this limitation is removed, 5% of total SOA can be formed from glyoxal through these channels. Results from a year-long simulation over the continental US will give a broader picture of the contribution of glyoxal to SOA formation.
Uncertainty analysis of signal deconvolution using a measured instrument response function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartouni, E. P.; Beeman, B.; Caggiano, J. A.
2016-10-05
A common analysis procedure maximizes the ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). Here, for the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers, the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to the uncertainty estimate of the physical model's parameters. Finally, we apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.
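The procedure described, a forward model built as a physics signal convolved with a measured IRF and fit by likelihood optimization, can be sketched as follows. The Gaussian burst, exponential IRF, noise level, and brute-force grid search are all hypothetical stand-ins for the NIF nTOF specifics:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)

def physics(width, t0=4.0):
    """Hypothetical underlying signal: a Gaussian burst of unknown width."""
    return np.exp(-0.5 * ((t - t0) / width) ** 2)

irf = np.exp(-t / 0.5)
irf /= irf.sum()                                    # measured-IRF stand-in, normalized

def forward(width):
    """Model of the observable: physics signal convolved with the IRF."""
    return np.convolve(physics(width), irf)[: t.size]

truth, sigma = 0.8, 0.02
data = forward(truth) + rng.normal(0.0, sigma, t.size)

def neg_ln_like(width):
    """Gaussian-noise negative ln-likelihood (up to an additive constant)."""
    r = data - forward(width)
    return 0.5 * np.sum((r / sigma) ** 2)

widths = np.linspace(0.4, 1.2, 161)
best = widths[np.argmin([neg_ln_like(w) for w in widths])]
print(best)   # close to the true width 0.8
```

Propagating the IRF's own measurement uncertainty, the paper's point, would mean treating the IRF samples as additional uncertain quantities in the likelihood rather than fixing them as done here.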
A comparison study of two snow models using data from different Alpine sites
NASA Astrophysics Data System (ADS)
Piazzi, Gaia; Riboust, Philippe; Campo, Lorenzo; Cremonese, Edoardo; Gabellani, Simone; Le Moine, Nicolas; Morra di Cella, Umberto; Ribstein, Pierre; Thirel, Guillaume
2017-04-01
The hydrological balance of an Alpine catchment is strongly affected by snowpack dynamics. Melt water supplies a significant component of the annual water budget, both in terms of soil moisture and runoff, which play a critical role in flood generation and impact water resource management in snow-dominated basins. Several snow models have been developed with variable degrees of complexity, mainly depending on their target application and the availability of computational resources and data. According to the level of detail, snow models range from statistical snowmelt-runoff and degree-day methods using composite snow-soil or explicit snow layer(s), to physically based and energy balance snow models, consisting of detailed internal snow-process schemes. Intermediate-complexity approaches have been widely developed, resulting in simplified versions of the physical parameterization schemes with reduced snowpack layering. Nevertheless, increasing model complexity does not necessarily entail improved model simulations. This study presents a comparison analysis between two snow models designed for hydrological purposes. The snow module developed at UPMC and IRSTEA is a mono-layer energy balance model analytically resolving heat and phase change equations within the snowpack. Vertical mass exchange within the snowpack is also analytically resolved. The model is intended to be used for hydrological studies but also to give a realistic estimation of the snowpack state at watershed scale (SWE and snow depth). The structure of the model allows it to be easily calibrated using snow observations. This model is further presented in EGU2017-7492. The snow module of SMASH (Snow Multidata Assimilation System for Hydrology) consists of a multi-layer snow dynamic scheme.
It is physically based on mass and energy balances and it reproduces the main physical processes occurring within the snowpack: accumulation, density dynamics, melting, sublimation, radiative balance, heat and mass exchanges. The model is driven by observed meteorological forcing data (air temperature, wind velocity, relative air humidity, precipitation and incident solar radiation) to provide an estimation of the snowpack state. In this study, no data assimilation (DA) is used. For more details on the DA scheme, please see EGU2017-7777. Observed data supplied by meteorological stations located in three experimental Alpine sites are used: Col de Porte (1325 m, France); Torgnon (2160 m, Italy); Weissfluhjoch (2540 m, Switzerland). The performances of the two models are compared through evaluations of snow mass, snow depth, albedo and surface temperature simulations in order to better understand and pinpoint the limits and potentialities of the analyzed schemes and the impact of different parameterizations on model simulations.
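At the simplest end of the complexity range mentioned above (statistical degree-day methods through multi-layer energy-balance schemes) sits the degree-day model. A minimal, hypothetical implementation follows; the melt factor value is illustrative, not calibrated to Col de Porte, Torgnon, or Weissfluhjoch:

```python
def degree_day_swe(temps_c, precip_mm, ddf=3.0, t_melt=0.0):
    """Minimal degree-day snowpack model.

    ddf is a hypothetical melt factor (mm per degC per day); precipitation
    falling at or below t_melt accumulates as snow water equivalent (SWE).
    """
    swe, series = 0.0, []
    for temp, precip in zip(temps_c, precip_mm):
        if temp <= t_melt:
            swe += precip                               # snowfall accumulates
        melt = min(swe, ddf * max(temp - t_melt, 0.0))  # melt capped by available SWE
        swe -= melt
        series.append(swe)
    return series

print(degree_day_swe([-2.0, -1.0, 2.0, 5.0], [10.0, 5.0, 0.0, 0.0]))  # -> [10.0, 15.0, 9.0, 0.0]
```

The energy-balance models compared in the study replace the single empirical melt factor with explicit radiative, turbulent, and conductive flux terms, which is precisely the complexity trade-off the abstract discusses.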
Natural ocean carbon cycle sensitivity to parameterizations of the recycling in a climate model
NASA Astrophysics Data System (ADS)
Romanou, A.; Romanski, J.; Gregg, W. W.
2014-02-01
Sensitivities of the oceanic biological pump within the GISS (Goddard Institute for Space Studies) climate modeling system are explored here. Results are presented from twin control simulations of the air-sea CO2 gas exchange using two different ocean models coupled to the same atmosphere. The two ocean models (Russell ocean model and Hybrid Coordinate Ocean Model, HYCOM) use different vertical coordinate systems, and therefore different representations of column physics. Both variants of the GISS climate model are coupled to the same ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM), which computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. In particular, the model differences due to remineralization rate changes are compared to differences attributed to physical processes modeled differently in the two ocean models such as ventilation, mixing, eddy stirring and vertical advection. GISSEH (GISSER) is found to underestimate mixed layer depth compared to observations by about 55% (10%) in the Southern Ocean and overestimate it by about 17% (underestimate by 2%) in the northern high latitudes. Everywhere else in the global ocean, the two models underestimate the surface mixing by about 12-34%, which prevents deep nutrients from reaching the surface and promoting primary production there. Consequently, carbon export is reduced because of reduced production at the surface. Furthermore, carbon export is particularly sensitive to remineralization rate changes in the frontal regions of the subtropical gyres and at the Equator and this sensitivity in the model is much higher than the sensitivity to physical processes such as vertical mixing, vertical advection and mesoscale eddy transport. 
At depth, GISSER, which has a significant warm bias, remineralizes nutrients and carbon faster, thereby producing more nutrients and carbon at depth, which eventually resurface with the global thermohaline circulation, especially in the Southern Ocean. Because of the reduced primary production and carbon export in GISSEH compared to GISSER, the biological pump efficiency, i.e., the ratio of primary production and carbon export at 75 m, in GISSEH is half that in GISSER. The Southern Ocean emerges as a key region where the CO2 flux is as sensitive to biological parameterizations as it is to physical parameterizations. The fidelity of ocean mixing in the Southern Ocean compared to observations is shown to be a good indicator of the magnitude of the biological pump efficiency regardless of physical model choice.
Importance of Chemical Composition of Ice Nuclei on the Formation of Arctic Ice Clouds
NASA Astrophysics Data System (ADS)
Keita, Setigui Aboubacar; Girard, Eric
2016-09-01
Ice clouds play an important role in the Arctic weather and climate system but interactions between aerosols, clouds and radiation remain poorly understood. Consequently, it is essential to fully understand their properties and especially their formation process. Extensive measurements from ground-based sites and satellite remote sensing reveal the existence of two Types of Ice Clouds (TICs) in the Arctic during the polar night and early spring. TICs-1 are composed of non-precipitating small (radar-unseen) ice crystals of less than 30 μm in diameter. The second type, TICs-2, are detected by radar and are characterized by a low concentration of large precipitating ice crystals (>30 μm). To explain these differences, we hypothesized that TIC-2 formation is linked to the acidification of aerosols, which inhibits the ice nucleating properties of ice nuclei (IN). As a result, the IN concentration is reduced in these regions, resulting in a lower concentration of ice crystals. Water vapor available for deposition being the same, these crystals reach a larger size. Current weather and climate models cannot simulate these different types of ice clouds. This problem is partly due to the parameterizations implemented for ice nucleation. Over the past 10 years, several parameterizations of homogeneous and heterogeneous ice nucleation on IN of different chemical compositions have been developed. These parameterizations are based on two approaches: stochastic (that is, nucleation is a probabilistic process that is time-dependent) and singular (that is, nucleation occurs at fixed conditions of temperature and humidity and is time-independent). The best approach remains unclear. This research aims to better understand the formation process of Arctic TICs using recently developed ice nucleation parameterizations. 
For this purpose, we have implemented these ice nucleation parameterizations into the Limited Area version of the Global Multiscale Environmental Model (GEM-LAM) and use them to simulate ice clouds observed during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) in Alaska. Simulation results of the TICs-2 observed on April 15th and 25th (acidic cases) and TICs-1 observed on April 5th (non-acidic cases) are presented. Our results show that the stochastic approach based on the classical nucleation theory with the appropriate contact angle is better. Parameterizations of ice nucleation based on the singular approach tend to overestimate the ice crystal concentration in TICs-1 and TICs-2. The classical nucleation theory using the appropriate contact angle is the best approach to use to simulate the ice clouds investigated in this research.
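The stochastic approach favored in the abstract rests on classical nucleation theory (CNT), where the contact angle reduces the homogeneous nucleation energy barrier through a standard geometric factor. The sketch below shows that factor and its effect on the nucleation rate; the prefactor and barrier height are illustrative assumptions, not values from the study.

```python
import math

def contact_angle_factor(theta_deg):
    """CNT geometric factor f(t) = (2 + cos t)(1 - cos t)^2 / 4.
    f -> 0 for a perfect ice nucleus (t = 0) and f -> 1 for an inert
    surface (t = 180 deg), i.e. the homogeneous limit."""
    c = math.cos(math.radians(theta_deg))
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def het_nucleation_rate(theta_deg, dG_over_kT, prefactor=1.0e25):
    """Heterogeneous nucleation rate per unit IN surface,
    J = A * exp(-dG_hom * f(theta) / kT).  Prefactor and barrier
    height here are illustrative, not values from the study."""
    return prefactor * math.exp(-dG_over_kT * contact_angle_factor(theta_deg))

# A small contact angle (e.g. an uncoated dust particle) nucleates ice far
# more efficiently than a larger one (e.g. an acid-coated particle),
# consistent with the acidification hypothesis above.
j_clean = het_nucleation_rate(12.0, dG_over_kT=70.0)
j_acid = het_nucleation_rate(26.0, dG_over_kT=70.0)
```

The exponential sensitivity to the contact angle is why choosing "the appropriate contact angle" matters so much for simulating TICs-1 versus TICs-2.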
Exploring New Pathways in Precipitation Assimilation
NASA Technical Reports Server (NTRS)
Hou, Arthur; Zhang, Sara Q.
2004-01-01
Precipitation assimilation poses a special challenge in that the forward model for rain in a global forecast system is based on parameterized physics, which can have large systematic errors that must be rectified to use precipitation data effectively within a standard statistical analysis framework. We examine some key issues in precipitation assimilation and describe several exploratory studies in assimilating rainfall and latent heating information in NASA's global data assimilation systems using the forecast model as a weak constraint. We present results from two research activities. The first is the assimilation of surface rainfall data using a time-continuous variational assimilation based on a column model of the full moist physics. The second is the assimilation of convective and stratiform latent heating retrievals from microwave sensors using a variational technique with physical parameters in the moist physics schemes as a control variable. We will show the impact of assimilating these data on analyses and forecasts. Among the lessons learned are (1) that the time-continuous application of moisture/temperature tendency corrections to mitigate model deficiencies offers an effective strategy for assimilating precipitation information, and (2) that the model prognostic variables must be allowed to directly respond to an improved rain and latent heating field within an analysis cycle to reap the full benefit of assimilating precipitation information. Looking to the future, we discuss new research directions, including the assimilation of microwave radiances versus retrieval information in raining areas, and initial efforts in developing ensemble techniques such as the Kalman filter/smoother for precipitation assimilation.
Preferential flow across scales: how important are plot scale processes for a catchment scale model?
NASA Astrophysics Data System (ADS)
Glaser, Barbara; Jackisch, Conrad; Hopp, Luisa; Klaus, Julian
2017-04-01
Numerous experimental studies have shown the importance of preferential flow for solute transport and runoff generation. As a consequence, various approaches exist to incorporate preferential flow in hydrological models. However, few studies have applied models that incorporate preferential flow at the hillslope scale, and even fewer at the catchment scale. Certainly, one main difficulty for progress is the determination of an adequate parameterization for preferential flow at these spatial scales. This study applies a 3D physically based model (HydroGeoSphere) of a headwater region (6 ha) of the Weierbach catchment (Luxembourg). The base model was implemented without preferential flow and was limited in simulating fast catchment responses. Thus we hypothesized that the discharge performance can be improved by utilizing a dual permeability approach for a representation of preferential flow. We used the information of bromide irrigation experiments performed on three 1 m² plots to parameterize preferential flow. In a first step we ran 20,000 Monte Carlo simulations of these irrigation experiments in a 1 m² column of the headwater catchment model, varying the dual permeability parameters (15 variable parameters). These simulations identified many equifinal, yet very different parameter sets that reproduced the bromide depth profiles well. Therefore, in the next step we chose 52 parameter sets (the 40 best and 12 low-performing sets) for testing the effect of incorporating preferential flow in the headwater catchment scale model. The variability of the flow pattern responses at the headwater catchment scale was small between the different parameterizations and did not coincide with the variability at the plot scale. The simulated discharge time series of the different parameterizations clustered in six groups of similar response, ranging from nearly unaffected to completely changed responses compared to the base case model without dual permeability.
Yet in none of the groups did the simulated discharge response clearly improve compared to the base case. The same held true for some observed soil moisture time series, although at the plot scale the incorporation of preferential flow was necessary to simulate the irrigation experiments correctly. These results rejected our hypothesis and open a discussion on how important plot scale processes and heterogeneities are at the catchment scale. Our preliminary conclusion is that vertical preferential flow is important for the irrigation experiments at the plot scale, while discharge generation at the catchment scale is largely controlled by lateral preferential flow. The lateral component, however, was already considered in the base case model through different hydraulic conductivities in different soil layers. This can explain why the internal behavior of the model at single spots seems not to be relevant for the overall hydrometric catchment response. Nonetheless, the inclusion of vertical preferential flow improved the realism of the internal processes of the model (fitting profiles at the plot scale, unchanged response at the catchment scale) and should be considered depending on the intended use of the model. Furthermore, we cannot yet exclude with certainty that the quantitative discharge performance at the catchment scale can be improved by utilizing a dual permeability approach; this will be tested in a parameter optimization process.
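The plot-scale Monte Carlo step described above can be sketched as a simple parameter sampler. The parameter names, bounds, and reduced count of four are hypothetical stand-ins, not the actual 15 HydroGeoSphere dual-permeability parameters; in the study, each sampled set would drive one 1 m² column simulation scored against the observed bromide depth profile.

```python
import math
import random

# Hypothetical dual-permeability parameters (names and bounds illustrative).
PARAM_RANGES = {
    "K_matrix":   (1e-7, 1e-5),   # matrix hydraulic conductivity [m/s]
    "K_fracture": (1e-4, 1e-2),   # preferential-flow domain conductivity [m/s]
    "w_fracture": (0.01, 0.10),   # volume fraction of the fast domain [-]
    "alpha_ex":   (1e-6, 1e-3),   # matrix-fracture exchange coefficient [1/s]
}

def sample_parameter_set(rng):
    """Draw one parameter set: log-uniform for rate-like parameters,
    uniform for the volume fraction."""
    s = {}
    for name, (lo, hi) in PARAM_RANGES.items():
        if name == "w_fracture":
            s[name] = rng.uniform(lo, hi)
        else:
            s[name] = 10.0 ** rng.uniform(math.log10(lo), math.log10(hi))
    return s

# 20,000 candidate sets, mirroring the Monte Carlo ensemble size above.
rng = random.Random(42)
ensemble = [sample_parameter_set(rng) for _ in range(20000)]
```

Equifinality then corresponds to many very different members of `ensemble` reproducing the depth profile equally well.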
Saenz, Juan A.; Chen, Qingshan; Ringler, Todd
2015-05-19
Recent work has shown that taking the thickness-weighted average (TWA) of the Boussinesq equations in buoyancy coordinates results in exact equations governing the prognostic residual mean flow where eddy–mean flow interactions appear in the horizontal momentum equations as the divergence of the Eliassen–Palm flux tensor (EPFT). It has been proposed that, given the mathematical tractability of the TWA equations, the physical interpretation of the EPFT, and its relation to potential vorticity fluxes, the TWA is an appropriate framework for modeling ocean circulation with parameterized eddies. The authors test the feasibility of this proposition and investigate the connections between the TWA framework and the conventional framework used in models, where Eulerian mean flow prognostic variables are solved for. Using the TWA framework as a starting point, this study explores the well-known connections between vertical transfer of horizontal momentum by eddy form drag and eddy overturning by the bolus velocity, used by Greatbatch and Lamb and Gent and McWilliams to parameterize eddies. After implementing the TWA framework in an ocean general circulation model, we verify our analysis by comparing the flows in an idealized Southern Ocean configuration simulated using the TWA and conventional frameworks with the same mesoscale eddy parameterization.
Implementing a warm cloud microphysics parameterization for convective clouds in NCAR CESM
NASA Astrophysics Data System (ADS)
Shiu, C.; Chen, Y.; Chen, W.; Li, J. F.; Tsai, I.; Chen, J.; Hsu, H.
2013-12-01
Most cumulus convection schemes use simple empirical approaches to convert cloud liquid mass to rain water or cloud ice to snow, e.g., using a constant autoconversion rate and dividing cloud liquid mass into cloud water and ice as a function of air temperature (e.g., the Zhang and McFarlane scheme in the NCAR CAM model). Few studies have tried to use cloud microphysical schemes to better simulate such precipitation processes in the convective schemes of global models (e.g., Lohmann [2008] and Song, Zhang, and Li [2012]). A two-moment warm cloud parameterization (Chen and Liu [2004]) is implemented into the deep convection scheme of CAM5.2 of the CESM model for treatment of the conversion of cloud liquid water to rain water. Short-term AMIP-type global simulations are conducted to evaluate the possible impacts of this modified physical parameterization. Simulated results are further compared to observational results from the AMWG diagnostic package and CloudSat data sets. Several sensitivity tests regarding changes in cloud top droplet concentration (here a rough proxy for aerosol indirect effects) and changes in the detrained cloud size of convective cloud ice are also carried out to understand their possible impacts on the cloud and precipitation simulations.
Pattanayak, Sujata; Mohanty, U C; Osuri, Krishna K
2012-01-01
The present study investigates the performance of different cumulus convection, planetary boundary layer (PBL), land surface process, and microphysics parameterization schemes in the simulation of very severe cyclonic storm (VSCS) Nargis (2008), which developed in the central Bay of Bengal on 27 April 2008. For this purpose, the nonhydrostatic mesoscale model (NMM) dynamic core of the weather research and forecasting (WRF) system is used. Model-simulated track positions and intensity in terms of minimum central mean sea level pressure (MSLP), maximum surface wind (10 m), and precipitation are verified against observations provided by the India Meteorological Department (IMD) and the Tropical Rainfall Measurement Mission (TRMM). The estimated optimum combination is reinvestigated with six different initial conditions of the same case to draw firmer conclusions about the performance of WRF-NMM. A few more diagnostic fields, such as vertical velocity, vorticity, and heat fluxes, are also evaluated. The results indicate that cumulus convection plays an important role in the movement of the cyclone, and the PBL has a crucial role in the intensification of the storm. The combination of Simplified Arakawa Schubert (SAS) convection, Yonsei University (YSU) PBL, NMM land surface, and Ferrier microphysics parameterization schemes in WRF-NMM gives the best track and intensity forecast, with minimum vector displacement error.
NASA Astrophysics Data System (ADS)
Boone, Aaron; Samuelsson, Patrick; Gollvik, Stefan; Napoly, Adrien; Jarlan, Lionel; Brun, Eric; Decharme, Bertrand
2017-02-01
Land surface models (LSMs) are pushing towards improved realism owing to an increasing number of observations at the local scale, constantly improving satellite data sets and the associated methodologies to best exploit such data, improved computing resources, and in response to the user community. As a part of this trend in LSM development, there have been ongoing efforts to improve the representation of land surface processes in the interactions between the soil-biosphere-atmosphere (ISBA) LSM within the EXternalized SURFace (SURFEX) model platform. The force-restore approach in ISBA has been replaced in recent years by multi-layer explicit physically based options for sub-surface heat transfer, soil hydrological processes, and the composite snowpack. The representation of vegetation processes in SURFEX has also become much more sophisticated in recent years, including photosynthesis, respiration, and biochemical processes. It became clear that the conceptual limits of the composite soil-vegetation scheme within ISBA had been reached and that there was a need to explicitly separate the canopy vegetation from the soil surface. In response to this issue, a collaboration began in 2008 between the high-resolution limited area model (HIRLAM) consortium and Météo-France with the intention of developing an explicit representation of the vegetation in ISBA under the SURFEX platform. A new parameterization, called the ISBA multi-energy balance (MEB), has been developed in order to address these issues. ISBA-MEB consists of a fully implicit numerical coupling between a multi-layer physically based snowpack model, a variable-layer soil scheme, an explicit litter layer, a bulk vegetation scheme, and the atmosphere. It also includes a feature that permits a coupling transition of the snowpack from the canopy air to the free atmosphere. It shares many of its routines and physics parameterizations with the standard version of ISBA.
This paper is the first of two parts; in part one, the ISBA-MEB model equations, numerical schemes, and theoretical background are presented. In part two (Napoly et al., 2016), which is a separate companion paper, a local scale evaluation of the new scheme is presented along with a detailed description of the new forest litter scheme.
Optical Extinction and Aerosol Hygroscopicity in the Southeastern United States
NASA Astrophysics Data System (ADS)
Brock, C. A.; Gordon, T.; Wagner, N.; Lack, D. A.; Richardson, M.; Middlebrook, A. M.; Liao, J.; Murphy, D. M.; Attwood, A. R.; Washenfelder, R. A.; Campuzano Jost, P.; Day, D. A.; Jimenez, J. L.; Carlton, A. M. G.
2015-12-01
Most aerosol particles take up water and grow as relative humidity increases, leading to increased optical extinction, reduced visibility, greater aerosol optical depths (AODs), and altered radiative forcing, even while dry particulate mass remains constant. Relative humidity varies greatly temporally, horizontally, and especially vertically. Thus hygroscopicity is a confounding factor when attempting to link satellite-based observations of AOD to surface measurements of particulate mass or to model predictions of aerosol mass concentrations. Airborne observations of aerosol optical, chemical, and microphysical properties were made in the southeastern United States in the daytime in summer 2013 during the NOAA SENEX and NASA SEAC4RS projects. Applying κ-Köhler theory for hygroscopic growth to these data, the inferred hygroscopicity parameter κ for the organic fraction of the aerosol was <0.11. This κ for organics is toward the lower end of values found from laboratory studies of the aerosol formed from oxidation of biogenic precursors and from several field studies in rural environments. The gamma (γ) parameterization is commonly used to describe the change in aerosol extinction as a function of relative humidity. Because this formulation did not fit the airborne data well, a new parameterization was developed that better describes the observations. This new single-parameter κext formulation is physically based and relies upon the well-known approximately linear relationship between particle volume and optical extinction. The fitted parameter, κext, is nonlinearly related to the chemically derived κ parameter used in κ-Köhler theory. The values of κext determined from the airborne measurements are consistent with independent observations at a nearby ground site.
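The two humidification-factor forms contrasted in the abstract can be written in a few lines. The γ form is the standard one; the single-parameter form below follows the abstract's stated logic (extinction scales near-linearly with wet particle volume, and κ-Köhler-type water uptake grows volume as RH/(1-RH)), but treat its exact expression as an assumption of this sketch rather than the paper's final fitted formulation.

```python
def f_gamma(rh, gamma, rh_ref=0.0):
    """Traditional gamma humidification factor for extinction:
    f(RH) = ((1 - RH_ref) / (1 - RH)) ** gamma."""
    return ((1.0 - rh_ref) / (1.0 - rh)) ** gamma

def f_kappa_ext(rh, kappa_ext):
    """Single-parameter form in the spirit of the abstract's kappa_ext
    formulation (exact expression assumed here): extinction scales with
    wet particle volume, and kappa-Kohler-type water uptake gives
    V_wet / V_dry ~ 1 + kappa * RH / (1 - RH)."""
    return 1.0 + kappa_ext * rh / (1.0 - rh)

# At RH = 0.90 the two forms give comparable enhancements for
# illustrative parameter values (gamma = 0.6, kappa_ext = 0.3):
print(f_gamma(0.90, 0.60), f_kappa_ext(0.90, 0.30))
```

Both functions diverge as RH approaches 1, which is why ambient relative humidity so strongly confounds comparisons of AOD with dry particulate mass.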
Nonnegative definite EAP and ODF estimation via a unified multi-shell HARDI reconstruction.
Cheng, Jian; Jiang, Tianzi; Deriche, Rachid
2012-01-01
In High Angular Resolution Diffusion Imaging (HARDI), the Orientation Distribution Function (ODF) and the Ensemble Average Propagator (EAP) are two important Probability Density Functions (PDFs) which reflect water diffusion and fiber orientations. Spherical Polar Fourier Imaging (SPFI) is a recent model-free multi-shell HARDI method which estimates both the EAP and the ODF from diffusion signals with multiple b values. As physical PDFs, ODFs and EAPs are nonnegative definite in their respective domains S2 and R3. However, existing ODF/EAP estimation methods like SPFI seldom consider this natural constraint. Although some works considered the nonnegativity constraint on given discrete samples of the ODF/EAP, the estimated ODF/EAP is not guaranteed to be nonnegative definite in the whole continuous domain. The Riemannian framework for ODFs and EAPs has been proposed via the square root parameterization based on ODFs and EAPs pre-estimated by other methods like SPFI. However, there is no work on how to estimate the square root of the ODF/EAP, called the wavefunction, directly from diffusion signals. In this paper, based on the Riemannian framework for ODFs/EAPs and the Spherical Polar Fourier (SPF) basis representation, we propose a unified model-free multi-shell HARDI method, named Square Root Parameterized Estimation (SRPE), to simultaneously estimate both the wavefunction of the EAPs and the nonnegative definite ODFs and EAPs from diffusion signals. Experiments on synthetic and real data showed that SRPE is more robust to noise and has better EAP reconstruction than SPFI, especially for EAP profiles at large radius.
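The core idea of the square-root (wavefunction) parameterization can be shown in a few lines: squaring a normalized but otherwise unconstrained real coefficient vector yields a discretized PDF that is nonnegative everywhere and integrates to one by construction. The equal-area sampling below is a toy stand-in for the SPF basis, not the paper's actual representation.

```python
import numpy as np

def wavefunction_to_pdf(c, weights):
    """Square-root parameterization: for ANY real coefficient vector c,
    psi = c / ||c||_w and p = psi**2 is nonnegative everywhere and
    integrates to one under the quadrature weights -- the constraint
    SRPE builds in by estimating the wavefunction directly."""
    psi = c / np.sqrt(np.sum(weights * c**2))
    return psi**2

# Toy stand-in for an ODF on the sphere: 64 equal-area samples.
weights = np.full(64, 4.0 * np.pi / 64.0)
c = np.random.default_rng(0).standard_normal(64)   # unconstrained coefficients
p = wavefunction_to_pdf(c, weights)
```

Because nonnegativity holds for every admissible coefficient vector, the estimator can search an unconstrained space while the reconstructed PDF is guaranteed valid on the whole domain, not just at sample points.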
Gerber, Stefan; Brookshire, E N Jack
2014-03-01
Nutrient limitation in terrestrial ecosystems is often accompanied by a nearly closed vegetation-soil nutrient cycle. The ability to retain nutrients in an ecosystem requires the capacity of the plant-soil system to draw down nutrient levels in soils effectively, such that export concentrations in soil solutions remain low. Here we address the physical constraints on plant nutrient uptake that may be imposed by the diffusive movement of nutrients in soils, by uptake at the root/mycorrhizal surface, and by interactions with soil water flow. We derive an analytical framework of soil nutrient transport and uptake and predict levels of plant-available nutrient concentration and residence time. Our results, which we evaluate for nitrogen, show that the physical environment permits plants to lower soil solute concentrations substantially. Our analysis confirms that plant uptake capacities in soils are considerable, such that water movement in soils is generally too small to significantly erode dissolved plant-available nitrogen. Inorganic nitrogen concentrations in headwater streams are congruent with the predictions of our theoretical framework. Our framework offers a physically based parameterization of nutrient uptake in ecosystem models and has the potential to serve as an important tool toward scaling biogeochemical cycles from individual roots to landscapes.
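As a hedged illustration of the argument, a well-mixed, first-order nutrient balance (a crude sketch, not the authors' full transport-uptake framework) already shows why typical uptake rates dominate leaching losses for plausible magnitudes.

```python
def steady_state_nutrient(supply, k_uptake, q, theta, depth):
    """Well-mixed first-order sketch (NOT the authors' framework):
    supply S is balanced by plant/mycorrhizal uptake (rate k_up) and
    leaching at rate q / (theta * z), where q is drainage flux, theta
    volumetric water content, z root-zone depth.  Returns the steady
    concentration, residence time, and fraction of supply leached."""
    k_leach = q / (theta * depth)
    k_total = k_uptake + k_leach
    return supply / k_total, 1.0 / k_total, k_leach / k_total

# Illustrative magnitudes: uptake timescale ~1 day, drainage ~1 m/yr
# through a 0.5 m root zone at theta = 0.3.
conc, tau, f_leached = steady_state_nutrient(
    supply=1.0e-6,
    k_uptake=1.0 / 86400.0,     # [1/s]
    q=1.0 / 3.15e7,             # 1 m/yr expressed in m/s
    theta=0.3,
    depth=0.5)
# f_leached comes out around 2%: uptake capacity dominates, so water
# movement is too slow to significantly erode plant-available nitrogen.
```

The leached fraction rises only when uptake slows or drainage is very fast, which is consistent with the low export concentrations the abstract reports for headwater streams.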
NASA Astrophysics Data System (ADS)
Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.
2009-10-01
A method is presented to parameterize, in large-scale models, the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, operating via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact of aircraft NOx emissions on atmospheric ozone are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the North Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization of transporting emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during plume dissipation.
Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.
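A toy version of the fuel-tracer idea described above can be written as a two-box scheme (all rates and values illustrative, not the paper's coefficients): emissions enter a plume reservoir, dilute to the grid scale with a characteristic lifetime, and a fixed fraction is converted in-plume before release.

```python
def step_plume_nox(P, NOx, E, tau, beta, dt):
    """One explicit Euler step of a toy plume-tracer scheme.
    P    : NOx still held in concentrated plume form
    NOx  : grid-scale (diluted) NOx
    E    : emission rate into plumes
    tau  : plume lifetime before dilution to the grid scale
    beta : fraction converted in-plume to reservoir species (e.g. HNO3),
           so only (1 - beta) of the released mass reaches grid-scale NOx.
    Total mass (plume + released, converted or not) is conserved."""
    release = P / tau
    P_new = P + dt * (E - release)
    NOx_new = NOx + dt * (1.0 - beta) * release
    return P_new, NOx_new

# Spin up to steady state: the plume burden approaches E * tau, and the
# effective grid-scale NOx source is reduced by the in-plume conversion.
P, NOx = 0.0, 0.0
E, tau, beta, dt = 1.0e-12, 3600.0, 0.3, 60.0
for _ in range(1000):
    P, NOx = step_plume_nox(P, NOx, E, tau, beta, dt)
```

Because the plume tracer is an ordinary advected field, the model dynamics can transport emissions in concentrated form before dilution, which is the mechanism behind the regional differences the abstract notes.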
NASA Astrophysics Data System (ADS)
Stachura, M.; Herzfeld, U. C.; McDonald, B.; Weltman, A.; Hale, G.; Trantow, T.
2012-12-01
The dynamical processes that occur during the surge of a large, complex glacier system are far from being understood. The aim of this paper is to derive a parameterization of surge characteristics that captures the principal processes and can serve as the basis for a dynamic surge model. Innovative mathematical methods are introduced that facilitate the derivation of such a parameterization from remote-sensing observations. Methods include automated geostatistical characterization and connectionist-geostatistical classification of dynamic provinces and deformation states, using the vehicle of crevasse patterns. These methods are applied to analyze satellite and airborne image and laser altimeter data collected during the current surge of Bering Glacier and Bagley Ice Field, Alaska.
Strong parameterization and coordination encirclements of graph of Penrose tiling vertices
NASA Astrophysics Data System (ADS)
Shutov, A. V.; Maleev, A. V.
2017-07-01
The coordination encirclements in the graph of Penrose tiling vertices have been investigated based on an analysis of vertex parameters. A strong parameterization of these vertices is developed in the form of a tiling of the parameter set into regions corresponding to different first coordination encirclements of the vertices. An algorithm is proposed for constructing the tilings of the parameter set that determine the different coordination encirclements of order n in the graph of Penrose tiling vertices.
NASA Astrophysics Data System (ADS)
Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2017-11-01
Transient hydraulic tomography (THT) is a robust method of aquifer characterization for estimating the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information are calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that the highly parameterized geostatistical models perform best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
Walder, J.S.
1997-01-01
We analyse a simple, physically based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D³)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r, and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
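The controlling dimensionless parameter described above, the product of a dimensionless lake volume (V/D³) and a dimensionless erosion rate (k/√(gD)), can be computed directly. The lake and downcutting values below are purely illustrative.

```python
import math

def breach_eta(V, D, k, g=9.81):
    """Dimensionless control parameter from the abstract:
    eta = (V / D**3) * (k / sqrt(g * D)),
    a dimensionless lake volume times a dimensionless erosion rate."""
    return (V / D**3) * (k / math.sqrt(g * D))

# Hypothetical case: V = 5e6 m^3, D = 20 m, mean downcutting rate
# k = 1 m/h.  Peak discharge Qp takes asymptotically different forms
# in the small-eta and large-eta limits.
eta = breach_eta(V=5e6, D=20.0, k=1.0 / 3600.0)
```

Note that k enters only through the dimensionless group, so uncertainty in the observed downcutting rate maps directly onto uncertainty in η and hence in Qp.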
NASA Astrophysics Data System (ADS)
Piskozub, Jacek; Wróbel, Iwona
2016-04-01
The North Atlantic is a crucial region for both ocean circulation and the carbon cycle. Most of the ocean's deep water is produced in the basin, making it a large CO2 sink. The region, close to the major oceanographic centres, has been well covered by cruises. This is why we have performed a study of the dependence of net CO2 flux upon the choice of gas transfer velocity (k) parameterization for this very region: the North Atlantic, including the European Arctic Seas. The study has been part of the ESA-funded OceanFlux GHG Evolution project and, at the same time, of a PhD thesis (of I.W.) funded by the Centre of Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). Early results were presented last year at EGU 2015 as PICO presentation EGU2015-11206-1. We have used FluxEngine, a tool created within an earlier ESA-funded project (OceanFlux Greenhouse Gases), to calculate the North Atlantic and global fluxes with different gas transfer velocity formulas. During the processing of the data, we noticed that the North Atlantic results for different k formulas are more similar (in the sense of relative error) than the global ones. This was true both for parameterizations using the same power of wind speed and when comparing wind-squared and wind-cubed parameterizations. This result was interesting because North Atlantic winds are stronger than the global average. Was the similarity of the flux results caused by the fact that the parameterizations were tuned to the North Atlantic, where many of the early cruises measuring CO2 fugacities were performed? A closer look at the parameterizations and their history showed that not all of them were based on North Atlantic data. Some were tuned to the Southern Ocean, with even stronger winds, while some were based on global budgets of 14C. However, we have found two reasons, not reported before in the literature, for North Atlantic fluxes being more similar than global ones across different gas transfer velocity parameterizations.
The first is the fact that most of the k functions intersect close to 9 m/s, a typical North Atlantic wind speed. The squared and cubed functions need to intersect in order to have similar global averages: the higher values of the cubic functions at strong winds are offset by the higher values of the squared ones at weak winds. The wind speed of the intersection has to be higher than the global average wind speed because the discrepancies between different parameterizations increase with wind speed. The North Atlantic region seems, by chance, to have just the right average wind speeds to make all the parameterizations result in similar annual fluxes. However, there is a second reason for smaller inter-parameterization discrepancies in the North Atlantic than in many other ocean basins. The North Atlantic CO2 fluxes are downward in every month. In many regions of the world, the direction of the flux changes between winter and summer, with wind speeds much stronger in the cold season. We show, using the actual formulas, that in such a case the differences between the parameterizations partly cancel out, which is not the case when the flux never changes its direction. Together, these two mechanisms make the North Atlantic an area where the choice of k parameterization causes very small uncertainty in annual fluxes. On the other hand, this makes North Atlantic data not very useful for choosing the parameterizations that most closely represent real fluxes.
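The intersection argument can be checked with two commonly cited transfer-velocity forms (coefficients are the published k660 values in cm/h for winds in m/s; with this particular pair the crossover falls near 11 m/s, in the same range as the ~9 m/s quoted for the broader ensemble of parameterizations).

```python
def k_quadratic(u, a=0.31):
    """Quadratic transfer velocity k660 = a * U^2 [cm/h], U in m/s
    (coefficient as commonly cited from Wanninkhof, 1992)."""
    return a * u**2

def k_cubic(u, b=0.0283):
    """Cubic transfer velocity k660 = b * U^3 [cm/h], U in m/s
    (coefficient as commonly cited from Wanninkhof & McGillis, 1999)."""
    return b * u**3

# The curves cross where a*u^2 = b*u^3, i.e. u = a/b.  Tuned pairs must
# intersect so their global averages stay similar: the cubic form's
# excess at strong winds offsets the quadratic form's excess at weak winds.
u_cross = 0.31 / 0.0283   # ~11 m/s for this particular pair
```

A region whose mean wind sits near the crossover (like the North Atlantic) therefore sees nearly the same annual flux from either form, even though the two disagree strongly at both low and high winds.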
Diagnosing the Ice Crystal Enhancement Factor in the Tropics
NASA Technical Reports Server (NTRS)
Zeng, Xiping; Tao, Wei-Kuo; Matsui, Toshihisa; Xie, Shaocheng; Lang, Stephen; Zhang, Minghua; Starr, David O'C; Li, Xiaowen; Simpson, Joanne
2009-01-01
Recent modeling studies have revealed that ice crystal number concentration is one of the dominant factors in the effect of clouds on radiation. Since the ice crystal enhancement factor and the ice nuclei concentration together determine that concentration, both are important in quantifying the contribution of increased ice nuclei to global warming. In this study, long-term cloud-resolving model (CRM) simulations are compared with field observations to estimate the ice crystal enhancement factor in tropical and midlatitude clouds, respectively. It is found that the factor in tropical clouds is 10³-10⁴ times larger than that in midlatitude ones, which makes physical sense because entrainment and detrainment in the Tropics are much stronger than in middle latitudes. The effect of entrainment/detrainment on the enhancement factor, especially in tropical clouds, suggests that cloud microphysical parameterizations should be coupled with subgrid turbulence parameterizations within CRMs to obtain a more accurate depiction of cloud-radiative forcing.
Explaining the convector effect in canopy turbulence by means of large-eddy simulation
Banerjee, Tirtha; De Roo, Frederik; Mauder, Matthias
2017-06-20
Semi-arid forests are found to sustain a massive sensible heat flux, in spite of having a low surface-to-air temperature difference, by lowering the aerodynamic resistance to heat transfer (r_H) – a property called the canopy convector effect (CCE). In this work large-eddy simulations are used to demonstrate that the CCE appears more generally in canopy turbulence. It is indeed a generic feature of canopy turbulence: the r_H of a canopy is found to decrease with increasingly unstable stratification, which effectively increases the aerodynamic roughness for the same physical roughness of the canopy. This relation offers a sufficient condition for constructing a general description of the CCE. In addition, we review existing parameterizations for r_H from the evapotranspiration literature and test to what extent they are able to capture the CCE, thereby exploring the possibility of an improved parameterization.
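The CCE diagnostic follows from inverting the standard bulk transfer relation H = ρ c_p ΔT / r_H for the aerodynamic resistance; the flux and temperature-difference values below are illustrative.

```python
RHO_AIR = 1.2     # air density [kg m-3]
CP_AIR = 1005.0   # specific heat of air at constant pressure [J kg-1 K-1]

def aerodynamic_resistance(H, dT, rho=RHO_AIR, cp=CP_AIR):
    """Invert the bulk transfer relation H = rho * cp * dT / r_H
    for the aerodynamic resistance to heat r_H [s m-1]."""
    return rho * cp * dT / H

# The convector effect in numbers: sustaining H = 500 W m-2 across only
# dT = 2 K requires a very low resistance, while the same temperature
# difference with H = 100 W m-2 implies a 5x larger r_H.
r_low = aerodynamic_resistance(H=500.0, dT=2.0)
r_high = aerodynamic_resistance(H=100.0, dT=2.0)
```

A canopy exhibiting the CCE thus shows up as an anomalously small diagnosed r_H under unstable stratification, exactly the signature the large-eddy simulations are used to confirm.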
A frequentist approach to computer model calibration
Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.
2016-05-05
The paper considers the computer model calibration problem and provides a general frequentist solution. Under the proposed framework, the data model is semiparametric, with a nonparametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. Finally, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
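A compact caricature of the two-step idea is given below (toy model, grid search, and a crude smoother; not the paper's actual estimator). It also demonstrates the identifiability issue the paper targets: naive least squares absorbs part of the discrepancy into the calibration parameter.

```python
import numpy as np

def computer_model(x, theta):
    """Toy computer model f(x, theta); purely illustrative."""
    return theta * np.sin(x)

def two_step_calibration(x, y, thetas):
    """Step 1: choose theta minimizing squared misfit to the field data.
    Step 2: estimate the discrepancy delta(x) nonparametrically from the
    residuals (a moving average stands in for a proper smoother)."""
    losses = [np.mean((y - computer_model(x, t)) ** 2) for t in thetas]
    theta_hat = float(thetas[int(np.argmin(losses))])
    resid = y - computer_model(x, theta_hat)
    delta_hat = np.convolve(resid, np.ones(5) / 5.0, mode="same")
    return theta_hat, delta_hat

# Synthetic "field data": true theta = 2, true discrepancy delta(x) = 0.1*x.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = 2.0 * np.sin(x) + 0.1 * x + rng.normal(0.0, 0.05, x.size)
theta_hat, delta_hat = two_step_calibration(x, y, np.linspace(0.5, 3.5, 61))
# theta_hat lands near 1.8 rather than 2.0: the fit trades discrepancy
# against theta, which is why an identifiable parameterization is needed.
```

The bias in `theta_hat` is exactly the confounding between model parameters and discrepancy function that motivates the paper's new parameterization.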
Large-eddy simulations of a Salt Lake Valley cold-air pool
NASA Astrophysics Data System (ADS)
Crosman, Erik T.; Horel, John D.
2017-09-01
Persistent cold-air pools are often poorly forecast by mesoscale numerical weather prediction models, in part due to inadequate parameterization of planetary boundary-layer physics in stable atmospheric conditions, and also because of errors in the initialization and treatment of the model surface state. In this study, an improved numerical simulation of the 27-30 January 2011 cold-air pool in Utah's Great Salt Lake Basin is obtained using a large-eddy simulation with more realistic surface state characterization. Compared to a Weather Research and Forecasting model configuration run as a mesoscale model with a planetary boundary-layer scheme where turbulence is highly parameterized, the large-eddy simulation more accurately captured turbulent interactions between the stable boundary layer and the flow aloft. The simulations were also found to be sensitive to variations in the Great Salt Lake temperature and Salt Lake Valley snow cover, illustrating the importance of the land surface state in modelling cold-air pools.
A Thermal Infrared Radiation Parameterization for Atmospheric Studies
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)
2001-01-01
This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes the absorption due to the major gaseous absorbers (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed using either the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
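The scattering adjustment mentioned above can be sketched as a similarity scaling of the optical thickness. The particular form used below, tau' = (1 - omega * (1 + g) / 2) * tau, is one scaling sometimes used in thermal infrared work; treat the exact coefficient as an assumption rather than the memorandum's precise expression.

```python
def scaled_optical_thickness(tau, omega, g):
    """Similarity-scale an optical thickness for scattering effects.

    tau:   unscaled optical thickness
    omega: single-scattering albedo (0 = pure absorption)
    g:     asymmetry factor (1 = fully forward scattering)
    Assumed form: tau' = (1 - omega * (1 + g) / 2) * tau.
    """
    return (1.0 - omega * (1.0 + g) / 2.0) * tau

# A purely absorbing layer is unchanged; a forward-scattering cloud layer
# is effectively "thinned" for the flux calculation.
print(scaled_optical_thickness(1.0, 0.0, 0.85))
print(scaled_optical_thickness(1.0, 0.5, 0.85))
```

The design intent is speed: by folding scattering into a reduced absorption optical thickness, a flux code can reuse its absorption-only machinery.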
The relationship between a deformation-based eddy parameterization and the LANS-α turbulence model
NASA Astrophysics Data System (ADS)
Bachman, Scott D.; Anstey, James A.; Zanna, Laure
2018-06-01
A recent class of ocean eddy parameterizations proposed by Porta Mana and Zanna (2014) and Anstey and Zanna (2017) modeled the large-scale flow as a non-Newtonian fluid whose subgridscale eddy stress is a nonlinear function of the deformation. This idea, while largely new to ocean modeling, has a history in turbulence modeling dating at least back to Rivlin (1957). The new class of parameterizations results in equations that resemble the Lagrangian-averaged Navier-Stokes-α model (LANS-α, e.g., Holm et al., 1998a). In this note we employ basic tensor mathematics to highlight the similarities between these turbulence models using component-free notation. We extend the Anstey and Zanna (2017) parameterization, which was originally presented in 2D, to 3D, and derive variants of this closure that arise when the full non-Newtonian stress tensor is used. Despite the mathematical similarities between the non-Newtonian and LANS-α models, which might provide insight into numerical implementation, the input and dissipation of kinetic energy in these two turbulence models differ.
Multi-scale Modeling of Arctic Clouds
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Roesler, E. L.; Dexheimer, D.
2017-12-01
The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each grid cell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large scales. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.
Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.
Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S
2017-11-01
The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²), and the regression slope between simulated and measured annualized loads across all site-years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
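The skill scores used above are straightforward to compute. A minimal, generic sketch (not APEX-specific; the toy data are made up):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2).

    1.0 is a perfect match; 0.0 means no better than predicting the
    observed mean; negative values are worse than the mean predictor.
    """
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def regression_slope(obs, sim):
    """Least-squares slope of simulated vs. measured values."""
    return np.polyfit(np.asarray(obs, float), np.asarray(sim, float), 1)[0]

obs = [1.0, 2.0, 3.0, 4.0]
print(nash_sutcliffe(obs, obs))        # perfect simulation
print(nash_sutcliffe(obs, [2.5] * 4))  # mean-of-observations predictor
```

Because NSE is normalized by observed variance, it can be compared across the study's sites and constituents, which is why it anchors the acceptability thresholds quoted above.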
Synthesizing long-term sea level rise projections - the MAGICC sea level model v2.0
NASA Astrophysics Data System (ADS)
Nauels, Alexander; Meinshausen, Malte; Mengel, Matthias; Lorbacher, Katja; Wigley, Tom M. L.
2017-06-01
Sea level rise (SLR) is one of the major impacts of global warming; it will threaten coastal populations, infrastructure, and ecosystems around the globe in coming centuries. Well-constrained sea level projections are needed to estimate future losses from SLR and benefits of climate protection and adaptation. Process-based models that are designed to resolve the underlying physics of individual sea level drivers form the basis for state-of-the-art sea level projections. However, associated computational costs allow for only a small number of simulations based on selected scenarios that often vary for different sea level components. This approach does not sufficiently support sea level impact science and climate policy analysis, which require a sea level projection methodology that is flexible with regard to the climate scenario yet comprehensive and bound by the physical constraints provided by process-based models. To fill this gap, we present a sea level model that emulates global-mean long-term process-based model projections for all major sea level components. Thermal expansion estimates are calculated with the hemispheric upwelling-diffusion ocean component of the simple carbon-cycle climate model MAGICC, which has been updated and calibrated against CMIP5 ocean temperature profiles and thermal expansion data. Global glacier contributions are estimated based on a parameterization constrained by transient and equilibrium process-based projections. Sea level contribution estimates for Greenland and Antarctic ice sheets are derived from surface mass balance and solid ice discharge parameterizations reproducing current output from ice-sheet models. The land water storage component replicates recent hydrological modeling results. For 2100, we project 0.35 to 0.56 m (66 % range) total SLR based on the RCP2.6 scenario, 0.45 to 0.67 m for RCP4.5, 0.46 to 0.71 m for RCP6.0, and 0.65 to 0.97 m for RCP8.5. 
These projections lie within the range of the latest IPCC SLR estimates. SLR projections for 2300 yield median responses of 1.02 m for RCP2.6, 1.76 m for RCP4.5, 2.38 m for RCP6.0, and 4.73 m for RCP8.5. The MAGICC sea level model provides a flexible and efficient platform for the analysis of major scenario, model, and climate uncertainties underlying long-term SLR projections. It can be used as a tool to directly investigate the SLR implications of different mitigation pathways and may also serve as input for regional SLR assessments via component-wise sea level pattern scaling.
Description of the NCAR Community Climate Model (CCM3). Technical note
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiehl, J.T.; Hack, J.J.; Bonan, G.B.
This report presents the details of the governing equations, physical parameterizations, and numerical algorithms defining the version of the NCAR Community Climate Model designated CCM3. The material provides an overview of the major model components and the way in which they interact as the numerical integration proceeds. This version of the CCM incorporates significant improvements to the physics package, new capabilities such as the incorporation of a slab ocean component, and a number of enhancements to the implementation (e.g., the ability to integrate the model on parallel distributed-memory computational platforms).
Initialization and assimilation of cloud and rainwater in a regional model
NASA Technical Reports Server (NTRS)
Raymond, William H.; Olson, William S.
1990-01-01
The initialization and assimilation of cloud and rainwater quantities in a mesoscale regional model were examined. Forecasts of explicit cloud and rainwater are made using conservation equations. The physical processes include condensation, evaporation, autoconversion, accretion, and the removal of rainwater by fallout. These physical processes, some of which are parameterized, represent source and sink terms in the conservation equations. The questions of how to initialize the explicit liquid water calculations in numerical models and how to retain information about precipitation processes during the 4-D assimilation cycle are important issues that are addressed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Schimpe, Michael; von Kuepach, Markus Edler
For reliable lifetime predictions of lithium-ion batteries, models for cell degradation are required. A comprehensive semi-empirical model, based on a reduced set of internal cell parameters and physically justified degradation functions for the capacity loss, is developed and presented for a commercial lithium iron phosphate/graphite cell. One calendar and several cycle aging effects are modeled separately. Emphasis is placed on the varying degradation at different temperatures. Degradation mechanisms for cycle aging at high and low temperatures, as well as the increased cycling degradation at high state of charge, are calculated separately. For parameterization, a lifetime test study is conducted, including storage and cycle tests. Additionally, the model is validated against a dynamic current profile based on a real-world application in a stationary energy storage system, demonstrating its accuracy: the model error for the cell capacity loss in the application-based tests is below 1% of the original cell capacity at the end of testing.
NASA Astrophysics Data System (ADS)
Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.
2014-12-01
A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, constrained through identical experimental conditions, is important to accurately simulate the ice nucleation processes in cirrus clouds. The ice nucleation active surface-site density (ns) of hematite particles, used as a proxy for atmospheric dust particles, was derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water-subsaturated conditions. These conditions were achieved by continuously changing the temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T lower than -60 °C revealed that higher RHice was necessary to maintain a constant ns, whereas T may have played a significant role in ice nucleation at T higher than -50 °C. We implemented the new hematite-derived ns parameterization, which agrees well with previous AIDA measurements of desert dust, into two conceptual cloud models to investigate the sensitivity of simulated cirrus cloud properties to the new parameterization in comparison to existing ice nucleation schemes. Our results show that the new AIDA-based parameterization leads to an order of magnitude higher ice crystal concentrations and to an inhibition of homogeneous nucleation in lower-temperature regions. 
Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties, such as cloud longevity and initiation, compared to previous parameterizations.
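In the singular (surface-site density) description used above, the fraction of dust particles that nucleate ice follows from ns and the particle surface area. The exponential form below is the standard singular-description relation; the particular ns values and particle size are hypothetical, and the paper's actual ns(T, RHice) fit is not reproduced here.

```python
import math

def ice_crystal_fraction(n_s, surface_area):
    """Fraction of aerosol particles nucleating ice in the singular
    description: 1 - exp(-n_s * A)."""
    return 1.0 - math.exp(-n_s * surface_area)

# Hypothetical inputs: n_s in m^-2, surface area of a 1-micron-diameter sphere.
a_particle = 4.0 * math.pi * (0.5e-6) ** 2
for n_s in (1e9, 1e11, 1e13):
    print(n_s, ice_crystal_fraction(n_s, a_particle))
```

Because the activated fraction saturates at 1, an order-of-magnitude increase in ns (as between parameterizations) matters most in the intermediate regime where n_s * A is of order one.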
Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT
Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster
2016-01-01
Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support of the acquired projections be reconstructed, precluding acceleration by restricting the reconstruction to a region of interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized Weighted Least-Squares (PWLS) algorithm, in which the volume is parameterized as a union of fine and coarse voxel grids, combined with selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels, and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremity CBCT volume size, this downsampling corresponds to a reconstruction more than five times faster than a brute-force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. 
The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701
NASA Astrophysics Data System (ADS)
Witte, M.; Morrison, H.; Jensen, J. B.; Bansemer, A.; Gettelman, A.
2017-12-01
The spatial covariance of cloud and rain water (or in simpler terms, small and large drops, respectively) is an important quantity for accurate prediction of the accretion rate in bulk microphysical parameterizations that account for subgrid variability using assumed probability density functions (pdfs). Past diagnoses of this covariance from remote sensing, in situ measurements and large eddy simulation output have implicitly assumed that the magnitude of the covariance is insensitive to grain size (i.e. horizontal resolution) and averaging length, but this is not the case because both cloud and rain water exhibit scale invariance across a wide range of scales - from tens of centimeters to tens of kilometers in the case of cloud water, a range that we will show is primarily limited by instrumentation and sampling issues. Since the individual variances systematically vary as a function of spatial scale, it should be expected that the covariance follows a similar relationship. In this study, we quantify the scaling properties of cloud and rain water content and their covariability from high frequency in situ aircraft measurements of marine stratocumulus taken over the southeastern Pacific Ocean aboard the NSF/NCAR C-130 during the VOCALS-REx field experiment of October-November 2008. First we confirm that cloud and rain water scale in distinct manners, indicating that there is a statistically and potentially physically significant difference in the spatial structure of the two fields. Next, we demonstrate that the covariance is a strong function of spatial scale, which implies important caveats regarding the ability of limited-area models with domains smaller than a few tens of kilometers across to accurately reproduce the spatial organization of precipitation. 
Finally, we present preliminary work on the development of a scale-aware parameterization of cloud-rain water subgrid covariability based on multifractal analysis, intended for application in large-scale model microphysics schemes.
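The scale dependence described above can be diagnosed by block-averaging the two fields to coarser grain sizes before computing their covariance. A hedged sketch on synthetic data follows; the correlated random-walk series stand in for along-track cloud and rain water and are not VOCALS measurements.

```python
import numpy as np

def block_average(x, n):
    """Average consecutive blocks of n samples (coarsening the grain size)."""
    m = (len(x) // n) * n
    return x[:m].reshape(-1, n).mean(axis=1)

def covariance_at_scale(cloud, rain, n):
    """Covariance of the two fields after averaging to blocks of n samples."""
    c, r = block_average(cloud, n), block_average(rain, n)
    return np.mean((c - c.mean()) * (r - r.mean()))

rng = np.random.default_rng(1)
# Synthetic correlated "cloud" and "rain" series sharing large-scale structure
# (a random walk) plus uncorrelated small-scale noise.
base = np.cumsum(rng.standard_normal(4096))
cloud = base + rng.standard_normal(4096)
rain = 0.5 * base + rng.standard_normal(4096)

for n in (1, 16, 256):
    print(n, covariance_at_scale(cloud, rain, n))
```

The diagnosed covariance changes with the averaging length n, which is the crux of the argument above: a covariance measured at one grain size cannot simply be reused at another.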
Model-driven harmonic parameterization of the cortical surface: HIP-HOP.
Auzias, G; Lefèvre, J; Le Troter, A; Fischer, C; Perrot, M; Régis, J; Coulon, O
2013-05-01
In the context of inter-subject brain surface matching, we present a parameterization of the cortical surface constrained by a model of cortical organization. The parameterization is defined via a harmonic mapping of each hemisphere surface to a rectangular planar domain that integrates a representation of the model. As opposed to previous landmark-based registration methods, we do not match folds between individuals but instead optimize the fit between cortical sulci and specific iso-coordinate axes in the model. This strategy overcomes some limitations of sulcus-based registration techniques, such as topological variability in sulcal landmarks across subjects. Experiments on 62 subjects with manually traced sulci are presented and compared with the results of the Freesurfer software. The evaluation involves a measure of dispersion of sulci with both angular and area distortions. We show that the model-based strategy can lead to a natural, efficient and very fast (less than 5 min per hemisphere) method for defining inter-subject correspondences. We discuss how this approach also reduces the problems inherent to anatomically defined landmarks and opens the way to the investigation of cortical organization through the notion of orientation and alignment of structures across the cortex.
NASA Astrophysics Data System (ADS)
Firl, G. J.; Randall, D. A.
2013-12-01
The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has been shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. 
Implementations have been included in THOR to drive existing microphysical and radiation parameterizations with samples drawn from the trivariate PDF. THOR has been tested in a single-column model framework using standardized test cases spanning a range of large-scale conditions conducive to both shallow cumulus and stratocumulus clouds and the transition between the two states. The results were compared to published LES intercomparison results using the same cases, and the gross characteristics of both cloudiness and boundary layer turbulence produced by THOR were within the range of results from the respective LES ensembles. In addition, THOR was used in a single-column model framework to study low cloud feedbacks in the northeastern Pacific Ocean. Using initialization and forcings developed as part of the CGILS project, THOR was run at 8 points along a cross-section from the trade-wind cumulus region east of Hawaii to the coastal stratocumulus region off the coast of California for both the control climate and a climate perturbed by +2K SST. A neutral to weakly positive cloud feedback of 0-4 W m-2 K-1 was simulated along the cross-section. The physical mechanisms responsible appeared to be increased boundary layer entrainment and stratocumulus decoupling leading to reduced maximum cloud cover and liquid water path.
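The assumed-PDF step above can be illustrated with the standard diagnostic for cloud fraction under a two-Gaussian mixture of total water: the probability mass above saturation. This is a one-dimensional sketch only; THOR's trivariate closure, moment diagnostics, and unequal-variance rule are not reproduced, and the plume parameters below are hypothetical.

```python
import math

def cloud_fraction(weights, means, sigmas, q_sat):
    """Cloud fraction = P(q_t > q_sat) under a two-component Gaussian mixture.

    For each Gaussian plume, P(q_t > q_sat) = 0.5 * erfc((q_sat - mu) / (sigma * sqrt(2))).
    """
    cf = 0.0
    for w, mu, sigma in zip(weights, means, sigmas):
        cf += w * 0.5 * math.erfc((q_sat - mu) / (sigma * math.sqrt(2.0)))
    return cf

# Hypothetical plumes (g/kg): a moist, high-variance "updraft" plume and a
# drier, narrow "environmental" plume, echoing the unequal-variance idea above.
weights = (0.2, 0.8)
means = (9.0, 7.0)
sigmas = (1.5, 0.5)
print(cloud_fraction(weights, means, sigmas, q_sat=8.0))
```

The same mixture, once diagnosed, can be sampled to drive microphysics and radiation, which is how the coupling described above works in spirit.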
A basal stress parameterization for modeling landfast ice
NASA Astrophysics Data System (ADS)
Lemieux, Jean-François; Tremblay, L. Bruno; Dupont, Frédéric; Plante, Mathieu; Smith, Gregory C.; Dumont, Dany
2015-04-01
Current large-scale sea ice models represent very crudely or are unable to simulate the formation, maintenance and decay of coastal landfast ice. We present a simple landfast ice parameterization representing the effect of grounded ice keels. This parameterization is based on bathymetry data and the mean ice thickness in a grid cell. It is easy to implement and can be used for two-thickness and multithickness category models. Two free parameters are used to determine the critical thickness required for large ice keels to reach the bottom and to calculate the basal stress associated with the weight of the ridge above hydrostatic balance. A sensitivity study was conducted and demonstrates that the parameter associated with the critical thickness has the largest influence on the simulated landfast ice area. A 6 year (2001-2007) simulation with a 20 km resolution sea ice model was performed. The simulated landfast ice areas for regions off the coast of Siberia and for the Beaufort Sea were calculated and compared with data from the National Ice Center. With optimal parameters, the basal stress parameterization leads to a slightly shorter landfast ice season but overall provides a realistic seasonal cycle of the landfast ice area in the East Siberian, Laptev and Beaufort Seas. However, in the Kara Sea, where ice arches between islands are key to the stability of the landfast ice, the parameterization consistently leads to an underestimation of the landfast area.
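The two-parameter structure described above can be sketched as follows. The functional forms and values here are simplified assumptions that follow the abstract's description (a critical thickness derived from bathymetry via one free parameter, and a basal stress from the ice in excess of that threshold via a second), not the paper's exact expressions.

```python
def critical_thickness(water_depth, k1):
    """Mean ice thickness above which ridged keels are assumed to ground."""
    return water_depth / k1

def basal_stress(mean_thickness, water_depth, k1=8.0, k2=15.0):
    """Toy basal stress: proportional to the ice thickness in excess of the
    grounding threshold; zero when keels do not reach the bottom.
    k1 and k2 play the roles of the two free parameters in the abstract;
    the values here are arbitrary."""
    h_c = critical_thickness(water_depth, k1)
    return k2 * max(0.0, mean_thickness - h_c)

print(basal_stress(1.0, 20.0))  # thin ice over deep water: keels never ground
print(basal_stress(3.0, 20.0))  # exceeds the 2.5 m threshold: nonzero stress
```

The sensitivity result quoted above (k1 dominating the simulated landfast area) makes sense in this structure: k1 sets the on/off grounding threshold, while k2 only scales the stress once grounding occurs.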
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved subgrid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium-Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. 
M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
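The two perturbed-parameter variants described above can be sketched side by side: fixed perturbations draw one value per ensemble member and hold it for the whole forecast, while the stochastic variant lets the parameter evolve in time. The AR(1) log-space process and all numbers below are illustrative assumptions, not the schemes' actual formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def fixed_perturbed(default, spread, n_members):
    """One multiplicative log-normal perturbation per member, held constant
    over the forecast (the 'fixed perturbed parameter' variant)."""
    return default * np.exp(spread * rng.standard_normal(n_members))

def stochastically_varying(default, spread, n_steps, phi=0.95):
    """One member's parameter evolving as an AR(1) process in log space
    (a stand-in for the 'stochastically varying' variant)."""
    z = np.zeros(n_steps)
    eps = rng.standard_normal(n_steps)
    for t in range(1, n_steps):
        z[t] = phi * z[t - 1] + spread * np.sqrt(1.0 - phi ** 2) * eps[t]
    return default * np.exp(z)

members = fixed_perturbed(1.0, 0.2, n_members=50)      # one value per member
trajectory = stochastically_varying(1.0, 0.2, n_steps=240)  # one member in time
```

The contrast the study evaluates is visible here: the fixed scheme injects only between-member spread, while the stochastic scheme also injects within-forecast variability.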
Scale dependency of regional climate modeling of current and future climate extremes in Germany
NASA Astrophysics Data System (ADS)
Tölle, Merja H.; Schefczyk, Lukas; Gutjahr, Oliver
2017-11-01
A warmer climate is projected for mid-Europe, with less precipitation in summer but with intensified extremes of precipitation and near-surface temperature. However, the extent and magnitude of such changes are associated with considerable uncertainty because of the limitations of model resolution and parameterizations. Here, we present the results of convection-permitting regional climate model simulations for Germany integrated with the COSMO-CLM using a horizontal grid spacing of 1.3 km, and additional 4.5- and 7-km simulations with convection parameterized. Of particular interest is how the temperature and precipitation fields and their extremes depend on the horizontal resolution for current and future climate conditions. The spatial variability of precipitation increases with resolution because of more realistic orography and physical parameterizations, but values are overestimated in summer and over mountain ridges in all simulations compared to observations. The spatial variability of temperature is improved at a resolution of 1.3 km, but the results are cold-biased, especially in summer. The increase in resolution from 7/4.5 km to 1.3 km is accompanied by less future warming in summer, by 1 °C. Modeled future precipitation extremes will be more severe, and temperature extremes will not exclusively increase with higher resolution. Although the differences between the resolutions considered (7/4.5 km and 1.3 km) are small, we find that the differences in the changes in extremes are large. High-resolution simulations require further studies, with effective parameterizations and tunings for different topographic regions. Impact models and assessment studies may benefit from such high-resolution model results, but should account for the impact of model resolution on model processes and climate change.
Ganju, Neil K.; Sherwood, Christopher R.
2010-01-01
A variety of algorithms are available for parameterizing the hydrodynamic bottom roughness associated with grain size, saltation, bedforms, and wave–current interaction in coastal ocean models. These parameterizations give rise to spatially and temporally variable bottom-drag coefficients that ostensibly provide better representations of physical processes than uniform and constant coefficients. However, few studies have been performed to determine whether improved representation of these variable bottom roughness components translates into measurable improvements in model skill. We test the hypothesis that improved representation of variable bottom roughness improves performance with respect to near-bed circulation, bottom stresses, or turbulence dissipation. The inner shelf south of Martha's Vineyard, Massachusetts, is the site of sorted grain-size features which exhibit sharp alongshore variations in grain size and ripple geometry over gentle bathymetric relief; this area provides a suitable testing ground for roughness parameterizations. We first establish the skill of a nested regional model for currents, waves, stresses, and turbulent quantities using a uniform and constant roughness; we then gauge model skill with various parameterizations of roughness, which account for the influence of the wave-boundary layer, grain size, saltation, and rippled bedforms. We find that commonly used representations of ripple-induced roughness, when combined with a wave–current interaction routine, do not significantly improve skill for circulation, and significantly decrease skill with respect to stresses and turbulence dissipation. Ripple orientation with respect to dominant currents and ripple shape may be responsible for complicating a straightforward estimate of the roughness contribution from ripples. In addition, sediment-induced stratification may be responsible for lower stresses than predicted by the wave–current interaction model.
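One common way to compose the roughness contributions named above is to sum a grain roughness length and a ripple (form-drag) roughness length, with the latter proportional to ripple height squared over wavelength, in the spirit of Grant–Madsen-type formulations. The coefficients and the simple additive decomposition here are assumptions for illustration, not the specific configurations tested in the study.

```python
def grain_roughness(d50):
    """Nikuradse-style grain roughness length, z0 ~ 2.5 * d50 / 30."""
    return 2.5 * d50 / 30.0

def ripple_roughness(height, wavelength, a_r=0.9):
    """Ripple form-drag contribution, ~ a_r * eta^2 / lambda.
    a_r is an empirical coefficient (value here is illustrative)."""
    return a_r * height ** 2 / wavelength

def total_roughness(d50, ripple_height, ripple_wavelength):
    """Simple additive composition of grain and ripple roughness lengths."""
    return grain_roughness(d50) + ripple_roughness(ripple_height, ripple_wavelength)

# Hypothetical coarse-sand site: d50 = 0.5 mm, 10 cm ripples, 60 cm wavelength.
z0 = total_roughness(d50=0.5e-3, ripple_height=0.10, ripple_wavelength=0.60)
print(z0)
```

For these illustrative numbers, the ripple term dominates the grain term by more than two orders of magnitude, which is why getting ripple geometry (and orientation) right matters so much to the skill results discussed above.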
Simulating Ice Dynamics in the Amundsen Sea Sector
NASA Astrophysics Data System (ADS)
Schwans, E.; Parizek, B. R.; Morlighem, M.; Alley, R. B.; Pollard, D.; Walker, R. T.; Lin, P.; St-Laurent, P.; LaBirt, T.; Seroussi, H. L.
2017-12-01
Thwaites and Pine Island Glaciers (TG; PIG) exhibit patterns of dynamic retreat forced from their floating margins, and could act as gateways for destabilization of deep marine basins in the West Antarctic Ice Sheet (WAIS). Poorly constrained basal conditions can cause model predictions to diverge. Thus, there is a need for efficient simulations that account for shearing within the ice column, and include adequate basal sliding and ice-shelf melting parameterizations. To this end, UCI/NASA JPL's Ice Sheet System Model (ISSM) with coupled SSA/higher-order physics is used in the Amundsen Sea Embayment (ASE) to examine threshold behavior of TG and PIG, highlighting areas particularly vulnerable to retreat from oceanic warming and ice-shelf removal. These moving-front experiments will aid in targeting critical areas for additional data collection in ASE as well as for weighting accuracy in further melt parameterization development. Furthermore, a sub-shelf melt parameterization, resulting from Regional Ocean Modeling System (ROMS; St-Laurent et al., 2015) and coupled ISSM-Massachusetts Institute of Technology general circulation model (MITgcm; Seroussi et al., 2017) output, is incorporated and initially tested in ISSM. Data-guided experiments include variable basal conditions and ice hardness, and are also forced with constant modern climate in ISSM, providing valuable insight into i) effects of different basal friction parameterizations on ice dynamics, illustrating the importance of constraining the variable bed character beneath TG and PIG; ii) the impact of including vertical shear in ice flow models of outlet glaciers, confirming its role in capturing complex feedbacks proximal to the grounding zone; and iii) ASE's sensitivity to sub-shelf melt and ice-front retreat, possible thresholds, and how these affect ice-flow evolution.
Thomas, Matthew A.; Mirus, Benjamin B.; Collins, Brian D.; Lu, Ning; Godt, Jonathan W.
2018-01-01
Rainfall-induced shallow landsliding is a persistent hazard to human life and property. Despite the observed connection between infiltration through the unsaturated zone and shallow landslide initiation, there is considerable uncertainty in how estimates of unsaturated soil-water retention properties affect slope stability assessment. This source of uncertainty is critical to evaluating the utility of physics-based hydrologic modeling as a tool for landslide early warning. We employ a numerical model of variably saturated groundwater flow parameterized with an ensemble of texture-, laboratory-, and field-based estimates of soil-water retention properties for an extensively monitored landslide-prone site in the San Francisco Bay Area, CA, USA. Simulations of soil-water content, pore-water pressure, and the resultant factor of safety show considerable variability across and within these different parameter estimation techniques. In particular, we demonstrate that with the same permeability structure imposed across all simulations, the variability in soil-water retention properties strongly influences predictions of positive pore-water pressure coincident with widespread shallow landsliding. We also find that the ensemble of soil-water retention properties imposes an order-of-magnitude and nearly two-fold variability in seasonal and event-scale landslide susceptibility, respectively. Despite the reduced factor of safety uncertainty during wet conditions, parameters that control the dry end of the soil-water retention function markedly impact the ability of a hydrologic model to capture soil-water content dynamics observed in the field. These results suggest that variability in soil-water retention properties should be considered for objective physics-based simulation of landslide early warning criteria.
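The link between pore-water pressure and slope stability described above is conventionally expressed through an infinite-slope factor of safety. Below is a minimal sketch of that standard form; all soil parameter values are illustrative, not the site's calibrated properties from the study.

```python
import math

# Infinite-slope factor-of-safety sketch, the standard form behind statements
# like "positive pore-water pressure coincident with widespread shallow
# landsliding". Parameter values are illustrative only.

def factor_of_safety(c, phi_deg, slope_deg, gamma, z, u):
    """FS = [c + (gamma*z*cos^2(beta) - u)*tan(phi)] / [gamma*z*sin(beta)*cos(beta)]"""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# c in Pa, unit weight gamma in N/m^3, soil depth z in m, pore pressure u in Pa
fs_dry = factor_of_safety(c=2000.0, phi_deg=35.0, slope_deg=30.0,
                          gamma=19000.0, z=1.5, u=0.0)
fs_wet = factor_of_safety(c=2000.0, phi_deg=35.0, slope_deg=30.0,
                          gamma=19000.0, z=1.5, u=8000.0)
# positive pore pressure drives FS from stable (> 1) toward failure (< 1)
```

The sketch illustrates why uncertainty in simulated pore pressure propagates directly into the predicted factor of safety.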
The effects of ground hydrology on climate sensitivity to solar constant variations
NASA Technical Reports Server (NTRS)
Chou, S. H.; Curran, R. J.; Ohring, G.
1979-01-01
The effects of two different evaporation parameterizations on the climate sensitivity to solar constant variations are investigated by using a zonally averaged climate model. The model is based on a two-level quasi-geostrophic zonally averaged annual mean model. One of the evaporation parameterizations tested is a nonlinear formulation with the Bowen ratio determined by the predicted vertical temperature and humidity gradients near the Earth's surface. The other is a linear formulation with the Bowen ratio essentially determined by a prescribed linear coefficient.
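The Bowen-ratio-based evaporation idea can be illustrated with a minimal sketch: a bulk Bowen ratio from near-surface gradients, followed by an energy partition. The bulk form β = cp·ΔT/(L·Δq) and all numbers below are generic illustrations, not the actual formulations or coefficients of the zonally averaged model.

```python
# Illustrative Bowen-ratio sketch; NOT the paper's model formulation.
CP = 1004.0      # specific heat of air, J kg^-1 K^-1
LV = 2.5e6       # latent heat of vaporization, J kg^-1

def bowen_ratio(dT, dq):
    """Bulk Bowen ratio from near-surface gradients of temperature (K)
    and specific humidity (kg/kg): beta = cp*dT / (L*dq)."""
    return (CP * dT) / (LV * dq)

def latent_heat_flux(net_radiation, beta):
    """Partition available energy: LE = Rn / (1 + beta), in W m^-2."""
    return net_radiation / (1.0 + beta)

beta = bowen_ratio(dT=2.0, dq=2.0e-3)   # ratio of sensible to latent heating
le = latent_heat_flux(100.0, beta)      # latent heat flux for Rn = 100 W m^-2
```

In the nonlinear formulation above, ΔT and Δq would be predicted by the model; in the linear one, the partitioning coefficient is prescribed.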
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachan, John
Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog suitable for passing to standard ASIC or FPGA tools for synthesis and place-and-route.
Miner, Nadine E.; Caudell, Thomas P.
2004-06-08
A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.
Pion, Kaon, Proton and Antiproton Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.
2008-01-01
Inclusive pion, kaon, proton, and antiproton production from proton-proton collisions is studied at a variety of proton energies. Various available parameterizations of Lorentz-invariant differential cross sections as a function of transverse momentum and rapidity are compared with experimental data. The Badhwar and Alper parameterizations are moderately satisfactory for charged pion production. The Badhwar parameterization provides the best fit for charged kaon production. For proton production, the Alper parameterization is best, and for antiproton production the Carey parameterization works best. However, no parameterization is able to fully account for all the data.
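The kind of comparison described above, scoring a parameterized invariant cross section against data, can be sketched generically. The exponential functional form and all numbers below are placeholders, not the Badhwar, Alper, or Carey parameterizations.

```python
import math

# Illustrative scoring of a Lorentz-invariant differential cross section
# parameterization E d^3(sigma)/dp^3 = f(pT) against measurements.
# Functional form and data are synthetic placeholders.

def inv_xsec(pt, A=100.0, B=6.0):
    """Illustrative invariant cross section vs. transverse momentum (GeV/c)."""
    return A * math.exp(-B * pt)

# synthetic (pT, measured cross section) pairs
data = [(0.2, 31.0), (0.5, 5.2), (1.0, 0.24)]

# simple chi-square-like figure of merit, as used to rank parameterizations
score = sum((inv_xsec(pt) - m) ** 2 / m for pt, m in data)
```

A smaller score indicates a better fit; ranking several such forms by this kind of metric is how "best" parameterizations per particle species are identified.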
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness of the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with this non-uniqueness, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty within individual parameterization methods (the within-parameterization variance) and the uncertainty from using different parameterization methods (the between-parameterization variance). Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint-state method for the sensitivity analysis on the weighting coefficients in the GP method; the adjoint-state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), in which the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
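The BMA variance decomposition described above (total uncertainty = within-parameterization variance + between-parameterization variance) can be sketched directly. The posterior weights and per-method estimates below are illustrative numbers, not results from the Alamitos Barrier study.

```python
import numpy as np

# Sketch of the BMA variance decomposition: total predictive variance is
# the weighted within-method variance plus the between-method variance.
# Weights and estimates are illustrative.

weights   = np.array([0.5, 0.3, 0.2])   # posterior model probabilities (sum to 1)
means     = np.array([2.1, 1.8, 2.5])   # each parameterization's conditional mean
variances = np.array([0.30, 0.45, 0.25])  # each parameterization's conditional variance

bma_mean  = np.sum(weights * means)                      # model-averaged estimate
within    = np.sum(weights * variances)                  # within-parameterization variance
between   = np.sum(weights * (means - bma_mean) ** 2)    # between-parameterization variance
total_var = within + between
```

The between term is what a single-parameterization analysis silently drops, which is why BMA avoids over-confidence in any one scheme.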
NASA Astrophysics Data System (ADS)
Braun, Jean
2017-04-01
The thickness of the regolith remains one of the most difficult elements of the critical zone to predict or quantify. The regolith hosts a substantial proportion of the world's freshwater reservoir, and its shape and physical properties control the hydrology of most river catchments, which is essential to the development and evolution of many ecosystems. The base of the regolith is controlled by the propagation of a weathering front through a range of chemical and physical processes, such as primary mineral dissolution, frost cracking, or fracturing aided by topographic stress. We have recently parameterized the evolution of the weathering front under the relatively well-accepted assumption that the rate of weathering front propagation, Ḃ, is directly proportional to the velocity of the fluid circulating within the regolith, v, i.e., Ḃ = Fv. This approach is justified in most situations where chemical dissolution of highly soluble minerals is thought to dominate the transformation of bedrock into regolith. Under this assumption, the thickness of the regolith reaches a steady state under the combined effects of weathering front propagation at its base and surface erosion, and the distribution of the regolith is controlled by two dimensionless numbers. The first, Ω = FKS/ε̇, depends on the surface slope, S, and the steady-state erosion rate, ε̇, through the hydraulic conductivity K and the constant F; the second, Γ = KS²/P, depends on the surface slope and the mean precipitation rate, P. Ω controls the mean thickness of the regolith layer and needs to be larger than unity (i.e., ε̇ < FKS) for the regolith layer to exist. We have also shown that Ω is the ratio between the erosional response time of the system, LS/ε̇, and its weathering response time, L/(FK), implying that where regolith is present at the Earth's surface and erosional steady state (i.e., a balance between uplift and surface erosion) has been reached, the regolith thickness must have reached steady state as well.
On the other hand, Γ controls the shape of the regolith layer and, more precisely, whether it thickens towards the top (Γ > 1) or towards the base (Γ < 1) of topographic features. Our simple parameterization therefore explains why the regolith is thickest on top of hills in tectonically active areas, i.e., where slopes are steep, and more uniformly distributed or even thickest near base level in tectonically quiescent areas, such as anorogenic settings in most continental interiors. These fundamental results have now been extended to more realistic two-dimensional numerical simulations in which drainage density is dynamically determined by the onset of surface flow, i.e., where the water table intersects the topographic surface. In this way, the length scale of water table connectivity, L, which controls the value of all of the system response times (erosional, weathering, and hydraulic), is determined in a self-consistent manner, allowing us to predict more accurately the range of responses of the system to tectonic and climatic changes at a variety of forcing periods.
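The two dimensionless numbers above can be evaluated directly. The small helper below encodes Ω = FKS/ε̇ (regolith exists only for Ω > 1) and Γ = KS²/P (thickening toward top for Γ > 1, toward base for Γ < 1); the parameter values are illustrative, not from a specific site.

```python
# Regime check based on the abstract's two dimensionless numbers.
# Parameter values are illustrative.

def regolith_regime(F, K, S, edot, P):
    omega = F * K * S / edot          # weathering vs. erosion response times
    gamma = K * S ** 2 / P            # shape of the regolith layer
    exists = omega > 1.0              # regolith layer present only if Omega > 1
    shape = "thickens toward top" if gamma > 1.0 else "thickens toward base"
    return omega, gamma, exists, shape

# a gentle, slowly eroding hillslope (K, edot, P in m/yr; S dimensionless)
omega, gamma, exists, shape = regolith_regime(F=0.01, K=10.0, S=0.1,
                                              edot=1.0e-3, P=1.0)
```

With these values Ω = 10 (regolith present) and Γ = 0.1 (thickest near base level), i.e., the tectonically quiescent end-member described above.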
NASA Astrophysics Data System (ADS)
Kalina, E. A.; Biswas, M.; Newman, K.; Grell, E. D.; Bernardet, L.; Frimel, J.; Carson, L.
2017-12-01
The parameterization of moist physics in numerical weather prediction models plays an important role in modulating tropical cyclone structure, intensity, and evolution. The Hurricane Weather Research and Forecast system (HWRF), the National Oceanic and Atmospheric Administration's operational model for tropical cyclone prediction, uses the Scale-Aware Simplified Arakawa-Schubert (SASAS) cumulus scheme and a modified version of the Ferrier-Aligo (FA) microphysics scheme to parameterize moist physics. The FA scheme contains a number of simplifications that allow it to run efficiently in an operational setting, which includes prescribing values for hydrometeor number concentrations (i.e., single-moment microphysics) and advecting the total condensate rather than the individual hydrometeor species. To investigate the impact of these simplifying assumptions on the HWRF forecast, the FA scheme was replaced with the more complex double-moment Thompson microphysics scheme, which individually advects cloud ice, cloud water, rain, snow, and graupel. Retrospective HWRF forecasts of tropical cyclones that occurred in the Atlantic and eastern Pacific ocean basins from 2015-2017 were then simulated and compared to those produced by the operational HWRF configuration. Both traditional model verification metrics (i.e., tropical cyclone track and intensity) and process-oriented metrics (e.g., storm size, precipitation structure, and heating rates from the microphysics scheme) will be presented and compared. The sensitivity of these results to the cumulus scheme used (i.e., the operational SASAS versus the Grell-Freitas scheme) also will be examined. Finally, the merits of replacing the moist physics schemes that are used operationally with the alternatives tested here will be discussed from a standpoint of forecast accuracy versus computational resources.
Albert, A; Mobley, C
2003-11-03
Subsurface remote sensing signals, represented by the irradiance reflectance and the remote sensing reflectance, were investigated. The present study is based on simulations with the radiative transfer program Hydrolight using optical properties of Lake Constance (German: Bodensee) derived from in-situ measurements of the water constituents and the bottom characteristics. Analytical equations are derived for the irradiance reflectance and remote sensing reflectance for deep- and shallow-water applications. The inputs of the parameterization are the inherent optical properties of the water: absorption a(λ) and backscattering bb(λ). Additionally, the solar zenith angle θs, the viewing angle θv, and the surface wind speed u are considered. For shallow-water applications, the bottom albedo RB and the bottom depth zB are included in the parameterizations. The result is a complete set of analytical equations for the remote sensing signals R and Rrs in deep and shallow waters with an accuracy better than 4%. In addition, parameterizations of apparent optical properties were derived for the upward and downward diffuse attenuation coefficients Ku and Kd.
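The structure of such parameterizations can be sketched with the generic deep-water form, in which reflectance depends on the inherent optical properties through the ratio bb/(a + bb). The proportionality factor f0 and its crude zenith-angle enhancement below are illustrative placeholders, not the fitted coefficients of this study.

```python
import math

# Generic deep-water irradiance reflectance sketch: R = f * bb/(a + bb).
# f0 and the zenith-angle dependence are illustrative, NOT the paper's
# fitted coefficients.

def irradiance_reflectance_deep(a, bb, theta_s_deg=30.0, f0=0.33):
    """R from absorption a and backscattering bb (both m^-1), with a crude
    enhancement for oblique sun via 1/cos(theta_s)."""
    omega_b = bb / (a + bb)
    f = f0 * (1.0 + 0.1 * (1.0 / math.cos(math.radians(theta_s_deg)) - 1.0))
    return f * omega_b

R = irradiance_reflectance_deep(a=0.3, bb=0.01)  # plausible lake-water values
```

Shallow-water versions add terms in the bottom albedo RB and depth zB; the principle of expressing R through a(λ) and bb(λ) is the same.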
Trade-Wind Cloudiness and Climate
NASA Technical Reports Server (NTRS)
Randall, David A.
1997-01-01
Closed Mesoscale Cellular Convection (MCC) consists of mesoscale cloud patches separated by narrow clear regions. Strong radiative cooling occurs at the cloud top. A dry two-dimensional Boussinesq model is used to study the effects of cloud-top cooling on convection. Wide updrafts and narrow downdrafts are used to indicate the asymmetric circulations associated with the mesoscale cloud patches. Based on the numerical results, a conceptual model was constructed to suggest a mechanism for the formation of closed MCC over cool ocean surfaces. A new method to estimate the radiative and evaporative cooling in the entrainment layer of a stratocumulus-topped boundary layer has been developed. The method was applied to a set of Large-Eddy Simulation (LES) results and to a set of tethered-balloon data obtained during FIRE. We developed a stratocumulus-capped marine mixed-layer model which includes a parameterization of drizzle based on the use of a predicted Cloud Condensation Nuclei (CCN) number concentration. We have developed, implemented, and tested a very elaborate new stratiform cloudiness parameterization for use in GCMs. Finally, we have developed a new, mechanistic parameterization of the effects of cloud-top cooling on the entrainment rate.
Classification of mathematics deficiency using shape and scale analysis of 3D brain structures
NASA Astrophysics Data System (ADS)
Kurtek, Sebastian; Klassen, Eric; Gore, John C.; Ding, Zhaohua; Srivastava, Anuj
2011-03-01
We investigate the use of a recent technique for shape analysis of brain substructures in identifying learning disabilities in third-grade children. This Riemannian technique provides a quantification of differences in shapes of parameterized surfaces, using a distance that is invariant to rigid motions and re-parameterizations. Additionally, it provides an optimal registration across surfaces for improved matching and comparisons. We utilize an efficient gradient based method to obtain the optimal re-parameterizations of surfaces. In this study we consider 20 different substructures in the human brain and correlate the differences in their shapes with abnormalities manifested in deficiency of mathematical skills in 106 subjects. The selection of these structures is motivated in part by the past links between their shapes and cognitive skills, albeit in broader contexts. We have studied the use of both individual substructures and multiple structures jointly for disease classification. Using a leave-one-out nearest neighbor classifier, we obtained a 62.3% classification rate based on the shape of the left hippocampus. The use of multiple structures resulted in an improved classification rate of 71.4%.
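The leave-one-out nearest-neighbor classification used above operates on a matrix of pairwise shape distances. A minimal sketch follows; the tiny distance matrix and labels are synthetic, not the study's 106-subject data.

```python
# Leave-one-out 1-nearest-neighbor classification from a precomputed
# pairwise distance matrix (synthetic toy data).

def loo_nn_accuracy(dist, labels):
    n = len(labels)
    correct = 0
    for i in range(n):
        # nearest neighbor of subject i, excluding i itself
        j = min((k for k in range(n) if k != i), key=lambda k: dist[i][k])
        correct += labels[j] == labels[i]
    return correct / n

dist = [[0.0, 1.0, 5.0, 6.0],
        [1.0, 0.0, 5.5, 6.5],
        [5.0, 5.5, 0.0, 0.8],
        [6.0, 6.5, 0.8, 0.0]]
labels = ["control", "control", "deficient", "deficient"]
acc = loo_nn_accuracy(dist, labels)
```

In the study, the distances are the re-parameterization-invariant Riemannian shape distances between brain substructure surfaces; the classifier itself is exactly this simple.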
Development of the Navy’s Next-Generation Nonhydrostatic Modeling System
2013-09-30
…e.g. surface roughness, land-sea mask, surface albedo) are needed by physical parameterizations. The surface values will be read and interpolated… characteristics (e.g. albedo, surface roughness) is now available to the model during the initialization stage. We have added infrastructure to the… six faces (Fig. 3). [Figure 3: Topography (top left, in meters), surface roughness (top right, in meters), albedo (bottom left, no units).]
Vincent J. Pacific; Brian L. McGlynn; Diego A. Riveros-Iregui; Daniel L. Welsch; Howard E. Epstein
2011-01-01
Variability in soil respiration at various spatial and temporal scales has been the focus of much research over the last decade aimed to improve our understanding and parameterization of physical and environmental controls on this flux. However, few studies have assessed the control of landscape position and groundwater table dynamics on the spatiotemporal variability...
NASA Astrophysics Data System (ADS)
Sauerteig, Daniel; Hanselmann, Nina; Arzberger, Arno; Reinshagen, Holger; Ivanov, Svetlozar; Bund, Andreas
2018-02-01
The intercalation- and aging-induced volume changes of lithium-ion battery electrodes lead to significant mechanical pressure or volume changes at the cell and module level. As the correlation between the electrochemical and mechanical performance of lithium-ion batteries at the nano- and macro-scales requires a comprehensive and multidisciplinary approach, physical modeling accounting for chemical and mechanical phenomena during operation is very useful for battery design. Since the introduced fully coupled physical model requires proper parameterization, this work also focuses on identifying appropriate mathematical representations of the compressibility as well as the ionic transport in the porous electrodes and the separator. The ionic transport is characterized by electrochemical impedance spectroscopy (EIS) using symmetric pouch cells comprising a LiNi1/3Mn1/3Co1/3O2 (NMC) cathode, a graphite anode, and a polyethylene separator. The EIS measurements are carried out at various mechanical loads. The observed decrease of the ionic conductivity reveals a significant transport limitation at high pressures. The experimentally obtained data are applied as input to the electrochemical-mechanical model of a prismatic 10 Ah cell. Our computational approach accounts for intercalation-induced electrode expansion, stress generation caused by mechanical boundaries, compression of the electrodes and the separator, outer expansion of the cell, and finally the influence of the ionic transport within the electrolyte.
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
Sensitivity of boundary layer variables to PBL schemes over the central Tibetan Plateau
NASA Astrophysics Data System (ADS)
Xu, L.; Liu, H.; Wang, L.; Du, Q.; Liu, Y.
2017-12-01
Planetary Boundary Layer (PBL) parameterization schemes play a critical role in numerical weather prediction and research. They describe physical processes associated with the exchange of momentum, heat, and humidity between the land surface and the atmosphere. In this study, two non-local (YSU and ACM2) and two local (MYJ and BouLac) planetary boundary layer parameterization schemes in the Weather Research and Forecasting (WRF) model have been tested over the central Tibetan Plateau with regard to their capability to model boundary layer parameters relevant for surface energy exchange. The model performance has been evaluated against measurements from the Third Tibetan Plateau atmospheric scientific experiment (TIPEX-III). Simulated meteorological parameters and turbulence fluxes have been compared with observations through standard statistical measures. Model results show acceptable behavior, but no particular scheme produces the best performance for all locations and parameters. All PBL schemes underestimate near-surface air temperatures over the Tibetan Plateau. By investigating the surface energy budget components, the results suggest that downward longwave radiation and sensible heat flux are the main factors causing the low near-surface temperature. Because the downward longwave radiation and sensible heat flux are respectively affected by atmospheric moisture and land-atmosphere coupling, improvements in the water vapor distribution and land-atmosphere energy exchange are important for a better representation of PBL physical processes over the central Tibetan Plateau.
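The "standard statistical measures" used in evaluations like this typically include the mean bias, root-mean-square error, and correlation coefficient. A minimal sketch with synthetic near-surface temperature series follows (the numbers are illustrative, not TIPEX-III data).

```python
import math

# Standard verification statistics for model vs. observations
# (synthetic temperature series, in K).

def bias(model, obs):
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def corr(model, obs):
    n = len(obs)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    vm = sum((m - mm) ** 2 for m in model)
    vo = sum((o - mo) ** 2 for o in obs)
    return cov / math.sqrt(vm * vo)

obs   = [280.1, 281.5, 283.0, 284.2]   # synthetic observations
model = [279.5, 280.8, 282.1, 283.3]   # synthetic model, with a cold bias

b = bias(model, obs)   # negative, i.e., model colder than observations
r = corr(model, obs)
```

A negative bias with high correlation, as here, is the signature of the systematic near-surface cold bias reported above.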
Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken
2018-05-17
An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. In turn, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design allows these sources to share the free parameters of the filter shape and to be related to each other through the photon interactions in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD.
The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm² fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.
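The primary/secondary split described above follows Beer-Lambert attenuation: the primary fluence is the fraction surviving the path through the flattening filter, and the secondary (scattered) source is fed by the fraction removed. The attenuation coefficient and path length below are illustrative values, not the paper's fitted filter parameters.

```python
import math

# Beer-Lambert sketch of the primary/secondary source coupling.
# mu and path are illustrative, NOT fitted filter parameters.

def primary_fraction(mu, path_cm):
    """Fraction of bremsstrahlung photons surviving attenuation: exp(-mu*path)."""
    return math.exp(-mu * path_cm)

def secondary_fraction(mu, path_cm):
    """Photons removed from the primary beam feed the secondary source."""
    return 1.0 - primary_fraction(mu, path_cm)

mu = 0.22      # cm^-1, illustrative effective attenuation coefficient
path = 2.0     # cm of filter traversed at some off-axis position
p = primary_fraction(mu, path)
s = secondary_fraction(mu, path)
```

Because both sources are functions of the same filter-shape parameters, optimizing those parameters against measured dose curves constrains both simultaneously, which is the key design point of the model.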
Physical controls and predictability of stream hyporheic flow evaluated with a multiscale model
Stonedahl, Susa H.; Harvey, Judson W.; Detty, Joel; Aubeneau, Antoine; Packman, Aaron I.
2012-01-01
Improved predictions of hyporheic exchange based on easily measured physical variables are needed to improve assessment of solute transport and reaction processes in watersheds. Here we compare physically based model predictions for an Indiana stream with stream tracer results interpreted using the Transient Storage Model (TSM). We parameterized the physically based Multiscale Model (MSM) of stream-groundwater interactions with measured stream planform and discharge, stream velocity, streambed hydraulic conductivity and porosity, and topography of the streambed at distinct spatial scales (i.e., ripple, bar, and reach scales). We predicted hyporheic exchange fluxes and hyporheic residence times using the MSM. A Continuous Time Random Walk (CTRW) model was used to convert the MSM output into predictions of in-stream solute transport, which we compared with field observations and TSM parameters obtained by fitting solute transport data. MSM simulations indicated that surface-subsurface exchange through smaller topographic features such as ripples was much faster than exchange through larger topographic features such as bars. However, hyporheic exchange varies nonlinearly with groundwater discharge owing to interactions between flows induced at different topographic scales. MSM simulations showed that groundwater discharge significantly decreased both the volume of water entering the subsurface and the time it spent in the subsurface. The MSM also characterized longer timescales of exchange than were observed by the tracer-injection approach. The tracer data, and corresponding TSM fits, were limited by tracer measurement sensitivity and uncertainty in estimates of background tracer concentrations. Our results indicate that rates and patterns of hyporheic exchange are strongly influenced by a continuum of surface-subsurface hydrologic interactions over a wide range of spatial and temporal scales rather than discrete processes.
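The TSM exchange term used to interpret the tracer data can be sketched as a first-order relaxation of the storage-zone concentration toward the stream concentration. The forward-Euler toy below uses illustrative parameter values, not the fitted TSM parameters from the study.

```python
# Toy integration of the Transient Storage Model's storage-zone equation:
# dCs/dt = alpha * (A/As) * (C - Cs). Parameter values are illustrative.

def step_storage(C, Cs, alpha, A_over_As, dt):
    """One explicit Euler step of the storage-zone exchange equation."""
    return Cs + alpha * A_over_As * (C - Cs) * dt

Cs = 0.0                  # storage zone starts tracer-free
for _ in range(1000):     # 1000 s of a constant unit stream concentration
    Cs = step_storage(1.0, Cs, alpha=1.0e-3, A_over_As=2.0, dt=1.0)
# Cs approaches 1.0 on the exchange timescale 1/(alpha * A/As) = 500 s
```

A single exchange rate α imposes one characteristic timescale, which is precisely why the TSM struggles to represent the continuum of exchange timescales that the MSM/CTRW framework captures.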
Objective calibration of regional climate models
NASA Astrophysics Data System (ADS)
Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.
2012-12-01
Climate models are subject to high parametric uncertainty induced by poorly confined parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often hamper model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models because of the computational constraints posed by the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation, and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model that has undergone expert tuning, the calibration yields similar optimal model configurations while achieving an additional reduction of the model error.
The performance range captured is much wider than that sampled by the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool and could become standard procedure after introducing new model implementations, or after a spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving parameterization packages of global climate models.
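The core of the method, fitting a cheap quadratic surrogate to a handful of model runs and searching the surrogate instead of the full model, can be sketched as follows. The toy "model error" surface and its minimum below are invented for illustration; the real metamodel is fitted to RCM output:

```python
import numpy as np

def quadratic_design(P):
    """Design matrix with constant, linear, and quadratic (incl. cross) terms."""
    n, d = P.shape
    cols = [np.ones(n)]
    cols += [P[:, i] for i in range(d)]
    cols += [P[:, i] * P[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
d = 2                                  # two uncertain parameters (toy case)
P = rng.uniform(-1.0, 1.0, (30, d))    # sampled parameter settings (model runs)
# Hypothetical "model error" with a known minimum at (0.3, -0.2):
err = (P[:, 0] - 0.3)**2 + 2.0 * (P[:, 1] + 0.2)**2 + 0.01 * rng.standard_normal(30)

beta, *_ = np.linalg.lstsq(quadratic_design(P), err, rcond=None)

def metamodel(p):
    """Cheap surrogate of the model error at parameter setting p."""
    return quadratic_design(np.atleast_2d(p))[0] @ beta

# Optimal parameters found on the surrogate, not with further model runs
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 101),
                            np.linspace(-1, 1, 101)), -1).reshape(-1, 2)
best = grid[np.argmin([metamodel(p) for p in grid])]
print(best)   # should land near (0.3, -0.2)
```

The 30 sampled runs play the role of the paper's 20-50 simulations; once the surrogate is fitted, the parameter search costs nothing.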
NASA Astrophysics Data System (ADS)
Tan, Z.; Schneider, T.; Teixeira, J.; Lam, R.; Pressel, K. G.
2014-12-01
Sub-grid scale (SGS) closures in current climate models are usually decomposed into several largely independent parameterization schemes for different cloud and convective processes, such as boundary layer turbulence, shallow convection, and deep convection. These separate parameterizations usually do not converge as the resolution is increased or as physical limits are taken. This makes it difficult to represent the interactions and smooth transitions among different cloud and convective regimes. Here we present an eddy-diffusivity mass-flux (EDMF) closure that represents all sub-grid scale turbulent, convective, and cloud processes in a unified parameterization scheme. The buoyant updrafts and precipitative downdrafts are parameterized with a prognostic multiple-plume mass-flux (MF) scheme. The prognostic term for the mass flux is kept so that the life cycles of convective plumes are better represented. The interaction between updrafts and downdrafts is parameterized with a buoyancy-sorting model. The turbulent mixing outside the plumes is represented by eddy diffusion, in which the eddy diffusivity (ED) is determined from turbulent kinetic energy (TKE) calculated from a TKE balance that couples the environment with updrafts and downdrafts. Similarly, tracer variances are decomposed consistently between updrafts, downdrafts, and the environment. The closure is internally coupled with a probabilistic cloud scheme and a simple precipitation scheme. We have also developed a relatively simple two-stream radiative scheme that includes the longwave (LW) and shortwave (SW) effects of clouds and the LW effect of water vapor. We have tested this closure in a single-column model for various regimes spanning stratocumulus, shallow cumulus, and deep convection. The model is also run towards statistical equilibrium with climatologically relevant large-scale forcings. These model tests are validated against large-eddy simulations (LES) with the same forcings.
The comparison of results verifies the capacity of this closure to realistically represent different cloud and convective processes. Implementation of the closure in an idealized GCM allows us to study cloud feedbacks to climate change and the interactions between clouds, convection, and the large-scale circulation.
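The flux decomposition underlying any EDMF closure can be written compactly. The sketch below shows the diagnostic flux form only, with illustrative plume values; the paper's prognostic plume equations, buoyancy sorting, and TKE budget are not represented here:

```python
import numpy as np

def edmf_flux(dz, phi_env, K, a_up, w_up, phi_up):
    """Total subgrid vertical flux of a scalar in an EDMF decomposition:
    eddy-diffusive transport in the environment plus mass-flux transport
    by convective plumes (a sketch of the flux form, not the full closure)."""
    ed = -K * np.gradient(phi_env, dz)                     # ED: -K dphi/dz
    mf = np.sum(a_up * w_up * (phi_up - phi_env), axis=0)  # MF: sum_i a_i w_i (phi_i - phi_env)
    return ed + mf

# One plume over a column with a linear potential-temperature profile
dz = 100.0                                  # grid spacing, m
theta_env = np.linspace(300.0, 310.0, 11)   # environment, K
a_up = np.full((1, 11), 0.1)                # plume area fraction
w_up = np.full((1, 11), 1.0)                # plume vertical velocity, m/s
theta_up = (theta_env + 2.0)[None, :]       # plume excess of 2 K
flux = edmf_flux(dz, theta_env, 50.0, a_up, w_up, theta_up)
# Here ED contributes -0.5 K m/s and MF +0.2 K m/s at every level
```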
Aerosol hygroscopic growth parameterization based on a solute specific coefficient
NASA Astrophysics Data System (ADS)
Metzger, S.; Steil, B.; Xu, L.; Penner, J. E.; Lelieveld, J.
2011-09-01
Water is a main component of atmospheric aerosols and its amount depends on the particle chemical composition. We introduce a new parameterization for the aerosol hygroscopic growth factor (HGF), based on an empirical relation between water activity (aw) and solute molality (μs) through a single solute-specific coefficient νi. Three main advantages are: (1) wide applicability, (2) simplicity and (3) analytical nature. (1) Our approach considers the Kelvin effect and covers ideal solutions at large relative humidity (RH), including CCN activation, as well as concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). (2) A single νi coefficient suffices to parameterize the HGF for a wide range of particle sizes, from nanometer nucleation-mode to micrometer coarse-mode particles. (3) In contrast to previous methods, our analytical aw parameterization depends not only on a linear correction factor for the solute molality; instead, νi also appears in the exponent, in the form x · a^x. According to our findings, νi can be assumed constant for the entire aw range (0-1). Thus, the νi-based method is computationally efficient. In this work we focus on single-solute solutions, where νi is pre-determined with the bisection method from our analytical equations using RHD measurements and the saturation molality μs,sat. The computed aerosol HGF and supersaturation (Köhler theory) compare well with the results of the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4, which are relevant for CCN modeling and calibration studies. The equations introduced here provide the basis of our revised gas-liquid-solid partitioning model, i.e. version 4 of the EQuilibrium Simplified Aerosol Model (EQSAM4), described in a companion paper.
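For orientation, the ideal-solution (Raoult) limit that such parameterizations generalize can be coded directly. In this baseline sketch the coefficient reduces to the stoichiometric ion number, the Kelvin effect is neglected, and volume-additive mixing is assumed; it is not the EQSAM4 formulation:

```python
M_W = 0.018015    # molar mass of water, kg/mol
RHO_W = 1000.0    # density of water, kg/m3

def hgf_ideal(rh, nu, M_s, rho_s):
    """Diameter growth factor of an ideal (Raoult) solution droplet,
    neglecting the Kelvin effect: a_w = 1 / (1 + nu * M_W * m_s), with
    nu the number of dissociated ions and m_s the solute molality."""
    m_s = (1.0 - rh) / (rh * nu * M_W)       # molality at a_w = RH, mol/kg
    g3 = 1.0 + rho_s / (M_s * m_s * RHO_W)   # (wet volume) / (dry volume)
    return g3 ** (1.0 / 3.0)

# NaCl (nu = 2, M_s = 58.44 g/mol, rho_s = 2165 kg/m3) at RH = 0.8
print(hgf_ideal(0.8, 2, 0.05844, 2165.0))   # ~1.85
```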
NASA Astrophysics Data System (ADS)
Lin, Shangfei; Sheng, Jinyu
2017-12-01
Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are the representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations perform reasonably in parameterizing depth-induced wave breaking in shallow waters, but each has its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) has a drawback of underpredicting the SWHs in locally generated wave conditions and overpredicting them in remotely generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15). SA15, however, had larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization with a dependence of the breaker index on the normalized water depth in deep waters similar to SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2% in comparison with the three best-performing existing parameterizations, whose average scatter indices range between 9.2% and 13.6%.
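The BJ78 fraction of breaking waves mentioned above follows from an implicit relation that is easy to solve numerically; a minimal bisection sketch:

```python
import math

def breaking_fraction(h_rms, h_max, tol=1e-10):
    """Fraction of breaking waves Q_b in the Battjes & Janssen (1978) model,
    from the implicit relation (1 - Q_b)/ln(Q_b) = -(H_rms/H_max)^2,
    solved by bisection on (0, 1)."""
    b2 = (h_rms / h_max) ** 2
    if b2 >= 1.0:
        return 1.0                       # saturated: (nearly) all waves break
    f = lambda q: (1.0 - q) / math.log(q) + b2
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:                 # f decreases from ~b2 to b2 - 1
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(breaking_fraction(0.5, 1.0))       # ~0.02: few waves break at H_rms = H_max/2
```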
Frederix, Gerardus W J; van Hasselt, Johan G C; Schellens, Jan H M; Hövels, Anke M; Raaijmakers, Jan A M; Huitema, Alwin D R; Severens, Johan L
2014-01-01
Structural uncertainty relates to differences in model structure and parameterization. For many published health economic analyses in oncology, substantial differences in model structure exist, leading to differences in analysis outcomes and potentially impacting decision-making processes. The objectives of this analysis were (1) to identify differences in model structure and parameterization for cost-effectiveness analyses (CEAs) comparing tamoxifen and anastrozole for adjuvant breast cancer (ABC) treatment; and (2) to quantify the impact of these differences on analysis outcome metrics. The analysis consisted of four steps: (1) review of the literature for identification of eligible CEAs; (2) definition and implementation of a base model structure, which included the core structural components for all identified CEAs; (3) definition and implementation of changes or additions in the base model structure or parameterization; and (4) quantification of the impact of changes in model structure or parameterizations on the analysis outcome metrics life-years gained (LYG), incremental costs (IC) and the incremental cost-effectiveness ratio (ICER). Eleven CEA analyses comparing anastrozole and tamoxifen as ABC treatment were identified. The base model consisted of the following health states: (1) on treatment; (2) off treatment; (3) local recurrence; (4) metastatic disease; (5) death due to breast cancer; and (6) death due to other causes. The base model estimates of anastrozole versus tamoxifen for the LYG, IC and ICER were 0.263 years, €3,647 and €13,868/LYG, respectively. In the published models that were evaluated, the identified differences in model structure included the addition of different recurrence health states and their associated transition rates. Differences in parameterization were related to the incidences of recurrence, local recurrence to metastatic disease, and metastatic disease to death.
The separate impact of these model components on the LYG ranged from 0.207 to 0.356 years, while incremental costs ranged from €3,490 to €3,714 and ICERs ranged from €9,804/LYG to €17,966/LYG. When we re-analyzed the published CEAs in our framework by including their respective model properties, the LYG ranged from 0.207 to 0.383 years, IC ranged from €3,556 to €3,731 and ICERs ranged from €9,683/LYG to €17,570/LYG. Differences in model structure and parameterization lead to substantial differences in analysis outcome metrics. This analysis supports the need for more guidance regarding structural uncertainty and the use of standardized disease-specific models for health economic analyses of adjuvant endocrine breast cancer therapies. The developed approach in the current analysis could potentially serve as a template for further evaluations of structural uncertainty and development of disease-specific models.
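The outcome metrics can be illustrated with a deliberately small cohort Markov model. The health states are a reduced version of the base model described above, and all transition probabilities and costs below are hypothetical, not values from the reviewed CEAs:

```python
import numpy as np

def markov_life_years(p, horizon=20):
    """Expected undiscounted life-years from a small cohort Markov model.
    States: 0 = disease-free, 1 = recurrence, 2 = dead (absorbing).
    One cycle = one year; the whole cohort starts disease-free."""
    state = np.array([1.0, 0.0, 0.0])
    ly = 0.0
    for _ in range(horizon):
        ly += state[0] + state[1]    # alive states accrue a life-year
        state = state @ p            # advance the cohort one cycle
    return ly

# Hypothetical transition matrices (rows sum to 1), not values from the CEAs
P_TAM = np.array([[0.90, 0.07, 0.03],
                  [0.00, 0.80, 0.20],
                  [0.00, 0.00, 1.00]])
P_ANA = np.array([[0.92, 0.055, 0.025],
                  [0.00, 0.80, 0.20],
                  [0.00, 0.00, 1.00]])

ly_tam, ly_ana = markov_life_years(P_TAM), markov_life_years(P_ANA)
delta_cost = 3600.0                      # assumed incremental cost, EUR
icer = delta_cost / (ly_ana - ly_tam)    # EUR per life-year gained
```

Changing the state set or the transition rates, as the reviewed CEAs do, changes LYG and hence the ICER, which is precisely the structural uncertainty the analysis quantifies.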
NASA Astrophysics Data System (ADS)
Kelly, R. E. J.; Saberi, N.; Li, Q.
2017-12-01
With moderate- to high-spatial-resolution (<1 km) regional to global snow water equivalent (SWE) observation approaches yet to be fully scoped and developed, the long-term satellite passive microwave record remains an important tool for cryosphere-climate diagnostics. A new satellite microwave remote sensing approach is described for estimating snow depth (SD) and snow water equivalent (SWE). The algorithm, called the Satellite-based Microwave Snow Algorithm (SMSA), uses Advanced Microwave Scanning Radiometer - 2 (AMSR2) observations aboard the Global Change Observation Mission - Water satellite launched by the Japan Aerospace Exploration Agency in 2012. The approach is unique since it leverages observed brightness temperatures (Tb) with static ancillary data to parameterize a physically based retrieval without requiring parameter constraints from in situ snow depth observations or historical snow depth climatology. After screening snow from non-snow surface targets (water bodies [including freeze/thaw state], rainfall, high altitude plateau regions [e.g. Tibetan plateau]), moderate and shallow snow depths are estimated by minimizing the difference between Dense Media Radiative Transfer model estimates (Tsang et al., 2000; Picard et al., 2011) and AMSR2 Tb observations to retrieve SWE and SD. Parameterization of the model uses a parsimonious snow grain size and density approach originally developed by Kelly et al. (2003). Evaluation of the SMSA performance is achieved using in situ snow depth data from a variety of standard and experiment data sources. Results presented from winter seasons 2012-13 to 2016-17 illustrate the improved performance of the new approach in comparison with the baseline AMSR2 algorithm estimates and approach the performance of the model assimilation-based approach of GlobSnow.
Given the variation in estimation power of SWE by different land surface/climate models and selected satellite-derived passive microwave approaches, SMSA provides SWE estimates that are independent of real or near real-time in situ and model data.
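The retrieval step, minimizing the misfit between forward-modeled and observed brightness temperatures, can be sketched with a toy forward model standing in for DMRT. The exponential Tb-depth relation and its coefficients below are invented for illustration only:

```python
import numpy as np

def tb_toy(snow_depth, freq_ghz):
    """Toy stand-in for a radiative-transfer forward model: brightness
    temperature drops with snow depth through volume scattering, faster
    at higher frequency.  Purely illustrative; SMSA uses DMRT, not this."""
    return 260.0 - 40.0 * (1.0 - np.exp(-0.0005 * freq_ghz * snow_depth))

def retrieve_depth(tb_obs, freqs, depths=np.arange(0.0, 300.0, 1.0)):
    """Brute-force minimization of the Tb misfit over candidate depths (cm)."""
    cost = [np.sum((tb_toy(d, freqs) - tb_obs) ** 2) for d in depths]
    return depths[int(np.argmin(cost))]

freqs = np.array([18.7, 36.5])                         # AMSR2-like channels, GHz
tb_obs = tb_toy(120.0, freqs) + np.array([0.3, -0.4])  # synthetic obs + noise
print(retrieve_depth(tb_obs, freqs))                   # near the true 120 cm
```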
Barycentric parameterizations for isotropic BRDFs.
Stark, Michael M; Arvo, James; Smits, Brian
2005-01-01
A bidirectional reflectance distribution function (BRDF) is often expressed as a function of four real variables: two spherical coordinates in each of the "incoming" and "outgoing" directions. However, many BRDFs reduce to functions of fewer variables. For example, isotropic reflection can be represented by a function of three variables. Some BRDF models can be reduced further. In this paper, we introduce new sets of coordinates which we use to reduce the dimensionality of several well-known analytic BRDFs as well as empirically measured BRDF data. The proposed coordinate systems are barycentric with respect to a triangular support with a direct physical interpretation. One coordinate set is based on the BRDF model proposed by Lafortune. Another set, based on a model of Ward, is associated with the "halfway" vector common in analytical BRDF formulas. Through these coordinate sets we establish lower bounds on the approximation error inherent in the models on which they are based. We present a third set of coordinates, not based on any analytical model, that performs well in approximating measured data. Finally, our proposed variables suggest novel ways of constructing and visualizing BRDFs.
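The barycentric machinery itself is standard; a minimal sketch for a planar triangle follows (the paper's coordinates live on a triangle of direction-dependent quantities, but the algebra is the same):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (l1, l2, l3) of 2-D point p w.r.t. triangle (a, b, c).
    The weights sum to 1 and reproduce p as l1*a + l2*b + l3*c."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return l1, l2, 1.0 - l1 - l2

# The centroid of any triangle has coordinates (1/3, 1/3, 1/3)
print(barycentric((1/3, 1/3), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))
```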
NASA Astrophysics Data System (ADS)
Ying, Zhang; Zhengqiang, Li; Yan, Wang
2014-03-01
Anthropogenic aerosols released into the atmosphere scatter and absorb incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic aerosol optical depth (AOD) calculations are therefore important in climate change research. Accumulation-mode fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of the total AOD contributed by particulates with diameters smaller than 1 μm, can be calculated by an AOD spectral deconvolution algorithm, and the anthropogenic AODs are then obtained from the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method improves the accuracy of the AMFs compared with the constant-truncation-radius method. We find a good correlation using the parameterization method, with a squared correlation coefficient of 0.96 and a mean AMF deviation of 0.028. The parameterization method also effectively corrects the AMF underestimation in winter. It is suggested that variations of the Angstrom index in the coarse mode have significant impacts on AMF inversions.
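Spectral deconvolution methods exploit the fact that fine- and coarse-mode particles produce different spectral slopes of AOD. The Angstrom exponent capturing that slope can be computed directly; the wavelengths and AOD values below are illustrative:

```python
import math

def angstrom_exponent(aod1, aod2, lam1, lam2):
    """Angstrom exponent alpha from AODs at two wavelengths (same units):
    alpha = -ln(aod1 / aod2) / ln(lam1 / lam2)."""
    return -math.log(aod1 / aod2) / math.log(lam1 / lam2)

# Steep spectral slope (alpha near 1.5-2) indicates accumulation-mode
# dominance; a flat slope (alpha near 0) indicates coarse particles.
print(angstrom_exponent(0.40, 0.14, 440.0, 870.0))   # ~1.54
```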
Xia, Xiangao
2015-01-01
Aerosols impact clear-sky surface irradiance through the effects of scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and surface irradiance have been established to describe the aerosol direct radiative effect on surface irradiance (ADRE). However, considerable uncertainties remain associated with ADRE due to the incorrect estimation of the irradiance in the absence of aerosols. Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on clear-sky surface irradiance are thoroughly considered, leading to an effective parameterization of the irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate clear-sky surface irradiance with a mean bias error of 0.32 W m−2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from irradiance measurements, or vice versa, show that the root-mean-square errors were 0.08 and 10.0 W m−2, respectively. Therefore, this study establishes a straightforward method to derive clear-sky surface irradiance from τa, or to estimate τa from irradiance measurements if water vapor measurements are available. PMID:26395310
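A parameterization of this type can be recovered from data by linearizing an assumed separable form and solving a least-squares problem. The functional form, exponents, and scale below are hypothetical stand-ins for the paper's fitted function:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
tau = rng.uniform(0.05, 1.0, n)   # aerosol optical depth
w = rng.uniform(0.5, 5.0, n)      # water vapour content, cm
mu = rng.uniform(0.3, 1.0, n)     # cosine of the solar zenith angle

# Synthetic "observed" irradiance from an assumed separable form (W m-2);
# the exponents and scale are invented, not the paper's fitted values.
F = 1100.0 * mu**1.2 * np.exp(-0.35 * tau / mu) * w**-0.05
F *= 1.0 + 0.005 * rng.standard_normal(n)

# Linearize:  ln F = ln c0 + a*ln(mu) - b*(tau/mu) - c*ln(w)
X = np.column_stack([np.ones(n), np.log(mu), -tau / mu, -np.log(w)])
coef, *_ = np.linalg.lstsq(X, np.log(F), rcond=None)
c0, a, b, c = np.exp(coef[0]), coef[1], coef[2], coef[3]
print(c0, a, b, c)   # recovers roughly (1100, 1.2, 0.35, 0.05)
```

Once fitted, the same relation can be inverted to estimate τa from an irradiance measurement, mirroring the two-way use described in the abstract.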
Agishev, Ravil; Comerón, Adolfo; Rodriguez, Alejandro; Sicard, Michaël
2014-05-20
In this paper, we show a renewed approach to the generalized methodology for atmospheric lidar assessment, which uses dimensionless parameterization as a core component. It is based on a series of our previous works in which the problem of universal parameterization across many lidar technologies was described and analyzed from different points of view. The modernized dimensionless parameterization concept, applied to relatively new silicon photomultiplier detectors (SiPMs) and traditional photomultiplier (PMT) detectors for remote-sensing instruments, allows prediction of the lidar receiver performance in the presence of sky background. The renewed approach can be widely used to evaluate a broad range of lidar system capabilities for a variety of lidar remote-sensing applications, as well as to serve as a basis for the selection of appropriate lidar system parameters for a specific application. Such a modernized methodology provides a generalized, uniform, and objective approach for the evaluation of a broad range of lidar types and systems (aerosol, Raman, DIAL) operating on different targets (backscatter or topographic) and under intense sky background conditions. It can be used within the lidar community to compare different lidar instruments.
NASA Astrophysics Data System (ADS)
Zunino, Andrea; Mosegaard, Klaus
2017-04-01
Reservoir properties of interest are linked only indirectly to the observable geophysical data recorded at the earth's surface. In this framework, seismic data represent one of the most reliable tools for studying the structure and properties of the subsurface for natural resources. Nonetheless, seismic analysis is not an end in itself, as physical properties such as porosity are often of more interest for reservoir characterization. Inference of those properties therefore also implies taking into account rock physics models linking porosity and other physical properties to elastic parameters. In the framework of seismic reflection data, we address this challenge for a reservoir target zone employing a probabilistic method characterized by a multi-step, complex nonlinear forward modeling that combines: (1) a rock physics model with (2) the solution of the full Zoeppritz equations and (3) a convolutional seismic forward modeling. The target property of this work is porosity, which is inferred using a Monte Carlo approach where porosity models, i.e., solutions to the inverse problem, are directly sampled from the posterior distribution. From a theoretical point of view, the Monte Carlo strategy is particularly useful in the presence of nonlinear forward models, which is often the case when employing sophisticated rock physics models and the full Zoeppritz equations, and it allows the related uncertainty to be estimated. However, the resulting computational challenge is huge. We propose to alleviate this computational burden by assuming some smoothness of the subsurface parameters and consequently parameterizing the model in terms of spline bases. This gives us flexibility in that the number of spline bases, and hence the resolution in each spatial direction, can be controlled. The method is tested on a 3-D synthetic case and on a 2-D real data set.
NASA Astrophysics Data System (ADS)
Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan
2015-02-01
The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.
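The essential mechanics of such a parameterization, redistributing the surface salt flux over the mixed layer with a prescribed weight profile while conserving the vertical integral, can be sketched as follows (the power-law profile and its exponent are illustrative, not NEMO-LIM3's exact formulation):

```python
import numpy as np

def distribute_brine(salt_flux, z_w, mld, n_exp=1.0):
    """Spread a surface salt flux (kg m-2 s-1) over the mixed layer instead
    of releasing it in the top ocean layer.  Weights follow a power-law
    profile increasing with depth down to the mixed-layer depth `mld`;
    the exponent is a free parameter of this illustrative profile."""
    dz = np.diff(z_w)                  # layer thicknesses (z_w: interface depths)
    z_c = 0.5 * (z_w[:-1] + z_w[1:])   # layer-centre depths
    w = np.where(z_c < mld, (z_c / mld) ** n_exp, 0.0) * dz
    w /= w.sum()                       # normalize so the integral is conserved
    return salt_flux * w / dz          # salt tendency per layer, kg m-3 s-1

z_w = np.linspace(0.0, 100.0, 11)      # ten 10-m layers
tend = distribute_brine(1e-5, z_w, mld=50.0)
# The vertical integral of the tendencies recovers the surface flux exactly.
```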
Inclusion of Solar Elevation Angle in Land Surface Albedo Parameterization Over Bare Soil Surface.
Zheng, Zhiyuan; Wei, Zhigang; Wen, Zhiping; Dong, Wenjie; Li, Zhenchao; Wen, Xiaohang; Zhu, Xian; Ji, Dong; Chen, Chen; Yan, Dongdong
2017-12-01
Land surface albedo is a significant parameter for maintaining the surface energy balance. An accurate parameterization of bare soil surface albedo is also important for developing land surface process models that reflect the diurnal variation characteristics and the mechanism of solar spectral radiation albedo on bare soil surfaces, and for understanding the relationships between climate factors and spectral radiation albedo. Using a data set of field observations, we conducted experiments to analyze the variation characteristics of land surface solar spectral radiation and the corresponding albedo over a typical Gobi bare soil underlying surface, and to investigate the relationships between the land surface solar spectral radiation albedo, solar elevation angle, and soil moisture. Based on simultaneous measurements of solar elevation angle and soil moisture, we propose a new two-factor parameterization scheme for spectral radiation albedo over bare soil underlying surfaces. The results of numerical simulation experiments show that the new parameterization scheme depicts the diurnal variation characteristics of bare soil surface albedo more accurately than previous schemes. Solar elevation angle is one of the most important factors for parameterizing bare soil surface albedo and must be considered in the parameterization scheme, especially in arid and semiarid areas with low soil moisture content. This study reveals the characteristics and mechanism of the diurnal variation of bare soil surface solar spectral radiation albedo and is helpful in developing land surface process, weather, and climate models.
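A two-factor scheme of this kind can be illustrated with a simple functional form in which albedo rises at low solar elevation and decreases with soil moisture. The form and constants below are hypothetical, not the scheme fitted in the study:

```python
import numpy as np

def bare_soil_albedo(h_deg, soil_moisture, a=0.24, b=0.30, c=0.60):
    """Illustrative two-factor scheme: albedo rises sharply at low solar
    elevation h and decreases with soil moisture.  Functional form and
    constants are hypothetical, chosen only to show the qualitative shape."""
    h = np.radians(h_deg)
    return a * (1.0 + b * np.exp(-3.0 * np.sin(h))) * np.exp(-c * soil_moisture)

# Diurnal behaviour: highest albedo near sunrise/sunset, minimum near noon
for h in (10, 30, 60, 90):
    print(h, float(bare_soil_albedo(h, 0.05)))
```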
NASA Astrophysics Data System (ADS)
Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.
2018-01-01
In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations including Bézier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information generation of quadrilateral decomposition, the optimal placement of interior Bézier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interfaces of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples which are compared to results obtained by the skeleton-based parameterization approach.
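The Bézier patches at the heart of such a parameterization are evaluated by repeated linear interpolation (de Casteljau); a minimal tensor-product sketch:

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier combination by repeated linear interpolation."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_patch(ctrl_grid, u, v):
    """Point on a tensor-product Bezier patch: de Casteljau in u along each
    row of control points, then in v across the row results."""
    rows = [de_casteljau(row, u) for row in ctrl_grid]
    return de_casteljau(rows, v)

# Bilinear patch on the unit square: its centre maps to (0.5, 0.5)
ctrl = [[(0.0, 0.0), (1.0, 0.0)],
        [(0.0, 1.0), (1.0, 1.0)]]
print(bezier_patch(ctrl, 0.5, 0.5))
```

Higher-degree patches simply use larger control grids; the continuity constraints of the paper act on the control points of neighboring patches.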
Pattanayak, Sujata; Mohanty, U. C.; Osuri, Krishna K.
2012-01-01
The present study is carried out to investigate the performance of different cumulus convection, planetary boundary layer, land surface process, and microphysics parameterization schemes in the simulation of the very severe cyclonic storm (VSCS) Nargis (2008), which developed in the central Bay of Bengal on 27 April 2008. For this purpose, the nonhydrostatic mesoscale model (NMM) dynamic core of the weather research and forecasting (WRF) system is used. Model-simulated track positions and intensity in terms of minimum central mean sea level pressure (MSLP), maximum surface wind (10 m), and precipitation are verified against observations provided by the India Meteorological Department (IMD) and the Tropical Rainfall Measurement Mission (TRMM). The estimated optimum combination is reinvestigated with six different initial conditions of the same case to draw a firmer conclusion on the performance of WRF-NMM. A few more diagnostic fields, such as vertical velocity, vorticity, and heat fluxes, are also evaluated. The results indicate that cumulus convection plays an important role in the movement of the cyclone, and the PBL has a crucial role in the intensification of the storm. The combination of Simplified Arakawa-Schubert (SAS) convection, Yonsei University (YSU) PBL, NMM land surface, and Ferrier microphysics parameterization schemes in WRF-NMM gives better track and intensity forecasts with minimum vector displacement error. PMID:22701366
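The vector displacement (track) error used for such verification is the great-circle distance between simulated and observed cyclone centres; a standard haversine sketch with illustrative coordinates:

```python
import math

def track_error_km(lat1, lon1, lat2, lon2):
    """Great-circle (vector displacement) error between simulated and
    observed cyclone centre positions, in km, via the haversine formula."""
    r = 6371.0                                     # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

# A simulated centre one degree of latitude off is ~111 km in error
print(track_error_km(16.0, 94.0, 17.0, 94.0))
```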
NASA Astrophysics Data System (ADS)
Mazoyer, M.; Roehrig, R.; Nuissier, O.; Duffourg, F.; Somot, S.
2017-12-01
Most regional climate system models (RCSMs) face difficulties in representing a reasonable precipitation probability density function in the Mediterranean area, especially over land. Small amounts of rain are too frequent, preventing any realistic representation of droughts or heat waves, while the intensity of heavy precipitating events is underestimated and not well located by most state-of-the-art RCSMs using parameterized convection (resolutions from 10 to 50 km). Convective parameterization is a key point for the representation of such events and, recently, the new physics implemented in the CNRM-RCSM has been shown to remarkably improve it, even at a 50-km scale. The present study seeks to further analyse the representation of heavy precipitating events by this new version of CNRM-RCSM using a process-oriented approach. We focus on one particular event in the south-east of France, over the Cévennes. Two hindcast experiments with the CNRM-RCSM (12 and 50 km) are performed and compared with a simulation based on the convection-permitting model Meso-NH, which uses a very similar setup to the CNRM-RCSM hindcasts. The role of small-scale features of the regional topography, and of their interaction with the impinging large-scale flow, in triggering the convective event is investigated. This study provides guidance for the ongoing implementation and use of a specific parameterization dedicated to accounting for subgrid-scale orography in the triggering and closure conditions of the CNRM-RCSM convection scheme.
NASA Astrophysics Data System (ADS)
Light, B.; Krembs, C.
2003-12-01
Laboratory-based studies of the physical and biological properties of sea ice are an essential link between high latitude field observations and existing numerical models. Such studies promote improved understanding of climatic variability and its impact on sea ice and the structure of ice-dependent marine ecosystems. Controlled laboratory experiments can help identify feedback mechanisms between physical and biological processes and their response to climate fluctuations. Climatically sensitive processes occurring between sea ice and the atmosphere and sea ice and the ocean determine surface radiative energy fluxes and the transfer of nutrients and mass across these boundaries. High temporally and spatially resolved analyses of sea ice under controlled environmental conditions lend insight to the physics that drive these transfer processes. Techniques such as optical probing, thin section photography, and microscopy can be used to conduct experiments on natural sea ice core samples and laboratory-grown ice. Such experiments yield insight on small scale processes from the microscopic to the meter scale and can be powerful interdisciplinary tools for education and model parameterization development. Examples of laboratory investigations by the authors include observation of the response of sea ice microstructure to changes in temperature, assessment of the relationships between ice structure and the partitioning of solar radiation by first-year sea ice covers, observation of pore evolution and interfacial structure, and quantification of the production and impact of microbial metabolic products on the mechanical, optical, and textural characteristics of sea ice.
Spatial regression analysis on 32 years of total column ozone data
NASA Astrophysics Data System (ADS)
Knibbe, J. S.; van der A, R. J.; de Laat, A. T. J.
2014-08-01
Multiple-regression analyses have been performed on 32 years of total ozone column data that was spatially gridded with a 1 × 1.5° resolution. The total ozone data consist of the MSR (Multi Sensor Reanalysis; 1979-2008) and 2 years of assimilated SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) ozone data (2009-2010). The two-dimensionality in this data set allows us to perform the regressions locally and investigate spatial patterns of regression coefficients and their explanatory power. Seasonal dependencies of ozone on regressors are included in the analysis. A new physically oriented model is developed to parameterize stratospheric ozone. Ozone variations on nonseasonal timescales are parameterized by explanatory variables describing the solar cycle, stratospheric aerosols, the quasi-biennial oscillation (QBO), El Niño-Southern Oscillation (ENSO) and stratospheric alternative halogens which are parameterized by the effective equivalent stratospheric chlorine (EESC). For several explanatory variables, seasonally adjusted versions of these explanatory variables are constructed to account for the difference in their effect on ozone throughout the year. To account for seasonal variation in ozone, explanatory variables describing the polar vortex, geopotential height, potential vorticity and average day length are included. Results of this regression model are compared to that of a similar analysis based on a more commonly applied statistically oriented model. The physically oriented model provides spatial patterns in the regression results for each explanatory variable. 
The EESC has a significant depleting effect on ozone at mid- and high latitudes, the solar cycle affects ozone positively, mostly in the Southern Hemisphere, stratospheric aerosols affect ozone negatively at high northern latitudes, the effect of the QBO is positive in the tropics and negative at mid- to high latitudes, and ENSO affects ozone negatively between 30° N and 30° S, particularly over the Pacific. The contribution of the explanatory variables describing seasonal ozone variation is generally large at mid- to high latitudes. We observe that ozone increases with potential vorticity and day length, that it decreases with geopotential height, and that the polar vortex has variable effects on ozone in the regions to the north and south of the polar vortices. Recovery of ozone is identified globally. However, recovery rates and their uncertainties depend strongly on choices that can be made in defining the explanatory variables. The application of several trend models, each with its own pros and cons, yields a large range of recovery rate estimates. Overall, these results suggest that care has to be taken in determining ozone recovery rates, in particular for the Antarctic ozone hole.
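The local (per-grid-cell) regression described above can be sketched in a few lines: every grid cell receives its own ordinary least-squares fit against a shared set of explanatory time series. This is only an illustrative skeleton with synthetic data, not the MSR analysis; the function name and the toy regressor are invented for the example.

```python
import numpy as np

def fit_grid_regression(ozone, regressors):
    """Fit an independent multiple regression at every grid cell.

    ozone      : array (T, NLAT, NLON) of total-ozone anomalies
    regressors : array (T, K) of explanatory variables (solar cycle,
                 QBO, ENSO, EESC, ...), shared by all grid cells
    Returns    : array (K, NLAT, NLON) of regression coefficients.
    """
    T, nlat, nlon = ozone.shape
    X = np.column_stack([np.ones(T), regressors])   # intercept + K regressors
    y = ozone.reshape(T, -1)                        # flatten the spatial dims
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # solve all cells at once
    return beta[1:].reshape(regressors.shape[1], nlat, nlon)

# Toy example: one regressor with a known coefficient of 2.0 everywhere
t = np.arange(24, dtype=float)
reg = np.sin(2 * np.pi * t / 12)[:, None]
oz = 2.0 * reg[:, 0][:, None, None] * np.ones((24, 3, 4))
coef = fit_grid_regression(oz, reg)
print(np.allclose(coef, 2.0))  # True
```

Solving all grid cells in one `lstsq` call exploits the fact that the design matrix is shared; only the response differs per cell.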
Trends and uncertainties in budburst projections of Norway spruce in Northern Europe.
Olsson, Cecilia; Olin, Stefan; Lindström, Johan; Jönsson, Anna Maria
2017-12-01
Budburst is regulated by temperature conditions, and a warming climate is associated with earlier budburst. A range of phenology models has been developed to assess climate change effects, and they tend to produce different results. This is mainly caused by different model representations of tree physiology processes, selection of observational data for model parameterization, and selection of climate model data to generate future projections. In this study, we applied (i) Bayesian inference to estimate model parameter values, to address uncertainties associated with the selection of observational data, (ii) selection of climate model data representative of a larger dataset, and (iii) ensemble modeling over multiple initial conditions, model classes, model parameterizations, and boundary conditions to generate future projections and uncertainty estimates. The ensemble projection indicated that the budburst of Norway spruce in northern Europe will on average take place 10.2 ± 3.7 days earlier in 2051-2080 than in 1971-2000, given climate conditions corresponding to RCP 8.5. Three provenances were assessed separately (one early and two late), and the projections indicated that the ranking among provenances will be retained in a warmer climate. Structurally complex models were more likely than simple models to fail in predicting budburst for some combinations of site and year. However, they contributed to the overall picture of the current understanding of climate impacts on tree phenology by capturing additional aspects of temperature response, for example, chilling. Model parameterizations based on single sites were more likely to result in model failure than parameterizations based on multiple sites, highlighting that model parameterization is sensitive to initial conditions and may not perform well under other climate conditions, whether the change is due to a shift in space or over time.
By addressing a range of uncertainties, this study showed that ensemble modeling provides a more robust impact assessment than would a single phenology model run.
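As a minimal illustration of the class of phenology model being ensembled here, a "thermal time" (growing-degree-day) budburst model can be sketched as follows. The base temperature and degree-day threshold are illustrative placeholders, not the calibrated Norway spruce values from the study.

```python
import numpy as np

def budburst_day(daily_tmean_c, t_base=5.0, gdd_req=120.0, start_doy=1):
    """Simplest 'thermal time' phenology model: budburst occurs when
    accumulated degree-days above t_base (from start_doy) reach gdd_req.
    Parameter values are illustrative, not calibrated spruce values."""
    forcing = np.maximum(np.asarray(daily_tmean_c) - t_base, 0.0)
    gdd = np.cumsum(forcing)                     # accumulated forcing
    hit = np.nonzero(gdd >= gdd_req)[0]
    return start_doy + int(hit[0]) if hit.size else None  # day of year

# A uniform 1.5 C warming advances the modelled budburst date
doy = np.arange(365)
temps = 10.0 * np.sin(2 * np.pi * (doy - 105) / 365) + 5.0  # toy annual cycle
d_ref = budburst_day(temps)
d_warm = budburst_day(temps + 1.5)
print(d_ref, d_warm)  # warmer climate -> earlier budburst
```

More complex members of such an ensemble add, e.g., a chilling requirement before forcing accumulation begins, which is exactly the kind of structural difference the study's ensemble spans.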
NASA Astrophysics Data System (ADS)
Zepka, G. D.; Pinto, O.
2010-12-01
The intent of this study is to identify the combination of convective and microphysical WRF parameterizations that best corresponds to lightning occurrence over southeastern Brazil. Twelve thunderstorm days were simulated with the WRF model using three different convective parameterizations (Kain-Fritsch, Betts-Miller-Janjic and Grell-Devenyi ensemble) and two different microphysical schemes (Purdue-Lin and WSM6). In order to test the combinations of parameterizations at the time of lightning occurrence, a comparison was made between the WRF grid point values of surface-based Convective Available Potential Energy (CAPE), Lifted Index (LI), K-Index (KI) and equivalent potential temperature (theta-e), and the lightning locations near those grid points. Histograms were built to show the ratio of the occurrence of different values of these variables at WRF grid points associated with lightning to that at all WRF grid points. The first conclusion from this analysis was that the choice of microphysics changed the results less than the choice of convective scheme did. The Betts-Miller-Janjic parameterization generally showed the worst skill in relating higher magnitudes of all four variables to lightning occurrence. The differences between the Kain-Fritsch and Grell-Devenyi ensemble schemes were not large. This can be attributed to the similar main assumptions of these schemes, which both consider entrainment/detrainment processes along the cloud boundaries. We then examined three case studies using the combinations of convective and microphysical options without the Betts-Miller-Janjic scheme. Differently from traditional verification procedures, fields of surface-based CAPE from the WRF 10 km domain were compared to the Eta model, satellite images and lightning data. In general the more reliable convective scheme was Kain-Fritsch, since it provided a distribution of CAPE more consistent with the satellite images and lightning data.
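The histogram-ratio diagnostic described above (occurrence of variable values at lightning grid points relative to all grid points) can be sketched as follows; the CAPE values and lightning mask are synthetic, and the bin edges are arbitrary.

```python
import numpy as np

def lightning_ratio_histogram(cape_all, lightning_mask, bins):
    """Per-bin ratio of the CAPE distribution at lightning grid points
    to the distribution over all grid points."""
    h_all, _ = np.histogram(cape_all, bins=bins)
    h_ltg, _ = np.histogram(cape_all[lightning_mask], bins=bins)
    return np.where(h_all > 0, h_ltg / np.maximum(h_all, 1), 0.0)

rng = np.random.default_rng(0)
cape = rng.uniform(0, 3000, 10000)      # synthetic CAPE field (J/kg)
mask = cape > 1500                      # toy rule: lightning at high CAPE
bins = np.array([0, 1000, 2000, 3000])
r = lightning_ratio_histogram(cape, mask, bins)
print(r)  # low ratio in the 0-1000 bin, high in the 2000-3000 bin
```

A ratio near 1 in a bin means values in that range are strongly associated with lightning; ratios near 0 mean the opposite, which is the discrimination skill the study compares across schemes.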
NASA Astrophysics Data System (ADS)
Liu, X.; Shi, Y.; Wu, M.; Zhang, K.
2017-12-01
Mixed-phase clouds, frequently observed in the Arctic and the mid-latitude storm tracks, have substantial impacts on the surface energy budget, precipitation and climate. In this study, we first implement two empirical parameterizations (Niemand et al. 2012 and DeMott et al. 2015) of heterogeneous ice nucleation for mixed-phase clouds in the NCAR Community Atmosphere Model Version 5 (CAM5) and the DOE Accelerated Climate Model for Energy Version 1 (ACME1). Model-simulated ice nucleating particle (INP) concentrations based on Niemand et al. and DeMott et al. are compared with those from the default ice nucleation parameterization based on classical nucleation theory (CNT) in CAM5 and ACME, and with in situ observations. Significantly higher INP concentrations (by up to a factor of 5) are simulated with Niemand et al. than with DeMott et al. and CNT, especially over the dust source regions, in both CAM5 and ACME. Interestingly, the ACME model simulates higher INP concentrations than CAM5, especially in the polar regions. This is also the case when we nudge the two models' winds and temperature towards the same reanalysis, indicating more efficient transport of aerosols (dust) to the polar regions in ACME. Next, we examine the responses of model-simulated cloud liquid and ice water contents to the different INP concentrations from the three ice nucleation parameterizations (Niemand et al., DeMott et al., and CNT) in CAM5 and ACME. Changes in liquid water path (LWP) between the three parameterizations reach as much as 20% in the Arctic in ACME, while the LWP changes are smaller and limited to the Northern Hemisphere mid-latitudes in CAM5. Finally, the impacts on cloud radiative forcing and the dust indirect effects on mixed-phase clouds are quantified for the three ice nucleation parameterizations in CAM5 and ACME.
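For orientation, DeMott-type empirical INP parameterizations have the general form of a power law in both the supercooling and the number of large aerosol particles. The sketch below uses the coefficients of the earlier DeMott et al. (2010) fit, which differ from the 2015 version used in the study; treat them as illustrative of the functional form only.

```python
import numpy as np

def inp_concentration(t_k, n_aer_per_cc,
                      a=0.0000594, b=3.33, c=0.0264, d=0.0033):
    """DeMott-type empirical INP parameterization:
    n_INP = a * dT**b * n_aer**(c*dT + d), with dT = 273.16 - T (K) and
    n_aer the number of aerosol particles larger than 0.5 um (per cc).
    Coefficients follow the DeMott et al. (2010) fit, shown only to
    illustrate the functional form compared in the study."""
    dt = 273.16 - np.asarray(t_k, dtype=float)
    return a * dt ** b * np.asarray(n_aer_per_cc, dtype=float) ** (c * dt + d)

# Colder temperatures and more dust both raise the predicted INP number
print(inp_concentration(253.0, 10.0) < inp_concentration(248.0, 10.0))
print(inp_concentration(253.0, 10.0) < inp_concentration(253.0, 100.0))
```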
Importance of including ammonium sulfate ((NH4)2SO4) aerosols for ice cloud parameterization in GCMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharjee, P. S.; Sud, Yogesh C.; Liu, Xiaohong
2010-02-22
A common deficiency of many cloud-physics parameterizations, including NASA's microphysics of clouds with aerosol-cloud interactions (hereafter called McRAS-AC), is that they simulate smaller ice cloud particle numbers (and larger sizes) than observed. A single-column model (SCM) of McRAS-AC and general circulation model (GCM) physics, together with an adiabatic parcel model (APM) for ice-cloud nucleation (IN) of aerosols, were used to systematically examine the influence of ammonium sulfate ((NH4)2SO4) aerosols, not included in the present formulations of McRAS-AC. Specifically, the influence of (NH4)2SO4 aerosols on the optical properties of both liquid and ice clouds was analyzed. First, an (NH4)2SO4 parameterization was included in the APM to assess its effect vis-à-vis that of the other aerosols. Subsequently, several evaluation tests were conducted with the SCM over the ARM-SGP site and thirteen other locations (sorted into pristine and polluted conditions) distributed over marine and continental sites. The statistics of the simulated cloud climatology were evaluated against the available ground and satellite data. The results showed that inclusion of (NH4)2SO4 in the SCM made a remarkable improvement in the simulated effective radius of ice clouds. However, the corresponding ice-cloud optical thickness increased more than is observed, which can be caused by the lack of cloud advection and evaporation. We argue that this deficiency can be mitigated by adjusting other tunable parameters of McRAS-AC, such as the precipitation efficiency. Inclusion of ice-cloud particle splintering, introduced through well-established empirical equations, is found to further improve the results. Preliminary tests show that these changes substantially improve the simulated cloud optical properties in the GCM, particularly by producing a far more realistic cloud distribution over the ITCZ.
Application of a planetary wave breaking parameterization to stratospheric circulation statistics
NASA Technical Reports Server (NTRS)
Randel, William J.; Garcia, Rolando R.
1994-01-01
The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data, a planetary wave breaking criterion (based on the ratio of the eddy to zonal-mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified from the observed PV gradients as a wave breaking region; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.
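The wave breaking criterion described above, the ratio of the eddy to zonal-mean meridional PV gradient, can be sketched from a gridded PV field as follows. The toy field, the unit threshold, and the helper name are all invented for illustration; the paper evaluates the criterion on NMC analyses.

```python
import numpy as np

def wave_breaking_mask(pv, lat, threshold=1.0):
    """Flag latitude bands where the eddy meridional PV gradient is
    comparable to the zonal-mean gradient (the breaking criterion).

    pv  : (NLAT, NLON) potential vorticity on an isentropic surface
    lat : (NLAT,) latitudes in degrees, evenly spaced
    """
    dy = np.deg2rad(np.diff(lat)) * 6.371e6            # metres between rows
    qbar = pv.mean(axis=1)                             # zonal mean
    qeddy = pv - qbar[:, None]                         # eddy part
    dqbar = np.diff(qbar) / dy                         # mean gradient
    dqeddy = np.abs(np.diff(qeddy, axis=0) / dy[:, None]).max(axis=1)
    ratio = dqeddy / np.maximum(np.abs(dqbar), 1e-30)
    return ratio > threshold, ratio

# Toy field: weak mean PV gradient plus a strong localized wave near 45N
lat = np.linspace(0, 90, 46)
lon = np.linspace(0, 360, 72, endpoint=False)
pv = (lat[:, None] * 1e-6
      + 5e-5 * np.exp(-((lat[:, None] - 45) / 5) ** 2)
      * np.sin(np.deg2rad(4 * lon)))
mask, ratio = wave_breaking_mask(pv, lat)
print(mask.any())
```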
The parameterization of microchannel-plate-based detection systems
NASA Astrophysics Data System (ADS)
Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Giles, Barbara L.; Pollock, Craig J.
2016-10-01
The most common instrument for measuring low-energy plasmas consists of a top-hat electrostatic analyzer (ESA) geometry coupled with a microchannel-plate-based (MCP-based) detection system. While the electrostatic optics of such sensors are readily simulated and parameterized during laboratory calibration, the detection system is often less well characterized. Here we develop a comprehensive mathematical description of particle detection systems. As a function of instrument azimuthal angle, we parameterize (1) particle scattering within the ESA and at the surface of the MCP, (2) the probability distribution of MCP gain for an incident particle, (3) electron charge cloud spreading between the MCP and the anode board, and (4) capacitive coupling between adjacent discrete anodes. Using the Dual Electron Spectrometers of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission as an example, we demonstrate a method for extracting these fundamental detection system parameters from laboratory calibration. We further show that parameters that will evolve in flight, namely MCP gain, can be determined through application of this model to specifically tailored in-flight calibration activities. This methodology provides a robust characterization of sensor suite performance throughout the mission lifetime. The model developed in this work is not only applicable to existing sensors but can also be used as an analytical design tool for future particle instrumentation.
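As a toy illustration of item (2), the MCP gain probability distribution, one can model pulse heights with a log-normal gain distribution and compute the fraction of events exceeding a fixed electronics threshold. The distribution choice and every parameter value below are assumptions for illustration, not the calibrated MMS/DES values.

```python
import numpy as np

def detection_efficiency(threshold_e, gain_mean=2e6, gain_sigma=0.8,
                         n=200_000, rng=None):
    """Fraction of incident particles whose MCP charge pulse exceeds the
    electronics threshold (in electrons), with pulse heights drawn from a
    log-normal gain distribution -- a common empirical choice; all
    parameters here are illustrative, not calibration values."""
    rng = rng or np.random.default_rng(0)
    # choose mu so the linear-space mean of the log-normal is gain_mean
    mu = np.log(gain_mean) - 0.5 * gain_sigma ** 2
    gains = rng.lognormal(mu, gain_sigma, n)
    return float(np.mean(gains > threshold_e))

for thr in (1e5, 1e6, 1e7):
    print(f"threshold {thr:.0e}: efficiency {detection_efficiency(thr):.2f}")
```

Raising the threshold trades dark-count suppression against detection efficiency; tracking how this curve shifts as the gain distribution degrades in flight is the kind of behaviour the paper's model captures.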
NASA Astrophysics Data System (ADS)
Imran, H. M.; Kala, J.; Ng, A. W. M.; Muthukumaran, S.
2018-04-01
An appropriate choice of physics options among the many available parameterizations is important when using the Weather Research and Forecasting (WRF) model. The responses of different physics parameterizations of the WRF model may vary with geographical location, the application of interest, and the temporal and spatial scales being investigated. Several studies have evaluated the performance of the WRF model in simulating the mean climate and extreme rainfall events for various regions in Australia. However, no study has explicitly evaluated the sensitivity of the WRF model in simulating heatwaves. Therefore, this study evaluates the performance of a WRF multi-physics ensemble comprising 27 model configurations for a series of heatwave events in Melbourne, Australia. Unlike most previous studies, we evaluate not only temperature but also wind speed and relative humidity, which are key factors influencing heatwave dynamics. No single ensemble member explicitly showed the best performance for all events, all variables, and all evaluation metrics. This study also found that, for temperature simulations, the choice of planetary boundary layer (PBL) scheme had the largest influence, the radiation scheme a moderate influence, and the microphysics scheme the least influence. The PBL and microphysics schemes were found to be more influential than the radiation scheme for wind speed and relative humidity. Additionally, the study tested the role of the Urban Canopy Model (UCM) and three Land Surface Models (LSMs). Although the UCM did not play a significant role, the Noah LSM showed better performance than the CLM4 and Noah-MP LSMs in simulating the heatwave events. The study finally identifies an optimal configuration of WRF that will be a useful modelling tool for further investigations of heatwaves in Melbourne. Although our results are region-specific, they will be useful to WRF users investigating heatwave dynamics elsewhere.
A stochastic parameterization for deep convection using cellular automata
NASA Astrophysics Data System (ADS)
Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.
2012-12-01
Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept, which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid-box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, the community has recognized the importance of introducing stochastic elements into the parameterizations (for instance Plant and Craig, 2008; Khouider et al., 2010; Frenkel et al., 2011; Bengtsson et al., 2011; the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities of interest for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of the horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, which is important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid boxes, and for temporal memory. Thus the CA scheme used in this study contains three components of interest for the representation of cumulus convection that are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high-resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall lines.
Probabilistic evaluation demonstrates an enhanced spread in large-scale variables in regions where convective activity is large. A two-month extended evaluation of the deterministic behaviour of the scheme indicates a neutral impact on forecast skill. References: Bengtsson, L., H. Körnich, E. Källén, and G. Svensson, 2011: Large-scale dynamical response to sub-grid scale organization provided by cellular automata. J. Atmos. Sci., 68, 3132-3144. Frenkel, Y., A. Majda, and B. Khouider, 2011: Using the stochastic multicloud model to improve tropical convective parameterization: a paradigm example. J. Atmos. Sci., doi:10.1175/JAS-D-11-0148.1. Huang, X.-Y., 1988: The organization of moist convection by internal gravity waves. Tellus A, 42, 270-285. Khouider, B., J. Biello, and A. Majda, 2010: A stochastic multicloud model for tropical convection. Comm. Math. Sci., 8, 187-216. Palmer, T., 2011: Towards the probabilistic Earth-system simulator: a vision for the future of climate and weather prediction. Quart. J. Roy. Meteor. Soc., 138, 841-861. Plant, R. and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87-105.
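A minimal sketch of a convection-oriented cellular automaton with the three ingredients named above (lateral communication via neighbour rules, memory via the persisting state, and stochasticity via random seeding) might look like the following. The specific birth/survival rules and the seeding probability are invented for illustration; in a real scheme, seeding would depend on the large-scale state (e.g. CAPE), and this is not the ALARO implementation.

```python
import numpy as np

def step_ca(state, seed_prob, rng):
    """One update of a toy convection cellular automaton.

    state     : binary (NY, NX) grid of 'active' convective cells
    seed_prob : probability of stochastic birth per empty cell; in a
                real scheme this would be a function of CAPE
    A cell survives with 2-3 active neighbours and is born with exactly
    3 neighbours or by random seeding: lateral communication + memory
    + stochasticity, as described above (rules are illustrative)."""
    ny, nx = state.shape
    n = sum(np.roll(np.roll(state, dy, 0), dx, 1)       # 8-neighbour count
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    born = ((n == 3) | (rng.random((ny, nx)) < seed_prob)) & (state == 0)
    survive = ((n == 2) | (n == 3)) & (state == 1)
    return (born | survive).astype(int)

rng = np.random.default_rng(1)
state = (rng.random((32, 32)) < 0.1).astype(int)        # initial activity
for _ in range(5):
    state = step_ca(state, seed_prob=0.01, rng=rng)
print(state.sum())
```

Running the CA on a finer grid than the NWP model and aggregating the active-cell fraction per model grid box is one way such a scheme communicates organization back to the convection parameterization.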
Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.
NASA Astrophysics Data System (ADS)
Gubler, S.; Gruber, S.; Purves, R. S.
2012-06-01
As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and several parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best-performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated for determining cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR are adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are instead estimated and used as fixed values, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR lie between -2 and 5%, and between 7 and 13%, across the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of the aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse, SDR.
Calibration of the LDR parameterizations to local conditions strongly reduces the MBD and RMSD compared to using the published parameter values, resulting in a relative MBD and RMSD of less than 5% and 10%, respectively, for the best parameterizations. The best estimates of cloud transmissivity during nighttime were obtained by linearly interpolating the average cloud transmissivity of the four hours of the preceding afternoon and the following morning. Model uncertainty can be caused by different errors, such as errors in code implementation, in input data and in estimated parameters. The influence of the latter two (errors in input data and model parameter uncertainty) on the model outputs is determined using Monte Carlo simulation. Model uncertainty is provided as the relative standard deviation σrel of the simulated frequency distributions of the model outputs. An optimistic estimate of the relative uncertainty σrel resulted in 10% for the clear-sky direct, 30% for the diffuse and 3% for the global SDR, and 3% for the fitted all-sky LDR.
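The Monte Carlo propagation of input and parameter uncertainty described above can be sketched generically: draw inputs and parameters from their assumed error distributions, run the parameterization for each draw, and report the relative standard deviation of the outputs. The emissivity formula below is a simple Konzelmann-style clear-sky LDR parameterization with illustrative coefficients and uncertainty magnitudes, not the calibrated values or error estimates of the study.

```python
import numpy as np

SIGMA_SB = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def clear_sky_ldr(t_air_k, e_hpa, a=0.44, b=8.0):
    """Toy Konzelmann-style clear-sky parameterization:
    eps = 0.23 + a * (e_Pa / T)**(1/b);  LDR = eps * sigma * T**4.
    Coefficients are illustrative, not the study's calibrated values."""
    eps = 0.23 + a * (100.0 * e_hpa / t_air_k) ** (1.0 / b)
    return eps * SIGMA_SB * t_air_k ** 4

rng = np.random.default_rng(42)
n = 20_000
a = rng.normal(0.44, 0.02, n)        # parameter uncertainty (assumed)
t = rng.normal(283.0, 0.2, n)        # input (measurement) uncertainty
e = rng.normal(10.0, 0.5, n)         # vapour pressure uncertainty, hPa
ldr = clear_sky_ldr(t, e, a=a)
sigma_rel = ldr.std() / ldr.mean()   # relative standard deviation
print(f"relative uncertainty: {100 * sigma_rel:.1f}%")
```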
NASA Technical Reports Server (NTRS)
Egbert, Gary D.
2001-01-01
A numerical ocean tide model has been developed and tested using highly accurate TOPEX/Poseidon (T/P) tidal solutions. The hydrodynamic model is based on time stepping a finite difference approximation to the non-linear shallow water equations. Two novel features of our implementation are a rigorous treatment of self-attraction and loading (SAL) and a physically based parameterization of internal tide (IT) radiation drag. The model was run for a range of grid resolutions, and with variations in model parameters and bathymetry. With a rational treatment of SAL and IT drag, the model run at high resolution (1/12 degree) fits the T/P solutions to within 5 cm RMS in the open ocean. Both the rigorous SAL treatment and the IT drag parameterization are required to obtain solutions of this quality. The sensitivity of the solution to perturbations in bathymetry suggests that the fit to T/P is probably now limited by errors in this critical input. Since the model is not constrained by any data, we can test the effect of dropping sea level to match the estimated bathymetry of the last glacial maximum (LGM). Our results suggest that the 100 m drop in sea level at the LGM would have significantly increased tidal amplitudes in the North Atlantic and increased overall tidal dissipation by about 40%. However, details of the tidal solutions for the past 20 ka are sensitive to the assumed stratification. IT drag accounts for a significant fraction of the dissipation, especially in the LGM when large areas of present-day shallow sea were exposed, and this parameter is poorly constrained at present.
Modeling the Surface Temperature of Earth-like Planets
NASA Astrophysics Data System (ADS)
Vladilo, Giovanni; Silva, Laura; Murante, Giuseppe; Filippi, Luca; Provenzale, Antonello
2015-05-01
We introduce a novel Earth-like planet surface temperature model (ESTM) for habitability studies based on the spatial-temporal distribution of planetary surface temperatures. The ESTM adopts a surface energy balance model (EBM) complemented by: radiative-convective atmospheric column calculations, a set of physically based parameterizations of meridional transport, and descriptions of surface and cloud properties more refined than in standard EBMs. The parameterization is valid for rotating terrestrial planets with shallow atmospheres and moderate values of axis obliquity (ε ≲ 45°). Comparison with a 3D model of atmospheric dynamics from the literature shows that the equator-to-pole temperature differences predicted by the two models agree to within ≈5 K when the rotation rate, insolation, surface pressure and planet radius are varied in the intervals 0.5 ≲ Ω/Ω⊕ ≲ 2, 0.75 ≲ S/S₀ ≲ 1.25, 0.3 ≲ p/(1 bar) ≲ 10, and 0.5 ≲ R/R⊕ ≲ 2, respectively. The ESTM has an extremely low computational cost and can be used when the planetary parameters are scarcely known (as for most exoplanets) and/or whenever many runs for different parameter configurations are needed. Model simulations of a test-case exoplanet (Kepler-62e) indicate that an uncertainty in surface pressure within the range expected for terrestrial planets may impact the mean temperature by ~60 K. Within the limits of validity of the ESTM, the impact of surface pressure is larger than that predicted by uncertainties in rotation rate, axis obliquity, and ocean fraction. We discuss the possibility of performing a statistical ranking of planetary habitability taking advantage of the flexibility of the ESTM.
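To illustrate the class of model the ESTM extends, here is a minimal diffusive zonal-mean EBM with linearized OLR and second-Legendre-polynomial insolation, solved to steady state by explicit time stepping. The coefficients are standard textbook (North-type) values for Earth, not ESTM parameters.

```python
import numpy as np

def ebm_equilibrium(nlat=45, S0=1361.0, A=203.3, B=2.09, D=0.55, alb=0.30):
    """Steady state of a diffusive zonal-mean energy balance model,
        C dT/dt = S(x)(1 - alb) - (A + B T) + D d/dx[(1 - x^2) dT/dx],
    with x = sin(latitude) and OLR linearized as A + B T (T in deg C).
    Textbook coefficients, shown only to illustrate the model class."""
    x = np.linspace(-1.0, 1.0, nlat + 1)          # cell edges in x = sin(lat)
    xc = 0.5 * (x[:-1] + x[1:])                   # cell centres
    dx = x[1] - x[0]
    S = 0.25 * S0 * (1.0 - 0.482 * 0.5 * (3.0 * xc ** 2 - 1.0))  # P2 insolation
    w = 1.0 - x ** 2                              # metric factor, 0 at poles
    T = np.zeros(nlat)
    dt = 0.2 * dx ** 2 / D                        # stable explicit step
    for _ in range(40_000):
        flux = np.concatenate(([0.0], w[1:-1] * np.diff(T) / dx, [0.0]))
        T += dt * (S * (1.0 - alb) - (A + B * T) + D * np.diff(flux) / dx)
    return xc, T

xc, T = ebm_equilibrium()
print(f"mean {T.mean():.1f} C, equator {T[len(T)//2]:.1f} C, pole {T[0]:.1f} C")
```

The ESTM replaces the single diffusion constant D with physically based transport parameterizations and adds column radiative-convective calculations, but the underlying balance is the one integrated here.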
NASA Astrophysics Data System (ADS)
Cheng, W. Y.; Kim, D.; Rowe, A.; Park, S.
2017-12-01
Despite the impact of mesoscale convective organization on the properties of convection (e.g., mixing between updrafts and environment), parameterizing the degree of convective organization has only recently been attempted in cumulus parameterization schemes (e.g., Unified Convection Scheme UNICON). Additionally, challenges remain in determining the degree of convective organization from observations and in comparing directly with the organization metrics in model simulations. This study addresses the need to objectively quantify the degree of mesoscale convective organization using high quality S-PolKa radar data from the DYNAMO field campaign. One of the most noticeable aspects of mesoscale convective organization in radar data is the degree of convective clustering, which can be characterized by the number and size distribution of convective echoes and the distance between them. We propose a method of defining contiguous convective echoes (CCEs) using precipitating convective echoes identified by a rain type classification algorithm. Two classification algorithms, Steiner et al. (1995) and Powell et al. (2016), are tested and evaluated against high-resolution WRF simulations to determine which method better represents the degree of convective clustering. Our results suggest that the CCEs based on Powell et al.'s algorithm better represent the dynamical properties of the convective updrafts and thus provide the basis of a metric for convective organization. Furthermore, through a comparison with the observational data, the WRF simulations driven by the DYNAMO large-scale forcing, similarly applied to UNICON Single Column Model simulations, will allow us to evaluate the ability of both WRF and UNICON to simulate convective clustering. This evaluation is based on the physical processes that are explicitly represented in WRF and UNICON, including the mechanisms leading to convective clustering, and the feedback to the convective properties.
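The proposed CCE definition amounts to finding connected components of the convective-pixel mask produced by the rain-type classification; the number and size distribution of the components then characterize the degree of clustering. A minimal sketch with 4-connectivity and a breadth-first flood fill follows (the helper name and toy mask are invented for the example):

```python
import numpy as np
from collections import deque

def label_cces(conv_mask):
    """Label contiguous convective echoes (CCEs): 4-connected groups of
    pixels classified as convective by a rain-type algorithm."""
    ny, nx = conv_mask.shape
    labels = np.zeros((ny, nx), dtype=int)
    current = 0
    for i in range(ny):
        for j in range(nx):
            if conv_mask[i, j] and labels[i, j] == 0:
                current += 1                        # start a new CCE
                q = deque([(i, j)])
                labels[i, j] = current
                while q:                            # breadth-first flood fill
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = a + da, b + db
                        if (0 <= u < ny and 0 <= v < nx
                                and conv_mask[u, v] and labels[u, v] == 0):
                            labels[u, v] = current
                            q.append((u, v))
    return labels, current

mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1],
                 [1, 0, 0, 0]], dtype=bool)
labels, n = label_cces(mask)
sizes = np.bincount(labels.ravel())[1:]             # pixels per CCE
print(n, sorted(sizes.tolist()))  # → 3 [1, 2, 3]
```

From `labels` one can also compute inter-echo distances (e.g. between component centroids), the other clustering ingredient mentioned above.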
NEW EQUATIONS OF STATE IN SIMULATIONS OF CORE-COLLAPSE SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hempel, M.; Liebendoerfer, M.; Fischer, T.
2012-03-20
We discuss three new equations of state (EOS) in core-collapse supernova simulations. The new EOS are based on the nuclear statistical equilibrium model of Hempel and Schaffner-Bielich (HS), which includes excluded-volume effects and relativistic mean-field (RMF) interactions. We consider the RMF parameterizations TM1, TMA, and FSUgold. These EOS are implemented into our spherically symmetric core-collapse supernova model, which is based on general relativistic radiation hydrodynamics and three-flavor Boltzmann neutrino transport. The results obtained for the new EOS are compared with the widely used EOS of H. Shen et al. and of Lattimer and Swesty. The systematic comparison shows that the model description of inhomogeneous nuclear matter is as important as the parameterization of the nuclear interactions for the supernova dynamics and the neutrino signal. Furthermore, several new aspects of nuclear physics are investigated: the HS EOS contains distributions of nuclei, including nuclear shell effects. The appearance of light nuclei, e.g., deuterium and tritium, is also explored; these can become as abundant as alphas and free protons. In addition, we investigate black hole formation in failed core-collapse supernovae, which is mainly determined by the high-density EOS. We find that temperature effects lead to a systematically faster collapse for the non-relativistic LS EOS in comparison with the RMF EOS. We deduce a new correlation for the time until black hole formation, which would allow the determination of the maximum mass of proto-neutron stars if the neutrino signal from such a failed supernova were measured in the future. This would give a constraint on the nuclear EOS at finite entropy, complementary to observations of cold neutron stars.
NASA Astrophysics Data System (ADS)
Alipour, Mojtaba; Karimi, Niloofar
2017-06-01
Organic light-emitting diodes (OLEDs) based on thermally activated delayed fluorescence (TADF) emitters are an attractive category of materials that have witnessed booming development in recent years. In the present contribution, we scrutinize the performance of parameterized and parameter-free single-hybrid (SH) and double-hybrid (DH) functionals within two formalisms, full time-dependent density functional theory (TD-DFT) and the Tamm-Dancoff approximation (TDA), for the estimation of photophysical properties such as absorption energy, emission energy, zero-zero transition energy, and singlet-triplet energy splitting of TADF molecules. According to our detailed analyses of the performance of SHs based on TD-DFT and TDA, the TDA-based parameter-free SH functionals PBE0 and TPSS0, with one-third exact-like exchange, turned out to be the best performers among functionals from various rungs at reproducing the experimental data of the benchmark set. Such affordable SH approximations can thus be employed to predict and design TADF molecules with low singlet-triplet energy gaps for OLED applications. From another perspective, considering that both nonlocal exchange and nonlocal correlation are essential for a more reliable description of large charge-transfer excited states, the applicability of functionals incorporating these terms, namely parameterized and parameter-free DHs, has also been evaluated. Examining the roles of exact-like exchange, perturbative-like correlation, solvent effects, and other related factors, we find that the parameterized functionals B2π-PLYP and B2GP-PLYP and the parameter-free models PBE-CIDH and PBE-QIDH perform respectably with respect to the others. Lastly, besides recommending reliable computational protocols for the purpose, we hope this study paves the way toward further development of other SHs and DHs for theoretical explorations in the field of OLED technology.
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested on two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry; the average dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data: the highest computational speed was achieved when the data were stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency.
With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and from 0.69 to 1.23 times for photon-only transport.
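The quadric-surface region representation described above can be sketched in a few lines: a particle is inside a continuous region when it lies on the prescribed side of every bounding surface, each given by a quadratic function of position. This is a minimal illustration under stated assumptions, not the paper's GPU code; the capped-cylinder region and all coefficients are hypothetical.

```python
import numpy as np

def quadric(A, b, c):
    """Return f(p) = p^T A p + b.p + c for a quadratic bounding surface."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    def f(p):
        p = np.asarray(p, float)
        return p @ A @ p + b @ p + c
    return f

# Hypothetical example: an infinite cylinder of radius 1 about the z-axis,
# capped by the planes z = 0 and z = 2 (planes are degenerate quadrics).
side = quadric(np.diag([1, 1, 0]), [0, 0, 0], -1.0)  # x^2 + y^2 - 1 < 0
bottom = quadric(np.zeros((3, 3)), [0, 0, -1], 0.0)  # -z < 0, i.e. z > 0
top = quadric(np.zeros((3, 3)), [0, 0, 1], -2.0)     # z - 2 < 0

def inside_region(p, surfaces):
    """A particle is in the region if it is on the negative side of
    every bounding surface."""
    return all(f(p) < 0 for f in surfaces)

region = [side, bottom, top]
print(inside_region([0.5, 0.0, 1.0], region))  # True: inside the capped cylinder
print(inside_region([0.5, 0.0, 3.0], region))  # False: above the top cap
```

During navigation, the same quadratic coefficients also give the distance to each surface along a ray analytically, which is what makes this representation attractive for GPU transport kernels.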
Comparative study of DFT+U functionals for non-collinear magnetism
NASA Astrophysics Data System (ADS)
Ryee, Siheon; Han, Myung Joon
2018-07-01
We performed a comparative analysis of DFT+U functionals to better understand their applicability to non-collinear magnetism. Taking LiNiPO4 and Sr2IrO4 as examples, we investigated the results of two formalisms: charge-only density and spin-density functional plus U calculations. Our results show that the ground-state spin order, in terms of tilting angle, is strongly dependent on the Hund coupling J. In particular, opposite behaviors of the canting angle as a function of J are found for LiNiPO4. The dependence on other physical parameters, such as the Hubbard U and the Slater parameterization, is also investigated. We further discuss the formal aspects of these functional and parameter dependences. The current study provides useful information and important intuition for first-principles calculations of non-collinear magnetic materials.
Jet stream winds - Enhanced aircraft data acquisition and analysis over Southwest Asia
NASA Technical Reports Server (NTRS)
Tenenbaum, J.
1989-01-01
A project is described for providing accurate initial and verification analyses for the jet stream in regions where general circulation models are known to have large systematic errors, due either to the extreme sparsity of data or to incorrect physical parameterizations. For this purpose, finely spaced aircraft-based meteorological data for the Southwest Asian region collected during three 10-day periods in the winter of 1988-1989 will be used, together with corresponding data for the North American region used as a control, to rerun the assimilation cycles and forecast models of the NMC and the NASA Goddard Laboratory for Atmospheres. Data for Southwest Asia will be collected by three carriers with extensive wide-body routes crossing the region, while data for the North American region will be obtained from the archives of ACARS and GTS.
Dynamics simulation and controller interfacing for legged robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reichler, J.A.; Delcomyn, F.
2000-01-01
Dynamics simulation can play a critical role in the engineering of robotic control code, and there exist a variety of strategies both for building physical models and for interacting with these models. This paper presents an approach to dynamics simulation and controller interfacing for legged robots, and contrasts it to existing approaches. The authors describe dynamics algorithms and contact-resolution strategies for multibody articulated mobile robots based on the decoupled tree-structure approach, and present a novel scripting language that provides a unified framework for control-code interfacing, user-interface design, and data analysis. Special emphasis is placed on facilitating the rapid integration of control algorithms written in a standard object-oriented language (C++), the production of modular, distributed, reusable controllers, and the use of parameterized signal-transmission properties such as delay, sampling rate, and noise.
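The parameterized signal-transmission properties mentioned above (delay, sampling rate, noise) can be illustrated with a toy channel model sitting between a controller and a simulated robot. This is a hedged sketch in Python rather than the authors' C++ framework; the SignalChannel class and its parameters are hypothetical.

```python
import random
from collections import deque

class SignalChannel:
    """Hypothetical controller-to-robot signal channel: a FIFO models
    transmission delay (in ticks), values are resampled every `period`
    ticks with zero-order hold, and Gaussian noise is added on sampling."""
    def __init__(self, delay=0, period=1, noise_sigma=0.0, seed=0):
        self.buf = deque([0.0] * delay)   # pre-filled FIFO = initial delay
        self.period = period
        self.noise_sigma = noise_sigma
        self.rng = random.Random(seed)
        self.tick = 0
        self.held = 0.0                   # last sampled value (zero-order hold)

    def transmit(self, value):
        self.buf.append(value)
        delayed = self.buf.popleft()
        if self.tick % self.period == 0:  # resample at the channel's rate
            self.held = delayed + self.rng.gauss(0.0, self.noise_sigma)
        self.tick += 1
        return self.held

ch = SignalChannel(delay=2, period=1, noise_sigma=0.0)
print([ch.transmit(v) for v in [1.0, 2.0, 3.0, 4.0]])  # [0.0, 0.0, 1.0, 2.0]
```

Wrapping every controller input and output in such a channel lets the same control code be stress-tested under different communication assumptions without touching the dynamics model.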
A physiologically based toxicokinetic model for lake trout (Salvelinus namaycush).
Lien, G J; McKim, J M; Hoffman, A D; Jenson, C T
2001-01-01
A physiologically based toxicokinetic (PB-TK) model for fish, incorporating chemical exchange at the gill and accumulation in five tissue compartments, was parameterized and evaluated for lake trout (Salvelinus namaycush). Individual-based model parameterization was used to examine the effect of natural variability in physiological, morphological, and physico-chemical parameters on model predictions. The PB-TK model was used to predict uptake of organic chemicals across the gill and accumulation in blood and tissues in lake trout. To evaluate the accuracy of the model, a total of 13 adult lake trout were exposed concurrently to waterborne 1,1,2,2-tetrachloroethane (TCE), pentachloroethane (PCE), and hexachloroethane (HCE) for periods of 6, 12, 24 or 48 h. The measured and predicted concentrations of TCE, PCE and HCE in expired water, dorsal aortic blood and tissues generally agreed within a factor of two, and in most instances much more closely. Variability in model predictions, based on the individual-based parameterization used in this study, reproduced the variability observed in measured concentrations. The inference is made that the parameters influencing variability in measured blood and tissue concentrations of xenobiotics are included and accurately represented in the model. This model contributes to a better understanding of the fundamental processes that regulate the uptake and disposition of xenobiotic chemicals in the lake trout. This information is crucial to developing a better understanding of the dynamic relationships between contaminant exposure and hazard to the lake trout.
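A PB-TK gill-exchange model of the kind described can be reduced, for illustration, to a single well-mixed blood compartment exchanging with water across the gill. This sketch is not the authors' five-compartment model; the exchange rate, blood volume, and blood:water partition coefficient below are all hypothetical values.

```python
# Minimal one-compartment sketch of gill uptake (not the authors' full
# five-compartment PB-TK model); all parameter values are hypothetical.
def simulate_uptake(c_water, k_gill, v_blood, p_bw, hours, dt=0.01):
    """Euler integration of dC_b/dt = (k_gill / V_b) * (C_w - C_b / P_bw),
    where P_bw is the blood:water partition coefficient."""
    c_blood = 0.0
    t = 0.0
    while t < hours:
        flux = k_gill * (c_water - c_blood / p_bw)  # exchange at the gill
        c_blood += dt * flux / v_blood
        t += dt
    return c_blood

# Approach to equilibrium: blood concentration tends toward P_bw * C_w.
c48 = simulate_uptake(c_water=1.0, k_gill=0.5, v_blood=1.0, p_bw=5.0, hours=48)
print(round(c48, 3))  # close to the equilibrium value 5.0
```

The full model adds one such balance equation per tissue compartment, with blood flow and tissue:blood partitioning coupling them; individual-based parameterization then amounts to drawing these constants from measured distributions rather than fixing them.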
A Coupled fcGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2005-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next generation regional scale model, WRF.
In this talk, I will present: (1) A brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) The Goddard MMF and the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), (3) A discussion of the Goddard WRF version (its developments and applications), and (4) The characteristics of the four-dimensional cloud data sets (or cloud library) stored at Goddard.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2006-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next generation regional scale model, WRF. In this talk, I will present: (1) A brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) The Goddard MMF and the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) A discussion of the Goddard WRF version (its developments and applications).
NASA Astrophysics Data System (ADS)
Awatey, M. T.; Irving, J.; Oware, E. K.
2016-12-01
Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the iterations progress, starting from the coefficients of the highest-ranked basis vectors and extending to those of the least informative ones. We found this gradual growth of the sampling window to be more stable than resampling all the coefficients from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs, whereas bimodality in plume morphology was not theorized.
We show that uncertainty quantification using McMC can proceed in the reduced dimensionality space while accounting for the physics of the underlying process.
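The POD parameterization step described above, building basis vectors from training images and projecting a model into the reduced space, can be sketched with a singular value decomposition of the snapshot matrix. The 1-D Gaussian "plume" training images here are hypothetical stand-ins for the Monte Carlo TIs of the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training images (TIs): 200 realizations of a 1-D "plume"
# profile on 50 cells, each a Gaussian bump with random center and width.
x = np.linspace(0, 1, 50)
tis = np.stack([np.exp(-((x - c) ** 2) / (2 * w ** 2))
                for c, w in zip(rng.uniform(0.2, 0.8, 200),
                                rng.uniform(0.05, 0.15, 200))])

mean = tis.mean(axis=0)
U, s, Vt = np.linalg.svd(tis - mean, full_matrices=False)  # POD = SVD of snapshots

k = 10                              # retain the k leading basis vectors
basis = Vt[:k]                      # (k, 50) reduced basis
energy = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by {k} modes: {energy:.3f}")

# Project a model into the reduced space and reconstruct it.
model = np.exp(-((x - 0.5) ** 2) / (2 * 0.1 ** 2))
coeffs = basis @ (model - mean)     # starting POD coefficients
recon = mean + basis.T @ coeffs
print(f"reconstruction error: {np.linalg.norm(recon - model):.3f}")
```

An McMC chain would then perturb `coeffs` rather than the 50 cell values, with the growing sampling window perturbing `coeffs[0]` first and the higher-index, less informative coefficients only in later iterations.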
Predicting Decade-to-Century Climate Change: Prospects for Improving Models
NASA Technical Reports Server (NTRS)
Somerville, Richard C. J.
1999-01-01
Recent research has led to a greatly increased understanding of the uncertainties in today's climate models. In attempting to predict the climate of the 21st century, we must confront not only computer limitations on the affordable resolution of global models, but also a lack of physical realism in attempting to model key processes. Until we are able to incorporate adequate treatments of critical elements of the entire biogeophysical climate system, our models will remain subject to these uncertainties, and our scenarios of future climate change, both anthropogenic and natural, will not fully meet the requirements of either policymakers or the public. The areas of most-needed model improvements are thought to include air-sea exchanges, land surface processes, ice and snow physics, hydrologic cycle elements, and especially the role of aerosols and cloud-radiation interactions. Of these areas, cloud-radiation interactions are known to be responsible for much of the inter-model differences in sensitivity to greenhouse gases. Recently, we have diagnostically evaluated several current and proposed model cloud-radiation treatments against extensive field observations. Satellite remote sensing provides an indispensable component of the observational resources. Cloud-radiation parameterizations display a strong sensitivity to vertical resolution, and we find that vertical resolutions typically used in global models are far from convergence. We also find that newly developed advanced parameterization schemes with explicit cloud water budgets and interactive cloud radiative properties are potentially capable of matching observational data closely. However, it is difficult to evaluate the realism of model-produced fields of cloud extinction, cloud emittance, cloud liquid water content and effective cloud droplet radius until high-quality measurements of these quantities become more widely available. 
Thus, further progress will require a combination of theoretical and modeling research, together with intensified emphasis on both in situ and space-based remote sensing observations.
NASA Astrophysics Data System (ADS)
Kao, C.-Y. J.; Smith, W. S.
1999-05-01
A physically based cloud parameterization package, which includes the Arakawa-Schubert (AS) scheme for subgrid-scale convective clouds and the Sundqvist (SUN) scheme for nonconvective grid-scale layered clouds (hereafter referred to as the SUNAS cloud package), is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, Version 2 (CCM2). The AS scheme is used for a more reasonable heating distribution due to convective clouds and their associated precipitation. The SUN scheme allows for the prognostic computation of cloud water so that the cloud optical properties are more physically determined for shortwave and longwave radiation calculations. In addition, the formation of anvil-like clouds from deep convective systems is able to be simulated with the SUNAS package. A 10-year simulation spanning the period from 1980 to 1989 is conducted, and the effect of the cloud package on the January climate is assessed by comparing it with various available data sets and the National Centers for Environmental Prediction (NCEP)/NCAR reanalysis. Strengths and deficiencies of both the SUN and AS methods are identified and discussed. The AS scheme improves some aspects of the model dynamics and precipitation, especially with respect to the Pacific North America (PNA) pattern. CCM2's tendency to produce a westward bias of the 500 mbar stationary wave (time-averaged zonal anomalies) in the PNA sector is remedied, apparently because of a less "locked-in" heating pattern in the tropics. The additional degree of freedom added by the prognostic calculation of cloud water in the SUN scheme produces interesting results in the modeled cloud and radiation fields compared with data. In general, too little cloud water forms in the tropics, while excessive cloud cover and cloud liquid water are simulated in midlatitudes. This results in a somewhat degraded simulation of the radiation budget.
The overall simulated precipitation by the SUNAS package is, however, substantially improved over the original CCM2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanz Rodrigo, Javier; Chávez Arroyo, Roberto Aurelio; Moriarty, Patrick
The increasing size of wind turbines, with rotors already spanning more than 150 m diameter and hub heights above 100 m, requires proper modeling of the atmospheric boundary layer (ABL) from the surface to the free atmosphere. Furthermore, large wind farm arrays create their own boundary layer structure with unique physics. This poses significant challenges to traditional wind engineering models that rely on surface-layer theories and engineering wind farm models to simulate the flow in and around wind farms. However, adopting an ABL approach offers the opportunity to better integrate wind farm design tools and meteorological models. The challenge is how to build the bridge between the atmospheric and wind engineering model communities and how to establish a comprehensive evaluation process that identifies relevant physical phenomena for wind energy applications along with modeling and experimental requirements. A framework for model verification, validation, and uncertainty quantification is established to guide this process through a systematic evaluation of the modeling system at increasing levels of complexity. In terms of atmospheric physics, 'building the bridge' means developing models for the so-called 'terra incognita,' a term used to designate the turbulent scales that transition from mesoscale to microscale. This range of scales within atmospheric research deals with the transition from parameterized to resolved turbulence and the improvement of surface boundary-layer parameterizations. The coupling of meteorological and wind engineering flow models and the definition of a formal model evaluation methodology are strong areas of research for the next generation of wind-conditions assessment and wind farm and wind turbine design tools. Some fundamental challenges are identified in order to guide future research in this area.
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
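The core idea of interfacing an assumed subgrid PDF to a nonlinear microphysics scheme via Monte Carlo sampling can be illustrated as follows. Because microphysical rates are nonlinear, the grid-box mean rate differs from the rate evaluated at the grid-box mean state. The lognormal PDF, the toy autoconversion rate, and all numbers below are hypothetical, not the scheme used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def autoconversion(q_c):
    """Toy Kessler-like threshold rate, nonlinear in cloud water q_c
    (hypothetical coefficients)."""
    return 1e-3 * np.maximum(q_c - 0.5, 0.0) ** 2

# Assumed lognormal subgrid PDF with prescribed grid-box mean and std dev.
mean_qc, sigma = 0.6, 0.4
mu = np.log(mean_qc ** 2 / np.sqrt(mean_qc ** 2 + sigma ** 2))
s2 = np.log(1 + (sigma / mean_qc) ** 2)
samples = rng.lognormal(mu, np.sqrt(s2), 10_000)  # Monte Carlo subcolumns

rate_mc = autoconversion(samples).mean()            # PDF-aware grid-box rate
rate_mean = autoconversion(np.array([mean_qc]))[0]  # rate at the mean state
print(rate_mc > rate_mean)  # True: subgrid variability enhances the mean rate
```

Each Monte Carlo sample plays the role of a subcolumn fed to the prognostic microphysics; averaging the resulting tendencies yields the grid-box tendency that the single-column model actually applies.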
Parameterizing deep convection using the assumed probability density function method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storer, R. L.; Griffin, B. M.; Höft, J.
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
Using Machine learning method to estimate Air Temperature from MODIS over Berlin
NASA Astrophysics Data System (ADS)
Marzban, F.; Preusker, R.; Sodoudi, S.; Taheri, H.; Allahbakhshi, M.
2015-12-01
Land surface temperature (LST) is defined as the temperature of the interface between the Earth's surface and its atmosphere; it is thus a critical variable for understanding land-atmosphere interactions and a key parameter in meteorological and hydrological studies involving energy fluxes. Air temperature (Tair) is one of the most important input variables in spatially distributed hydrological and ecological models, and the estimation of near-surface air temperature is useful for a wide range of applications. Some applications, such as traffic or energy management, require Tair data at high spatial and temporal resolution at two meters above the ground (T2m), sometimes in near-real time. Thus, a parameterization based on boundary-layer physical principles was developed to determine air temperature from remote sensing data (MODIS). Tair is commonly obtained from synoptic measurements at weather stations; however, the derivation of near-surface air temperature from satellite-derived LST is far from straightforward. T2m is not driven directly by the sun but indirectly by LST, so T2m can be parameterized from LST and other variables such as albedo, NDVI, and water vapor. Most previous studies have estimated T2m using simple and advanced statistical approaches, temperature-vegetation index methods, and energy-balance approaches, but the main objective of this research is to explore the relationships between T2m and LST in Berlin using artificial intelligence methods, with the aim of identifying key variables that allow us to establish suitable techniques for obtaining Tair from satellite products and ground data. Secondly, an attempt was made to identify an individual mix of attributes that reveals a particular pattern, to better understand the variation of T2m during day and nighttime over different areas of Berlin. For this purpose, a three-layer feedforward neural network trained with the Levenberg-Marquardt algorithm (LMA) is considered.
Considering the different relationships between T2m and LST for different land types enables us to develop a better parameterization of the non-linear relation between LST and T2m over Berlin during day and nighttime. The results of the study will be presented and discussed.
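The feedforward regression described above can be sketched with a small NumPy network mapping (LST, NDVI) to T2m. The synthetic training data and plain gradient descent below are hypothetical stand-ins for the MODIS/station pairs and the Levenberg-Marquardt training used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: T2m as a noisy function of LST (K) and NDVI.
lst = rng.uniform(270, 310, (500, 1))
ndvi = rng.uniform(0.0, 0.8, (500, 1))
X = np.hstack([lst, ndvi])
y = 0.8 * lst - 3.0 * ndvi + 55.0 + rng.normal(0, 0.3, (500, 1))  # toy "truth"

# Standardize, then train a one-hidden-layer tanh network by full-batch
# gradient descent (the study used Levenberg-Marquardt instead).
Xm, Xs = X.mean(0), X.std(0)
ym, ys = y.mean(), y.std()
Xn, yn = (X - Xm) / Xs, (y - ym) / ys

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(Xn @ W1 + b1)          # hidden layer
    err = (h @ W2 + b2) - yn           # output residual
    gW2 = h.T @ err / len(Xn); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = Xn.T @ dh / len(Xn); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred_t2m = (np.tanh(Xn @ W1 + b1) @ W2 + b2) * ys + ym
rmse = float(np.sqrt(np.mean((pred_t2m - y) ** 2)))
print(f"training RMSE: {rmse:.2f} K")
```

Fitting separate networks per land-cover class, as the abstract suggests, amounts to partitioning the training pairs by land type before this fitting step.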